# Catching up with the Kharkhanas tragedy

Can’t believe I’m so late to the party. It seems that a year ago, Steven Erikson put the Kharkhanas Trilogy on hold, delaying the publication of the third book. The second book, Fall of Light, came out two years ago and was a difficult read in many ways. More than anything else, it contained many more plots than the first book, Forge of Darkness, while leaving a lot for the last book to explain.

It was like Erikson had lost his way. If he was feeling unsure of himself as a result, I’m glad he’s temporarily shelving the project. It’s not good for readers if books in a series are released with many years between instalments, but that’s already happened: Forge of Darkness was published in 2012 and Fall of Light in 2016. Right now, it’s more important for fans like me that Erikson find his mojo and complete the canon before he dies.

Erikson has also announced (in October 2017) that said mojo quest will take the form of writing the first book in the much-awaited Toblakai (a.k.a. Witness) Trilogy. This is good news because Malazan fans have been more eager to read about the exploits of Karsa Orlong than those of the Tiste races, at least in hindsight, and in the hope that the Toblakai story won’t be as frowzy and joyless.

I personally find Karsa to be a dolt and not among my top 50 favourite characters from the series. However, I do find him entertaining and expect the Toblakai Trilogy to be even more so given that the premise is that Karsa is going to rouse the Toblakai in a war against civilisation. Very like the Jaghut story but with less sneering, more cockiness. Hopefully it will prove to be the cure Erikson needs.

Erikson also mentioned that he had been demotivated by the fact that Fall of Light‘s sales were lower than those of Forge of Darkness. Though he initially attributed this to readers waiting for him to finish writing the series so they could read it in one go, he found he couldn’t explain the success of Ian Esslemont’s Dancer’s Lament with the same logic: Lament is the first book in the unfinished Path of Ascendancy series. He concluded readers were simply fatigued by Fall of Light itself. I wouldn’t blame them: it was even more difficult to read than the midsection of Deadhouse Gates.

I’m also starting to dislike his tendency to include overly garrulous characters whose loquaciousness the author seems to want to use to voice his every thought. After a point (which is quickly reached), it just feels like Erikson is bragging. The Malazan series had the intolerable gas-bags Kruppe and Iskaral Pust. Fall of Light was only made worse by Prazek and Dathenar and their completely unnecessary chapter-long soliloquies; at least Kruppe and Pust did things.

This is another thing I’m wary of in the Toblakai Trilogy, although I doubt my prayers will be answered, because you could see Erikson had fun with Karsa in the Malazan series. In fact, more broadly speaking, I’m wary of any new Erikson epic fantasy book because though I know the world and the stories are going to be fantastic, his writing is tiring and his storytelling is more flawed than it otherwise tends to be when he feels compelled to expose, or soliloquise, rather than narrate.

Actually, forget wary – I’ve almost given up on it. Shortly before the release of Forge of Darkness, Erikson had written for Tor that he was going to keep the trilogy more traditional and make it less of a critique of the epic fantasy subgenre than the Malazan series was. Look what it turned out to be. And I only say I’ve almost given up because I hope Erikson attributes Fall of Light‘s tragedy to a different mistake, but then why should he? I found the fencing metaphor from his Tor piece to be instructive in this regard:

As a long-time fencer I occasionally fight a bout against a beginner. They are all enthusiasm, and often wield their foil like a whip, or a broadsword. Very hard to spar with. Enthusiasm without subtlety is often a painful encounter for yours truly, and I have constant ache in hands from fractured fingers and the like, all injured by a wailing foil or epee. A few of those injuries go back to my own beginning days, when I did plenty of my own flailing about. Believe it or not, that wild style can be effective against an old veteran like me. It’s hard to stay subtle with your weapon’s point when facing an armed Dervish seeking to chop down a tree. The Malazan series wailed and whirled on occasion. But those three million words are behind me now. And hopefully, when looking at my fans, they are more than willing to engage in a more subtle duel, a game of finer points. If not, well, I’m screwed.

On the other hand, I’ve really enjoyed Esslemont’s writing, which thankfully has only improved since Night of Knives. I hope Dancer’s Lament continues this trend. I purchased it this morning and hope I can complete it and the next book, as well as a reread of some of Esslemont’s other books, by the time Erikson’s The God is Not Willing is published.

# Climate fear

The Intergovernmental Panel on Climate Change recently published a report exhorting countries committed to the Paris Agreement to limit global warming to an additional 1.5º C by the end of this century. As if this isn’t drastic enough, one study has also shown that if we’re not on track to this target within the next 12 years, we’re likely to cross a point of no return and become unable to keep Earth’s surface from warming by 1.5º C.

In the last decade, the conversation on climate change passed an important milestone: journalists began classifying climate denialism as false balance. After that acknowledgment, editors and reporters no longer bothered speaking to those denying the anthropogenic component of global warming in pursuit of balanced copy, because denying climate change came to be seen as wrong. Including such voices wouldn’t add balance to a climate-centred story but in fact remove it.

But with the world inexorably thundering towards warming Earth’s surface by at least 1.5º C, if not more, and with such warming also expected to have drastic consequences for civilisation as we know it, I wonder when optimism will also be pulled under the false-balance umbrella. (I have no doubt that it will, so I’m omitting the ‘if’ question here.)

There were a few articles earlier this year, especially in the American media, about whether or not we ought to use the language of fear to spur climate action from people and governments alike. David Biello had excerpted the following line from a new book on the language of climate change in a review for the NYT: “I believe that language can lessen the distance between humans and the world of which we are a part; I believe that it can foster interspecies intimacy and, as a result, care.” But what tone should such language adopt?

A September 2017 study noted:

… the modest research evidence that exists with respect to the use of fear appeals in communicating climate change does not offer adequate empirical evidence – either for or against the efficacy of fear appeals in this context – nor would such evidence adequately address the issue of the appropriateness of fear appeals in climate change communication. … It is also noteworthy that the language of climate change communication is typically that of “communication and engagement,” with little explicit reference to targeted social influence or behaviour change, although this is clearly implied. Hence underlying and intertwined issues here are those of cogent arguments versus largely absent evidence, and effectiveness as distinct from appropriateness. These matters are enmeshed within the broader contours of the contested political, social, and environmental, issues status of climate change, which jostle for attention in a 24/7 media landscape of disturbing and frightening communications concerning the reality, nature, progression, and implications of global climate change.

An older study, from 2009, had it that using the language of fear wouldn’t work because, according to Big Think‘s breakdown, it could desensitise the audience, prompt the audience to trust the messenger less over time, and trigger either self-denial or some level of nihilism: what else would you do if you’re “confronted with messages that present risks” that you, individually, can do nothing to mitigate? Most of all, it could distort our (widely) shared vision of a “just world”.

On the other hand, just the necessary immediacy of action suggests we should be afraid lest we become complacent. We need urgent and significant action in both the short and long terms and across a variety of enterprises. Fear also sells. It’s always in demand irrespective of whether a journalist is selling it, or a businessman or politician. It’s easy, sensational, grabs eyeballs and can be effortlessly communicated. That’s how you get the distasteful maxim “If it bleeds, it leads”.

In light of these concerns, it’s odd that so many news outlets around the world (including The Guardian and The Washington Post) are choosing to advertise the ’12-year-deadline to act’ bit (even Forbes’s takedown piece included this info. in the headline). A deadline is only going to make people more anxious and less able to act. Further, it’s odder that given the vicious complexities associated with making climate-related estimates, we’re even able to pinpoint a single point of no return instead of identifying a time-range at some point within which we become doomed. And third, I would even go so far as to question the ‘doomedness’ itself because I don’t know if it takes inflections – points after which we lose our ability to make predictions – into account.

Nonetheless, as we get closer to 2030 – the year that hosts the point of no return – and assuming we haven’t done much to keep Earth’s surface from warming by 1.5º C by the century’s close, we’re going to be neck-deep in it. At this point, would it still be fair for journalists, if not anyone else, to remain optimistic and communicate using the language of optimism? Second, will optimism on our part be taken seriously considering, at that point, the world will find out that Earth’s surface is going to warm by 1.5º C irrespective of everyone else’s hopes?

Third: how will we know if optimistic engagement with our audience is even working? Being able to measure this change, and doing so, is important if we are to reform journalism to the extent that newsrooms have a financial incentive to move away from fear-mongering and towards more empathetic, solution-oriented narratives. A major reason “If it bleeds, it leads” is true is because it makes money; if it didn’t, it would be useless. By measuring such changes, calculating their first-order derivatives and strategising to magnify desirable trends, newsrooms can also take a step back from the temptations of populism and its climate-unjust tendencies.

Climate change journalism is inherently political and as susceptible to being caught between political faultlines as anything else. This is unlikely to change until the visible effects of anthropogenic global warming are abundant and affecting day-to-day living (of the upper caste/upper class in India and of the first world overall). So between now and then, a lot rests on journalism’s shoulders; journalists as such are uniquely situated in this context because, more than anyone else, we influence people on a day-to-day basis.

Apropos the first two questions: After 2030, I suspect many people will simply raise the bar, hoping that some action can be taken in the next seven decades to keep warming below 2º C instead of 1.5º C. Journalists will make up both the first and last lines of defence in keeping humanity at large from thinking that it has another shot at saving itself. This will be tricky: to inspire optimism and prompt people to act even while constantly reminding readers that we’ve fucked up like never before. I’d start by celebrating the melancholic joy – perhaps as in Walt Whitman’s Leaves of Grass (1891) – of lesser condemnations.

To this end, journalists should also be regularly retrained – say, once every five years – on where climate science currently stands, what audiences in different markets feel about it and why, and what kind of language reporters and editors can use to engage with them. If optimism is to remain effective further into the 21st century, collective action is necessary on the part of journalists around the world as well – just the way, for example, we recognise certain ways to report stories of sexual assault, data breaches, etc.

# What the Nobel Prizes are not

The winners of this year’s Nobel Prizes are being announced this week. The prizes are an opportunity to discover new areas of research, and developments there that scientists consider particularly notable. In this endeavour, it is equally necessary to remember what the Nobel Prizes are not.

For starters, the Nobel Prizes are not lenses through which to view all scientific pursuit. It is important for everyone – scientists and non-scientists alike – to not take the Nobel Prizes too seriously.

The prizes have been awarded to white men from Europe and the US most of the time, across the medicine, physics and chemistry categories. This presents a lopsided view of how scientific research has been undertaken in the world. Many governments take pride in the fact that one of their citizens has been awarded this prize, and often advertise the strength of their research community by boasting of the number of Nobel laureates in their ranks. This way, the prizes have become a marker of eminence.

However, this should not blind us to the fact that there are equally brilliant scientists in other parts of the world who have done, and are doing, great work. Research institutions boast of their laureates the same way; for example, this is what the Institute for Advanced Study in Princeton, New Jersey, says on its website:

The Institute’s mission and culture have produced an exceptional record of achievement. Among its Faculty and Members are 33 Nobel Laureates, 42 of the 60 Fields Medalists, and 17 of the 19 Abel Prize Laureates, as well as many MacArthur Fellows and Wolf Prize winners.

## What the prizes are

Winning a Nobel Prize may be a good thing. But not winning a Nobel Prize is not a bad thing. That is the perspective often lost in conversations about the quality of scientific research. When the Government of India expresses a desire to have an Indian scientist win a Nobel Prize in the next decade, it is a passive admission that it does not consider any other marker of quality to be worth the endorsement. Otherwise, there are numerous ways to make the statement that the quality of Indian research is at par with the rest of the world’s (if not better in some areas).

In this sense, what the Nobel Prizes afford is an easy way out. Consider the following analogy: when scientists are being considered for promotions, evaluators frequently ask whether a scientist in question has published in “prestigious” journals like Nature, Science, Cell, etc. If the scientist has, it is immediately assumed that the scientist is undertaking good research. Notwithstanding the fact that supposedly “prestigious” journals frequently publish bad science, this process of evaluation is unfair to scientists who publish in other peer-reviewed journals and who are doing equally good, if not better, work. Just the way we need to pay less attention to which journals scientists are publishing in and instead start evaluating their research directly, we also need to pay less attention to who is winning Nobel Prizes and instead assess scientists’ work, as well as the communities to which the scientists belong, directly.

Obviously this method of evaluation is more arduous and cumbersome – but it is also the fairer way to do it. Now the question arises: is it more important to be fair or to be quick? On-time assessments and rewards are important, particularly in a country where resource optimisation carries greater benefits as well as where the population of young scientists is higher than in most countries; justice delayed is justice denied, after all. At the same time, instead of settling for one or the other way, why not ask for both methods at once: to be fair and to be quick at the same time? Again, this is a more difficult way of evaluating research than the methods we currently employ, but in the longer run, it will serve all scientists as well as science better in all parts of the world.

## Skewed representation of ‘achievers’

Speaking of global representation: this is another area where the Nobel Foundation has faltered. It has ensured that the Nobel Prizes have accrued immense prestige but it has not simultaneously ensured that the scientists that it deems fit to adorn that prestige have been selected equally from all parts of the world. Apart from favouring white scientists from the US and Europe, the Nobel Prizes have also ignored the contributions of women scientists. Thus far, only two women have won the physics prize (out of 206), four women the chemistry prize (out of 177) and 12 women the medicine prize (out of 214).

One defence that is often advanced to explain this bias is that the Nobel Prizes typically reward scientific and technological achievements that have passed the test of time, achievements that have been repeatedly validated and whose usefulness for the common people has been demonstrated. As a result, the prizes can be understood to be awarded for research done in the past – and in this past, women did not make up a significant portion of the scientific workforce. Perhaps more women will be awarded the prizes in the years ahead.

This argument holds water, but only in a very leaky bucket. Many women have been passed over for the Nobel Prizes when they should not have been, and the Nobel Committee, which finalises each year’s laureates, is in no position to explain why. (Famous omissions include Rosalind Franklin, Vera Rubin and Jocelyn Bell Burnell.) This defence becomes even more meaningless when you ask why so few people from other parts of the world have been awarded the Nobel Prize. This is because the Nobel Prizes are a fundamentally western – even Eurocentric – institution in two important ways.

First, they predominantly acknowledge and recognise scientific and technological developments that the prize-pickers are familiar with, and the prize-pickers are a group made up of all previous laureates and a committee of Swedish scientists. Additionally, this group is only going to acknowledge research that it is already familiar with, undertaken by people its own members have heard of. It is not a democratic organisation. This particular phenomenon has already been documented in the editorial boards of scientific journals, with the effect that scientific research undertaken with local needs in mind often finds dismal representation in scientific journals.

Second, according to the foundation that awards them, the Nobel Prizes are designated for individuals or groups whose work has conferred the “greatest benefit on mankind”. For the sciences, how do you determine such work? In fact, one step further, how do we evaluate the legitimacy and reliability of scientific work at all? Answer: we check whether the work has followed certain rules, passed certain checks, received the approval of the author’s peers, etc. All of these are encompassed in the modern scientific publishing process: a scientist describes the work they have done in a paper, submits the paper to a journal, the journal gets the paper reviewed by the scientist’s peers, and once it passes review, the paper is published. It is only when a paper is published that most people consider the research described in it to be worth their attention. And the Nobel Prizes – rather, the people who award them – implicitly trust the modern scientific publishing process even though the foundation itself is not obligated to, essentially as a matter of convenience.

However, what about the knowledge that is not published in such papers? Further, what about the knowledge that is not published in the few journals that get a disproportionate amount of attention (a.k.a. the “prestige” titles like Nature, Science and Cell)? Obviously there are a lot of quacks and cranks whose ideas are filtered out in this process, but what about scientists conducting research in resource-poor economies who simply can’t afford the fancy journals?

What about scientists and other academics who are improving previously published research to be more sensitive to the local conditions in which it is applied? What about those specialists who are unearthing new knowledge that could be robust but which is not being considered as such simply because they are not scientists – such as farmers? It is very difficult for these people to be exposed to scholars in other parts of the world and for the knowledge they have helped create/produce to be discovered by other people. The opportunity for such interactions is diminished further when the research conducted is not in English.

In effect, the Nobel Prizes highlight people and research from one small subset of the world. There are a lot of people, a lot of regions, a lot of languages and a lot of expertise excluded from this subset. As the prizes are announced one by one, we need to bear these limitations in mind and choose our words carefully, so as to not exalt the prizewinners too much or downplay the contributions of numerous others in the same field as well as in other fields. More importantly, we must not assume that the Nobel Prizes are any kind of crowning achievement.

The Wire
October 1, 2018

# Proposed solution for Riemann hypothesis?

The hot news this week from the mathematical physics world is that the noted mathematician Michael Atiyah claimed to have solved the Riemann hypothesis, one of the most difficult unsolved problems known, whose resolution carries a $1 million prize. The problem is that Atiyah’s solution, while remarkable for its brevity, may not hold water.

The Riemann hypothesis is concerned with the Riemann zeta function, which – in very broad terms – provides a way to predict the position of prime numbers on the number line. Computers have been able to find prime numbers with scores of digits, and mathematicians have been able to confirm in hindsight that, yes, the zeta function predicts they exist. However, what mathematicians don’t know (and this is the Riemann hypothesis) is whether the function can predict prime numbers ad infinitum or if it will break at some particularly large value. Solving the Riemann hypothesis means proving that the zeta function can indeed predict the position of all prime numbers on the number line.

A more technical explanation, reproduced from my article in The Wire last year, follows; the article continues below this section:

In 1859, Bernhard Riemann expanded on Euler’s work to develop a mathematical function that relates the behaviour of positive integers, prime numbers and imaginary numbers. The Riemann hypothesis is founded on a function called the Riemann zeta function. Before him, Euler had formulated a mathematical series called Z(s), such that:

Z(s) = (1/1^s) + (1/2^s) + (1/3^s) + (1/4^s) + …

He found that Z(2) – i.e., substituting 2 for s in the Z function – equalled π^2/6, and Z(4) equalled π^4/90. At the same time, for many other values of s, the series Z(s) would not converge at a finite value: the value of each term would keep building to larger and larger numbers, unto infinity. This was particularly true for all values of s less than or equal to 1. Euler was also able to find a prime number connection.
Though the denominators together constituted the series of positive integers, with a small tweak, Z(s) could be expressed using prime numbers alone as well:

Z(s) = [1/(1 – 1/2^s)] * [1/(1 – 1/3^s)] * [1/(1 – 1/5^s)] * [1/(1 – 1/7^s)] * …

This was Euler’s last contribution to the topic. In the late 1850s, Riemann picked up where Euler left off. And he was bothered by the behaviour of the series of additions in Z(s) when the value of s dropped below 1. In an attempt to make it less awkward (nobody likes infinities), he tried to modify it such that Z(2) and Z(4), etc., would still converge to interesting values like π^2/6 and π^4/90, etc. – but such that Z(s ≤ 1) wouldn’t run away towards infinity.

He succeeded in finding such a function, but it was far more complex than Z(s). This function is called the Riemann zeta (ζ) function: ζ(s). And it has some weird properties of its own. One such is involved in the Riemann hypothesis. Riemann found that ζ(s) would equal zero whenever s was a negative even number (-2, -4, -6, etc.). These values are also called trivial zeroes. He wanted to know which other values of s would precipitate a ζ(s) equalling zero – i.e. the non-trivial zeroes. And he did find some values. They all had something in common because they looked like this:

(1/2) + 14.134725142i, (1/2) + 21.022039639i, (1/2) + 25.010857580i, etc.

(i is the imaginary number, the square root of -1.) Obviously, Riemann was prompted to ask another question – the question that has since been found to be extremely difficult to answer, a question worth $1 million. He asked: do all the values of s that are not negative even integers and for which ζ(s) = 0 take the form ‘(1/2) + a real number multiplied by i‘?

In more mathematical terms: “The Riemann hypothesis states that the non-trivial zeros of ζ(s) lie on the line Re(s) = 1/2.”
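Euler’s observations are easy to check numerically. The sketch below is a toy illustration (the function names are my own, not from any of the papers discussed): it computes partial sums of Z(s) and the corresponding product over primes, and both land close to π^2/6 for s = 2.

```python
import math

def z_series(s, terms=100_000):
    """Partial sum of Euler's series Z(s) = 1/1^s + 1/2^s + 1/3^s + ..."""
    return sum(1 / n**s for n in range(1, terms + 1))

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def z_product(s, limit=100_000):
    """Euler's product over primes: Z(s) = prod_p 1/(1 - 1/p^s)."""
    result = 1.0
    for p in primes_up_to(limit):
        result *= 1 / (1 - 1 / p**s)
    return result

# The series, the product over primes, and pi^2/6 all agree closely.
print(z_series(2), z_product(2), math.pi**2 / 6)
print(z_series(4), math.pi**4 / 90)
```

Of course, partial sums only illustrate the convergent cases; they say nothing about the zeroes of ζ(s), which require the analytically continued function.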

When I first heard Atiyah’s claim, I was at a loss for how to react. Most claimed solutions for the Riemann hypothesis are usually dismissed quickly because they contain leaps of logic not backed by sufficient mathematical rigour. On the other hand, Atiyah isn’t just anybody. He won the Fields Medal in 1966 and the Abel Prize in 2004, and has been associated with some famous solutions for problems in algebraic topology.

Perhaps the most famous recent example of a claimed proof that demanded serious scrutiny was Vinay Deolalikar’s attempt on another major unsolved problem in mathematics, whether P equals NP, in August 2010. The P/NP problem asks whether a problem whose solution is easy to check is also therefore easy to solve. Though nobody has been able to provide a proof for this conundrum yet, it is widely assumed by mathematicians and computer scientists that P ≠ NP, i.e. that a problem whose solution is easy to check is not necessarily easy to solve. Deolalikar, then working at Hewlett Packard Research Labs, claimed to have a proof that P ≠ NP, and it couldn’t be readily dismissed because, to borrow Scott Aaronson’s words,

What’s obvious from even a superficial reading is that Deolalikar’s manuscript is well-written, and that it discusses the history, background, and difficulties of the P vs. NP question in a competent way. More importantly (and in contrast to 98% of claimed P≠NP proofs), even if this attempt fails, it seems to introduce some thought-provoking new ideas, particularly a connection between statistical physics and the first-order logic characterization of NP.

Nonetheless, flaws were found in Deolalikar’s proof, as delineated prominently in Aaronson’s and R.J. Lipton’s blogs, and the claim was settled: P/NP remained (and remains) unsolved. Lesson: watch the blogs as a first response measure. The peers of a paper’s author(s) usually know what’s happening before the news does and, if a controversial claim has been advanced, they’re likely already further into a debate than the mainstream media realises.

So as a quick way out in Atiyah’s case, I hopped over to Shtetl Optimized, Aaronson’s blog. And there, at the end of a long post about the weirdness of quantum theory, was this line: “As of Sept. 25, 2018, it is the official editorial stance of Shtetl-Optimized that the Riemann Hypothesis and the abc conjecture both remain open problems.” Aha!

Some of you will remember that three physicists made a major announcement last year about finding a potential way to solve the Riemann hypothesis because they had unearthed an eerie similarity between the Riemann zeta function, central to the hypothesis, and an equation found in quantum mechanics. While they’re yet to post an update, the physicists’ thesis was compelling and wasn’t dismissed by the wider mathematical community, raising hope that it could lead to a solution.

Atiyah’s solution also concerns itself with a famously physical concept: the fine-structure constant, denoted as α (alpha). The value of this constant determines the strength with which charged particles like electrons interact with the electromagnetic field. It has the value of about 1/137. If it were higher, the electromagnetic force would be stronger and all atoms would be smaller, apart from numerous other cascading effects. Atiyah’s resolution of the Riemann hypothesis is pegged to a new derivation for the value of α, and this is where he runs into trouble.

Sean Carroll, a theoretical physicist at Caltech, called the derivation “misguided”. Madhusudhan Raman, a postdoc at the Tata Institute of Fundamental Research, said that while he isn’t qualified to comment on the correctness of the Riemann hypothesis proof, he – like Carroll – had some problems with the physics of it.

His full explanation is as follows (paraphrased): It is tempting to think of α as a fixed number, like π (pi), but it is not. While the value of π does not change, the value of α does because it is related to the energy at which it is being measured. At higher energies, such as inside the Large Hadron Collider, the value of α will be higher. So α is not a number as much as a function that says its value is X at energy Y. However, Atiyah appears to have worked with the assumption that α is a single, fixed number like π. This isn’t true and therefore his derivation is suspect.
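Raman’s point can be illustrated with the standard one-loop formula for the running of α in quantum electrodynamics. The sketch below is a deliberate simplification (it keeps only the electron loop, whereas a real calculation includes every charged particle lighter than the probe energy), but it shows that α grows with energy rather than sitting fixed at 1/137.

```python
import math

ALPHA_0 = 1 / 137.035999  # alpha measured at low (near-zero) energy
M_E = 0.000511            # electron mass in GeV

def alpha_at(q_gev):
    """One-loop QED estimate of alpha at momentum transfer q (in GeV),
    with only the electron running in the loop."""
    log_term = math.log(q_gev**2 / M_E**2)
    return ALPHA_0 / (1 - (ALPHA_0 / (3 * math.pi)) * log_term)

# At the Z-boson mass (~91 GeV), 1/alpha drops below 137: the coupling
# is stronger at higher energies. (With all charged fermions included,
# the measured value is closer to 1/128.)
print(1 / alpha_at(91.19))
```

So “the value of α” only makes sense once you specify the energy at which it is measured, which is why treating it as a fixed mathematical constant like π is problematic.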

Sabine Hossenfelder, a research fellow at the Frankfurt Institute for Advanced Studies, also had the same issues with Atiyah’s effort. Carroll went a step further and said that if he had to be very charitable, then the derivation could pass muster but not without also discussing various issues in physics associated with α. However, he wrote, “Not a whit of this appears in Atiyah’s paper.”

At the same time – and unlike in numerous previous instances – these physicists and others besides continue to have great respect for Atiyah and his work, and why not? Though he is 89, as one comment observed on Carroll’s blog, “It’s brave to fight to the last, and, who knows, with his distinguished record and doubtless vast erudition, maybe there’s some truth or useful insights in these latest papers, even if [it’s] not quite what he claims.”

And so the Riemann hypothesis endures, unresolved.

The Wire
September 28, 2018

# An epistocracy

The All India Council for Technical Education (AICTE) has proposed a new textbook that will discuss the ‘Indian knowledge system’ via a number of pseudoscientific claims about the supposed inventions and discoveries of ancient India, The Print reported on September 26. The Ministry of Human Resource Development (MHRD) signed off on the move, and the textbook – drawn up by the Bharatiya Vidya Bhavan educational trust – is set to be introduced in 80% of the institutions the AICTE oversees.

According to the Bharatiya Vidya Bhavan website, “the courses of study” to be introduced via the textbook “were started by the Bhavan’s Centre for Study and Research in Indology under the Delhi Kendra after entering into an agreement with the AICTE”. They include “basic structure of Indian knowledge system; modern science and Indian knowledge system; yoga and holistic health care”, followed by “essence of Indian knowledge tradition covering philosophical tradition; Indian linguistic tradition; Indian artistic tradition and case studies”.

In all, the textbook will be available to undergraduate students of engineering in institutions other than the IITs and the NITs but still covering – according to the Bhavan – “over 3,000 engineering colleges in the country”.

Although it is hard to fathom what is going on here, it is clear that the government is not allowing itself to be guided by reason. Otherwise, who would introduce a textbook that would render our graduates even more unemployable, or under-employed, than they already are? There is also a telling statement from an unnamed scholar at the Bhavan who was involved in drafting the textbook; as told to The Print: “For ages now, we have been learning how the British invented things because they ruled us for hundreds of years and wanted us to learn what they felt like. It is now high time to change those things and we hope to do that with this course”.

The words “what they felt like” indicate that the people who have enabled the drafting and introduction of this book, including elected members of Parliament, harbour a sense of disenfranchisement and now feel entitled to their due: an India made great again under the light of its ancient knowledge, as if the last 2,000 years did not happen. It also does not matter whether the facts as embodied in that knowledge can be considered at par with the methods of modern science. What matters is that the Government of India has today created an opportunity for those who were disempowered by non-Hindu forces to flourish and that they must seize it. And they have.

In other words, this is a battle for power. It is important for those trying to fight against the introduction of this textbook or whatever else to see it as such because, for example, MHRD minister Prakash Javadekar is not waiting to be told that drinking cow urine to cure cancer is pseudoscientific. It is not a communication gap; Javadekar in all likelihood is not going to drink it himself (even though he is involved in creating a platform to tell the masses that they should).

Instead, the stakeholders of this textbook are attempting to fortify a power structure that prizes the exclusion of knowledge. Knowledge is power, after all – but an epistocracy cannot replace a democracy; “ignorance doesn’t oppress in the same way that knowledge does,” to adapt the words of David Runciman. For example, the textbook repeatedly references an older text called the ‘Yantra Sarvasva’ and endeavours to establish it as a singular source of certain “facts”. And who can read this text? The upper castes.

In turn, by awarding funds and space for research to those who claim to be disseminating ancient super-awesome knowledge and shielding them from public scrutiny, the Narendra Modi government is subjecting science to power. A person who peddles a “fact” that Indians flew airplanes fuelled by donkey urine 4,000 years ago no longer need aspire to scholarly credentials; he only has to want to belong to a socio-religious grouping that wields power.

A textbook that claims India invented batteries millennia before someone in Europe did is a weapon in this movement but does not embody the movement itself. Making this textbook go away will not make future textbooks go away, and countering the government’s messaging in the language of science alone will not suffice. Good education is key, for example, and our teachers, researchers, educationists and civil society are a crucial part of the resistance. But even as they complain about rising levels of mediocrity and inefficiency, perpetrated by ceaseless administrative meddling, the government does not seek to solve the problem so much as use it as an excuse to entrench further mediocrity and discrimination.

There was no greater proof of this than when a member of the National Steering Committee constituted by the Department of Science and Technology to “validate research on panchgavya” told The Wire in 2017, “With all-round incompetence [of the Indian scientific community], this is only to be expected. … If you had 10-12 interesting and well-thought-out good national-level R&D programmes on the table, [the ‘cowpathy’] efforts will be seen to be marginal and on the fringe. But with nothing on the table, this gains prominence from the government, which will be pushing such an agenda.”

But we do have well-thought-out national-level R&D programmes. If they are not being picked up by the government, it must be forced to explain why and to justify all of its decisions, instead of being allowed to bask in the privilege of our cynicism and use the excuse of our silence to sustain its incompetence. Bharatiya Vidya Bhavan’s textbook exists in the wider political economy of banning beef, lynching Dalits, inciting riots, silencing the media and subverting the law, not in an isolated silo labelled ‘Science vs. Pseudoscience’. It is a call to action for academics and everyone else to protest the MHRD’s decision and – without stopping there – to vocally oppose all other moves by public institutions and officials to curtail our liberties.

It is also important for us to acknowledge this because we will have to redraft the terms of our victory accordingly. To extend the metaphor of a weapon: the battle can be won by taking away the opponent’s guns, but the war will be won only when the opponent finds its cause to be hopeless. We must fight the battles but we must also end the war.

The Wire
September 27, 2018

# Storm-seeker

For the last two nights, the skies of Bangalore have been opening up, as if for me. Last night, it poured rivers. The sky flashed with the kind of lightning that makes you say you’ve never seen lightning like that. The entire empyrean turns that electric pink that you know is all heat, blowing like cannons through columns of air at the speed of sound. Seconds later, you hear it building to a crescendo, the sound of a mountain coming apart – and it pours, pours, pours, pours.

The petrichor is thick in the air, clogging your senses. Its name translates from the Greek to, roughly, “the fluid in the veins of the gods, in the rocks”. Its odour is due to the presence of an alcohol, geosmin, in the soil, released by actinobacteria. We pick up on petrichor the moment it is in play because we have evolved to; we know it is going to rain when there are a few parts per trillion of geosmin in the air. A biologist will tell you it is to help you find water wherever you are. I don’t think so. I think it is to help us find the storm wherever it is. We’re storm-seekers. And why not? I stand upon this crag looking at the world above on fire, the world below underwater, and I in between heaven and hell.

It is where I have always been. Satyavrata cursed, Trishanku liberated.

# Political activation

… all forms of knowledge are implicated in political structures in one way or another. If the people who actually have expertise in that form of knowledge are not the ones activating it politically, then someone else is going to do it for them.

– Curtis Dozier, publisher of Pharos

Scientists communicating their work to the people is a way for them to take control of the narrative, to guide it the way they want it to go, the way they think it should go. But this is a small component of the larger idea of science stewardship. Without stewards – who can chaperone scientific knowledge through corridors of power as much as through the many streams of public dialogue – science, even if just in name, is going to be appropriated by “someone else” to be activated politically unto their ends. When the “someone else” is also bound to an ethno-nationalistic ideology, science is doomed.

# Board games II

My second visit to Tabletop Thursday on September 20 was super-fun again. This time I played four games: Coloretto, The Lady and the Tiger, Coup and Secret Hitler. I’m pretty sure one of the people I played the last game with, who was introduced only as Amit, was Amit Varma, the author of India Uncut, the blog that got me blogging. I didn’t get a chance to talk to him – hopefully next time!

# I don’t want your ideas

Tommaso Dorigo published a blog post on the Science 2.0 platform, where he has been publishing his writing, that I would have liked to read. It was about whether neural networks could help design particle detectors for the accelerators of the future. This is an intriguing idea, considering neural networks have been pressed into service to improve diagnostic and problem-solving tasks in various other fields, in efforts to leapfrog barriers to those fields’ expansion. And particle physics is direly in need of such efforts, given the widening gap between theoretical and experimental results.

However, I couldn’t concentrate on Dorigo’s piece because the moment I realised that he was the author (having discovered the piece through its headline), my mind was befouled by the impression I have of him as a person – which is poor. This was the result of an interaction he had had on Twitter with astrophysicist Katherine Mack last year, in which he came across – from my POV – as an insensitive and small-minded person. I had written shortly after on the basis of this interaction that as much as we need more scientific insights, they or their brilliance should not excuse troubling behaviour on the scientist’s part.

In other words, no matter how brilliant the scientist, if he is going to joke about matters no one should joke about and be simply juvenile in his conduct, then he should not be accommodated in academia – or in public discourse – without sufficient precautions to prevent him from damaging the morale of his non-male colleagues and peers. I am aware that there is no way Dorigo’s unwholesome ideas can affect my life, but at the same time I don’t want to consume what he publishes and so contribute, even passively, to the demand for his writing. This isn’t a permanent write-off: Dorigo is yet to apologise for his words (that I know of), and silent repentance is not useful for those who witnessed that very public exchange with Mack.

However, at the end of all this, there is no way for me to remove the idea of neural networks designing particle detectors from my consciousness. And given that ideas in science have to be attributed to those who originated them, I can’t explore Dorigo’s idea without reading more of Dorigo’s writing.

At this point, I am tempted to ask that publishers, distributors, aggregators and platforms – all entities that share and distribute content on various platforms and through different services – ensure that the name of the author is present and accessible in the platform/service-specific metadata. This is because more and more people are starting to have discussions about whether genius should excuse, say, misogyny and concluding that it shouldn’t. People are also becoming more conscious of whose writing they are consuming and whose they are actively avoiding for various reasons. These decisions matter, and content distributors need to assist them actively.

For example, I came upon Dorigo’s article via a Google News Alert for ‘high-energy physics’. The corresponding email alert looked like this:

The headline, the publisher’s name and the first score or so words of the article are visible in the preview Google provides. In the first item, the fact that it is also a press release is mentioned, though I am not sure if this is a regular feature. And although it is not immediately evident whether the publisher is who it says it is, Google does not mask the URL when you hover over the link; there is only a forwarding prefix (google.com/url?rct=j&sa=t&url=<link>).

I have essentially framed my argument as a contest between discovering new ideas and avoiding others. By choosing to avoid Dorigo’s writing, I am also choosing to avoid discovering the arguably unique ideas Dorigo might have – and, in the long run, giving up on all that knowledge. However, this counterargument is insular because there is a lot else to be learnt out there; there is no reason I should have to put up with someone like Dorigo. And should the question arise whether we should tolerate someone who is doing something unique while also being misogynistic, etc., the answer is still ‘no’, because nothing should excuse bad behaviour of that kind.

# ‘Gardens of the Moon’

I – and all my friends who have read the Malazan Book of the Fallen series – have wondered why the first book in the series is titled Gardens of the Moon. The only Moon-related entity in the book is Moon’s Spawn, the flying fortress of Anomander Rake’s Tiste Andii, but it doesn’t possess any gardens. In fact, the only garden that finds prominent mention in the book is the one in which a festival named Gedderone’s Fete takes place. So the title has always been confusing.

Yesterday, in the middle of my third reread of the series, I came across a curious statement in Dust of Dreams, the ninth book: that Olar Ethil, the bonecaster of the Logros T’lan Imass, is called ‘Ayala Alalle’ by the Forkrul Assail. ‘Ayala Alalle’ means ‘tender of the Gardens of the Moon’. Now, Olar Ethil is a particularly interesting character in the series: she may be the mother of Draconus’s daughters Envy and Spite, was an Azathanai who may have created the Imass, and she may be Burn the Sleeping Goddess (in keeping with author Steven Erikson’s persistent use of an unreliable narrator throughout the series). She was certainly the bonecaster who conducted the First Ritual of Tellann.

Olar Ethil, a.k.a. Ayala Alalle, as Burn is what is relevant here. The Malazan world is thought to be kept in existence by the dreaming of Burn. Should her dreams be poisoned, the Malazan world will be poisoned; should she awaken from her dream, the Malazan world will be destroyed. Now, if the person who was Ayala Alalle was also the person known as Burn, then ‘tending to the Gardens of the Moon’ may have been a reference to Burn’s tending to her dream or the subjects of her dream – i.e. in effect serving as a broad introduction to the world and peoples of the books.

I know this is tenuous, based as it is on Olar Ethil being Burn – something Erikson never confirms, not even in the first two books of the Kharkhanas Trilogy (the third is yet to be published), which discuss the Azathanai before K’rul created the Warrens. However, I’m going to go with it because Erikson does not provide any other material in Gardens of the Moon that might suggest why it is named so. All the other books in the series are named very specifically after people or events in each book.

Finally, I am going to take heart from the fact that we find out only in the series’s last book, The Crippled God, why the series is called what it is. It is just another example of Erikson being perfectly okay with explaining things as and when he pleases and not when he thinks the reader ought to know.