Cat stripes and folk tales

The New York Times published an article on September 7, 2021, entitled ‘How the Cat Gets Its Stripes: It’s Genetic, Not a Folk Tale’. The article, written by James Gorman, explains how a team of scientists found that a simple genetic mechanism, involving a protein that affects embryonic tissue and a gene that inhibits the protein, could produce a variety of patterns on cats’ coats.

Gorman begins his article by invoking the fact that “folklore is full of stories about the coat patterns of cats: How the tiger got its stripes. How the leopard got its spots. And scientists ask the same questions…”. Gorman subsequently segues into the broader questions of morphogenesis, the process by which different body parts develop their shapes, and doesn’t touch on folklore again for the rest of the article.

Why then does the headline pay so much attention to folk tales? (By virtue of being included in the headline, the newspaper indicates that the reader is to expect that that is a big part of the story. Ideas or concepts that a reporter includes in throwaway fashion typically don’t find mention in headlines.) I found the headline jarring because it evoked, deliberately or otherwise, a habit among hardcore rationalists and anti-superstition activists of dismissing the folk tales and mythologies that underlie superstitious beliefs as not just unreal but also invalid.

Such dismissal often takes the form of cosmetic ridicule: wait for the next solar eclipse, tune into any Tamil news channel – or any local-language news channel – and look out for members of a local ‘rationalists’ group sitting outside and having a meal, to counter a common belief that eating during a solar eclipse is harmful. I find, based on how my extended family and their friends react to such images, that the ‘rationalists’ simply wish to prove that ‘they’re right and those who maintain such beliefs are wrong’ – or at least that’s how they’re perceived.

(Note: members of my extended family are entirely upper-caste, upper-class and have at least an undergraduate degree. I’m extrapolating their reactions to everyone else.)

This attitude would miss an important point, one that stems from the answer to the following question: why is beating back superstitious beliefs considered socially and morally desirable? Because superstition discourages its believers from accessing educational and health-related resources, which in turn harms society as a whole, and because, given the way democratic power has been structured by society and the state, superstition can disempower the believers.

However, this requires us to separate stories that we believe to be real from stories that we believe as stories, and not as history. And, to me at least, this distinction is useful because, unlike ‘rationalists’, we’re not trying to erase the stories (simply because the scientific method can’t validate them); we’re trying to say, ‘They’re important stories in your culture, but they didn’t actually happen’ – or even ‘They’re important stories in your culture, but science is another way to understand the world’. These are harder strategies to adopt vis-à-vis achieving certain outcomes, and require sustained, broad-based commitment from society as well as the government – but they’re less violent because they don’t intend to dismantle entire belief systems to allow science to take that place.

But sadly, pop rationalism – and pop scientism – is easy to practice, to the point where the agenda of ‘rationalists’, those on local TV news as much as white libertarians in the US, seems to be to replace one poorly understood but disempowering belief system with another poorly understood but disempowering belief system that works to their advantage. Importantly, we must acknowledge that science can disempower, too; to quote from an older post:

[Ambedkar’s and Periyar’s] ire was essentially directed against autocratic personal governance that expected obedience through faith. In India, unless you’re a scientist and/or have received a good education, and can read English well enough to access the popular and, if need be, the technical literature, science is also reduced to a system founded on received knowledge and ultimately faith. There is a hegemony of science as well. Beyond the mythos of its own cosmology … there is also the matter of who controls knowledge production and utilisation.

(Another related issue is that many, if not most, members of the scientific community consider the harbouring of pseudoscientific beliefs to be implicitly wrong and implicitly deserving of excision. Many of us ‘educated’ ones have also taken for granted the political context that science often needs – typically, through the state’s support and endorsement – for the statement ‘pseudoscience is bad’ to be true. Pseudoscience can be wrong, as ‘rationalists’ often offer to check and prove, but why is it so eminently bad? I can think of one context in which it is certainly irrelevant.)

Ultimately, the public project must be to empower those we believe to be disempowered by creating opportunities for them to participate on their terms – instead of attempting to assimilate them into the hive-mind, to the point where they shed their cultural values, traditions and histories and adopt the ways of science, the ‘one true master’. If I could, I’d change the headline of Gorman’s article to ‘How Do Cats Get Their Stripes? Their Genes Have the Answer.’ or something like that.

The problem with rooting for science

The idea that trusting in science involves a lot of faith, instead of reason, is lost on most people. More often than not, as a science journalist, I encounter faith through extreme examples – such as the Bloch sphere (used to represent the state of a qubit) or wave functions (‘mathematical objects’ used to understand the evolution of certain simple quantum systems). These and other similar concepts require years of training in physics and mathematics to understand. At the same time, science writers are often confronted with the challenge of making these concepts sensible to an audience that seldom has this training.

More importantly, how are science writers to understand them? They don’t. Instead, they implicitly trust scientists they’re talking to to make sense. If I know that a black hole curves spacetime to such an extent that pairs of virtual particles created near its surface are torn apart – one particle entering the black hole never to exit and the other sent off into space – it’s not because I’m familiar with the work of Stephen Hawking. It’s because I read his books, read some blogs and scientific papers, spoke to physicists, and decided to trust them all. Every science journalist, in fact, has a set of sources they’re likely to trust over others. I even place my faith in some people over others, based on factors like personal character, past record, transparency, reflexivity, etc., so that what they produce I take only with the smallest pinch of salt, and build on their findings to develop my own. And this way, I’m already creating an interface between science and society – by matching scientific knowledge with the socially developed markers of reliability.

I choose to trust those people, processes and institutions that display these markers. I call this an act of faith for two reasons: 1) it’s an empirical method, so to speak; there is no proof in theory that such ‘matching’ will always work; and 2) I believe it’s instructive to think of this relationship as being mediated by faith if only to highlight how starkly it stands opposed to reason. Most of us understand science through faith, not reason. Even scientists who are experts on one thing take the word of scientists on completely different things, instead of trying to study those things themselves (see ad verecundiam fallacy).

Sometimes, such faith is (mostly) harmless, such as in the ‘extreme’ cases of the Bloch sphere and the wave function. It is both inexact and incomplete to think that quantum superposition means an object is in two states at once. The human brain hasn’t evolved to comprehend superposition exactly; this is why physicists use the language of mathematics to make sense of this strange existential phenomenon. The problem – i.e. the inexactitude and the incompleteness – arises when a communicator translates the mathematics to a metaphor. Equally importantly, physicists are describing whereas the rest of us are thinking. There is a crucial difference between these activities that illustrates, among other things, the fundamental incompatibility between scientific research and science communication that communicators must first surmount.

As physicists over the past three or four centuries have relied increasingly on mathematics rather than the word to describe the world, physics, like mathematics itself, has made a “retreat from the word,” as literary scholar George Steiner put it. In a 1961 Kenyon Review article, Steiner wrote, “It is, on the whole, true to say that until the seventeenth century the predominant bias and content of the natural sciences were descriptive.” Mathematics used to be “anchored to the material conditions of experience,” and so was largely susceptible to being expressed in ordinary language. But this changed with the advances of modern mathematicians such as Descartes, Newton, and Leibniz, whose work in geometry, algebra, and calculus helped to distance mathematical notation from ordinary language, such that the history of how mathematics is expressed has become “one of progressive untranslatability.” It is easier to translate between Chinese and English — both express human experience, the vast majority of which is shared — than it is to translate advanced mathematics into a spoken language, because the world that mathematics expresses is theoretical and for the most part not available to our lived experience.

Samuel Matlack, ‘Quantum Poetics’, The New Atlantis, 2017

However, the faith becomes more harmful the further we move away from the ‘extreme’ examples – of things we’re unlikely to stumble on in our daily lives – and towards more commonplace ideas, such as ‘how vaccines work’ or ‘why GM foods are not inherently bad’. The harm emerges from the assumption that we think we know something when in fact we’re in denial about how it is that we know that thing. Many of us think it’s reason; most of the time it’s faith. Remember when, in Friends, Monica Geller and Chandler Bing ask David the Scientist Guy how airplanes fly, and David says it has to do with Bernoulli’s principle and Newton’s third law? Monica then turns to Chandler with a knowing look and says, “See?!” To which Chandler says, “Yeah, that’s the same as ‘it has something to do with wind’!”

The harm is to root for science, to endorse the scientific enterprise and vest our faith in its fruits, without really understanding how these fruits are produced. Such understanding is important for two reasons.

First, if we merely trust scientists – instead of presuming to know, or actually knowing, that we can vouch for their work – it would be vacuous to claim science is superior in any way to another enterprise that demands our faith, since science itself also receives our faith. Perhaps more fundamentally, we like to believe that science is trustworthy because it is evidence-based and it is tested – but the COVID-19 pandemic should have clarified, if it hasn’t already, the continuous (as opposed to discrete) nature of scientific evidence, especially if we also acknowledge that scientific progress is almost always incremental. Evidence can be singular and thus clear – like a new avian species, graphene layers superconducting electrons or tuned lasers cooling down atoms – or it can be necessary but insufficient, and therefore on a slippery slope – such as repeated genetic components in viral RNA, a cigar-shaped asteroid or water shortage in the time of climate change.

Physicists working with giant machines to spot new particles and reactions – all of which are detected indirectly, through their imprints on other well-understood phenomena – have two important thresholds for the reliability of their findings: if the chance of X (say, “spotting a particle of energy 100 GeV”) being false is 0.27%, it’s good enough to be evidence; if the chance of X being false is 0.00006%, then it’s a discovery (i.e., “we have found the particle”). But at what point can we be sure that we’ve indeed found the particle we were looking for if the chance of being false will never reach 0%? One way, for physicists specifically, is to combine the experiment’s results with what they expect to happen according to theory; if the two match, it’s okay to think that even a less reliable result will likely be borne out. Another possibility (in the line of Karl Popper’s philosophy) is that a result expected to be true, and is subsequently found to be true, is true until we have evidence to the contrary. But as suitable as this answer may be, it still doesn’t neatly fit the binary ‘yes’/’no’ we’re used to, and which we often expect from scientific endeavours as well (see experience v. reality).
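The two thresholds in the paragraph above are the particle physics community’s conventional 3-sigma (‘evidence’) and 5-sigma (‘discovery’) levels; the quoted percentages follow directly from the normal distribution. A minimal sketch of that arithmetic, using only Python’s standard library:

```python
# The 'evidence' and 'discovery' thresholds correspond to the conventional
# 3-sigma and 5-sigma levels. The two-tailed probability of a normally
# distributed fluctuation at least n sigma in size is erfc(n / sqrt(2)).
from math import erfc, sqrt

def false_alarm_probability(n_sigma: float) -> float:
    """Two-tailed p-value for an n-sigma deviation of a Gaussian."""
    return erfc(n_sigma / sqrt(2))

for n, label in [(3, "evidence"), (5, "discovery")]:
    p = false_alarm_probability(n)
    print(f"{n} sigma ({label}): {p * 100:.5f}% chance of a fluke")
    # 3 sigma -> ~0.27%; 5 sigma -> ~0.00006%
```

Note that even the 5-sigma probability never reaches exactly zero, which is the point the paragraph above makes about the binary ‘yes’/‘no’ we expect but never quite get.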

(Minor detour: While rational solutions are ideally refutable, faith-based solutions are not. Instead, the simplest way to reject their validity is to use extra-scientific methods, and more broadly deny them power. For example, if two people were offering me drugs to suppress the pain of a headache, I would trust the one who has a state-sanctioned license to practice medicine and is likely to lose that license, even temporarily, if his prescription is found to have been mistaken – that is, by asserting the doctor as the subject of democratic power. Axiomatically, if I know that Crocin helps manage headaches, it’s because, first, I trusted the doctor who prescribed it and, second, Crocin has helped me multiple times before, so empirical experience is on my side.)

Second, if we don’t know how science works, we become vulnerable to believing pseudoscience to be science as long as the two share some superficial characteristics, like, say, the presence and frequency of jargon or a claim’s originator being affiliated with a ‘top’ institute. The authors of a scientific paper to be published in a forthcoming edition of the Journal of Experimental Social Psychology write:

We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.

(Caveats: 1. We could apply the point of this post to this study itself; 2. I haven’t checked the study’s methods and results with an independent expert, and I’m also mindful that this is psychology research and that its conclusions should be taken with salt until independent scientists have successfully replicated them.)

Later from the same paper:

Our four experiments and meta-analysis demonstrated that people, and in particular people with higher trust in science (Experiments 1-3), are vulnerable to misinformation that contains pseudoscientific content. Among participants who reported high trust in science, the mere presence of scientific labels in the article facilitated belief in the misinformation and increased the probability of dissemination. Thus, this research highlights that trust in science ironically increases vulnerability to pseudoscience, a finding that conflicts with campaigns that promote broad trust in science as an antidote to misinformation but does not conflict with efforts to install trust in conclusions about the specific science about COVID-19 or climate change.

In terms of the process, the findings of Experiments 1-3 may reflect a form of heuristic processing. Complex topics such as the origins of a virus or potential harms of GMOs to human health include information that is difficult for a lay audience to comprehend, and requires acquiring background knowledge when reading news. For most participants, seeing scientists as the source of the information may act as an expertise cue in some conditions, although source cues are well known to also be processed systematically. However, when participants have higher levels of methodological literacy, they may be more able to bring relevant knowledge to bear and scrutinise the misinformation. The consistent negative association between methodological literacy and both belief and dissemination across Experiments 1-3 suggests that one antidote to the influence of pseudoscience is methodological literacy. The meta-analysis supports this.

So rooting for science per se is not just not enough, it could be harmful vis-à-vis the public support for science itself. For example (and without taking names), in response to right-wing propaganda related to India’s COVID-19 epidemic, quite a few videos produced by YouTube ‘stars’ have advanced dubious claims. They’re not dubious at first glance, if only because they purport to counter pseudoscientific claims with scientific knowledge, but they are – either for insisting on a measure of certainty in the results that neither exists nor is achievable, or for making pseudoscientific claims of their own, just wrapped up in technical lingo so they’re more palatable to those supporting science over critical thinking. Some of these YouTubers, and in fact writers, podcasters, etc., are even blissfully unaware of how wrong they often are. (At least one of them was also reluctant to edit a ‘finished’ video to make it less sensational despite repeated requests.)

Now, where do these ideas leave (other) science communicators? In attempting to bridge a nearly unbridgeable gap, are we doomed to swing only between most and least unsuccessful? I personally think that this problem, such as it is, is comparable to Zeno’s arrow paradox. To use Wikipedia’s words:

He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.

To ‘break’ the paradox, we need to identify and discard one or more primitive assumptions. In the arrow paradox, for example, one could argue that time is not composed of a stream of “duration-less” instants, that each instant – no matter how small – encompasses a vanishingly short but not nonexistent passage of time. With popular science communication (in the limited context of translating something that is untranslatable sans inexactitude and/or incompleteness), I’d contend the following:

  • Awareness: ‘Knowing’ and ‘knowing of’ are significantly different and, I hope, self-explanatory. Example: I’m not fluent in the physics of cryogenic engines, but I’m aware that they’re desirable because liquefied hydrogen has the highest specific impulse of all rocket fuels.
  • Context: As I’ve written before, a unit of scientific knowledge that exists in relation to other units of scientific knowledge is a different object from the same unit of scientific knowledge existing in relation to society.
  • Abstraction: 1. perfect can be the enemy of the good, and imperfect knowledge of an object – especially a complicated compound one – can still be useful; 2. when multiple components come together to form a larger entity, the entity can exhibit some emergent properties that one can’t derive entirely from the properties of the individual components. Example: one doesn’t have to understand semiconductor physics to understand what a computer does.

An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. (Sometimes I browse math articles in Wikipedia simply to immerse myself in their majestic incomprehensibility, like visiting a foreign planet.)

Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

Adam Kirsch, ‘The Ontology of Pop Physics’, Tablet Magazine, 2020

But by offering these reasons, I don’t intend to over-qualify science communication – i.e. claim that, given enough time and/or other resources, a suitably skilled science communicator will be able to produce a non-mathematical description of, say, quantum superposition that is comprehensible, exact and complete. Instead, it may be useful for communicators to acknowledge that there is an immutable gap between common English and mathematics (the language of modern science), beyond which scientific expertise is unavoidable – in much the same way communicators must insist that the farther the expert strays into the realm of communication, the closer they’re bound to get to a boundary beyond which they must defer to the communicator.

Pseudoscientific materials and thermoeconomics

The Shycocan Corp. took out a full-page jacket ad in the Times of India on June 22 – the same day The Telegraph (UK) had a story about GBP 2,900 handbags by Gucci that exist only online, in some videogame. The Shycocan product’s science is questionable, at best, though its manufacturers have disagreed vehemently with this assessment. (Anusha Krishnan wrote a fantastic article for The Wire Science on this topic). The Gucci ‘product’ is capitalism redigesting its own bile, I suppose – a way to create value out of thin air. This is neither new nor particularly exotic: I have paid not inconsiderable sums of money in the past for perks inside videogames, often after paying for the games themselves. But thinking about both products led me to a topic called thermoeconomics.

This may be too fine a point, but the consumerism implicit in the pixel-handbags as well as in Shycocan and other medical devices of unproven efficacy has a significant thermodynamic cost. While pixel-handbags may represent a minor offense, so to speak, in the larger scheme of things, their close cousins, the non-fungible tokens (NFTs) of the cryptocurrency universe, are egregiously energy-intensive. (More on this here.) NFTs represent an extreme case of converting energy into monetary value, bringing into sharp focus the relationships between economics and thermodynamics that we often ignore because they are too muted.

Free energy, entropy and information are three of the many significant concepts at the intersection of economics and thermodynamics. Free energy is the energy available to perform useful work. Entropy is a measure of disorder – of energy that is no longer available to perform useful work. Information, a form of negative entropy, and the other two concepts taken together are better illustrated by the following excerpt, from this paper:

Consider, as an example, the process of converting a set of raw materials, such as iron ore, coke, limestone and so forth, into a finished product—a piece of machinery of some kind. At each stage the organization (information content) of the materials embodied in the product is increased (the entropy is decreased), while global entropy is increased through the production of waste materials and heat. For example:

Extraction activities start with the mining of ores, followed by concentration or beneficiation. All of these steps increase local order in the material being processed, but only by using (dissipating) large quantities of available work derived from burning fuel, wearing out machines and discarding gangue and tailings.

Metallurgical reduction processes mostly involve the endothermic chemical reactions to separate minerals into the desired element and unwanted impurities such as slag, CO2 and sulfur oxides. Again, available work in the form of coal, oil or natural gas is used up to a much greater extent than is embodied in metal, and there is a physical wear and tear on machines, furnaces and so forth, which must be discarded eventually.

Petroleum refining involves fractionating the crude oil, cracking heavier fractions, and polymerizing, alkylating or reforming lighter ones. These processes require available work, typically 10% or so of the heating value of the petroleum itself. Petrochemical feedstocks such as olefins or alcohols are obtained by means of further endothermic conversion processes. Inorganic chemical processes begin by endothermic reduction of commonplace salts such as chlorides, fluorides or carbonates into their components. Again, available work (from electricity or fuel) is dissipated in each step.

Fabrication involves the forming of materials into parts with desirable forms and shapes. The information content, or orderliness, of the product is increased, but only by further expending available work.

Assembly and construction involves the linking of components into complex subsystems and systems. The orderliness of the product continues to increase, but still more available work is used up in the processes. The simultaneous buildup of local order and global entropy during a materials processing sequence is illustrated in figure 4. Some, but not all of the orderliness of the manufactured product is recoverable as thermodynamically available work: Plastic or paper products, for example, can be burned as fuel in a boiler to recover their residual heating value and convert some of that to work again. Using scrap instead of iron ore in the manufacture of steel or recycled aluminum instead of bauxite makes use of some of the work expended in the initial refining of the ore.
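The trade-off the excerpt describes – local order rising while global entropy rises faster – is the second law of thermodynamics applied to an open subsystem. In symbols (my gloss, not the paper’s notation):

```latex
% Second law for a product being ordered within larger surroundings:
\Delta S_{\text{total}} = \Delta S_{\text{product}} + \Delta S_{\text{surroundings}} \geq 0
% Any local ordering (\Delta S_{\text{product}} < 0) must therefore be paid for
% by dissipating at least
W_{\min} = T \, \lvert \Delta S_{\text{product}} \rvert
% of available work into the surroundings at temperature T -- which is why
% every stage of the chain in the excerpt 'uses up' available work.
```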

Some years ago, I read an article about a debate between a physicist and an economist; I’m unable to find the link now. The physicist says infinite economic growth is impossible because the laws of thermodynamics forbid it. Eventually, we will run out of free energy and entropy will become more abundant, and creating new objects will exact very high, and increasing, resource costs. The economist counters that what a person values doesn’t have to be encoded as objects – that older things can re-acquire new value or become more valuable, or that we will be able to develop virtual objects whose value doesn’t incur the same costs that their physical counterparts do.

This in turn recalls the concept of eco-economic decoupling – the idea that we can continue and/or expand economic activity without increasing environmental stresses and pollution at the same time. Is this possible? Are we en route to achieving it?

The Solar System – taken to be the limit of Earth’s extended neighbourhood – is very large but still finite, and the laws of thermodynamics stipulate that it can thus contain a finite amount of energy. What is the maximum number of dollars we can extract through economic activities using this energy? A pro-consumerist brigade believes absolute eco-economic decoupling is possible; at least one of its subscribers, a Michael Liebreich, has written that in fact infinite growth is possible. But NFTs suggest we are not at all moving in the right direction – nor does any product that extracts a significant thermodynamic cost with incommensurate returns (and not just economic ones). Pseudoscientific hardware – by which I mean machines and devices that claim to do something but have no evidence to show for it – belongs in the same category.

This may not be a productive way to think of problematic entities right now, but it is still interesting to consider that, given we have a finite amount of free energy, and that increasing the efficiency with which we use it is closely tied to humankind’s climate crisis, pseudoscientific hardware can be said to have a climate cost. In fact, the extant severity of the climate crisis already means that even if we had an infinite amount of free energy, thermodynamic efficiency is more important right now. I already think of flygskam in this way, for example: airplane travel is not pseudoscientific, but it can be irrational given its significant carbon footprint, and the privileged among us need to undertake it only with good reason. (I don’t agree with the idea the way Greta Thunberg does, but that’s a different article.)

To quote physicist Tom Murphy:

Let me restate that important point. No matter what the technology, a sustained 2.3% energy growth rate would require us to produce as much energy as the entire sun within 1400 years. A word of warning: that power plant is going to run a little warm. Thermodynamics require that if we generated sun-comparable power on Earth, the surface of the Earth—being smaller than that of the sun—would have to be hotter than the surface of the sun! …

The purpose of this exploration is to point out the absurdity that results from the assumption that we can continue growing our use of energy—even if doing so more modestly than the last 350 years have seen. This analysis is an easy target for criticism, given the tunnel-vision of its premise. I would enjoy shredding it myself. Chiefly, continued energy growth will likely be unnecessary if the human population stabilizes. At least the 2.9% energy growth rate we have experienced should ease off as the world saturates with people. But let’s not overlook the key point: continued growth in energy use becomes physically impossible within conceivable timeframes. The foregoing analysis offers a cute way to demonstrate this point. I have found it to be a compelling argument that snaps people into appreciating the genuine limits to indefinite growth.
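Murphy’s ‘1400 years’ figure is easy to reproduce. A small sketch, assuming round figures of my own choosing (not necessarily Murphy’s exact inputs): a present-day world power consumption of about 12 terawatts and a solar luminosity of about 3.85 × 10²⁶ W:

```python
# How long does 2.3% annual growth in energy use take to go from
# today's consumption to the sun's entire output? (Both starting
# figures below are rough, assumed values.)
from math import log

world_power = 12e12          # watts; present-day consumption (assumed)
solar_luminosity = 3.85e26   # watts; the sun's total output (rounded)
growth_rate = 0.023          # 2.3% per year

years = log(solar_luminosity / world_power) / log(1 + growth_rate)
print(f"about {years:.0f} years")  # on the order of 1,400 years
```

The exact answer shifts with the assumed starting power, but only logarithmically – which is Murphy’s point: no plausible starting figure pushes the absurdity more than a few centuries away.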

And … And Then There’s Physics:

As I understand it, we can’t have economic activity that simply doesn’t have any impact on the environment, but we can choose to commit resources to minimising this impact (i.e., use some of the available energy to avoid increasing entropy, as Liebreich suggests). However, this would seem to have a cost and it seems to me that we mostly spend our time convincing ourselves that we shouldn’t yet pay this cost, or shouldn’t pay too much now because people in the future will be richer. So, my issue isn’t that I think we can’t continue to grow our economies while decoupling economic activity from environmental impact, I just think that we won’t.

A final point: information is considered negative entropy because it describes certainty – something we know that allows us to organise materials in such a way as to minimise disorder. However, what we consider to be useful information, thanks to capitalism, nationalism (it is not for nothing that Shycocan’s front-page ad ends with a “Jai Hind”), etc., has become all wonky, and all forms of commercialised pseudoscience are good examples of this.

‘Science people’

Two of the most annoying kinds of ‘science people’ I’ve come across on social media of late:

  • Those who perform rationalism – These people seem to know a small subset of things well and take the rest on faith, yet claim that “science can explain everything” without being able to explain it themselves. Champions of science’s right to explanation, typically to the exclusion of social and cultural influences and to the rejection of faith/religion. Often woke-types found explaining “science” they read in some paper and more often than not (and inadvertently) advancing scientistic positions.
  • Vocational practitioners of science – These people seem to know a small subset of things well but are unable to apply the fundamentals of what they’ve learnt to other topics, typically to the effect that we have well-educated people openly wondering whether vaccines cause disease or whether China created the virus. Often engineers of some sort, probably because of the environments of entitlement in which they’re trained and subsequently employed, and frequently centrists.

Of course, a trait that partly defines these two groups is also a strong confounding factor: these are often the loudest people on social media – so they get noticed more, while the quieter but likely more sensible people are noticed less, leading to inchoate observations like this one. Even so, these two groups remain the most annoying.

Poverty, psychology and pseudoscience

From the abstract of ‘Why Do People Stay Poor? Evidence on Poverty Traps from Rural Bangladesh’, November 24, 2020:

There are two broad views as to why people stay poor. One emphasizes differences in fundamentals, such as ability, talent or motivation. The other, poverty traps view, differences in opportunities stemming from differences in wealth. We exploit a large-scale, randomized asset transfer and panel data on 6000 households over an 11 year period to test between these two views. The data supports the poverty traps view — we identify a threshold level of initial assets above which households accumulate assets, take on better occupations and grow out of poverty. The reverse happens for those below the threshold.
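The threshold dynamic the abstract describes can be sketched with a toy model. All numbers below are mine, purely illustrative, and not drawn from the paper – the only point is the mechanism: when returns to assets are S-shaped and a fixed subsistence cost must be paid each period, households starting just above a threshold accumulate while those just below collapse towards zero.

```python
# Toy poverty-trap model (illustrative, not from the paper).
# Next period's assets = current assets + S-shaped returns - subsistence.

def next_assets(a, subsistence=1.0):
    # S-shaped returns: low assets earn low returns; higher assets
    # unlock better occupations with higher returns.
    returns = 2.5 * a**2 / (1.0 + a**2)
    return max(a + returns - subsistence, 0.0)

def simulate(a0, years=25):
    # Iterate the asset dynamic for a number of periods.
    a = a0
    for _ in range(years):
        a = next_assets(a)
    return a

# With these numbers the unstable threshold sits near a ≈ 0.82:
poor = simulate(0.5)   # starts below the threshold, collapses to zero
rich = simulate(1.2)   # starts above the threshold, accumulates assets
```

Two households with nearly identical “fundamentals” thus end up on opposite trajectories purely because of where they started relative to the threshold – which is the poverty-traps view in miniature.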

In the worldview this ‘condition’ imposes on people, it’s tempting to see justification for the existence of pseudoscientific enterprises like astrology. Actually, a faith-based binary (‘requiring faith’ v. ‘not requiring faith’) may be more appropriate here than a science-based one (‘scientific’ v. ‘unscientific’), if only to emphasise the presence of faith over the absence of scientific reasoning. That is, while I can’t ascertain a causal relationship between conditions like the poverty trap and opaque practices like astrology, there’s enough of a correlation to understand astrology et al. as the means by which people rationalise their shared predicament – a predicament that refuses to be allayed by their own efforts.

For example, astrology could provide social, mental and moral incentives for individuals to believe – without having to know – that they were denied opportunities because ‘their time isn’t right’ and/or that their luck will continue to fail them, while social realities, rather than the alignment of their stars, ensure this is true in some measure. Such faith could also subdue or redirect individuals’ anger, or their sense of being wronged, at forces beyond their control, creating ground for social conditions that tolerate oppression more than they ought to.

Another observation this paper brings to mind is from the work of Sendhil Mullainathan, among others. Researchers from various fields have reported differences in the way poor people make decisions, compared to those who aren’t poor – as if they were less intelligent. However, this perception arises from a sort of cognitive John-Henryism: that is, just as disadvantaged members of society – like Black people in the US – can incur a physical toll imposed by the need to fight for their rights, poor people incur a cognitive toll brought on by the limited availability of resources and the short-lived nature of good fortune.

This doesn’t mean poor people become or are less intelligent, or anything nonsensical like that. Instead, it means poor people’s priorities are different – for example, the need for discounts, and to maximise absolute rather than percentage savings – in a way that those who aren’t poor may not find optimal for their own needs, and that more tasks compete for their attention when they are short on the resources required to execute them all. As Alice Walton wrote for the Chicago Booth Review in 2018,

In the Wheel of Fortune–style game, the researchers [including Mullainathan] measured how cognitively fatigued the players became. Logic would predict that rich players would be more fatigued, since they were allowed more turns to make more guesses. Instead, the researchers observed that poor players, having received fewer tries to guess at the answers, were more fatigued, having put more effort into each guess.

In an Angry Birds–style game in which people tried to shoot targets, rich players were given more chances to train a virtual slingshot on a target. Poor players, given fewer attempts, spent longer lining up their shots, and many scored more points per shot than rich players. For all the extra shots rich players had, they didn’t do as well, proportionally. “It seems that to understand the psychology of scarcity, we must also appreciate the psychology of abundance. If scarcity can engage us too much, abundance might engage us too little,” the researchers write.

This toll subsequently compromises future choices, and effectively installs another barrier, or trap, in front of people trying to go from being poor in one resource – money, in poverty’s case – to being rich. Walton offers a few examples of policymakers building on these findings to devise better schemes and improve uptake.

In India, where sugarcane farmers are paid annually after the harvest, farmers’ attention scores were the equivalent of 10 IQ points higher than just before the harvest, when farmers were relatively poor, according to data from the 2013 Science study

Offering subsidies or other incentives when people are more receptive to and have the spare capacity to consider them, such as after a harvest or a payday, may make a difference over the long run. One effort, in Tanzania, asked people to sign up for health insurance at cashpoint locations right after payday, and the timing led to a 20 percentage point increase in health-insurance use.

Introducing cognitive aids can help address the limited capacity for attention that may constrain people in poverty. In one study, it helped to show farmers research regarding the most productive ways to plant their crops. When poor, stressed, and in a scarcity mind-set, farmers had a harder time taking in the information. “This result has nothing to do with the intelligence of the farmers,” writes Bryan’s team. “A fact is only obvious if the observer has the spare attentional capacity to notice it.”

I wonder if the converse could also be true: that when homeopaths, phytotherapists, many Ayurveda practitioners and other quack healers offer dubious ways out of difficult healthcare situations, people who are short on attentional space could be likelier to buy into them in order to free up space for other tasks. If so, governments and activists may also need to consider fighting superstition and pseudoscience in healthcare by ensuring more legitimate outcomes – like visiting the local clinic or being able to procure a given drug – require as little cognitive bandwidth as possible.

“Enough science.”

Edit, 6.04 pm, December 15, 2020: A reader pointed out to me that The Guardian may in fact have been joking; it has been known to be flippant on occasion. If this is really the case, I pronounce myself half-embarrassed for having been unable to spot a joke. But only half – because it seems like a terrible joke, considering how proximate the real and the surreal have increasingly been, and because I still suspect it isn’t a joke. The astrologer in question is real, so to speak, and I doubt The Guardian wishes to ridicule her so.

From ‘How to watch the Jupiter and Saturn ‘great conjunction’ of 2020’, The Guardian, December 15, 2020:

I don’t know why The Guardian would print something like this. Beyond the shock of finding astrology – especially non-self-deprecating astrology – in the science section, it is outright bizarre for a question in an FAQ in this section to begin with the words ‘Enough science’.

To my mind, The Guardian seems guilty of indulging in the false balance that science and astrology are equally relevant and useful – much the same way the New York Times deemed in 2020 that Democrats and Republicans in the US made equal amounts of sense – by failing to find the courage to recognise that one side just wants to be stupid and/or reckless.

But while the New York Times did it for some principle it later discovered might have been wrong, what might The Guardian’s excuse be? Revenue? Not only has the astrologer taken this great opportunity to claim that there are bound to be astrological implications for everything, the astrology being quoted has also been accommodated under a question that suggests science and astrology are on equally legitimate footing.

This view harms science in the well-known way, by empowering astrologers and in turn disempowering the tenets of reason and falsifiability – and in a less-known way, by casting science in opposition to astrology instead of broaching the idea that science in fact complements the arts and the humanities. Put differently, the question consigns science to being an oppositional, confrontational, negatory entity instead of allowing it a more amicable identity, as a human enterprise capable of coexisting with many other human enterprises.

For example, why couldn’t the question have been: “With the science, what opportunities might I have as a photographer?”, “With the science, what opportunities might I have as a poet seeking inspiration?” or even “Enough science. Break out the history.” In fact, if with its dogmatism astrology discourages deliberative decision-making and with its determinism suppresses any motivation one might have to remake one’s fate, it stands truly apart from the other things humans do that might serve to uplift them, and make them a better people. It is hard to imagine there is a reason here to celebrate astrology – except capital.

If revenue was really the reason The Guardian printed the astrology question, I admit none of these alternatives would make sense, because there is no money in the arts and the humanities. I hope the newspaper will explain why this happened, and in the meantime, I think we could consider this a teaching moment on the fleeting yet consequential ways in which capital can shape the public understanding of science.

Ayurveda is not a science – but what does that mean?

This post has benefited immensely from inputs from Om Prasad.

Calling something ‘not a science’ has become a pejorative, an insult. You say Ayurveda is not a science and suddenly, its loudest supporters demand to know what the problem is, what your problem is, and that you can go fuck yourself.

But Ayurveda is not a science.

First, science itself didn’t exist when Ayurveda was born (whenever that was, though I’m assuming at least a millennium ago), and the two were outcomes of different perceived needs. So claiming ‘Ayurveda is a science’ makes little sense. You could counter that 5 didn’t stop being a number just because the number line came much later – but that wouldn’t make sense either, because the relationship between 5 and the number line is nothing like the relationship between science and Ayurveda.

It’s more like claiming Carl Linnaeus’s choice of topics to study was normal: it wouldn’t at all be normal today but in his time and his particular circumstances, they were considered acceptable. Similarly, Ayurveda was the product of a different time, technologies and social needs. Transplanting it without ‘updating’ it in any way is obviously going to make it seem inchoate, stunted. At the same time, ‘updating’ it may not be so productive either.

Claiming ‘Ayurveda is a science’ is to assert two things: that science is a qualifier of systems, and that Ayurveda once qualified by science’s methods becomes a science. But neither is true for the same reason: if you want one of them to be like the other, it becomes the other. They are two distinct ways of organising knowledge and making predictions about natural processes, and which grew to assume their most mature forms along different historical trajectories. Part of science’s vaunted stature in society today is that it is an important qualifier of knowledge, but it isn’t of knowledge systems. This is ultimately why Ayurveda and science are simply incompatible.

One of them has become less effective and less popular over time – which should be expected because human technologies and geopolitical and social boundaries have changed dramatically – while the other is relatively more adolescent, more multidisciplinary (with the right opportunities) and more resource-intensive – which should be expected because science, engineering, capitalism and industrialism rapidly co-evolved in the last 150 years.

Second, ‘Ayurveda is a science’ is a curious statement because those who utter it typically wish to elevate it to the status science enjoys and at the same time wish to supplant answers that modern science has provided to some questions with answers by Ayurveda. Of course, I’m speaking about the average bhakt here – more specifically a Bharatiya Janata Party supporter seemingly sick of non-Indian, especially Western, influences on Indian industry, politics, culture (loosely defined) and the Indian identity itself, and who may be actively seeking homegrown substitutes. However, their desire to validate Ayurveda according to the practices of modern science is really an admission that modern science is superior to Ayurveda despite all their objections to it.

The bhakt’s indignation when confronted with the line that ‘Ayurveda is not a science’ is possibly rooted in the impression that ‘science’ is a status signal – a label attached to a collection of precepts capable of together solving particular problems, irrespective of more fundamental philosophical requirements. However, the only science we know of is the modern one – and to the bhakt the ‘Western’ one, both in provenance and in its ongoing administration – and here the label and the thing it applies to are convergent.

There is no other way of doing science; there is no science with a different set of methods that claims to arrive at the same or ‘better’ scientific truths. (I’m curious at this point whether, assuming a Kuhnian view, science itself is unfalsifiable, since it attributes inconsistencies in its constituent claims to extra-scientific causes rather than to flaws in its methods themselves – so that science as a system can reach wrong conclusions from time to time but still be valid at all times.)

It wouldn’t be remiss to say modern science, thus science itself, is to the nationalistic bhakt as Ayurveda is to the nationalistic far-right American: a foreign way of doing things that must be resisted, and substituted with the ‘native’ way, however that nativity is defined. It’s just that science, specifically allopathy, is more in favour today because, aside from its own efficacy (a necessary but not sufficient condition), all the things it needs to work – drug discovery processes, manufacturing, logistics and distribution, well-trained health workers, medical research, a profitable publishing industry, etc. – are modelled on institutions and political economies exported by the West and embedded around the world through colonial and imperial conquests.

Third: I suspect a part of why saying ‘Ayurveda is not a science’ is hurtful is that Indian society at large has come to privilege science over other disciplines, especially the social sciences. I know too many people who associate the work of many of India’s scientists with objectivity, a moral or political nowhereness*, intellectual prominence, pride and, perhaps most importantly, a willingness to play along with the state’s plans for economic growth. To be denied the ‘science’ tag is to be denied these attributes, desirable for their implicit value as much as for the opportunities they are seen to present in the state’s nationalist (and even authoritarian) project.

On the other hand, social scientists are regularly cast in opposition to these attributes – and more broadly by the BJP in opposition to normative – i.e. pro-Hindu, pro-rich – views of economic and cultural development, and dismissed as such. This ‘science v. fairness’ dichotomy is only a proxy battle in the contest between respecting and denying human rights – which in turn is also represented in the differences between allopathy and Ayurveda, especially when they are addressed as scientific as well as social systems.

Compared to allopathy and allopathy’s intended outcomes, Ayurveda is considerably flawed and very minimally desirable as an alternative. But on the flip side, uptake of alternative traditions is motivated not just by their desirability but also by the undesirable characteristics of allopathy itself. Modern allopathic methods are isolating (requiring care at a designated facility and time away from other tasks, irrespective of the extent to which that is epidemiologically warranted), care is disempowering and fraught with difficult contradictions (“We expect family members to make decisions about their loved ones after a ten-minute briefing that we’re agonising over even with years of medical experience”**), quality of care is cost-stratified, and treatments are condition-specific and so require repeated hospital visits in the course of a lifetime.

Many of those who seek alternatives in the first place do so for these reasons – and these reasons are not problems with the underlying science itself. They’re problems with how medical care is delivered, how medical knowledge is shared, how medical research is funded, how medical workers are trained – all subjects that social scientists deal with, not scientists. As such, any alternative to allopathy will become automatically preferred if it can solve these economic, political, social, welfare, etc. problems while delivering the same standard of care.

Such a system won’t be an entirely scientific enterprise, considering it would combine the suggestions of the sciences as well as the social sciences into a unified whole such that it treated individual ailments without incurring societal ones. Now, say you’ve developed such an alternative system, called PXQY. The care model at its heart isn’t allopathy but something else – and its efficacy is highest when it is practised and administered as part of the PXQY setup, instead of through standalone procedures. Would you still call this paradigm of medical care a science?

* Akin to the ‘view from nowhere’.
** House, S. 2, E 18.

Featured image credit: hue 12 photography/Unsplash.

Journalistic entropy

Say you need to store a square image 1,000 pixels wide to a side with the smallest filesize (setting aside compression techniques). The image begins with the colour #009900 on the left side and, as you move towards the right, gradually blends into #1e1e1e on the rightmost edge. Two simple storage methods come to mind: you could either encode the colour-information of every pixel in a file and store that file, or you could determine a mathematical function that, given the inputs #009900 and #1e1e1e, generates the image in question.

The latter method seems more appealing, especially for larger canvases of patterns that are composed by a single underlying function. In such cases, it should obviously be more advantageous to store the image as an output of a function to achieve the smallest filesize.

Now, in information theory (as in thermodynamics), there is an entity called entropy: it describes the amount of information you don’t have about a system. In our example, imagine that the colour #009900 blends to #1e1e1e from left to right save for a strip along the right edge, say, 50 pixels wide. Each pixel in this strip can assume a random colour. To store this image, you’d have to save it as an addition of two functions: ƒ(x, y), where x = #009900 and y = #1e1e1e, plus one function to colour the pixels lying in the 50-px strip on the right side. Obviously this will increase the filesize of the stored function.

Further, imagine you were told that 200,000 of the 1,000,000 pixels in the image would assume random colours. The underlying function becomes even clumsier: an addition of ƒ(x, y) and a function R that randomly selects 200,000 pixels and then randomly colours them. The outputs of this function R stand for the information about the image that you can’t have beforehand; the more such information you lack, the more entropy the image is said to have.
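For concreteness, here’s a back-of-the-envelope sketch in Python of the storage costs involved. The cost I assign to the deterministic part is an assumption of mine (the real cost of encoding a function depends on the format); the point is only that each random pixel adds information that can’t be regenerated from a function, only stored:

```python
import math

# Back-of-the-envelope storage costs for the 1,000 x 1,000 gradient image.
# The deterministic part's cost is an illustrative assumption, not a real
# file-format figure.

WIDTH = HEIGHT = 1000
BITS_PER_COLOUR = 24  # 8 bits each for R, G, B

def stored_bits(random_pixels):
    # Deterministic part: the generating function plus its two endpoint
    # colours (modelled here as just the two colours).
    deterministic = 2 * BITS_PER_COLOUR
    # Each random pixel needs its own colour, plus we must record *which*
    # k of the n pixels are random: log2(C(n, k)) bits, computed via
    # log-gamma to avoid huge factorials.
    n = WIDTH * HEIGHT
    k = random_pixels
    positions = (math.lgamma(n + 1) - math.lgamma(k + 1)
                 - math.lgamma(n - k + 1)) / math.log(2)
    return deterministic + k * BITS_PER_COLOUR + positions

# Raw per-pixel storage, for comparison: 24,000,000 bits.
raw = WIDTH * HEIGHT * BITS_PER_COLOUR
```

With no random pixels the “file” is a few dozen bits; with 200,000 random pixels it balloons to several million bits (though still below the raw per-pixel encoding) – the irreducible part being exactly the entropy the text describes.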

The example of the image was simple but sufficiently illustrative. In thermodynamics, entropy is similar to randomness vis-à-vis information: it’s the amount of thermal energy a system contains that can’t be used to perform work. From the point of view of work, it’s useless thermal energy (including heat) – something that can’t contribute to moving a turbine blade, powering a motor or motivating a system of pulleys to lift weights. Instead, it is thermal energy motivated by and directed at other impetuses.

As it happens, this picture could help clarify, or at least make more sense of, a contemporary situation in science journalism. Earlier this week, health journalist Priyanka Pulla discovered that the Indian Council of Medical Research (ICMR) had published a press release last month, about the serological testing kit the government had developed, with the wrong specificity and sensitivity data. Two individuals she spoke to, one from ICMR and another from the National Institute of Virology, Pune, which actually developed the kit, admitted the mistake when she contacted them. Until then, neither organisation had issued a clarification, even though both individuals are likely to have known of the mistake at the time the release was published.

Assuming for a moment that this mistake was an accident (my current epistemic state is ‘don’t know’), it would indicate ICMR has been inefficient in the performance of its duties, forcing journalists to respond to it in some way instead of focusing on other, more important matters.

The reason I tend to think of such work as entropy, and not work per se, is that such instances – whereby journalists are forced to respond to an event or action characterised by the existence of trivial resolutions – seem to be becoming more common.

It’s of course easier to argue that what I consider trivial may be nontrivial to someone else, and that these events and actions matter to a greater extent than I’m willing to acknowledge. However, I’m personally unable to see beyond the fact that an organisation with the resources and, currently, the importance of ICMR shouldn’t have had a hard time proof-reading a press release that was going to land in the inboxes of hundreds of journalists. The consequences of the mistake are nontrivial but the solution is quite trivial.

(There is another feature in some cases: of the absence of official backing or endorsement of any kind.)

So as such, it required work on the part of journalists that could easily have been spared, allowing journalists to direct their efforts at more meaningful, more productive endeavours. Here are four more examples of such events/actions, wherein the non-triviality is significantly and characteristically lower than that attached to formal announcements, policies, reports, etc.:

  1. Withholding data in papers – In the most recent example, ICMR researchers published the results of a seroprevalence survey of 26,000 people in 65 districts around India, and concluded that the prevalence of the novel coronavirus was 0.73% in this population. However, in their paper, the researchers include neither a district-wise breakdown of the data nor the confidence intervals for each available data-point even though they had this information (it’s impossible to compute the results the researchers did without these details). As a result, it’s hard for journalists to determine how reliable the results are, and whether they really support the official policies regarding epidemic-control interventions that will soon follow.
  2. Publishing faff – On June 2, two senior members of the Directorate General of Health Services, within India’s Union health ministry, published a paper (in a journal they edited) that, by all counts, made nonsensical claims about India’s COVID-19 epidemic becoming “extinguished” sometime in September 2020. Either the pair of authors wasn’t aware of their collective irresponsibility or they intended (putting it benevolently) to refocus various people’s attention on their work, turning it away from something the duo deemed embarrassing. Either way, the claims in the paper wound their way into two news syndication services, PTI and IANS, and eventually onto the pages of a dozen widely read news publications in the country. In effect, there were two levels of irresponsibility at play: one embodied by the paper and the other by the syndication services’ and final publishers’ lack of due diligence.
  3. Making BS announcements – This one is fairly common: a minister or senior party official will say something silly, such as that ancient Indians invented the internet, and ride the waves of polarising debate, rapidly devolving into acrimonious flamewars on Twitter, that follow. I recently read (in The Washington Post I think, but I can’t find the link now) that it might be worthwhile for journalists to try and spend less time on fact-checking a claim than it took someone to come up with that claim. Obviously there’s no easy way to measure the time some claims took to mature into their present forms, but even so, I’m sure most journalists would agree that fact-checking often takes much longer than bullshitting (and then broadcasting). But what makes this enterprise even more grating is that it is orders of magnitude easier to not spew bullshit in the first place.
  4. Conspiracy theories – This is the most frustrating example of the lot because, today, many of the originators of conspiracy theories are television journalists, especially those backed by government support or vice versa. While fully acknowledging the deep-seated issues underlying both media independence and the politics-business-media nexus, numerous pronouncements by so many news anchors have only been akin to shooting ourselves in the foot. Exhibit A: shortly after Prime Minister Narendra Modi announced the start of demonetisation, a beaming news anchor told her viewers that the new 2,000-rupee notes would be embedded with chips to transmit the notes’ location real-time, via satellite, to operators in Delhi.

Perhaps this entropy – i.e. the amount of journalistic work not available to deal with more important stories – is not only the result of a mischievous actor attempting to keep journalists, and the people who read those journalists, distracted but is also assisted by the manifestation of a whole industry’s inability to cope with the mechanisms of a new political order.

Science journalism itself has already experienced a symptom of this change when pseudoscientific ideas became more mainstream, even entering the discourse of conservative political groups, including that of the BJP. In a previous era, if a minister said something, a reporter was to drum up a short piece whose entire purpose was to record “this happened”. And such reports were the norm and in fact one of the purported roots of many journalistic establishments’ claims to objectivity, an attribute they found not just desirable but entirely virtuous: those who couldn’t be objective were derided as sub-par.

However, if a reporter were to simply report today that a minister said something, she places herself at risk of amplifying bullshit to a large audience if what the minister said was “bullshit bullshit bullshit”. So just as politicians’ willingness to indulge in populism and majoritarianism to the detriment of society and its people has changed, so also must science journalism change – as it already has with many publications, especially in the west – to ensure each news report fact-checks a claim it contains, especially if it is pseudoscientific.

In the same vein, it’s not hard to imagine that journalists are often forced to scatter by the compulsions of an older way of doing journalism, and that they should regroup on the foundations of a new agreement that lets them ignore some events so that they can better dedicate themselves to the coverage of others.

Featured image credit: Татьяна Чернышова/Pexels.

The life and death of ‘Chemical Nova’

You know how people pretend to win an Oscar or a Nobel Prize, right? Many years ago, I used to pretend – blissfully unmindful of its fictitiousness – to be the author of a fictitious award-winning series of articles entitled Chemical Nova. In this series, I would pretend that each article discussed a particular point of intersection between science and culture.

The earliest idea I had along these lines concerned soap. I would daydream about being celebrated for kickstarting a social movement that prized access to soap and the ability to wash one’s hands under running water, and with this simple activity beat back the strange practice among many of refusing to wash one’s own toilet, instead delegating the apparently execrable task to a housemaid.

The fantastic value of Chemical Nova should be obvious: it represented, at least to me, the triumph of logic and reasoning over class-commitments and superstition. The fantasy took shape out of my longstanding ambition to beat down a stubborn Creature, for many years shapeless, that often caused a good review, essay or news report to inspire only cynicism, derision and eventually dismissal on the part of many readers. It was quickly apparent that the Creature couldn’t be subdued with deductive reasoning alone; one had to take recourse to politics and individual aspirations as well, no matter how disconnected from the pretentious ‘quest for truth’ these matters were.

Chemical Nova dissipated for a few years as I set about becoming a professional journalist – until I had occasion to remember it after Narendra Modi’s election as prime minister in 2014. And quickly enough, it seemed laughable to me that I had assumed upper-caste people wouldn’t know how soap worked, or at least of its cleansing properties. An upper-caste individual invested in the continuation of manual scavenging would simply feel less guilty with a bar of soap placed in his dirty bathroom: for scavengers to wash their hands and not be at risk of contracting any diseases.

The belief that ‘the job is theirs to perform’ could then persist unfettered, rooted as it was in some sort of imagined befoulment of the soul – something one couldn’t cleanse, out of reach of every chemical reagent, or even affect in any way except through a lifetime of suffering.

It was a disappointing thought, but in my mind there was still some hope for Chemical Nova. Its path was no longer straightforward, insofar as it first had to make the case that the mind, the body and the community are all that matter – that that’s how one’s soul really takes shape – but its message (“ultimately, wash your hands”) was still an easy one to get across. I was tempted, and I continued to wait.

However, earlier today, the Creature bared itself fully, exposing not itself as much as the futility of ideas like Chemical Nova. An advertisement appeared in a newspaper displaying a pair of hands kneading some dough, with the following caption: “Are you allowing your maid to knead atta dough by hand? Her hands may be infected.” The asset encouraged readers of the newspaper to buy Kent’s “atta maker & bread maker” instead, accompanied by a photograph of Hema Malini smiling in approval.

Malini has been the brand ambassador for Kent since 2007 and the incumbent Lok Sabha MP from Mathura since 2014. I’m not sure of the extent to which she knew of the advertisement’s contents before her face (and her daughter’s) appeared on it. Her affiliation since 2004 with the Bharatiya Janata Party (BJP), known for its favouritism towards upper-caste Hindus (to put it mildly), doesn’t inspire confidence but at the same time, it’s quite possible that Malini’s contract with Kent allows the company to include her face in promotional materials for a predefined set of products without requiring prior approval in each instance.

But even if Malini had never been associated with the product or the brand, Chemical Nova would have taken a hit because I had never imagined that the Creature could one day be everywhere at once. The chairman of Kent has since apologised for the advertisement, calling it “unintentional” and “wrongly communicated”. But it seems to me that Kent and the ad agency it hired continue to err because they don’t see the real problem: that they wrote those words down and didn’t immediately cringe, that those words were okayed by many pairs of eyes before they were printed.

The triumph of reason and the immutability of chemical reagents are pointless. The normalisation of exclusion, of creating an ‘other’ who embodies everything the in-group finds undesirable, is not new – but it has for the most part been driven by a top-down impulse, one that originates in the offices of Narendra Modi, Amit Shah or some senior BJP minister, often to distract from some governmental failure. In the coronavirus pandemic, however, the act of ‘othering’ seems to have reached community transmission just as fast as the virus may have, finding widespread expression without any ostensible prompt.

And while Kent has been caught out evidently because it was the ‘loudest’, I wonder how many others don’t immediately see that what they are writing, saying, hearing or reading is wrong, and let it pass. As Arundhati Roy wrote earlier this week, the attainment of ‘touchlessness’ seems to be the new normal: in the form of a social condition in which physical distance becomes an excuse to revive and re-normalise untouchabilities that have become taboo – in much the same way soap became subsumed by the enterprise it should have toppled.

Examples already abound, with ministers and corporate uncles alike touting the prescient wisdom of our Hindu ancestors to greet others with a namaste instead of shaking hands; to maintain aachaaram, a collection of gendered practices many of which require the (Brahmin) practitioner to cleanse themselves of ‘spiritual dirt’ through habits and rituals easily incorporated into daily life; and now, to use machines that promise to render, in Roy’s words, “the very bodies of one class … as a biohazard to another”.

It started with a bang, but Chemical Nova slips quietly into the drain, and out of sight, for it is no match for its foe – the Creature called wilful ignorance.

Featured image: A snapshot of William Blake’s ‘The Great Red Dragon and the Woman Clothed with the Sun’, c. 1805-1810.

Nitin Gadkari, tomato chutney and blood

There is a famous comedy scene in Tamil cinema, starring the actors Vadivelu and ‘Bonda’ Mani. Those who understand Tamil should skip this awkward retelling – intended for non-Tamil speakers – and go straight to the video below and the post after. Vadivelu has blood all over his face due to an injury when ‘Bonda’ Mani walks up to him and asks why he’s got tomato chutney all over his face. Vadivelu looks stunned, and punches ‘Bonda’ Mani on the nose. Mani reaches a finger to his nose, finds blood and cries out that he’s bleeding. Then Vadivelu asks, “If I have red stuff on my face it’s tomato chutney, but on your face it’s blood, eh?”

It would seem Vadivelu spoke for the many millions of us wondering today how exactly the Indian government designed its unique response to the novel coronavirus pandemic. One of the centrepieces of its response has been to punish journalists – by shutting them down or, in many cases, slapping them with nothing less than sedition charges – when they are critical of the government or seem to be asking uncomfortable questions. On the other hand, pseudoscientific claims that can directly cause harm, what with us being in the middle of a health emergency, are let off without so much as a slap on the wrist when they’re pronounced by journalists in pro-right-wing newsrooms or – as often happens – by ministers in the government itself.

Nitin Gadkari, the Union minister of road transport and highways, has told NDTV that he believes the novel coronavirus was not natural and that it was made in a lab. Another BJP member, this one a state-level office-bearer, had some time back said something similarly idiotic, prompting a rare rebuke from Union minister Prakash Javadekar. But I doubt Javadekar is going to mete out the same treatment to Gadkari – his equal, so to speak – in public, and it’s what’s in the public domain that matters. So if there’s red stuff all over a journalist’s face, it’s tomato chutney, even if it’s actually blood. But on a minister’s face, it’s always blood even when it’s actually tomato chutney. And the government and its foot-soldiers have conditioned themselves, as well as >30% of the country, to follow this rule.

NDTV, too, is complicit in the ignorance, irresponsibility and recklessness on display here, because its report simply says Gadkari said what he did, without so much as a note mentioning that he’s wrong. The reason: Gadkari, Javadekar – who recently vowed to “expose” those who ranked India poorly in press-freedom indices – and their colleagues, including Prime Minister Narendra Modi himself, have hacked journalism, at least journalism as it used to be practiced, with editors and reporters stubborn about not taking sides.

This culture of journalism was valid when, simply put, all political factions advanced equally legitimate arguments. And according to Modi et al, his government and colleagues are also advancing arguments that are as legitimate as – often if not more legitimate than – those in the opposition. But there’s often plain and simple evidence that these claims are wrong, often rooted in scientific knowledge (which is why Modi et al have been undermining “Western science” from the moment they assumed power in 2014). Journalists can’t treat both sides as equals anymore – whether they be the Left and the Right, the conservatives and the liberals or the progressives and the dogmatists – because one side, whether by choice or fate, has incorporated pseudoscience into its political ideals.

Now, sans a note that Gadkari is really spouting rubbish and that we have enough evidence to reject the idea that it was human-made and accept that it evolved naturally[1], NDTV is not – as it may believe – staying neutral as much as being exploited by Gadkari as a way to have his words amplified. NDTV is effectively complicit, bringing Gadkari’s unqualified nonsense to millions of its readers, many of them swayed as much by the authority and political beliefs of the claimant as others are by the weight or paucity of evidence.

Indeed, the news channel may itself be consciously playing to both sides: (i) those who know exactly why the minister and others who make such claims are wrong, joined increasingly by unthinkers who need to and do say fashionable things without understanding why what they’re saying is right (often the same people that place science in wrongful opposition to religion, social science and/or tradition); and (ii) the allegedly disenfranchised folks paranoid about everything that isn’t Indian and/or homegrown, and have since become unable to tell cow urine from a medicinal solution.

[1] I read some time ago that Bertrand Russell was once asked what he would say to god if he died and came face to face with an almighty creator. Russell, a famous skeptic of various religious beliefs, apparently said he would accuse god of not providing enough evidence of the latter’s existence. I don’t know if this story is true but Russell’s argument, as claimed, makes a lot of sense, doesn’t it? In the context of Gadkari’s comment, and Luc Montagnier’s before him, complete evidence differs significantly from sufficient evidence, and it’s important to account for sufficiency in arguments concerning the novel coronavirus as well. For example, the people who believe the novel coronavirus originated in a lab are called conspiracy theorists not because they have an alternative view – as they often claim in defence – but because most of their arguments rest on an argument from ignorance: that if there isn’t sufficient evidence to prove the virus evolved in nature, it must have originated in a lab. Similarly, I and many others are comfortable claiming the virus evolved naturally because there is sufficient evidence to indicate that it did. For the same reason, I also think I and many others can be proven wrong only if new information emerges.

Featured image: Union minister Nitin Gadkari, 2014. Credit: Press Information Bureau.