On AWSAR, Saransh, etc.

The Indian National Young Academy of Sciences has announced a “thesis competition for PhD students” called ‘Saransh’. A PhD student will have three minutes, and three slides, to describe their work via video, and the winners of the first three places stand to receive Rs 10,000, Rs 6,000 and Rs 4,000 in cash. It’s a good opportunity, quite like the Department of Science and Technology’s ‘Augmenting Writing Skills for Articulating Research’ (AWSAR) programme, in which PhD scholars and postdocs in any science stream are invited to share short articles based on the following criteria (among others):

Entries would be invited from research scholars and PDFs who wish to publish their research in way that would interest non-scientific audiences. The story should focus on the answering the queries such as why does my research matter? Why is it important? Why does it interest researchers? Why should it interest the reader? objectively. The article must be based on the research undertaken by the individual researcher.

My question is: why do both AWSAR and Saransh ask students to communicate their own work? Is this a conscious decision on the part of the governing bodies or is it the opposite – a lack of application of mind? I think the difference matters because it’s no secret that effective communication of any form, and on any level, is nascent at best in this part of the world. This is why initiatives like AWSAR and Saransh exist in the first place. This said, if the decision to have participants write about their own work is an attempt to foster communication by eliminating one set of variables – that of deciding which other work to pick and then assimilating it – that’s great, provided it is followed up and nurtured in some way.

For example, what happens to a participant after they win an AWSAR award, and what happens to their work? I think it lies idle, and will probably wind its way into an archive or compilation that a few people will visit/read; and the participant will presumably continue with their science work. (I raised this issue at the meeting with the Principal Scientific Advisor in January 2020; his colleagues made a note of it, but then COVID-19 happened and I don’t have my hopes up for change.) The AWSAR website also says “all awardees will be given an opportunity to attend Science Film Training Workshop organised by Vigyan Prasar”.

As such, it seems, AWSAR assumes that those who are interested enough to participate will also continue to communicate their work at regular intervals, and work to improve themselves. This is clearly far-fetched. The ramp should be longer and reach higher, leading up to a point where effective communication becomes second nature. And if the first step is to present one’s own work, the logical next is to present someone else’s work; ultimately, useful communication will require one to do both. And both AWSAR and Saransh, by virtue of being initiatives that already recognise the value of communicating science to an audience of non-experts, are well-placed to make this happen. At the least, they need to find some way to emphasise that communication is an endless process.

(One simple solution came to mind – to require winning students to use their prize money on communication-related efforts, such as starting a blog or producing a multimedia story for publication in the press. This is related to another idea tossed around at the January 2020 meeting: that the Principal Scientific Advisor’s office help set up a network of journalistic editors with whom scientific communicators could consult. But where money from the government is concerned, the first thing that comes to mind is its failure to pay science students’ fellowship amounts on time – often delayed by many months, even during the COVID-19 epidemic – so, to be fair, the government ought to have no say in how students choose to spend their money.)

But if I’ve assumed wrong, and both competitions focus on communicating one’s own work because they don’t see the difference between that and communicating something one hasn’t spent a few years studying – leading all the way up to an absolute ignorance of issues like conflicts of interest (too many scientists take offence when I tell them this is why I’m turning down their article about their own research paper) – then AWSAR, Saransh, etc. could easily become gateways to a ‘corrupt’ form of communication that is synonymous with serving one’s own interests.

A similar symptom of these programmes’ organisers not having thought things through is that the eligibility criteria make no mention of how participants can and can’t communicate their work. The AWSAR and Saransh web-pages are special in the sense that they will be visited predominantly by people who aren’t yet prolific communicators but are interested in the art. As such, including, say, a suggestion that participants should not treat their audience as one big empty vessel, or an opportunity to engage in discussions with audience-members (instead of restricting that to Qs and As or, in Saransh’s case, queries from jury members), could ensure in a significant way that many people’s future efforts evolve from the right substrate of principles.

The problem with rooting for science

The idea that trusting in science involves a lot of faith, instead of reason, is lost on most people. More often than not, as a science journalist, I encounter faith through extreme examples – such as the Bloch sphere (used to represent the state of a qubit) or wave functions (‘mathematical objects’ used to understand the evolution of certain simple quantum systems). These and other similar concepts require years of training in physics and mathematics to understand. At the same time, science writers are often confronted with the challenge of making these concepts sensible to an audience that seldom has this training.

More importantly, how are science writers to understand them? They don’t. Instead, they implicitly trust the scientists they’re talking to to make sense. If I know that a black hole curves spacetime to such an extent that pairs of virtual particles created near its surface are torn apart – one particle entering the black hole never to exit and the other sent off into space – it’s not because I’m familiar with the work of Stephen Hawking. It’s because I read his books, read some blogs and scientific papers, spoke to physicists, and decided to trust them all. Every science journalist, in fact, has a set of sources they’re likely to trust over others. I even place my faith in some people over others, based on factors like personal character, past record, transparency, reflexivity, etc., so that I take what they produce with only the smallest pinch of salt, and build on their findings to develop my own. And this way, I’m already creating an interface between science and society – by matching scientific knowledge with socially developed markers of reliability.

I choose to trust those people, processes and institutions that display these markers. I call this an act of faith for two reasons: 1) it’s an empirical method, so to speak; there is no proof in theory that such ‘matching’ will always work; and 2) I believe it’s instructive to think of this relationship as being mediated by faith if only to amplify its anti-polarity with reason. Most of us understand science through faith, not reason. Even scientists who are experts on one thing take the word of scientists on completely different things, instead of trying to study those things themselves (see ad verecundiam fallacy).

Sometimes, such faith is (mostly) harmless, such as in the ‘extreme’ cases of the Bloch sphere and the wave function. It is both inexact and incomplete to think that quantum superposition means an object is in two states at once. The human brain hasn’t evolved to cognise superposition exactly; this is why physicists use the language of mathematics to make sense of this strange existential phenomenon. The problem – i.e. the inexactitude and the incompleteness – arises when a communicator translates the mathematics to a metaphor. Equally importantly, physicists are describing whereas the rest of us are thinking. There is a crucial difference between these activities that illustrates, among other things, the fundamental incompatibility between scientific research and science communication that communicators must first surmount.

As physicists over the past three or four centuries have relied increasingly on mathematics rather than the word to describe the world, physics, like mathematics itself, has made a “retreat from the word,” as literary scholar George Steiner put it. In a 1961 Kenyon Review article, Steiner wrote, “It is, on the whole, true to say that until the seventeenth century the predominant bias and content of the natural sciences were descriptive.” Mathematics used to be “anchored to the material conditions of experience,” and so was largely susceptible to being expressed in ordinary language. But this changed with the advances of modern mathematicians such as Descartes, Newton, and Leibniz, whose work in geometry, algebra, and calculus helped to distance mathematical notation from ordinary language, such that the history of how mathematics is expressed has become “one of progressive untranslatability.” It is easier to translate between Chinese and English — both express human experience, the vast majority of which is shared — than it is to translate advanced mathematics into a spoken language, because the world that mathematics expresses is theoretical and for the most part not available to our lived experience.

Samuel Matlack, ‘Quantum Poetics’, The New Atlantis, 2017

However, the faith becomes more harmful the further we move away from the ‘extreme’ examples – of things we’re unlikely to stumble on in our daily lives – and towards more commonplace ideas, such as ‘how vaccines work’ or ‘why GM foods are not inherently bad’. The harm emerges from the assumption that we think we know something when in fact we’re in denial about how it is that we know that thing. Many of us think it’s reason; most of the time it’s faith. Remember when, in Friends, Monica Geller and Chandler Bing ask David the Scientist Guy how airplanes fly, and David says it has to do with Bernoulli’s principle and Newton’s third law? Monica then turns to Chandler with a knowing look and says, “See?!” To which Chandler says, “Yeah, that’s the same as ‘it has something to do with wind’!”

The harm is to root for science, to endorse the scientific enterprise and vest our faith in its fruits, without really understanding how these fruits are produced. Such understanding is important for two reasons.

First, we vest our faith in scientists instead of presuming to know, or actually knowing, that we can vouch for their work. It would be vacuous to claim science is superior in any way to another enterprise that demands our faith when science itself also receives our faith. Perhaps more fundamentally, we like to believe that science is trustworthy because it is evidence-based and it is tested – but the COVID-19 pandemic should have clarified, if it hasn’t already, the continuous (as opposed to discrete) nature of scientific evidence, especially if we also acknowledge that scientific progress is almost always incremental. Evidence can be singular and thus clear – like a new avian species, graphene layers superconducting electrons or tuned lasers cooling down atoms – or it can be necessary but insufficient, and therefore on a slippery slope – such as repeated genetic components in viral RNA, a cigar-shaped asteroid or water shortage in the time of climate change.

Physicists working with giant machines to spot new particles and reactions – all of which are detected indirectly, through their imprints on other well-understood phenomena – have two important thresholds for the reliability of their findings: if the chance of X (say, “spotting a particle of energy 100 GeV”) being false is 0.27%, it’s good enough to be evidence; if the chance of X being false is 0.00006%, then it’s a discovery (i.e., “we have found the particle”). But at what point can we be sure that we’ve indeed found the particle we were looking for if the chance of being false will never reach 0%? One way, for physicists specifically, is to combine the experiment’s results with what they expect to happen according to theory; if the two match, it’s okay to think that even a less reliable result will likely be borne out. Another possibility (in the line of Karl Popper’s philosophy) is that a result expected to be true, and is subsequently found to be true, is true until we have evidence to the contrary. But as suitable as this answer may be, it still doesn’t neatly fit the binary ‘yes’/’no’ we’re used to, and which we often expect from scientific endeavours as well (see experience v. reality).
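The two thresholds quoted above are the particle-physics community’s familiar 3σ (‘evidence’) and 5σ (‘discovery’) conventions for a Gaussian distribution; a few lines of Python (a sketch for illustration, not anything from the programmes or papers discussed here) recover the quoted percentages:

```python
from math import erfc, sqrt

def two_sided_p(n_sigma: float) -> float:
    """Two-tailed probability of a Gaussian fluctuation at least
    n_sigma standard deviations away from the mean."""
    return erfc(n_sigma / sqrt(2))

# 3 sigma: the 'evidence' threshold
print(f"{two_sided_p(3) * 100:.2f}%")   # about 0.27%
# 5 sigma: the 'discovery' threshold
print(f"{two_sided_p(5) * 100:.5f}%")   # about 0.00006%
```

Note that even the 5σ figure is a statement about how unlikely a statistical fluke is, not a proof; the probability of being wrong never reaches zero, which is the point being made above.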

(Minor detour: While rational solutions are ideally refutable, faith-based solutions are not. Instead, the simplest way to reject their validity is to use extra-scientific methods, and more broadly deny them power. For example, if two people were offering me drugs to suppress the pain of a headache, I would trust the one who has a state-sanctioned license to practice medicine and is likely to lose that license, even temporarily, if his prescription is found to have been mistaken – that is, by asserting the doctor as the subject of democratic power. Axiomatically, if I know that Crocin helps manage headaches, it’s because, first, I trusted the doctor who prescribed it and, second, Crocin has helped me multiple times before, so empirical experience is on my side.)

Second, if we don’t know how science works, we become vulnerable to believing pseudoscience to be science as long as the two share some superficial characteristics, like, say, the presence and frequency of jargon or a claim’s originator being affiliated with a ‘top’ institute. The authors of a scientific paper to be published in a forthcoming edition of the Journal of Experimental Social Psychology write:

We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.

(Caveats: 1. We could apply the point of this post to this study itself; 2. I haven’t checked the study’s methods and results with an independent expert, and I’m also mindful that this is psychology research and that its conclusions should be taken with a pinch of salt until independent scientists have successfully replicated them.)

Later from the same paper:

Our four experiments and meta-analysis demonstrated that people, and in particular people with higher trust in science (Experiments 1-3), are vulnerable to misinformation that contains pseudoscientific content. Among participants who reported high trust in science, the mere presence of scientific labels in the article facilitated belief in the misinformation and increased the probability of dissemination. Thus, this research highlights that trust in science ironically increases vulnerability to pseudoscience, a finding that conflicts with campaigns that promote broad trust in science as an antidote to misinformation but does not conflict with efforts to install trust in conclusions about the specific science about COVID-19 or climate change.

In terms of the process, the findings of Experiments 1-3 may reflect a form of heuristic processing. Complex topics such as the origins of a virus or potential harms of GMOs to human health include information that is difficult for a lay audience to comprehend, and requires acquiring background knowledge when reading news. For most participants, seeing scientists as the source of the information may act as an expertise cue in some conditions, although source cues are well known to also be processed systematically. However, when participants have higher levels of methodological literacy, they may be more able to bring relevant knowledge to bear and scrutinise the misinformation. The consistent negative association between methodological literacy and both belief and dissemination across Experiments 1-3 suggests that one antidote to the influence of pseudoscience is methodological literacy. The meta-analysis supports this.

So rooting for science per se is not just not enough, it could be harmful vis-à-vis public support for science itself. For example (and without taking names), in response to right-wing propaganda related to India’s COVID-19 epidemic, quite a few videos produced by YouTube ‘stars’ have advanced dubious claims. They’re not dubious at first glance, if also because they purport to counter pseudoscientific claims with scientific knowledge, but they are – either because they insist on a measure of certainty that the results neither possess nor could achieve, or because they make pseudoscientific claims of their own, just wrapped up in technical lingo so they’re more palatable to those supporting science over critical thinking. Some of these YouTubers, and in fact writers, podcasters, etc., are blissfully unaware of how wrong they often are. (At least one of them was also reluctant to edit a ‘finished’ video to make it less sensational despite repeated requests.)

Now, where do these ideas leave (other) science communicators? In attempting to bridge a nearly unbridgeable gap, are we doomed to swing only between most and least unsuccessful? I personally think that this problem, such as it is, is comparable to Zeno’s arrow paradox. To use Wikipedia’s words:

He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.
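As an aside (a standard bit of mathematics, not part of the quoted passage): the classical response to the paradox is that infinitely many ever-smaller intervals can still compose a finite duration, as the convergent geometric series makes concrete:

```latex
\sum_{n=1}^{\infty} \frac{1}{2^n} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1
```

An arrow crossing each successive half of the remaining distance takes ever less time, yet the total elapsed time is one finite unit; subdividing motion into infinitely many pieces doesn’t abolish it.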

To ‘break’ the paradox, we need to identify and discard one or more primitive assumptions. In the arrow paradox, for example, one could argue that time is not composed of a stream of “duration-less” instants, that each instant – no matter how small – encompasses a vanishingly short but not nonexistent passage of time. With popular science communication (in the limited context of translating something that is untranslatable sans inexactitude and/or incompleteness), I’d contend the following:

  • Awareness: ‘Knowing’ and ‘knowing of’ are significantly different and, I hope, self-explanatory too. Example: I’m not fluent in the physics of cryogenic engines but I’m aware that they’re desirable because liquid hydrogen has the highest specific impulse of all commonly used rocket fuels.
  • Context: As I’ve written before, a unit of scientific knowledge that exists in relation to other units of scientific knowledge is a different object from the same unit of scientific knowledge existing in relation to society.
  • Abstraction: 1. The perfect can be the enemy of the good, and imperfect knowledge of an object – especially a complicated, compound one – can still be useful; 2. when multiple components come together to form a larger entity, the entity can exhibit emergent properties that one can’t derive entirely from the properties of the individual components. Example: one doesn’t have to understand semiconductor physics to understand what a computer does.

An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. (Sometimes I browse math articles in Wikipedia simply to immerse myself in their majestic incomprehensibility, like visiting a foreign planet.)

Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

Adam Kirsch, ‘The Ontology of Pop Physics’, Tablet Magazine, 2020

But in offering these reasons, I don’t intend to overstate what science communication can achieve – i.e. to claim that, given enough time and/or other resources, a suitably skilled science communicator will be able to produce a non-mathematical description of, say, quantum superposition that is comprehensible, exact and complete. Instead, it may be useful for communicators to acknowledge that there is an immutable gap between common English and mathematics (the language of modern science), beyond which scientific expertise is unavoidable – in much the same way communicators must insist that the farther the expert strays into the realm of communication, the closer they get to a boundary beyond which they must defer to the communicator.

Scicommers as knowledge producers

Reading the latest edition of Raghavendra Gadagkar’s column in The Wire Science, ‘More Fun Than Fun’, about how scientists should become communicators and communicators should be treated as knowledge-producers, I began wondering if the knowledge produced by the latter is in fact not the same knowledge but something entirely new. The idea that communicators simply make the scientists’ Promethean fire more palatable to a wider audience has led, among other things, to a belief widespread among scientists that science communicators are adjacent to science and aren’t part of the enterprise producing ‘scientific knowledge’ itself. And this perceived adjacency often belittles communicators by trivialising the work that they do.

Explanatory writing that “enters into the mental world of uninitiated readers and helps them understand complex scientific concepts”, to use Gadagkar’s words, takes copious and focused work. (And if it doesn’t result in papers, citations and h-indices, just as well: no one should become trapped in bibliometrics the way so many scientists have.) In fact, describing the work of communicators in this way dismisses a specific kind of proof of work that is present in the final product – in much the same way scientists’ proofs of work are implicit in new solutions to old problems, development of new technologies, etc. The knowledge that people writing about science for a wider audience produce is, in my view, entirely distinct, even if the nature of the task at hand is explanatory.

In his article, Gadagkar writes:

Science writers should do more than just reporting, more than translating the gibberish of scientists into English or whatever language they may choose to write in. … Science writers are in a much better position to make lateral comparisons, understand the process of science, and detect possible biases and conflicts of interest, something that scientists, being insiders, cannot do very well. So rather than just expect them to clean up our messy prose, we should elevate science writers to the role of knowledge producers.

My point is about knowledge arising from a more limited enterprise – i.e. explanation – but which I think can be generalised to all of journalism as well (and to other expository enterprises). And in making this point, I hope my two-pronged deviation from Gadagkar’s view is clear. First, science journalists should be treated as knowledge producers, but not in the limited confines of the scientific enterprise and certainly not just to expose biases; instead, communicators as knowledge producers exist in a wider arena – that of society, including its messy traditions and politics, itself. Here, knowledge is composed of much more than scientific facts. Second, science journalists are already knowledge producers, even when they’re ‘just’ “translating the gibberish of scientists”.

Specifically, the knowledge that science journalists produce differs from the knowledge that scientists produce in at least two ways: it is accessible and it is socially relevant. What scientists find is not what people know. Society broadly synthesises knowledge from information that it weighs together with extra-scientific considerations, including biases like “which university is the scientist affiliated with” and concerns like “will the finding affect my quality of life”. Journalists are influential synthesisers who work with or around these and other psychosocial stressors to contextualise scientific findings, and thus science itself. Even when they write drab stories about obscure phenomena, they make an important choice: “this is what the reader gets to read, instead of something else”.

These properties taken together encompass the journalist’s proof of work, which is knowledge accessible to a much larger audience. The scientific enterprise is not designed to produce this particular knowledge. Scientists may find that “leaves use chlorophyll to photosynthesise sunlight”; a skilled communicator will find that more people know this, know why it matters and know how they can put such knowledge to use, thus fostering a more empowered society. And the latter is entirely new knowledge – akin to an emergent object that is greater than the sum of its scientific bits.

On the lab-leak hypothesis

One problem with the debate over the novel coronavirus’s “lab leak” origin hypothesis is a problem I’m starting to see in quite a few other areas of pandemic-related analysis and discussion. It’s that no one will say why others are wrong, even as they insist others are, and go on about why they are right.

Shortly after I read Nicholas Wade’s 10,000-word article on Medium, I pitched a summary to a medical researcher, whose first, and for a long time only, response was one word: “rubbish”. Much later, he told me about how the virus could have evolved and spread naturally. Even if I couldn’t be sure if he was right, having no way to verify the information except to bounce it off a bunch of other experts, I was sure he thought he was right. But how was Wade wrong? I suspect for many people the communication failures surrounding this (or a similar) question may be a sticking point.

(‘Wade’, after the first mention, is shorthand for an author of a detailed, non-trivial article that considers the lab-leak hypothesis, irrespective of what conclusion it reaches. I’m cursorily aware of Wade’s support for ‘scientific racism’, and by using his name, I don’t condone any of his views on these and other matters. Other articles to read on the lab-leak topic include Nicholson Baker’s in Intelligencer and Katherine Eban’s in Vanity Fair.)

We don’t know how the novel coronavirus originated, nor are we able to find out easily. There are apparently two possibilities: zoonotic spillover and lab leak (both are hypotheses, even though the qualification has been attached more prominently to the latter).

Quoting two researchers writing in The Conversation:

In March 2020, another article published in Nature Medicine provided a series of scientific arguments in favour of a natural origin. The authors argued: The natural hypothesis is plausible, as it is the usual mechanism of emergence of coronaviruses; the sequence of SARS-CoV-2 is too distantly related from other known coronaviruses to envisage the manufacture of a new virus from available sequences; and its sequence does not show evidence of genetic manipulation in the laboratory.

Proponents of the lab-leak hypothesis (minus the outright-conspiratorial) – or, more broadly, the opponents of ‘zoonotic-spillover’ evangelism – have argued that lab leaks are more common than we think, that the novel coronavirus has some features that suggest the presence of a human hand, and that a glut of extra-scientific events points towards suspicious research and communication by members of the Wuhan Institute of Virology.

However, too many counterarguments to Wade’s and others’ articles along similar lines have been to brush the allegations aside, as if they were so easily dismissed – like my interlocutor’s “rubbish”. And it’s an infuriating response. To me at least (as someone who’s been at the receiving end of many such replies), it smacks of an attitude that seems to say (a) “you’re foolish to take this stuff seriously,” (b) “you’re being a bad journalist,” (c) “I doubt you’ll understand the answer,” and (d) “I think you should just trust me”.

I try not to generalise (c) and (d), to maintain my editorial equipoise, so to speak – but it’s been hard. There are too many scientists going around insisting we should simply listen to them, while making no effort to ensure non-experts can understand what they’re saying, much less admitting the possibility that they’re kidding themselves (although I do think “science is self-correcting” is a false adage). In fact, proponents of the zoonotic-spillover hypothesis and others like to claim that their idea is more likely, but this is often a crude display of scientism: “it’s more scientific, therefore it must be true”. The arguments in favour of this hypothesis are also increasingly underrepresented outside the scientific literature, which isn’t a trivial consideration, because the disparity could exacerbate the patronising tone of (c) and (d) and render scientists less trustworthy.

Science communication and/or journalism are conspicuous by their absence here, but I also think the problem with the scientists’ attitude is broader than that. Short of engaging directly in the activities of groups like DRASTIC, journalists take a hit when scientists behave like pedagogic communication is a waste of time. More scientists should make more of an effort to articulate themselves better. It isn’t wise to dismiss something that so many take seriously – although this is also a slippery slope: apply it as a general rule, and soon you may find yourself having to debunk in great detail a dozen ridiculous claims a day. Perhaps we can make an exception for the zoonotic-spillover v. lab-leak contest? Or is there a better heuristic? I certainly think there should be one instead of none at all.

Proving the absence of something is harder than proving its presence, and that may be why everyone is talking about why they’re right. However, in the process, many of these people seem to forget that what they haven’t denied is still firmly in the realm of the possible. Actually, they don’t just forget it but entirely shut down the idea. This is why I agree with Dr Vinay Prasad’s words in MedPage Today:

If it escaped due to a wet market, I would strongly suggest we clean up wet markets and improve safety in BSL laboratories because a future virus could come from either. And, if it was a lab leak, I would strongly suggest we clean up wet markets and improve safety in BSL 3 and 4 … you get the idea. Both vulnerabilities must be fixed, no matter which was the culprit in this case, because either could be the culprit next time.

His words provide an important counterweight of sorts to a tendency in the zoonotic-spillover quarter to treat articles about the lab-leak possibility as a monolithic allegation instead of as a collection of independent allegations that aren’t all equally unlikely. For example, the Vanity Fair and Newsweek articles, as well as Wade’s, have all called into question safety levels at BSL 3 and 4 labs, asked whether their pathogen-handling protocols sufficiently justify the sort of research we think is okay to conduct, and examined allegations that various parties have sought to suppress information about the activities at such facilities housed in the Wuhan Institute.

I don’t buy the lab-leak hypothesis and I don’t buy the zoonotic-spillover hypothesis; in fact, I don’t personally care for the answer because I have other things to worry about, but I do buy that the “scientific illiberalism” that Dr Prasad talks about is real. And it’s tied to other issues doing the rounds now as well. For example, Newsweek‘s profile of DRASTIC’s work has been a hit in India thanks to the work of ‘The Seeker’, the pseudonym for a person in their 20s living in “Eastern India”, who uncovered some key documents that cast suspicion on Wuhan Institute’s Shi Zhengli’s claims vis-à-vis SARS-CoV-2. And two common responses to the profile (on Twitter) have been:

  1. “In 2020, when people told me about the lab-leak hypothesis, I dismissed them and argued that they shouldn’t take WhatsApp forwards seriously.”
  2. “Journalism is redundant.”

(1) is said as if it’s no longer true – but it is. The difference between the WhatsApp forwards of February-April 2020 and the articles and papers of 2021 is the body of evidence each set of claims was based on. Luc Montagnier was wrong when he spoke against the zoonotic-spillover hypothesis last year simply because his reasoning was wrong. The reasons and the evidence matter; otherwise, you’re no better than a broken clock. Facile WhatsApp forwards and right-wingers’ ramblings continue to deserve to be treated with extreme scepticism.

Just because a conspiracy theory is later proven to have merit doesn’t make it not a conspiracy theory; their defining trait is belief in the absence of evidence. The most useful response, here, is not to get sucked into the right-wing fever swamps, but to isolate legitimate questions, and try and report out the answers.

Columbia Journalism Review, April 15, 2020

The second point is obviously harder to push back against, considering it doesn’t stake out a new position as much as reinforce one that certain groups of people have harboured for many years now. It’s one star aligning out of many, so its falling out of place won’t change believers’ minds – and because the believers’ minds will be unchanged, it will promptly fall back into place. This said, apart from the numerous other considerations, I’ll say investigations aren’t the preserve of journalists, and the fact that one story was investigated further by non-journalists – especially towards a conclusion you probably wish to be true – doesn’t necessarily say anything about journalism itself.

In addition, the picture is complicated by the fact that when people find they’re wrong, they almost never admit it – especially if other valuable things, like their academic or political careers, are tied up with their reputation. On occasion, some turn to increasingly technical arguments, or close ranks and advertise a false ‘scientific consensus’ (insofar as such a consensus can exist as the result of any exercise less laborious than the one vis-à-vis anthropogenic global warming), or both. ‘Isolating the legitimate questions’ here – from both sides, mind you – needs painstaking work that only journalists can and will do.

Featured image credit: Ethan Medrano/Pexels.

The Wire Science is hiring

Location: Bengaluru or New Delhi

The Wire Science is looking for a sub-editor to conceptualise, edit and produce high-quality news articles and features in a digital newsroom.

Requirements

  • Good faculty with the English language
  • Excellent copy-editing skills
  • A strong news sense
  • A strong interest in new scientific findings
  • Ability to read scientific papers
  • Familiarity with concepts related to the scientific method and scientific publishing
  • Familiarity with popular social media platforms and their features
  • Familiarity with the WordPress content management system (CMS)
  • Ability to handle data (obtaining data, sorting and cleaning datasets, using tools like Flourish to visualise)
  • Strong reasoning skills
  • 1-3 years’ work experience
  • Optional: a background in science or engineering

Responsibilities

  • Edit articles according to The Wire Science‘s requirements, within tight deadlines
  • Make editorial decisions in reasonable time and communicate them constructively
  • Liaise with our reporters and freelancers, and work together to produce stories
  • Work with The Wire Science‘s editor to develop ideas for stories
  • Compose short news stories
  • Work on multimedia rendering of published stories (i.e. convert text stories to audio/video stories)
  • Work with the tech and audience engagement teams to help produce and implement features

Salary will be competitive.

Dalit, Adivasi, OBC and minority candidates are encouraged to apply.

If you’re interested, please write to Vasudevan Mukunth at science@thewire.in. Mention you’re applying for The Wire Science sub-editor position in the subject line of your email. In addition to attaching your resumé or CV, please include a short cover letter in the email’s body describing why you think you should be considered.

If your application is shortlisted, we will contact you for a written test followed by an interview.

The passive is political

If Saruman is the stupid shit people say, then Grima Wormtongue, I have often found, is the use of the passive voice. To the uninitiated: Wormtongue was a slimy fellow on Saruman’s side in The Lord of the Rings: The Two Towers. He was much, much less powerful than Saruman, but he fed the wizard’s ego, lubricated the passage of his dubious ideas into action, and slipped poison into the ears and minds of those who would listen to him.

The passive is useful to attribute to others something you would rather not originate yourself, but which you would like to be true. Or to invoke facts without also invoking the dubious credentials of the person or circumstance that birthed them. Or to dress up your ignorance in the ‘clinical-speak’ that the scientific literature prizes. Or to admit fewer avenues of disagreement. Or, in its most insidious form, to suggest that the message matters a lot more than the context.

Yes, sometimes the passive voice is warranted – often, in my experience, when the point is to maintain sharp focus on a particular idea, concept, etc. in a larger article. This condition is important: the writer or speaker needs to justify the use of the passive voice, in keeping with the deviation from normal that it is.

Of course, you could contend that the creator’s message is the creator’s own, and that they do get to craft it the way they wish. I would contend in return that this is absolutely true – but the question of passive v. active voice arises more pronouncedly in the matter of how the creator’s audience is directed to perceive that message. That is, the creator can use whatever voice they wish, but using one over the other (obviously) changes the meaning and, more importantly, the context they wish the reader to assume.

For example, writing “The ball was thrown” is both a statement that the ball was thrown and an indication to the reader that the identity of the thrower is not relevant.

And because of the specific ways in which the passive voice is bad, the creator effectively puts themselves in a position where the audience could accuse them of deliberately eliding important information. In fact, the creator would open themselves up to this line of inquiry, if not interrogation, even if the line is a dead-end or if the creator actually doesn’t deserve to be accused.

Even more specifically, the use of the passive voice is a loaded affair. I have encountered only a very small number of people writing in the mainstream press who actively shun the passive voice, in favour of the active, or at least have good reasons to adopt the passive. Most writers frequently adopt the passive – and passively so – without acknowledging that this voice can render the text in political shades even if the writer didn’t intend it.

I encountered an opinion of remarkable asininity a few minutes ago, which prompted this little note, and which also serves to illustrate my message.

“One aspect that needs to be considered,” “it is sometimes said,” “remain deprived of sex,” “it is believed that in June alone”. In a conversation with The Soufflé some two years ago, about why middle-aged and older men – those not of our generation, so to speak – harbour so many foolish ideas, he said one reason has to be that when these men sit in their living rooms and enter into lengthy monologues about what they believe, no one challenges them.

Of course, in an overwhelmingly patriarchal society, older men will brook only fewer challenges to their authority (or none at all). I think the passive voice is a syntactic choice that, together with the fondness for it, removes yet another challenge – one unique to the beautiful act of writing – that a creator may encounter during the act of creation, or at least facilitates creating something that otherwise may not have survived that very act.

In Katju’s case, for example, the second and third instances of the passive voice could have given him pause. “It is sometimes said” in the active becomes “X has said” or “X says”, subsequently leading to the question of who ‘X’ is and whether their claim is still right, relevant and/or good.

As I mentioned earlier, the passive voice serves among other reasons to preclude the points or counts on which a reader may raise objections. However, writing – one way or another – is an act of decentralising or at least sharing power, the power inherent in the creator’s knowledge that is now available to others as well, more so in the internet age. Fundamentally, to write is to open the gates through which flow the opportunities for your readers to make decisions based on different bits and kinds of information. And in this exercise, to bar some of these gates can only be self-defeating.

Scientists drafting technical manuscripts – the documents I encounter most often that are brimming with the passive voice – may see less value in writing “X designed the experiment to do Y” than “the experiment was designed to do Y”. But I can think of no reason writing in the active would diminish the manuscript’s credentials, even if it may not serve to improve them either – at least not 99% of the time. I do think that 1% of the time, using the active voice by way of habit could help improve the way we do science, for example by allowing other researchers conducting meta-analyses to understand the role of human actions in the performance of an experiment or, perhaps, to discern the gender, age or qualification of those researchers most often involved in designing experiments v. performing them.

Then again, science is a decidedly, and unfortunately, asocial affair, and the ‘amount’ of behavioural change required to have scientists regularly privilege the active over the passive is high.

This shouldn’t be the case vis-à-vis writers writing for the mainstream press – a domain in which the social matters just as much as the scientific, but often much more. Here, to recall the famous words of Marshall McLuhan, the actor is often the act (perhaps simply reflecting our times – in which to be a passive bystander to acts of violence is to condone the violence itself).

And when Markandey Katju, no less than a former judge of the Supreme Court of India, invokes claims while suppressing their provenance, it quickly becomes a political choice. It is as if (I think) he is thinking, “I don’t care if this is true or not; I must find a way to make this point so that I can then go on to link rapes to unemployment, especially the unemployment brought on by the BJP’s decisions.”

I concede that the act of writing presents a weak challenge – but it is a challenge nonetheless, and one you can strengthen through habituation.

Why scientists should read more

The amount of communicative effort to describe the fact of a ball being thrown is vanishingly low. It’s as simple as saying, “X threw the ball.” It takes a bit more effort to describe how an internal combustion engine works – especially if you’re writing for readers who have no idea how thermodynamics works. However, if you spend enough time, you can still completely describe it without compromising on any details.

Things start to get more difficult when you try to explain, for example, how webpages are loaded in your browser: because the technology is more complicated and you often need to talk about electric signals and logical computations – entities that you can’t directly see. You really start to max out when you try to describe everything that goes into launching a probe from Earth and landing it on a comet because, among other reasons, it brings together advanced ideas in a large number of fields.

At this point, you feel ambitious and you turn your attention to quantum technologies – only to realise you’ve crossed a threshold into a completely different realm of communication, a realm in which you need to pick between telling the whole story and risk being (wildly) misunderstood OR swallowing some details and making sure you’re entirely understood.

Last year, a friend and I spent dozens of hours writing a 1,800-word article explaining the Aharonov-Bohm quantum interference effect. We struggled so much because understanding this effect – in which electrons are affected by electromagnetic fields that aren’t there – required us to understand the wave-function, a purely mathematical object that describes real-world phenomena, like the behaviour of some subatomic particles, as well as mathematical-physical processes like non-Abelian transformations. Thankfully my friend was a physicist, a string theorist for good measure; but while this meant I could understand what was going on, we spent a considerable amount of time negotiating the right combination of metaphors to communicate what we wanted to communicate.

However, I’m even more grateful in hindsight that my friend was a physicist who understood the need to not exhaustively include details. This need manifests in two important ways. The first is the simpler, grammatical way, in which we construct increasingly involved meanings using a combination of subjects, objects, referrers, referents, verbs, adverbs, prepositions, gerunds, etc. The second way is more specific to science communication: in which the communicator actively selects a level of preexisting knowledge on the reader’s part – say, high-school education at an English-medium institution – and simplifies the slightly more complicated stuff while using approximations, metaphors and allusions to reach for the mind-boggling.

Think of it like building an F1 racecar. It’s kinda difficult if you already have the engine, some components to transfer kinetic energy through the car and a can of petrol. It’s just ridiculous if you need to start with mining iron ore, extracting oil and preparing a business case to conduct televisable racing sports. In the second case, you’re better off describing what you’re trying to do to the caveman next to you using science fiction, maybe poetry. The point is that to really help an undergraduate student of mechanical engineering make sense of, say, the Casimir effect, I’d rather say:

According to quantum mechanics, a vacuum isn’t completely empty; rather, it’s filled with quantum fluctuations. For example, if you take two uncharged plates and bring them together in a vacuum, only quantum fluctuations with wavelengths shorter than the distance between the plates can squeeze between them. Outside the plates, however, fluctuations of all wavelengths can fit. The energy outside will be greater than inside, resulting in a net force that pushes the plates together.

‘Quantum Atmospheres’ May Reveal Secrets of Matter, Quanta, September 2018

I wouldn’t say the following even though it’s much less wrong:

The Casimir effect can be understood by the idea that the presence of conducting metals and dielectrics alters the vacuum expectation value of the energy of the second-quantised electromagnetic field. Since the value of this energy depends on the shapes and positions of the conductors and dielectrics, the Casimir effect manifests itself as a force between such objects.

Casimir effect, Wikipedia

Put differently, the purpose of communication is to be understood – not learnt. And as I’m learning these days, while helping virologists compose articles on the novel coronavirus and convincing physicists that comparing the Higgs field to molasses isn’t wrong, this difference isn’t common knowledge at all. More importantly, I’m starting to think that my physicist-friend who really got this difference did so because he reads a lot. He’s a veritable devourer of texts. So he knows it’s okay – and crucially why it’s okay – to skip some details.

I’m half-enraged when really smart scientists just don’t get this, and accuse editors (like me) of trying instead to misrepresent their work. (A group that’s slightly less frustrating consists of authors who list their arguments in one paragraph after another, without any thought for the article’s structure and – more broadly – recognising the importance of telling a story. Even if you’re reviewing a book or critiquing a play, it’s important to tell a story about the thing you’re writing about, and not simply enumerate your points.)

To them – which is all of them because those who think they know the difference but really don’t aren’t going to acknowledge the need to bridge the difference, and those who really know the difference are going to continue reading anyway – I say: I acknowledge that imploring people to communicate science more without reading more is fallacious, so read more, especially novels and creative non-fiction, and stories that don’t just tell stories but show you how we make and remember meaning, how we memorialise human agency, how memory works (or doesn’t), and where knowledge ends and wisdom begins.

There’s a similar problem I’ve faced when working with people for whom English isn’t the first language. Recently, a person used to reading and composing articles in the passive voice was livid after I’d changed numerous sentences in the article they’d submitted to the active voice. They really didn’t know why writing, and reading, in the active voice is better because they hadn’t ever had to use English for anything other than writing and reading scientific papers, where the passive voice is par for the course.

I had a bigger falling out with another author because I hadn’t been able to perfectly understand the point they were trying to make, in sentences of broken English, and used what I could infer to patch them up – except I was told I’d got most of them wrong. And they couldn’t implement my suggestions either because they couldn’t understand my broken Hindi.

These are people that I can’t ask to read more. The Wire and The Wire Science publish in English but, despite my (admittedly inflated) view of how good these publications are, I’ve no reason to expect anyone to learn a new language because they wish to communicate their ideas to a large audience. That’s a bigger beast of a problem, with tentacles snaking through colonialism, linguistic chauvinism, regional identities, even ideologies (like mine – to make no attempts to act on instructions, requests, etc. issued in Hindi even if I understand the statement). But at the same time there’s often too much lost in translation – so much so that (speaking from my experience in the last five years) 50% of all submissions written by authors for whom English isn’t the first language don’t go on to get published, even if it was possible for either party to glimpse during the editing process that they had a fascinating idea on their hands.

And to me, this is quite disappointing because one of my goals is to publish a more diverse group of writers, especially from parts of the country underrepresented thus far in the national media landscape. Then again, I acknowledge that this status quo axiomatically charges us to ensure there are independent media outlets with science sections, publishing in as many languages as we need. A monumental task as things currently stand, yes, but nonetheless, we remain charged.

Caste, and science’s notability threshold

A webinar by The Life of Science on the construct of the ‘scientific genius’ just concluded, with Gita Chadha and Shalini Mahadev, a PhD scholar at HCU, as panellists. It was an hour long and I learnt a lot in this short time, which shouldn’t be surprising because, more broadly, we often don’t stop to question the conduct of science itself, how it’s done, who does it, their privileges and expectations, etc., and limit ourselves to the outcomes of scientific practice alone. The Life of Science is one of my favourite publications for making questions like these part of its core work (and a tiny bit also because it’s run by two good friends).

I imagine the organisers will upload a recording of the conversation at some point (edit: hopefully by Monday, says Nandita Jayaraj); they’ve also offered to collect the answers to many questions that went unanswered, only for lack of time, and publish them as an article. This was a generous offer and I’m quite looking forward to that.

I did have yet another question but I decided against asking it when, towards the end of the session, the organisers made some attempts to get me to answer a question about the media’s role in constructing the scientific genius, and I decided I’d work my question into what I could say. However, Nandita Jayaraj, one of The Life of Science‘s founders, ended up answering it to save time – and did so better than I could have. This being the case, I figured I’d blog my response.

The question itself that I’d planned to ask was this, addressed to Gita Chadha: “I’m confused why many Indians think so much of the Nobel Prizes. Do you think the Nobel Prizes in particular have affected the perception of ‘genius’?”

This query should be familiar to any journalist who, come October, is required to cover the Nobel Prize announcements for that year. When I started off at The Hindu in 2012, I’d cover these announcements with glee; I also remember The Hindu would carry the notes of the laureates’ accomplishments, published by the Nobel Foundation, in full on its famous science and tech. page the following day. At first I thought – and was told by some other journalists as well – that these prizes have the audience’s attention, so the announcements are in effect a chance to discuss science with the privilege of an interested audience, which is admittedly quite unusual in India.

However, today, it’s clear to me that the Nobel Prizes are deeply flawed in more ways than one, and that if journalists are using them as an opportunity to discuss science, it’s really not worth it. There are many other ways to cover science than on the back of a set of prizes that simply augments – instead of in any way compensating for – a non-ideal scientific enterprise. So when we celebrate the Nobel Prizes, we simply valorise the enterprise and its many structural deformities, not the least of which – in the Indian context – is the fact that it’s dominated by upper-caste men, mostly Brahmins, and riddled with hurdles for scholars from marginalised groups.

Brahmins are so good at science not because they’re particularly gifted but because they’re the only ones who seem to have the opportunity – a fact that Shalini elucidated very clearly when she recounted her experiences as a Dalit woman in science, especially when she said: “My genius is not going to be tested. The sciences have written me off.” The Brahmins’ domination of the scientific workforce has a cascading set of effects that we then render normal simply because we can’t conceive of a different way science can be, including sparing the Brahmin genius from scrutiny, as is the privilege of all geniuses.

(At a seminar last year, some speakers on stage had just discussed the historical roots of India being so bad at experimental physics and had taken a break. Then, I overheard an audience member tell his friend that while it’s well and good to debate what we can and can’t pin on Jawaharlal Nehru, it’s amusing that Brahmin experts will have discussions about Brahmin physicists without either party considering if it isn’t their caste sensibility that prevents them from getting their hands dirty!)

The other way the Nobel Prizes are bad for journalists indicts the norms of journalism itself. As I recently described vis-à-vis ‘journalistic entropy’, there is a sort of default expectation on the editorial side that reporters will cover the Nobel Prize announcements for their implicit newsworthiness instead of asking whether they should matter. I find such arguments about chronicling events without participating in them to be bullshit, especially when, as a Brahmin, I’m already part of Indian journalism’s caste problem.

Instead, I prefer to ask these questions, and answer them honestly in terms of the editorial policies I have the privilege to influence, so that I and others don’t end up advancing the injustices that the Nobel Prizes stand for. This is quite akin to my, and others’, older argument that journalists shouldn’t blindly offer their enterprise up as a platform for majoritarian politicians to hijack and use as their bullshit megaphones. But if journalists don’t recast their role in society accordingly, they – we – will simply continue to celebrate the Nobel laureates, and by proxy the social and political conditions that allowed the laureates in particular to succeed instead of others, and which ultimately feed into the Nobel Prizes’ arbitrarily defined ‘prestige’.

Note that the Nobel Prizes here are the perfect examples, but only examples nonetheless, to illustrate a wider point about the relationship between scientific eminence and journalistic notability. The Wire for example has a notability threshold: we’re a national news site, which means we don’t cover local events and we need to ensure what we do cover is of national relevance. As a corollary, such gatekeeping quietly implies that if we feature the work of a scientist, then that scientist must be a particularly successful one, a nationally relevant one.

And when we keep featuring and quoting upper-caste male scientists, we further the impression that only upper-caste male scientists can be good at science. Nothing says more about the extent to which the mainstream media has allowed this phenomenon to dominate our lives than the fact of The Life of Science‘s existence.

It would be foolish to think that journalistic notability and scientific eminence aren’t linked; as Gita Chadha clarified at the outset, one part of the ‘genius’ construct in Western modernity is the inevitability of eminence. So journalists need to work harder to identify and feature other scientists by redefining their notability thresholds – even as scientists and science administrators need to rejig their sense of the origins and influence of eminence in science’s practice. That Shalini thinks her genius “won’t be tested” is a brutal clarification of the shape and form of the problem.

Clarity and soundness

I feel a lot of non-science editors just switch off when they read science stuff.

A friend told me this earlier today, during yet another conversation about how many of the editorial issues that assail science and health journalism have become more pronounced during the pandemic (by dint of the pandemic being a science and health ‘event’). Even earlier, editors would switch off whenever they’d read science news, but then the news would usually be about a new study discussing something coffee could or couldn’t do to the heart.

While that’s worrying, the news was seldom immediately harmful, and lethal even more rarely. In a pandemic, on the other hand, bullshit that makes it to print hurts in two distinct ways: by making things harder for good health journalists to get through to readers with the right information and emphases, and of course by encouraging readers to do things that might harm them.

But does this mean editors need to know the ins and outs of the subject on which they’re publishing articles? This might seem like a silly question to ask but it’s often the reality in small newsrooms in India, where one editor is typically in charge of three or four beats at a time. And setting aside the argument that this arrangement is a product of complacency and of not taking science news seriously, more than of resource constraints, it’s not necessarily a bad thing either.

For example, a political editor may not be able to publish incisive articles on, say, developments in the art world, but they could still help by identifying reliable news sources and tapping their network to commission the right reporters. And if the organisation spends a lot more time covering political news, and with more depth, this arrangement is arguably preferable from a business standpoint.

Of course, such a setup is bound to be error-prone, but my contention is that it doesn’t deserve to be written off either, especially this year – when more than a few news publishers suddenly found themselves in the middle of a pandemic even as they couldn’t hire a health editor because their revenues were on the decline.

For their part, then, publishers can help minimise errors by being clear about what editors are expected to do. For example, a newsroom can’t possibly do a great job of covering science developments in the country without a science editor; axiomatically, non-science editors can only be expected to do a superficial job of standing in for a science editor.

This said, the question still stands: What are editors to do specifically, especially those suddenly faced with the need to cover a topic they’re only superficially familiar with? The answer to this question is important not just to help editors but also to maintain accountability. For example, though I’ve seldom covered health stories in the past, I also don’t get to throw my hands up as The Wire‘s science, health and environment editor when I publish a faulty story about, say, COVID-19. It is a bit of a ‘damned if you do, damned if you don’t’ situation, but it’s not entirely unfair either: it’s the pandemic, and The Wire can’t not cover it!

In these circumstances, I’ve found one particular way to mitigate the risk of damnation, so to speak, quite effective. I recently edited an article in which the language of a paragraph seemed off to me because it wasn’t clear what the author was trying to say, and I kept pushing him to clarify. Finally, after 14 emails, we realised he had made a mistake in the calculations, and we dropped that part of the article. More broadly, I’ve found that nine times out of ten, even pushbacks on editorial grounds can help identify and resolve technical issues. If I think the underlying argument has not been explained clearly enough, I send a submission back even if it is scientifically accurate or whatever.

Now, I’m not sure how robust this relationship is in the larger scheme of things. For example, this ‘mechanism’ will obviously fail when clarity of articulation and soundness of argument are not related, such as in the case of authors for whom English is a second language. For another, the omnipresent – and omnipotent – confounding factor known as unknown unknowns could keep me from understanding an argument even when it is well-made, thus putting me at risk of turning down good articles simply because I’m too dense or ignorant.

But to be honest, these risks are quite affordable when the choice is between damnation for an article I can explain and damnation for an article I can’t. I can (and do) improve the filter’s specificity/sensitivity 😄 by reading widely myself, to become less ignorant, and by asking authors to include a brief of 100-150 words in their emails clarifying, among other things, their article’s intended effect on the reader. And fortuitously, when authors are pushed to be clearer about the point they’re making, it seems they also tend to reflect on the parts of their reasoning that lie beyond the language itself.

The virus and the government

In December 2014, public health researchers and activists gathered at a public forum in Cambridge, Massachusetts, to discuss how our perception of diseases and their causative pathogens influences our ideas of what we can and can’t do to fight them. According to a report published in The Harvard Gazette:

The forum prompted serious reflection about structural inequalities and how public perceptions get shaped, which often leads to how resources are directed. “The cost of believing that something is so lethal and fatal is significant,” [Paul] Farmer said.

[Evelynn] Hammonds drew attention to how perceptions of risk about Ebola had been shaped mostly through the media, while noting that epidemics “pull the covers off” the ways that the poor, vulnerable, and sick are perceived.

These statements highlight the importance of a free press with a spine during a pandemic – instead of one that bends to the state’s will, or one that doesn’t respect the demands of good health journalism while purporting to practice it.

We’ve been seeing how pliant journalists, especially on news channels like India Today and Republic and in the newsrooms of digital outlets like Swarajya and OpIndia, try so hard so often to defend the government’s claims about doing a good job of controlling the COVID-19 epidemic in India. As a result, they’ve frequently participated – willingly or otherwise – in creating the impression that a) the virus is deadly, and b) all Muslims are deadly.

Neither of course is true. But while political journalists, who in India have generally been quite influential, have helped disabuse people of the latter notion, the former has attracted fewer rebuttals principally because the few good health journalists and the vocal scientists operating in the country are already overworked thanks to the government’s decoy acts on other fronts.

As things stand, beware anyone who says the novel coronavirus is deadly if only because a) all signs indicate that it’s far less damaging to human society than tuberculosis is every year, and b) it’s an awfully powerful excuse that allows the government to give up and simply blame the virus for a devastation that – oddly enough – seems to affect the poor, the disabled and the marginalised too far more than the law of large numbers can account for.