On AWSAR, Saransh, etc.

The Indian National Young Academy of Sciences has announced a “thesis competition for PhD students” called ‘Saransh’. A PhD student will have three minutes, and three slides, to describe their work via video, and winners stand to receive Rs 10,000, Rs 6,000 and Rs 4,000 in cash for the first three places. It’s a good opportunity, quite like the Department of Science and Technology’s ‘Augmenting Writing Skills for Articulating Research’ (AWSAR) programme, in which PhD students and postdocs in any science stream are invited to share short articles based on the following criteria (among others):

Entries would be invited from research scholars and PDFs who wish to publish their research in way that would interest non-scientific audiences. The story should focus on the answering the queries such as why does my research matter? Why is it important? Why does it interest researchers? Why should it interest the reader? objectively. The article must be based on the research undertaken by the individual researcher.

My question is: why do both AWSAR and Saransh ask students to communicate their own work? Is this a conscious decision on the part of the governing bodies or is it the opposite – a lack of application of mind? I think the difference matters because it’s no secret that effective communication of any form, and on any level, is nascent at best in this part of the world. This is why initiatives like AWSAR and Saransh exist in the first place. This said, if the decision to have participants write about their own work is an attempt to foster communication by eliminating one group of variables – having to decide which other work to pick and then assimilate it – that’s great, provided it is followed up and nurtured in some way.

For example, what happens to a participant after they win an AWSAR award, and what happens to their work? I think it lies idle, and will probably wind its way to an archive or compilation that a few people will visit/read; and the participant will presumably continue with their science work. (I raised this issue at the meeting with the Principal Scientific Advisor in January 2020; his colleagues made a note of it, but then COVID-19 happened and I don’t have my hopes up for change.) The AWSAR website also says “all awardees will be given an opportunity to attend Science Film Training Workshop organised by Vigyan Prasar”.

As such, it seems, AWSAR assumes that those who are interested enough to participate will also continue to communicate their work at regular intervals, and work to improve themselves. This is clearly far-fetched. The ramp should be longer and reach higher, leading up to a point where effective communication becomes second nature. And if the first step is to present one’s own work, the logical next is to present someone else’s work; ultimately, useful communication will require one to do both. And both AWSAR and Saransh, by virtue of being initiatives that already recognise the value of communicating science to an audience of non-experts, are well-placed to make this happen. At the least, they need to find some way to emphasise that communication is an endless process.

(One simple solution came to mind – to require winning students to use their prize-money on communication-related efforts, such as to start a blog or produce a multimedia story for publication in the press. This is related to another idea tossed around at the January 2020 meeting, that the Principal Scientific Advisor’s office help set up a network of journalistic editors with whom scientific communicators could consult. But where money from the government is concerned, the first thing that comes to mind is its failure to pay science students’ fellowship amounts on time – payments are often delayed by many months, even during the COVID-19 epidemic – so, to be fair, the government ought to have no say in how students choose to spend their prize money.)

But if I’ve assumed wrong, and both competitions focus on communicating one’s own work because the organisers don’t see the difference between that and communicating something one hasn’t spent a few years studying – leading all the way up to an absolute ignorance of issues like conflicts of interest (too many scientists take offence when I tell them this is why I’m turning down their article about their own research paper) – then AWSAR, Saransh, etc. could easily become gateways to a ‘corrupt’ form of communication that is synonymous with serving one’s own interests.

A similar symptom of these programmes’ organisers not having thought things through is that the eligibility criteria make no mention of how participants can and can’t communicate their work. The AWSAR and Saransh web-pages are special in the sense that they will be visited predominantly by people who aren’t yet prolific communicators but are interested in the art. As such, including, say, a suggestion that participants should not treat their audience as one big empty vessel, or an opportunity to engage in discussions with audience-members (instead of restricting that to Qs and As or, in Saransh’s case, queries from jury members), could ensure in a significant way that many people’s future efforts evolve from the right substrate of principles.

The problem with rooting for science

The idea that trusting in science involves a lot of faith, instead of reason, is lost on most people. More often than not, as a science journalist, I encounter faith through extreme examples – such as the Bloch sphere (used to represent the state of a qubit) or wave functions (‘mathematical objects’ used to understand the evolution of certain simple quantum systems). These and other similar concepts require years of training in physics and mathematics to understand. At the same time, science writers are often confronted with the challenge of making these concepts sensible to an audience that seldom has this training.

More importantly, how are science writers to understand them? They don’t. Instead, they implicitly trust the scientists they’re talking to to make sense. If I know that a black hole curves spacetime to such an extent that pairs of virtual particles created near its surface are torn apart – one particle entering the black hole never to exit and the other sent off into space – it’s not because I’m familiar with the work of Stephen Hawking. It’s because I read his books, read some blogs and scientific papers, spoke to physicists, and decided to trust them all. Every science journalist, in fact, has a set of sources they’re likely to trust over others. I even place my faith in some people over others, based on factors like personal character, past record, transparency, reflexivity, etc., so that I take what they produce with only the smallest pinch of salt, and build on their findings to develop my own. And this way, I’m already creating an interface between science and society – by matching scientific knowledge with the socially developed markers of reliability.

I choose to trust those people, processes and institutions that display these markers. I call this an act of faith for two reasons: 1) it’s an empirical method, so to speak; there is no proof in theory that such ‘matching’ will always work; and 2) I believe it’s instructive to think of this relationship as being mediated by faith if only to amplify its anti-polarity with reason. Most of us understand science through faith, not reason. Even scientists who are experts on one thing take the word of scientists on completely different things, instead of trying to study those things themselves (see ad verecundiam fallacy).

Sometimes, such faith is (mostly) harmless, such as in the ‘extreme’ cases of the Bloch sphere and the wave function. It is both inexact and incomplete to think that quantum superposition means an object is in two states at once. The human brain hasn’t evolved to comprehend superposition exactly; this is why physicists use the language of mathematics to make sense of this strange existential phenomenon. The problem – i.e. the inexactitude and the incompleteness – arises when a communicator translates the mathematics to a metaphor. Equally importantly, physicists are describing whereas the rest of us are thinking. There is a crucial difference between these activities that illustrates, among other things, the fundamental incompatibility between scientific research and science communication that communicators must first surmount.

As physicists over the past three or four centuries have relied increasingly on mathematics rather than the word to describe the world, physics, like mathematics itself, has made a “retreat from the word,” as literary scholar George Steiner put it. In a 1961 Kenyon Review article, Steiner wrote, “It is, on the whole, true to say that until the seventeenth century the predominant bias and content of the natural sciences were descriptive.” Mathematics used to be “anchored to the material conditions of experience,” and so was largely susceptible to being expressed in ordinary language. But this changed with the advances of modern mathematicians such as Descartes, Newton, and Leibniz, whose work in geometry, algebra, and calculus helped to distance mathematical notation from ordinary language, such that the history of how mathematics is expressed has become “one of progressive untranslatability.” It is easier to translate between Chinese and English — both express human experience, the vast majority of which is shared — than it is to translate advanced mathematics into a spoken language, because the world that mathematics expresses is theoretical and for the most part not available to our lived experience.

Samuel Matlack, ‘Quantum Poetics’, The New Atlantis, 2017

However, the faith becomes more harmful the further we move away from the ‘extreme’ examples – of things we’re unlikely to stumble on in our daily lives – and towards more commonplace ideas, such as ‘how vaccines work’ or ‘why GM foods are not inherently bad’. The harm emerges when we think we know something but are in fact in denial about how it is that we know that thing. Many of us think it’s reason; most of the time it’s faith. Remember when, in Friends, Monica Geller and Chandler Bing ask David the Scientist Guy how airplanes fly, and David says it has to do with Bernoulli’s principle and Newton’s third law? Monica then turns to Chandler with a knowing look and says, “See?!” To which Chandler says, “Yeah, that’s the same as ‘it has something to do with wind’!”

The harm is to root for science, to endorse the scientific enterprise and vest our faith in its fruits, without really understanding how these fruits are produced. Such understanding is important for two reasons.

First, if we trust scientists instead of presuming to know, or actually knowing, that we can vouch for their work, it would be vacuous to claim science is superior in any way to another enterprise that demands our faith, when science itself also receives our faith. Perhaps more fundamentally, we like to believe that science is trustworthy because it is evidence-based and it is tested – but the COVID-19 pandemic should have clarified, if it hasn’t already, the continuous (as opposed to discrete) nature of scientific evidence, especially if we also acknowledge that scientific progress is almost always incremental. Evidence can be singular and thus clear – like a new avian species, graphene layers superconducting electrons or tuned lasers cooling down atoms – or it can be necessary but insufficient, and therefore on a slippery slope – such as repeated genetic components in viral RNA, a cigar-shaped asteroid or water shortage in the time of climate change.

Physicists working with giant machines to spot new particles and reactions – all of which are detected indirectly, through their imprints on other well-understood phenomena – have two important thresholds for the reliability of their findings: if the chance of X (say, “spotting a particle of energy 100 GeV”) being false is 0.27%, it’s good enough to be evidence; if the chance of X being false is 0.00006%, then it’s a discovery (i.e., “we have found the particle”). But at what point can we be sure that we’ve indeed found the particle we were looking for if the chance of being false will never reach 0%? One way, for physicists specifically, is to combine the experiment’s results with what they expect to happen according to theory; if the two match, it’s okay to think that even a less reliable result will likely be borne out. Another possibility (in line with Karl Popper’s philosophy) is that a result that is expected to be true, and is subsequently found to be true, remains true until we have evidence to the contrary. But as suitable as this answer may be, it still doesn’t neatly fit the binary ‘yes’/’no’ we’re used to, and which we often expect from scientific endeavours as well (see experience v. reality).
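
(To make those two percentages concrete: they correspond to the conventional 3-sigma ‘evidence’ and 5-sigma ‘discovery’ thresholds, expressed as the two-sided tail probability of a normal distribution. The short Python sketch below is my own illustration of that correspondence, not something drawn from the programmes or papers discussed in this post; the function name is made up for the example.)

```python
# A minimal sketch of how the 3-sigma and 5-sigma conventions map to the
# percentages quoted above. The p-value here is the two-sided probability
# of a random fluctuation at least that many standard deviations large.
import math

def two_sided_p_value(sigma):
    # Tail probability of a standard normal beyond +/- sigma
    return math.erfc(sigma / math.sqrt(2))

for sigma, label in ((3, "evidence"), (5, "discovery")):
    p = two_sided_p_value(sigma)
    print(f"{sigma} sigma ({label}): p = {p:.2e}, i.e. about {p * 100:.5f}%")

# Prints approximately:
# 3 sigma (evidence): p = 2.70e-03, i.e. about 0.26998%
# 5 sigma (discovery): p = 5.73e-07, i.e. about 0.00006%
```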

(Minor detour: While rational solutions are ideally refutable, faith-based solutions are not. Instead, the simplest way to reject their validity is to use extra-scientific methods, and more broadly deny them power. For example, if two people were offering me drugs to suppress the pain of a headache, I would trust the one who has a state-sanctioned license to practice medicine and is likely to lose that license, even temporarily, if his prescription is found to have been mistaken – that is, by asserting the doctor as the subject of democratic power. Axiomatically, if I know that Crocin helps manage headaches, it’s because, first, I trusted the doctor who prescribed it and, second, Crocin has helped me multiple times before, so empirical experience is on my side.)

Second, if we don’t know how science works, we become vulnerable to believing pseudoscience to be science as long as the two share some superficial characteristics, like, say, the presence and frequency of jargon or a claim’s originator being affiliated with a ‘top’ institute. The authors of a scientific paper to be published in a forthcoming edition of the Journal of Experimental Social Psychology write:

We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.

(Caveats: 1. We could apply the point of this post to this study itself; 2. I haven’t checked the study’s methods and results with an independent expert, and I’m also mindful that this is psychology research and that its conclusions should be taken with a pinch of salt until independent scientists have successfully replicated them.)

Later from the same paper:

Our four experiments and meta-analysis demonstrated that people, and in particular people with higher trust in science (Experiments 1-3), are vulnerable to misinformation that contains pseudoscientific content. Among participants who reported high trust in science, the mere presence of scientific labels in the article facilitated belief in the misinformation and increased the probability of dissemination. Thus, this research highlights that trust in science ironically increases vulnerability to pseudoscience, a finding that conflicts with campaigns that promote broad trust in science as an antidote to misinformation but does not conflict with efforts to install trust in conclusions about the specific science about COVID-19 or climate change.

In terms of the process, the findings of Experiments 1-3 may reflect a form of heuristic processing. Complex topics such as the origins of a virus or potential harms of GMOs to human health include information that is difficult for a lay audience to comprehend, and requires acquiring background knowledge when reading news. For most participants, seeing scientists as the source of the information may act as an expertise cue in some conditions, although source cues are well known to also be processed systematically. However, when participants have higher levels of methodological literacy, they may be more able to bring relevant knowledge to bear and scrutinise the misinformation. The consistent negative association between methodological literacy and both belief and dissemination across Experiments 1-3 suggests that one antidote to the influence of pseudoscience is methodological literacy. The meta-analysis supports this.

So rooting for science per se is not just not enough; it could be harmful vis-à-vis public support for science itself. For example (and without taking names), in response to right-wing propaganda related to India’s COVID-19 epidemic, quite a few videos produced by YouTube ‘stars’ have advanced dubious claims. They’re not dubious at first glance, not least because they purport to counter pseudoscientific claims with scientific knowledge, but they are – either for insisting on a degree of certainty in the results that neither exists nor is achievable, or for making pseudoscientific claims of their own, just wrapped up in technical lingo so they’re more palatable to those supporting science over critical thinking. Some of these YouTubers, and in fact writers, podcasters, etc., are even blissfully unaware of how wrong they often are. (At least one of them was also reluctant to edit a ‘finished’ video to make it less sensational despite repeated requests.)

Now, where do these ideas leave (other) science communicators? In attempting to bridge a nearly unbridgeable gap, are we doomed to swing only between most and least unsuccessful? I personally think that this problem, such as it is, is comparable to Zeno’s arrow paradox. To use Wikipedia’s words:

He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.

To ‘break’ the paradox, we need to identify and discard one or more primitive assumptions. In the arrow paradox, for example, one could argue that time is not composed of a stream of “duration-less” instants, that each instant – no matter how small – encompasses a vanishingly short but not nonexistent passage of time. With popular science communication (in the limited context of translating something that is untranslatable sans inexactitude and/or incompleteness), I’d contend the following:

  • Awareness: ‘Knowing’ and ‘knowing of’ are significantly different – and the difference, I hope, is self-explanatory. Example: I’m not fluent in the physics of cryogenic engines but I’m aware that they’re desirable because liquefied hydrogen has the highest specific impulse of all rocket fuels.
  • Context: As I’ve written before, a unit of scientific knowledge that exists in relation to other units of scientific knowledge is a different object from the same unit of scientific knowledge existing in relation to society.
  • Abstraction: 1. perfect can be the enemy of the good, and imperfect knowledge of an object – especially a complicated compound one – can still be useful; 2. when multiple components come together to form a larger entity, the entity can exhibit some emergent properties that one can’t derive entirely from the properties of the individual components. Example: one doesn’t have to understand semiconductor physics to understand what a computer does.

An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. (Sometimes I browse math articles in Wikipedia simply to immerse myself in their majestic incomprehensibility, like visiting a foreign planet.)

Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

Adam Kirsch, ‘The Ontology of Pop Physics’, Tablet Magazine, 2020

But by offering these reasons, I don’t intend to over-qualify science communication – i.e. claim that, given enough time and/or other resources, a suitably skilled science communicator will be able to produce a non-mathematical description of, say, quantum superposition that is comprehensible, exact and complete. Instead, it may be useful for communicators to acknowledge that there is an immutable gap between common English (the language of modern science) and mathematics, beyond which scientific expertise is unavoidable – in much the same way communicators must insist that the farther the expert strays into the realm of communication, the closer they’re bound to get to a boundary beyond which they must defer to the communicator.

Scicommers as knowledge producers

Reading the latest edition of Raghavendra Gadagkar’s column in The Wire Science, ‘More Fun Than Fun’, about how scientists should become communicators and communicators should be treated as knowledge-producers, I began wondering if the knowledge produced by the latter is in fact not the same knowledge but something entirely new. The idea that communicators simply make the scientists’ Promethean fire more palatable to a wider audience has led, among other things, to a belief widespread among scientists that science communicators are adjacent to science and aren’t part of the enterprise producing ‘scientific knowledge’ itself. And this perceived adjacency often belittles communicators by trivialising the work that they do.

Explanatory writing that “enters into the mental world of uninitiated readers and helps them understand complex scientific concepts”, to use Gadagkar’s words, takes copious and focused work. (And if it doesn’t result in papers, citations and h-indices, just as well: no one should become trapped in bibliometrics the way so many scientists have.) In fact, describing the work of communicators as simply making science more palatable dismisses a specific kind of proof of work that is present in the final product – in much the same way scientists’ proofs of work are implicit in new solutions to old problems, development of new technologies, etc. The knowledge that people writing about science for a wider audience produce is, in my view, entirely distinct, even if the nature of the task at hand is explanatory.

In his article, Gadagkar writes:

Science writers should do more than just reporting, more than translating the gibberish of scientists into English or whatever language they may choose to write in. … Science writers are in a much better position to make lateral comparisons, understand the process of science, and detect possible biases and conflicts of interest, something that scientists, being insiders, cannot do very well. So rather than just expect them to clean up our messy prose, we should elevate science writers to the role of knowledge producers.

My point is about knowledge arising from a more limited enterprise – i.e. explanation – but which I think can be generalised to all of journalism as well (and to other expository enterprises). And in making this point, I hope my two-pronged deviation from Gadagkar’s view is clear. First, science journalists should be treated as knowledge producers, but not in the limited confines of the scientific enterprise and certainly not just to expose biases; instead, communicators as knowledge producers exist in a wider arena – that of society, including its messy traditions and politics, itself. Here, knowledge is composed of much more than scientific facts. Second, science journalists are already knowledge producers, even when they’re ‘just’ “translating the gibberish of scientists”.

Specifically, the knowledge that science journalists produce differs from the knowledge that scientists produce in at least two ways: it is accessible and it renders scientific knowledge socially relevant. What scientists find is not what people know. Society broadly synthesises knowledge from information that it weights together with extra-scientific considerations, including biases like “which university is the scientist affiliated with” and concerns like “will the finding affect my quality of life”. Journalists are influential synthesisers who work with or around these and other psychosocial stressors to contextualise scientific findings, and thus science itself. Even when they write drab stories about obscure phenomena, they make an important choice: “this is what the reader gets to read, instead of something else”.

These properties taken together encompass the journalist’s proof of work, which is knowledge accessible to a much larger audience. The scientific enterprise is not designed to produce this particular knowledge. Scientists may find that “leaves use chlorophyll to photosynthesise sunlight”; a skilled communicator will find that more people know this, know why it matters and know how they can put such knowledge to use, thus fostering a more empowered society. And the latter is entirely new knowledge – akin to an emergent object that is greater than the sum of its scientific bits.

Anti-softening science for the state

The group of ministers (GoM) report on “government communication” has recommended that the government promote “soft topics” in the media like “yoga” and “tigers”. We can only speculate what this means, and that shouldn’t be hard. The overall spirit of the document is insecurity and paranoia, manifested as fantasies of reining in the country’s independent media into doing the government’s bidding. The promotion of “soft” stories is in line with this aspiration – “soft” here can only mean stories that don’t criticise the government, its actions or policies, and that serve as ‘harmless entertainment’ for a politically inert audience. It’s also no coincidence that the two examples on offer of such stories skirt the edges of health and environmental journalism; other examples are sure to include reports of scientific discoveries.

Science is closely related to the Indian state in many ways. The current government in particular, in power since 2014, has been promoting application-oriented R&D (a bias especially visible in budgetary allocations); encouraging ill-prepared research facilities to self-finance; privileging certain private interests (esp. the Reliance and Adani groups) vis-à-vis natural resources like coal, coastal zones and spectrum allocations; pillaging India’s ecological commons for industrialisation; promoting pseudoscience (which further disempowers those closer to society’s margins); interfering at universities by appointing vice-chancellors friendly to the ruling party (and if that doesn’t work, jailing students on ridiculous charges that include dissent); curtailing academic freedom; and hounding scientists and institutions that threaten its preferred narratives.

With this in mind, it’s important for science journalism outlets and science journalists to not become complicit – inadvertently or otherwise – in the state project to “soften” science, and to start reporting, if they aren’t already, on issues with a closer eye on their repercussions for wider society. The idea that science journalism can or should be objective the way science is makes no sense, because the idea that science is an objective enterprise is itself nonsensical. The scientific method is a technique to obtain information about the natural universe while steadily subtracting the influence of human biases and other limitations. However, what scientists choose to study, how they design their studies and what is ultimately construed to be knowledge are all deeply human enterprises.

On top of this, science journalism is driven by journalists’ sense of good and bad: we write favourably about the former and argue against the latter. We write about some telescope unravelling a long-standing cosmogonic problem and also publish an article calling out homeopathy’s bullshit. We write about a scientific paper that uses ingenious methods to prove its point and also call out Indian academia as an unsafe space for queer-trans people.

Some have advanced a defence that simply focusing on “good science” can inculcate in the audience a sense of what is “worthy” and “desirable” while denying “bad science” the platform and publicity it seeks. This is objectionable on two counts.

First, who decides what is “worthy”? For example, some scientists, especially in the ‘senior’ cadre and the more influential and/or powerful for it, make this choice by deferring to the wisdom of scientific journals, chosen according to their impact factors, and what the journals have deemed worthy of publishing. But abiding by this heuristic only means we continue to participate in and extend the lifetime of the existing ways of knowledge production that privilege white scientists, male scientists and richer scientists – and sensational positive results on topics that the scientists staffing the journals’ editorial boards would like to focus on.

Second, being limited to goodness at a time when badness abounds is bad – or at the very least severely tone-deaf (though I’m disinclined to be so charitable). Very broadly, that science is inherently amoral is by now a truism. There have been far too many incidents in history for anyone to still be able to overlook, in good faith, the fact that science’s prescriptions unguided by human morals and values are quite likely to lead to humanitarian disasters. We may even be living through one such. Scientists’ rapid and successful development of new vaccines against a new pathogen was followed by a global rush to acquire enough doses. But the world’s industrial and economic powers have ensured that the strongest among them have enough to vaccinate their entire populations more than once, have blocked petitions at global fora to loosen patents on these vaccines to expand manufacturing and distribution, have forced desperate countries to purchase doses at prices higher than those for developed blocs like the EU, and have allowed corporate behemoths to make monumental profits even as they force third-world nations to pledge sovereign assets to secure supplies. It’s fallacious to claim scientific labour makes the world a better place when the fruits of such labour must still be filtered, like so much else, through the capitalist sieve.

There are many questions for the science journalist to consider here: why have some communities in certain countries been affected more than others? Why is there so little data on the vaccines’ consequences for pregnant women? Do we know enough to discuss the pandemic’s effects on women? Why, at a time when so many scientists and engineers were working to design new ventilators, was there no unified standard to ensure usability? If the world has demonstrated that it’s possible to design, test, manufacture and administer vaccines against a new virus in such a short time, why have we been waiting so long for effective defences against neglected tropical diseases? How do the racial, gender and ethnic identities of clinical-trial participants affect trial outcomes? Is it ethical for countries that hosted vaccine clinical trials to get the first doses? Should we compulsorily prohibit patents on drugs, therapies and devices important to ending pandemics? If so, what might the consequences be for drug development? And what good is a vaccine if we can’t also ensure all the world’s 7.x billion people can be vaccinated simultaneously?

The pandemic isn’t a particularly ‘easy’ example either. For example, if the government promises to develop new supercomputers, who can use them and what problems will they be used to solve? How can we improve the quality and quantity of research conducted at institutes funded by state governments? Why do so many scientists at public universities plagiarise scientific papers? On what basis are the winners of the S.S. Bhatnagar Award chosen? Should we formally do away with subscription-funded scientific journals in favour of open-access publishing, overlay journals and post-publication peer-review? Is methane really a “clean fuel” even though its extraction and transportation will impose a considerable dirty cost? Why can’t we have more GM foods in the market even though the science is ‘good’? Is it worthwhile to invest Rs 10,000 crore in a human spaceflight programme that lacks long-term vision? And so forth.

Simply focusing on “good science” at our present time is not enough. I also reject the argument that it’s not for science journalists to protect or defend science – for the simple reason that science, whatever it’s interpreted to mean, is not the preserve of scientists. As an enterprise rooted in its famous method, science is a tool of empowerment: it encourages discovery and deliberation; I’m not sure if it’s fair to say it encourages dissent as well, but there is evidence that science can accommodate it without resorting to violence and subjugation.

It’s not for nothing that I’m more comfortable holding up an aspirin tablet for someone with a headache than a jar of leaves from the Patanjali Ayurved stable: being able to know how and why something works is power in the same way knowing how the pharmaceutical industry manipulates markets, how to file an RTI application, what makes an FIR valid or invalid, what the election commission’s model code of conduct stipulates or what kind of land a mall can be built on is power. All of it represents control, especially the ability to say ‘no’ and mean it.

This is ultimately what the GoM report fantasises about – and what the present government desires: the annulment of individual and institutional resistance, one subset of which is the neutralisation of science’s ability to provoke questions about atoms and black holes as much as about the circumstances in which scientists study them, about the nature, utility and purpose of knowledge, and the relationships between science, capital and the state.


Addendum

In January 2020, the Office of the Principal Scientific Adviser (PSA) to the Government of India organised a meeting with science journalists and communicators from around the country to discuss what the two parties could do for each other. We journalists and communicators aired a lot of grievances during the meeting as well as suggestions on fixing long-standing and/or particularly thorny problems (some notes here).

In light of the government’s renewed attention to curbing press freedom, and of ludicrous suggestions in the report (such as one by S. Gurumurthy that the news should be a “mixture of truth and untruth”), I’m not sure where that leaves the PSA’s plans for future consultation – nor, considering that parts of the report seemingly manufactured consent, whether good-faith consultation will be possible going ahead. I can only hope that members of this community at least evoke and keep the faith.

The commentariot

The following post is an orange flag – a quieter alarm raised in anticipation of something worse that hasn’t transpired yet but is likely in the offing. Earlier today, at the end of a call with a scientist for a story, the scientist implied that my job – as science journalist – required nothing of me but to be a commentator, whereas his required him to be a ‘maker’ and that that was superior. At the outset, this is offensive because if you don’t think journalism requires both creative and non-creative work to conduct ethically, you either don’t know what journalism is or you’re taking its moving parts for granted.

But the scientist’s comment merited an orange flag, I thought, because it’s the fourth time I’ve heard something like that in the last three months – and is a point of view I can’t help but think is attached in some way to our present national government and the political climate it has engendered. (All four scientists worked for government-funded institutes but I say this only because of the slant of their own views.)

The Modi government is, among many other things, a cult of personality centred on the prime minister and his fabled habit of getting things done, even if they’re undemocratic or just unconstitutional. Many of the government’s reforms today are often cast as being in stark contrast to the Congress’s rule of the country – that “Modi did what no other prime minister had dared.” The illegitimacy of these boasts aside, the government and its supporters are obviously proud of their ability to act swiftly and have rendered inaction in any form a sin (to the point where this government has also been notorious for repackaging previous governments’ schemes as its own).

They have also branded many activities as sinful for the same reason – because their practice is much too tempered for their taste, or because their outcomes, they believe, “don’t go far enough”. Journalism is one of them. A conversation a few months ago with a person who was both scientist and government official alerted me to how real this sentiment might be in government circles, when they said, “I have real work unlike you and I will get back to you with a concrete answer in two or three days.” The other scientists also said something similar. The right-wing has often cast the mainstream Indian journalism establishment as elite, classist, corrupt and apologist, and the accusation that it doesn’t do any real work – “certainly not to the nation’s benefit” – simply extends this view.

But for scientists to denigrate the work of science journalists, especially since their training should have alerted them to different ways in which science is both good and hard, is more than dispiriting. It’s a sign that “journalists don’t do good work” is more than just an ideological spearpoint used to undermine adversarial journalism, that it is something at least parts of the establishment believe to be true. And it also suggests that the stories we publish are being read as nothing more than the babble of a lazy commentariot.

A Q&A about my job and science journalism

A couple of weeks ago, some students from a university in South India got in touch to ask a few questions about my job and about science communication. The correspondence was entirely over email, and I’m pasting it in full below (with permission). I’ve edited a few parts in one of two ways – to make myself clearer or to hide sensitive information – and removed one question because its purpose was clarificatory.

1) What does your role as a science editor look like day to day?

My day as science editor begins at around 7 am. I start off by catching up on the day’s headlines and other news, especially all the major newspapers and social media channels. I also handle a part of The Wire Science’s social media presence, so I schedule some posts in the first hour.

Then, from 8 am onwards, I begin going through the publishing schedule – which is a document I prepare on the previous evening, listing all the articles that writers are expected to file on that day, as well as what I need to edit/publish and in which position on the homepage. At 9.30 am, my colleagues and I get on a conference call to discuss the day’s top stories and to hear from our reporters on which stories they will be pursuing that day (and any stories we might be chasing ourselves). The call lasts for about an hour.

From 10.30-11 am onwards, I edit articles, reply to emails, commission new articles, discuss potential story ideas with some reporters, scientists and my colleagues, check on the news cycle every now and then, make sure the site is running smoothly, discuss changes or tweaks to be made to the front-end with our tech team, and keep an eye on my finances (how much I’ve commissioned for, who I need to pay, payment deadlines, pending allocations, etc.).

All of this ends at about 4.30 pm. I close my laptop at that point but I continue to have work until 6 pm or so, mostly in the form of emails and maybe some calls. The last thing I do is prepare the publishing schedule for the next day. Then I shut shop.

2) With leading global newspapers restructuring the copy desk, what are the changes the Indian newspapers have made in the copy desk after the internet boom?

I’m not entirely familiar with the most recent changes because I stopped working with a print establishment six years ago. When I was part of the editorial team at The Hindu, the most significant change related to the advent of the internet had less to do with the copy desk per se and more to do with the business model. At least the latter seemed more pressing to me.

But this said, in my view there is a noticeable difference between how one might write for a newspaper and for the web. So a more efficient copy-editing team has to be able to handle both styles, as well as be able to edit copy to optimise for audience engagement and readability both online and offline.

3) Indian publications are infamous for mistakes in the copy. Is this a result of competition for breaking news or a lack of knack for editing?

This is a question I have been asking myself since I started working. I think a part of the answer you’re looking for lies in the first statement of your question. Indian copy-editors are “infamous for mistakes” – but mistakes according to whom?

The English language came to India in different ways; it is not homegrown. British colonists brought English to India, so English took root here as the language of administration. English is the de facto language worldwide for the conduct of science, so scientists have to learn it. Similarly, there are other ways in which the use of English has been rendered useful and important and necessary. English wasn’t all these things in and of itself, not without its colonial underpinnings.

So today, in India, English is – among other things – the language you learn to be employable, especially with MNCs and the like. And because of its historical relationships, English is taught only in certain schools, schools that typically have mostly students from upper-caste/upper-class families. English is also spoken only by certain groups of people, who may wish to guard it as a class symbol. I’m speaking very broadly here. My point is that English is reserved typically for people who can afford it, both financially and socio-culturally. Not everyone speaks ‘good’ English (as defined by one particular lexicon or whatever) nor can they be expected to.

So what you may see as mistakes in the copy may just be a product of people not being fluent in English, and composing sentences in ways other than you might as a result. India has a contested relationship with English and that should only be expected at the level of newsrooms as well.

However, if your question had to do with carelessness among copy-editors – I don’t know if that is a very general problem (nor do I know what the issues might be in a newsroom publishing in an Indian language). Yes, in many establishments, the management doesn’t pay as much attention to the quality of writing as it should, perhaps in an effort to cut costs. And in such cases, there is a significant quality cost.

But again, we should ask ourselves as to whom that affects. If a poorly edited article is impossible to read or uses words and ideas carelessly, or twists facts, that is just bad. But if a poorly composed article is able to get its points across without misrepresenting anyone, whom does that affect? No one, in my opinion, so that is okay. (It could also be the case that the person whose work you’re editing sees the way they write as a political act of sorts, and if you think such an issue might be in play, it becomes important to discuss it with them.)

Of course, the matter of getting one’s point across is very subjective, and as a news organisation we must ensure the article is edited to the extent that there can be no confusion whatsoever – and edited that much more carefully if it’s about sensitive issues, like the results of a scientific study. And at the same time we must also stick to a word limit and think about audience engagement.

My job as the editor is to ensure that people are understood, but in order to help them be understood better and better, I must be aware of my own privileges and keep subtracting them from the editorial equation (in my personal case: my proficiency with the English language, which includes many Americanisms and Britishisms). I can’t impose my voice on my writers in the name of helping them. So there is a fine line here that editors need to tread carefully.

4) What are the key points that a science editor should keep in mind while dealing with copy?

Aside from the points I raised in my previous answer, there are some issues that are specific to being a good science editor. I don’t claim to be good (that is for others to say) – but based on what I have seen in the pages of other publications, I would only say that not every editor can be a science editor without some specific training first. This is because there are some things that are specific to science as an enterprise, as a social affair, that are not immediately apparent to people who don’t have a background in science.

For example, the most common issue I see is in the way scientific papers are reported – as if they are the last word on that topic. Many people, including many journalists, seem to think that if a scientific study has found coffee cures cancer, then it must be that coffee cures cancer, period. But every scientific paper is limited by the context in which the experiment was conducted, by the limits of what we already know, etc.

I have heard some people define science as a pursuit of the truth but in reality it’s a sort of opposite – science is a way to subtract uncertainty. Imagine shining a torch within a room as you’re looking for something, except the torch can only find things that you don’t want, so you can throw them away. Then you turn on the lights. Papers are frequently wrong and/or are updated to yield new results. This seldom makes the previous paper directly fraudulent or wrong; it’s just the way science works. And this perspective on science can help you think through what a science editor’s job is as well.

Another thing that’s important to know is that science progresses in incremental fashion and that the more sensational results are either extremely unlikely or simply misunderstood.

If you are keen on plumbing deeper depths, you could also consider questions about where authority comes from and how it is constructed in a narrative, the importance of indeterminate knowledge-states, the pros and cons of scientism, what constitutes scientific knowledge, how scientific publishing works, etc.

A science editor has to know all these things and ensure that in the process of running a newsroom or editing a publication, they don’t misuse, misconstrue or misrepresent scientific work and scientists. And in this process, I think it’s important for a science editor to not be considered to be subservient to the interests of science or scientists. Editors have their own goals, and more broadly speaking science communication in all forms needs to be seen and addressed in its own right – as an entity that doesn’t owe anything to science or scientists, per se.

5) In a country where press freedom is often sacrificed, how does one deal with political pieces, especially when there is proof against a matter concerning the government?

I’m not sure what you mean by “proof against a matter concerning the government.” But in my view, the likelihood of different outcomes depends on the business model. If, for example, you the publisher make a lot of money from a hotshot industrialist and his company, then obviously you are going to tread carefully when handling stories about that person or the company. How you make your money dictates who you are ultimately answerable to. If you make your money by selling newspapers to your readers, or collecting donations from them like The Wire does, you are answerable to your readers.

In this case, if we are handling a story in which the government is implicated in a bad way, we will do our due diligence and publish the story. This ‘due diligence’ is important: you need to be sure you have the requisite proof, that all parts of the story are reliable and verifiable, that you have documentary evidence of your claims, and that you have given the implicated party a chance to defend themselves (e.g. by being quoted in the story).

This said, absolute press freedom is not so simple to achieve. It doesn’t just need brave editors and reporters. It also needs institutions that will protect journalists’ rights and freedoms, and shield them reliably from harm or malice. If the courts are not likely to uphold a journalist’s rights or if the police refuse proper protection when the threat of physical violence is apparent, blaming journalists for “sacrificing” press freedom is ignorant. There is a risk-benefit analysis worth having here, if only to remember that while the benefit of a free press is immense, the risks shouldn’t be taken lightly.

6) Research papers are lengthy and editors have deadlines. How do you make sure to communicate information with the right context for a wider audience?

Often the quickest way to achieve this is to pick your paper and take it to an independent scientist working in the same field. These independent comments are important for the story. But specific to your question, these scientists – if they have the time and are so inclined – can often also help you understand the paper’s contents properly, and point out potential issues, flaws, caveats, etc. These inputs can help you compose your story faster.

I would also say that if you are an editor looking for an article on a newly published research paper, you would be better off commissioning a reporter who is familiar, to whatever extent, with that topic. Obviously if you assign a business reporter to cover a paper about nanofluidic biosensors, the end result is going to be somewhere between iffy and disastrous. So to make sure the story has got its context right, I would begin by assigning the right reporter and making sure they’ve got comments from independent scientists in their copy.

7) What are some of the major challenges faced by science communicators and reporters in India?

This is a very important question, and I can’t hope to answer it concisely or even completely. In January this year, the office of the Principal Scientific Advisor to the Government of India organised a meeting with a couple dozen science journalists and communicators from around India. I was one of the attendees. Many of the issues we discussed, which would also be answers to your question, are described here.

If, for the purpose of your assignment, you would like me to pick one – I would go with the fact that science journalism, and science communication more broadly, is not widely acknowledged as an enterprise in its own right. As a result, many people don’t see the value in what science journalists do. A second and closely related issue is that scientists often don’t respond on time, if they respond at all. I’m not sure of the extent to which this is an etiquette issue. But by calling it an etiquette issue, I also don’t want to overlook the possibility that some scientists don’t respond because they don’t think science journalism is important.

I was invited to attend the Young Investigators’ Meeting in Guwahati in March 2019. There, I met a big bunch of young scientists who really didn’t know why science journalism exists or what its purpose is. One of them seemed to think that since scientific papers pass through peer review and are published in journals, science journalists are wasting their time by attempting to discuss the contents of those papers with a general audience. This is an unnecessary barrier to my work – but it persists, so I must constantly work around or over it.

8) What are the consequences if a research paper has been misreported?

The consequence depends on the type and scope of misreporting. If you have consulted an independent scientist in the course of your reporting, you give yourself a good chance of avoiding reporting mistakes.

But of course mistakes do slip through. And with an online publication such as The Wire – if a published article is found to have a mistake, we usually correct the mistake once it has been pointed out to us, along with a clarification at the bottom of the article acknowledging the issue and recording the time at which the change was made. If you write an article that is printed and is later found to have a mistake, the newspaper will typically issue an erratum (a small note correcting a mistake) the next day.

If an article is found to have a really glaring mistake after it is published – and I mean an absolute howler – the article could be taken down or retracted from the newspaper’s record along with an explanation. But this rarely happens.

9) In many ways, copy editing disconnects you from your voice. Does it hamper your creativity as a writer?

It’s hard to find room for one’s voice in a news publication. About nine-tenths of the time, each of us is working on news copy, in which a voice is neither expected nor able to add much value of its own. This said, when there is room to express oneself more, to write in one’s voice, so to speak, copy-editing doesn’t have to remove it entirely.

Working with voices is a tricky thing. When writers pitch or write articles in which their voices are likely to show up, I always ask them beforehand as to what they intend to express. This intention is important because it helps me edit the article accordingly (or decide whether to edit it at all). The writer’s voice is part of this negotiation. Like I said before, my job as the editor is to make sure my writers convey their points clearly and effectively. And if I find that their voice conflicts with the message or vice versa, I will discuss it with them. It’s a very contested process and I don’t know if there is a black-and-white answer to your question.

It’s always possible, of course, that you’re working with a bad editor who just remodels your work to suit their needs without checking with you. But short of that, it’s a negotiation.

Why scientists should read more

The amount of communicative effort to describe the fact of a ball being thrown is vanishingly low. It’s as simple as saying, “X threw the ball.” It takes a bit more effort to describe how an internal combustion engine works – especially if you’re writing for readers who have no idea how thermodynamics works. However, if you spend enough time, you can still completely describe it without compromising on any details.

Things start to get more difficult when you try to explain, for example, how webpages are loaded in your browser: because the technology is more complicated and you often need to talk about electric signals and logical computations – entities that you can’t directly see. You really start to max out when you try to describe everything that goes into launching a probe from Earth and landing it on a comet because, among other reasons, it brings together advanced ideas in a large number of fields.

At this point, you feel ambitious and you turn your attention to quantum technologies – only to realise you’ve crossed a threshold into a completely different realm of communication, a realm in which you need to pick between telling the whole story and risking being (wildly) misunderstood OR swallowing some details and making sure you’re entirely understood.

Last year, a friend and I spent dozens of hours writing a 1,800-word article explaining the Aharonov-Bohm quantum interference effect. We struggled so much because understanding this effect – in which electrons are affected by electromagnetic fields that aren’t there – required us to understand the wave-function, a purely mathematical object that describes real-world phenomena, like the behaviour of some subatomic particles, and mathematical-physical processes like non-Abelian transformations. Thankfully my friend was a physicist, a string theorist for good measure; but while this meant that I could understand what was going on, we spent a considerable amount of time negotiating the right combination of metaphors to communicate what we wanted to communicate.

However, I’m even more grateful in hindsight that my friend was a physicist who understood the need to not exhaustively include details. This need manifests in two important ways. The first is the simpler, grammatical way, in which we construct increasingly involved meanings using a combination of subjects, objects, referrers, referents, verbs, adverbs, prepositions, gerunds, etc. The second way is more specific to science communication: in which the communicator actively selects a level of preexisting knowledge on the reader’s part – say, high-school education at an English-medium institution – and simplifies the slightly more complicated stuff while using approximations, metaphors and allusions to reach for the mind-boggling.

Think of it like building an F1 racecar. It’s kinda difficult even if you already have the engine, some components to transfer kinetic energy through the car and a can of petrol. It’s just ridiculous if you need to start with mining iron ore, extracting oil and preparing a business case to conduct televisable racing sports. In the second case, you’re better off describing what you’re trying to do to the caveman next to you using science fiction, maybe poetry. The point is that to really help an undergraduate student of mechanical engineering make sense of, say, the Casimir effect, I’d rather say:

According to quantum mechanics, a vacuum isn’t completely empty; rather, it’s filled with quantum fluctuations. For example, if you take two uncharged plates and bring them together in a vacuum, only quantum fluctuations with wavelengths shorter than the distance between the plates can squeeze between them. Outside the plates, however, fluctuations of all wavelengths can fit. The energy outside will be greater than inside, resulting in a net force that pushes the plates together.

‘Quantum Atmospheres’ May Reveal Secrets of Matter, Quanta, September 2018

I wouldn’t say the following even though it’s much less wrong:

The Casimir effect can be understood by the idea that the presence of conducting metals and dielectrics alters the vacuum expectation value of the energy of the second-quantised electromagnetic field. Since the value of this energy depends on the shapes and positions of the conductors and dielectrics, the Casimir effect manifests itself as a force between such objects.

Casimir effect, Wikipedia

Put differently, the purpose of communication is to be understood – not learnt. And as I’m learning these days, while helping virologists compose articles on the novel coronavirus and convincing physicists that comparing the Higgs field to molasses isn’t wrong, this difference isn’t common knowledge at all. More importantly, I’m starting to think that my physicist-friend who really got this difference did so because he reads a lot. He’s a veritable devourer of texts. So he knows it’s okay – and crucially why it’s okay – to skip some details.

I’m half-enraged when really smart scientists just don’t get this, and instead accuse editors (like me) of trying to misrepresent their work. (A group that’s slightly less frustrating consists of authors who list their arguments in one paragraph after another, without any thought for the article’s structure and – more broadly – without recognising the importance of telling a story. Even if you’re reviewing a book or critiquing a play, it’s important to tell a story about the thing you’re writing about, and not simply enumerate your points.)

To them – which is all of them because those who think they know the difference but really don’t aren’t going to acknowledge the need to bridge the difference, and those who really know the difference are going to continue reading anyway – I say: I acknowledge that imploring people to communicate science more without reading more is fallacious, so read more, especially novels and creative non-fiction, and stories that don’t just tell stories but show you how we make and remember meaning, how we memorialise human agency, how memory works (or doesn’t), and where knowledge ends and wisdom begins.

There’s a similar problem I’ve faced when working with people for whom English isn’t the first language. Recently, a person used to reading and composing articles in the passive voice was livid after I’d changed numerous sentences in the article they’d submitted to the active voice. They really didn’t know why writing, and reading, in the active voice is better because they hadn’t ever had to use English for anything other than writing and reading scientific papers, where the passive voice is par for the course.

I had a bigger falling out with another author because I hadn’t been able to perfectly understand the point they were trying to make, in sentences of broken English, and used what I could infer to patch them up – except I was told I’d got most of them wrong. And they couldn’t implement my suggestions either because they couldn’t understand my broken Hindi.

These are people that I can’t ask to read more. The Wire and The Wire Science publish in English but, despite my (admittedly inflated) view of how good these publications are, I’ve no reason to expect anyone to learn a new language because they wish to communicate their ideas to a large audience. That’s a bigger beast of a problem, with tentacles snaking through colonialism, linguistic chauvinism, regional identities, even ideologies (like mine – to make no attempts to act on instructions, requests, etc. issued in Hindi even if I understand the statement). But at the same time there’s often too much lost in translation – so much so that (speaking from my experience in the last five years) 50% of all submissions written by authors for whom English isn’t the first language don’t go on to get published, even if it was possible for either party to glimpse during the editing process that they had a fascinating idea on their hands.

And to me, this is quite disappointing because one of my goals is to publish a more diverse group of writers, especially from parts of the country underrepresented thus far in the national media landscape. Then again, I acknowledge that this status quo axiomatically charges us to ensure there are independent media outlets with science sections, publishing in as many languages as we need. A monumental task as things currently stand, yes, but we remain charged nonetheless.

Clarity and soundness

I feel a lot of non-science editors just switch off when they read science stuff.

A friend told me this earlier today, during yet another conversation about how many of the editorial issues that assail science and health journalism have become more pronounced during the pandemic (by dint of the pandemic being a science and health ‘event’). Even earlier, editors would switch off whenever they’d read science news, but then the news would usually be about a new study discussing something coffee could or couldn’t do to the heart.

While that’s worrying, the news was seldom immediately harmful, and lethal even more rarely. In a pandemic, on the other hand, bullshit that makes it to print hurts in two distinct ways: by making things harder for good health journalists to get through to readers with the right information and emphases, and of course by encouraging readers to do things that might harm them.

But does this mean editors need to know the ins and outs of the subject on which they’re publishing articles? This might seem like a silly question to ask but it’s often the reality in small newsrooms in India, where one editor is typically in charge of three or four beats at a time. And setting aside the argument that this arrangement is a product of complacency, and of not taking science news seriously, more than of resource constraints, it’s not necessarily a bad thing either.

For example, a political editor may not be able to publish incisive articles on, say, developments in the art world, but they could still help by identifying reliable news sources and tapping their network to commission the right reporters. And if the organisation spends a lot more time covering political news, and with more depth, this arrangement is arguably preferable from a business standpoint.

Of course, such a setup is bound to be error-prone, but my contention is that it doesn’t deserve to be written off either, especially this year – when more than a few news publishers suddenly found themselves in the middle of a pandemic even as they couldn’t hire a health editor because their revenues were on the decline.

For their part, then, publishers can help minimise errors by being clear about what editors are expected to do. For example, a newsroom can’t possibly do a great job of covering science developments in the country without a science editor; axiomatically, non-science editors can only be expected to do a superficial job of standing in for a science editor.

This said, the question still stands: What are editors to do specifically, especially those suddenly faced with the need to cover a topic they’re only superficially familiar with? The answer to this question is important not just to help editors but also to maintain accountability. For example, though I’ve seldom covered health stories in the past, I also don’t get to throw my hands up as The Wire’s science, health and environment editor when I publish a faulty story about, say, COVID-19. It is a bit of a ‘damned if you do, damned if you don’t’ situation, but it’s not entirely unfair either: it’s the pandemic, and The Wire can’t not cover it!

In these circumstances, I’ve found one particular way to mitigate the risk of damnation, so to speak, quite effective. I recently edited an article in which the language of a paragraph seemed off to me because it wasn’t clear what the author was trying to say, and I kept pushing him to clarify. Finally, after 14 emails, we realised he had made a mistake in the calculations, and we dropped that part of the article. More broadly, I’ve found that nine times out of ten, even pushbacks on editorial grounds can help identify and resolve technical issues. If I think the underlying argument has not been explained clearly enough, I send a submission back even if it is scientifically accurate or whatever.

Now, I’m not sure how robust this relationship is in the larger scheme of things. For one, this ‘mechanism’ will obviously fail when clarity of articulation and soundness of argument are not related, such as in the case of authors for whom English is a second language. For another, the omnipresent – and omnipotent – confounding factor known as unknown unknowns could keep me from understanding an argument even when it is well-made, thus putting me at risk of turning down good articles simply because I’m too dense or ignorant.

But to be honest, these risks are quite affordable when the choice is between damnation for an article I can explain and damnation for an article I can’t. I can (and do) improve the filter’s specificity/sensitivity 😄 by reading widely myself, to become less ignorant, and by asking authors to include a brief of 100-150 words in their emails clarifying, among other things, their article’s intended effect on the reader. And fortuitously, when authors are pushed to be clearer about the point they’re making, it seems they also tend to reflect on the parts of their reasoning that lie beyond the language itself.

Journalistic entropy

Say you need to store a square image, 1,000 pixels to a side, with the smallest filesize (setting aside compression techniques). The image begins with the colour #009900 on the left edge and, as you move towards the right, gradually blends into #1e1e1e on the rightmost edge. Two simple storage methods come to mind: you could either encode the colour information of every pixel in a file and store that file, or you could determine a mathematical function that, given the inputs #009900 and #1e1e1e, generates the image in question.

The latter method seems more appealing, especially for larger canvases whose patterns are generated by a single underlying function. In such cases, it should obviously be more advantageous to store the image as the output of a function to achieve the smallest filesize.

Now, in information theory (as in thermodynamics), there is an entity called entropy: it describes the amount of information you don’t have about a system. In our example, imagine that the colour #009900 blends to #1e1e1e from left to right save for a strip along the right edge, say, 50 pixels wide. Each pixel in this strip can assume a random colour. To store this image, you’d have to save it as the sum of two functions: ƒ(x, y), where x = #009900 and y = #1e1e1e, plus one function to colour the pixels lying in the 50-px strip on the right side. Obviously this will increase the filesize of the stored function.

Going further, imagine you were told that 200,000 of the 1,000,000 pixels in the image would assume random colours. The underlying function becomes even clumsier: the sum of ƒ(x, y) and a function R that randomly selects 200,000 pixels and then randomly colours them. The output of this function R stands for the information about the image that you can’t have beforehand; the more such information you lack, the more entropy the image is said to have.
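To make the comparison concrete, here’s a minimal Python sketch of the same idea, under the assumptions above (a 1,000 × 1,000 image, a 50-pixel random strip). The names gradient_colour, random_strip and pixel are mine, purely for illustration; they don’t come from any real image format or library.

import random

WIDTH, HEIGHT = 1000, 1000
LEFT, RIGHT = (0x00, 0x99, 0x00), (0x1e, 0x1e, 0x1e)  # #009900 and #1e1e1e as RGB

def gradient_colour(x):
    """The deterministic rule: linearly blend LEFT into RIGHT across the width."""
    t = x / (WIDTH - 1)
    return tuple(round(a + t * (b - a)) for a, b in zip(LEFT, RIGHT))

# Method 1: store every pixel explicitly -- 1,000,000 colour values.
explicit_store = [gradient_colour(x) for _ in range(HEIGHT) for x in range(WIDTH)]

# Method 2: store only the rule (the few lines of gradient_colour above) and
# regenerate pixels on demand. For a purely functional image, that is enough.

# Now add some 'entropy': a 50-px strip on the right whose pixels take random
# colours. The rule no longer suffices; these colours must be stored alongside it.
random_strip = {
    (x, y): (random.randrange(256), random.randrange(256), random.randrange(256))
    for y in range(HEIGHT)
    for x in range(WIDTH - 50, WIDTH)
}

def pixel(x, y):
    """Reconstruct any pixel: the rule for the predictable part, a lookup for the rest."""
    return random_strip.get((x, y), gradient_colour(x))

print(len(explicit_store), "explicit values vs", len(random_strip), "unavoidably stored ones")

The predictable part of the image collapses into a few lines of code, while the random strip has to be stored colour by colour – that stored remainder is the image’s ‘entropy’.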

The example of the image was simple but sufficiently illustrative. In thermodynamics, entropy is similar to randomness vis-à-vis information: it’s the amount of thermal energy a system contains that can’t be used to perform work. From the point of view of work, it’s useless thermal energy (including heat) – something that can’t contribute to moving a turbine blade, powering a motor or motivating a system of pulleys to lift weights. Instead, it is thermal energy motivated by and directed at other impetuses.

As it happens, this picture could help clarify, or at least make more sense of, a contemporary situation in science journalism. Earlier this week, health journalist Priyanka Pulla discovered that the Indian Council of Medical Research (ICMR) had published a press release last month, about the serological testing kit the government had developed, with the wrong specificity and sensitivity data. Two individuals she spoke to, one from ICMR and another from the National Institute of Virology, Pune, which actually developed the kit, admitted the mistake when she contacted them. Until then, neither organisation had issued a clarification, even though both individuals were likely to have known of the mistake at the time the release was published.

Assuming for a moment that this mistake was an accident (my current epistemic state is ‘don’t know’), it would indicate ICMR has been inefficient in the performance of its duties, forcing journalists to respond to it in some way instead of focusing on other, more important matters.

The reason I’m tending to think of such work as entropy, and not work per se, is that such instances – whereby journalists are forced to respond to an event or action characterised by the existence of trivial resolutions – seem to be becoming more common.

It’s easy to argue, of course, that what I consider trivial may be nontrivial to someone else, and that these events and actions matter to a greater extent than I’m willing to acknowledge. However, I’m personally unable to see beyond the fact that an organisation with the resources and, currently, the importance of ICMR shouldn’t have had a hard time proof-reading a press release that was going to land in the inboxes of hundreds of journalists. The consequences of the mistake are nontrivial but the solution is quite trivial.

(There is another feature in some cases: of the absence of official backing or endorsement of any kind.)

As such, it required work on the part of journalists that could easily have been spared, allowing them to direct their efforts at more meaningful, more productive endeavours. Here are four more examples of such events/actions, wherein the non-triviality is significantly and characteristically lower than that attached to formal announcements, policies, reports, etc.:

  1. Withholding data in papers – In the most recent example, ICMR researchers published the results of a seroprevalence survey of 26,000 people in 65 districts around India, and concluded that the prevalence of the novel coronavirus was 0.73% in this population. However, in their paper, the researchers include neither a district-wise breakdown of the data nor the confidence intervals for each available data-point even though they had this information (it’s impossible to compute the results the researchers did without these details). As a result, it’s hard for journalists to determine how reliable the results are, and whether they really support the official policies regarding epidemic-control interventions that will soon follow.
  2. Publishing faff – On June 2, two senior members of the Directorate General of Health Services, within India’s Union health ministry, published a paper (in a journal they edited) that, by all counts, made nonsensical claims about India’s COVID-19 epidemic becoming “extinguished” sometime in September 2020. Either the pair of authors wasn’t aware of their collective irresponsibility or, putting it benevolently, they intended to refocus the attention of various people towards their work and away from whatever the duo deemed embarrassing. Either way, the claims in the paper wound their way into two news syndication services, PTI and IANS, and eventually onto the pages of a dozen widely-read news publications in the country. In effect, there were two levels of irresponsibility at play: one embodied by the paper and the other by the syndication services’ and final publishers’ lack of due diligence.
  3. Making BS announcements – This one is fairly common: a minister or senior party official will say something silly, such as that ancient Indians invented the internet, and ride the waves of polarising debate, rapidly devolving into acrimonious flamewars on Twitter, that follow. I recently read (in The Washington Post I think, but I can’t find the link now) that it might be worthwhile for journalists to try and spend less time on fact-checking a claim than it took someone to come up with that claim. Obviously there’s no easy way to measure the time some claims took to mature into their present forms, but even so, I’m sure most journalists would agree that fact-checking often takes much longer than bullshitting (and then broadcasting). But what makes this enterprise even more grating is that it is orders of magnitude easier to not spew bullshit in the first place.
  4. Conspiracy theories – This is the most frustrating example of the lot because, today, many of the originators of conspiracy theories are television journalists, especially those backed by government support or vice versa. While I fully acknowledge the deep-seated issues underlying both media independence and the politics-business-media nexus, numerous pronouncements by so many news anchors have only been akin to shooting ourselves in the foot. Exhibit A: shortly after Prime Minister Narendra Modi announced the start of demonetisation, a beaming news anchor told her viewers that the new 2,000-rupee notes would be embedded with chips to transmit the notes’ location in real time, via satellite, to operators in Delhi.

Perhaps this entropy – i.e. the amount of journalistic work not available to deal with more important stories – is not only the result of a mischievous actor attempting to keep journalists, and the people who read those journalists, distracted, but also a manifestation of a whole industry’s inability to cope with the mechanisms of a new political order.

Science journalism itself has already experienced a symptom of this change when pseudoscientific ideas became more mainstream, even entering the discourse of conservative political groups, including that of the BJP. In a previous era, if a minister said something, a reporter was to drum up a short piece whose entire purpose was to record “this happened”. And such reports were the norm and in fact one of the purported roots of many journalistic establishments’ claims to objectivity, an attribute they found not just desirable but entirely virtuous: those who couldn’t be objective were derided as sub-par.

However, if a reporter were to simply report today that a minister said something, she places herself at risk of amplifying bullshit to a large audience if what the minister said was “bullshit bullshit bullshit”. So just as politicians’ willingness to indulge in populism and majoritarianism to the detriment of society and its people has changed, so also must science journalism change – as it already has with many publications, especially in the west – to ensure each news report fact-checks a claim it contains, especially if it is pseudoscientific.

In the same vein, it’s not hard to imagine that journalists are often forced to scatter by the compulsions of an older way of doing journalism, and that they should regroup on the foundations of a new agreement that lets them ignore some events so that they can better dedicate themselves to the coverage of others.

Featured image credit: Татьяна Чернышова/Pexels.

Poor journalism is making it harder for preprints

There have been quite a few statements by various scientists on Twitter who, in pointing to some preprint paper’s untenable claims, also point to the manuscript’s identity as a preprint. This is not fair, as I’ve argued many times before. A big part of the problem here is bad journalism. Bad preprint papers are a problem not because their substance is bad but because people who aren’t qualified to understand why it is bad read them and internalise their conclusions at face value.

There are dozens of new preprint papers uploaded onto arXiv, medRxiv and bioRxiv every week making controversial arguments and/or arriving at far-fetched conclusions, often patronising to the efforts of the subject’s better exponents. Most of them (at least going by what I know of preprints on arXiv) are debated and laid to rest by scientists familiar with the topics at hand. No non-expert is hitting up arXiv or bioRxiv every morning looking for preprints to go crazy on. The ones that become controversial enough to catch the attention of non-experts have, nine times out of ten, been amplified to that effect by a journalist who didn’t suitably qualify the preprint’s claims and simply published them. Suddenly, scores (or more) of non-experts have acquired what they think is refined knowledge, and public opinion thereafter goes against the scientific grain.

Acknowledging that this collection of events is a problem on many levels, which particular event would you say is the deeper one?

Some say it’s the preprint mode of publishing, and when asked for an alternative, demand that the use of preprint servers be discouraged. But this wouldn’t solve the problem. Preprint papers are a relatively new development while ‘bad science’ has been published for a long time. More importantly, preprint papers improve public access to science, and preprints that contain good science do this even better.

To make sweeping statements against the preprint publishing enterprise because some preprints are bad is not fair, especially to non-expert enthusiasts (like journalists, bloggers, students) in developing countries, who typically can’t afford the subscription fees to access paywalled, peer-reviewed papers. (Open-access publishing is a solution too but it doesn’t seem to feature in the present pseudo-debate, nor does it address important issues that beset it as well as paywalled papers.)

Even more, if we admit that bad journalism is the problem, as it really is, we achieve two things: we prevent ‘bad science’ from reaching the larger population and retain access to ‘good science’.

Now, to the finer issue of health- and medicine-related preprints: yes, acting on the conclusions of a preprint paper – such as ingesting an untested drug or paying too much attention to an irrelevant symptom – during a health crisis in a country with insufficient hospitals and doctors can prove deadlier than usual. But how on Earth could a person have found that preprint paper, read it well enough to understand what it was saying, and acted on its conclusions? (Put this way, a bad journalist could be even more to blame for enabling access to a bad study by translating its claims into simpler language.)

Next, a study published in The Lancet claimed – and thus allowed others to claim by reference – that most conversations about the novel coronavirus have been driven by preprint papers. (An article in Ars Technica on May 6 carried this provocative headline, for example: ‘Unvetted science is fuelling COVID-19 misinformation’.) However, the study was based on only 11 papers. In addition, those who invoke this study in support of arguments directed against preprints often fail to mention the following paragraph, drawn from the same paper:

… despite the advantages of speedy information delivery, the lack of peer review can also translate into issues of credibility and misinformation, both intentional and unintentional. This particular drawback has been highlighted during the ongoing outbreak, especially after the high-profile withdrawal of a virology study from the preprint server bioRxiv, which erroneously claimed that COVID-19 contained HIV “insertions”. The very fact that this study was withdrawn showcases the power of open peer-review during emergencies; the withdrawal itself appears to have been prompted by outcry from dozens of scientists from around the globe who had access to the study because it was placed on a public server. Much of this outcry was documented on Twitter and on longer-form popular science blogs, signalling that such fora would serve as rich additional data sources for future work on the impact of preprints on public discourse. However, instances such as this one described showcase the need for caution when acting upon the science put forth by any one preprint.

The authors, Maimuna Majumder and Kenneth Mandl, have captured the real problem. Lots of preprints are being uploaded every week and quite a few are rotten. Irrespective of how many do or don’t drive public conversations (especially on social media), it’s disingenuous to assume this risk by itself suffices to cut access.

Instead, as the scientists write, exercise caution. Rather than spoiling a good thing, figure out a way to improve the reporting habits of errant journalists. Otherwise, remember that nothing stops an irresponsible journalist from sensationalising the level-headed conclusions of a peer-reviewed paper either. All it takes is to quote from a grossly exaggerated university press-release and to not consult an independent expert. Even opposing preprints with peer-reviewed papers only advances a false balance, comparing preprints’ access advantage to peer-review’s gatekeeping advantage (and even that is on shaky ground).