The passive is political

If Saruman is the stupid shit people say, then Grima Wormtongue, I have often found, is the use of the passive voice. To the uninitiated: Wormtongue was a slimy fellow on Saruman’s side in The Lord of the Rings: The Two Towers. He was much, much less powerful than Saruman, but he fed the wizard’s ego, lubricated the passage of his dubious ideas into action, and slipped poison into the ears and minds of those who would listen to him.

The passive is useful to attribute to others something you would rather not be the originator of yourself, but which you would like to be true. Or to invoke facts without also invoking the dubious credentials of the person or circumstance that birthed them. Or to dress up your ignorance in the ‘clinical-speak’ that the scientific literature prizes. Or to admit fewer avenues of disagreement. Or, in its most insidious form, to suggest that the message matters a lot more than the context.

Yes, sometimes the passive voice is warranted – often, in my experience, when the point is to maintain sharp focus on a particular idea, concept, etc. in a larger article. This condition is important: the writer or speaker needs to justify the use of the passive voice, in keeping with the deviation from normal that it is.

Of course, you could contend that the creator’s message is the creator’s own, and that they do get to craft it the way they wish. I would contend in return that this is absolutely true – but the question of passive v. active voice arises more pronouncedly in the matter of how the creator’s audience is directed to perceive that message. That is, the creator can use whatever voice they wish, but using one over the other (obviously) changes the meaning and, more importantly, the context they wish the reader to assume.

For example, writing “The ball was thrown” is both a statement that the ball was thrown and an indication to the reader that the identity of the thrower is not relevant.

And because of the specific ways in which the passive voice is bad, the creator effectively puts themselves in a position where the audience could accuse them of deliberately eliding important information. In fact, the creator would open themselves up to this line of inquiry, if not interrogation, even if the line is a dead-end or if the creator actually doesn’t deserve to be accused.

Even more specifically, the use of the passive voice is a loaded affair. I have encountered only a very small number of people writing in the mainstream press who actively shun the passive voice, in favour of the active, or at least have good reasons to adopt the passive. Most writers frequently adopt the passive – and passively so – without acknowledging that this voice can render the text in political shades even if the writer didn’t intend it.

I encountered an opinion of remarkable asininity a few minutes ago, which prompted this little note, and which also serves to illustrate my message.

“One aspect that needs to be considered,” “it is sometimes said,” “remain deprived of sex,” “it is believed that in June alone”. In a conversation with The Soufflé some two years ago, about why middle-aged and older men – those not of our generation, so to speak – harbour so many foolish ideas, he said one reason has to be that when these men sit in their living rooms and enter into lengthy monologues about what they believe, no one challenges them.

Of course, in an overwhelmingly patriarchal society, older men will brook ever fewer challenges to their authority (or none at all). I think the passive voice, and the fondness for it, is a syntactic choice that removes yet another challenge – one unique to the beautiful act of writing – that a creator may encounter during the act of creation, or at least one that offers a way to create something that otherwise may not have survived that very act.

In Katju’s case, for example, the second and third instances of the passive voice could have given him pause. “It is sometimes said” in the active becomes “X has said” or “X says”, subsequently leading to the question of who ‘X’ is and whether their claim is still right, relevant and/or good.

As I mentioned earlier, the passive voice serves, among other purposes, to preclude the points or grounds on which a reader may raise objections. However, writing – one way or another – is an act of decentralising or at least sharing power: the power inherent in the creator’s knowledge that is now available to others as well, more so in the internet age. Fundamentally, to write is to open the gates through which flow the opportunities for your readers to make decisions based on different bits and kinds of information. And in this exercise, to bar some of these gates can only be self-defeating.

Scientists drafting technical manuscripts – the documents I encounter most often that are brimming with the passive voice – may see less value in writing “X designed the experiment to do Y” than “the experiment was designed to do Y”. But I can think of no reason writing in the active would diminish the manuscript’s credentials, even if it may not serve to improve them either – at least not 99% of the time. I do think that 1% of the time, using the active voice by way of habit could help improve the way we do science, for example by allowing other researchers conducting meta-analyses to understand the role of human actions in the performance of an experiment or, perhaps, to discern the gender, age or qualification of those researchers most often involved in designing experiments v. performing them.

Then again, science is a decidedly, and unfortunately, asocial affair, and the ‘amount’ of behavioural change required to have scientists regularly privilege the active over the passive is high.

This shouldn’t be the case vis-à-vis writers writing for the mainstream press – a domain in which the social matters just as much as, and often much more than, the scientific. Here, to adapt the famous words of Marshall McLuhan, the actor is often the act (perhaps simply reflecting our times – in which to be a passive bystander to acts of violence is to condone the violence itself).

And when Markandey Katju, no less than a former judge of the Supreme Court of India, invokes claims while suppressing their provenance, it quickly becomes a political choice. It is as if (I think) he is thinking, “I don’t care if this is true or not; I must find a way to make this point so that I can then go on to link rapes to unemployment, especially the unemployment brought on by the BJP’s decisions.”

I concede that the act of writing presents a weak challenge – but it is a challenge nonetheless, and one you can strengthen through habit.

The costs of correction

I was slightly disappointed to read a report in the New York Times this morning. Entitled ‘Two Huge COVID-19 Studies Are Retracted After Scientists Sound Alarms’, it discussed the implications of two large COVID-19 studies being retracted by the leading medical journals that had published them, the New England Journal of Medicine and The Lancet. My sentiment stemmed from the following paragraph and some that came after it:

I don’t know that just these two retractions raise troubling questions, as if these questions weren’t already being asked well before these incidents. The suggestion that the lack of peer-review – or, for that matter, peer-review in its current form (opaque, unpaid) – could be to blame is more frustrating, as is the article’s own focus on the quality of the databases used in the two studies instead of the overarching issue. Perhaps this is yet another manifestation of the NYT’s crisis under Trump? 😀

One of the benefits of the preprint publishing system is that peer-review is substituted with ‘open review’. And one of the purposes of preprints is that the authors of a study can collect feedback and suggestions before publishing in a peer-reviewed journal, instead of accruing a significant cost post-publication in the form of corrections or retractions, both of which continue to carry a considerable amount of stigma. As such, the preprint mode ensures a more complete, more thoroughly reviewed manuscript enters the peer-review system, instead of vesting the entire burden of fact-checking and reviewing a paper in a small group of experts whose names and suggestions most journals don’t reveal, and who are generally unpaid for their time and effort.

In turn, the state of scientific research is fine. It would simply be even better if we reduced the costs associated with correcting the scientific record instead of heaping more penalties on that one moment, as the conventional system of publishing does. ‘Conventional’ – which in this sphere seems to be another word for ‘closed-off’ – journals also have an incentive to refuse to publish corrections or perform retractions because they’ve built themselves up on claims of being discerning, thorough and reliable. So retractions are a black mark on their record. Elisabeth Bik has often noted how long journals take to even acknowledge entirely legitimate complaints about papers they’ve published, presumably for this reason.

There really shouldn’t be any debate on which system is better – but sadly there is.

Poor journalism is making it harder for preprints

There have been quite a few statements by various scientists on Twitter who, in pointing to some preprint paper’s untenable claims, point to the manuscript’s identity as a preprint as well. This is not fair, as I’ve argued many times before. A big part of the problem here is bad journalism. Bad preprint papers are a problem not because their substance is bad but because people who are not qualified to understand why it is bad read them and internalise their conclusions at face value.

There are dozens of new preprint papers uploaded onto arXiv, medRxiv and bioRxiv every week making controversial arguments and/or arriving at far-fetched conclusions, often patronising to the efforts of the subject’s better exponents. Most of them (at least according to what I know of preprints on arXiv) are debated and laid to rest by scientists familiar with the topics at hand. No non-expert is hitting up arXiv or bioRxiv every morning looking for preprints to go crazy on. The ones that become controversial enough to catch the attention of non-experts have, nine times out of ten, been amplified to that effect by a journalist who didn’t suitably qualify the preprint’s claims and simply published it. Suddenly, scores (or more) of non-experts have acquired what they think is refined knowledge, and public opinion thereafter goes against the scientific grain.

Acknowledging that this collection of events is a problem on many levels, which particular event would you say is the deeper one?

Some say it’s the preprint mode of publishing, and when asked for an alternative, demand that the use of preprint servers be discouraged. But this wouldn’t solve the problem. Preprint papers are a relatively new development while ‘bad science’ has been published for a long time. More importantly, preprint papers improve public access to science, and preprints that contain good science do this even better.

To make sweeping statements against the preprint publishing enterprise because some preprints are bad is not fair, especially to non-expert enthusiasts (like journalists, bloggers, students) in developing countries, who typically can’t afford the subscription fees to access paywalled, peer-reviewed papers. (Open-access publishing is a solution too, but it doesn’t seem to feature in the present pseudo-debate, nor does it address important issues that beset it as well as paywalled papers.)

Even more, if we admit that bad journalism is the problem, as it really is, we achieve two things: we prevent ‘bad science’ from reaching the larger population and we retain access to ‘good science’.

Now, to the finer issue of health- and medicine-related preprints: Yes, acting based on the conclusions of a preprint paper – such as ingesting an untested drug or paying too much attention to an irrelevant symptom – during a health crisis in a country with insufficient hospitals and doctors can prove deadlier than usual. But how on Earth could a person have found that preprint paper, read it well enough to understand what it was saying, and acted on its conclusions? (Put this way, a bad journalist could be even more to blame for enabling access to a bad study by translating its claims into simpler language.)

Next, a study published in The Lancet claimed – and thus allowed others to claim by reference – that most conversations about the novel coronavirus have been driven by preprint papers. (An article in Ars Technica on May 6 carried this provocative headline, for example: ‘Unvetted science is fuelling COVID-19 misinformation’.) However, the study was based on only 11 papers. In addition, those who invoke this study in support of arguments directed against preprints often fail to mention the following paragraph, drawn from the same paper:

… despite the advantages of speedy information delivery, the lack of peer review can also translate into issues of credibility and misinformation, both intentional and unintentional. This particular drawback has been highlighted during the ongoing outbreak, especially after the high-profile withdrawal of a virology study from the preprint server bioRxiv, which erroneously claimed that COVID-19 contained HIV “insertions”. The very fact that this study was withdrawn showcases the power of open peer-review during emergencies; the withdrawal itself appears to have been prompted by outcry from dozens of scientists from around the globe who had access to the study because it was placed on a public server. Much of this outcry was documented on Twitter and on longer-form popular science blogs, signalling that such fora would serve as rich additional data sources for future work on the impact of preprints on public discourse. However, instances such as this one described showcase the need for caution when acting upon the science put forth by any one preprint.

The authors, Maimuna Majumder and Kenneth Mandl, have captured the real problem. Lots of preprints are being uploaded every week and quite a few are rotten. Irrespective of how many do or don’t drive public conversations (especially on social media), it’s disingenuous to assume this risk by itself suffices to cut off access.

Instead, as the scientists write, exercise caution. Instead of spoiling a good thing, figure out a way to improve the reporting habits of errant journalists. Otherwise, remember that nothing stops an irresponsible journalist from sensationalising the level-headed conclusions of a peer-reviewed paper either. All it takes is to quote from a grossly exaggerated university press-release and to not consult an independent expert. Even pitting preprints against peer-reviewed papers only advances a false balance, comparing preprints’ access advantage to peer-review’s gatekeeping advantage (and even that is on shaky ground).

The chrysalis that isn’t there

I wrote the following post while listening to this track. Perhaps you will enjoy reading it to the same sounds. Otherwise, please consider it a whimsical recommendation. 🙂

I should really start keeping a log of the different stories in the news that point to the little-acknowledged but plainly evident fact that science – like so many things, including people – does not embody lofty ideals as much as the aspirations to those ideals. Nature News reported on January 31 that “a language analysis of titles and abstracts in more than 100,000 scientific articles,” published in the British Medical Journal (BMJ), had “found that papers with first and last authors who were both women were about 12% less likely than male-authored papers to include sensationalistic terms such as ‘unprecedented’, ‘novel’, ‘excellent’ or ‘remarkable’;” further, “The articles in each comparison were presumably of similar quality, but those with positive words in the title or abstract garnered 9% more citations overall.” The scientific literature, people!

Science is only as good as its exponents, and there is neither meaning nor advantage to assuming that there is such a thing as a science beyond, outside of and without these people. Doing so inflates science’s importance beyond what it deserves, and suppresses its shortcomings and prevents them from being addressed. For example, the BMJ study prima facie points to gender discrimination but it also describes a scientific literature that you will never find out is skewed, and therefore unrepresentative of reality, unless you acknowledge that it is constituted by papers authored by people of two genders, on a planet where one gender has maintained a social hegemony for millennia – much like you will never know Earth has an axis of rotation unless you are able to see its continents or make sense of its weather.

The scientific method describes a popular way to design experiments whose performance scientists can use to elucidate refined, and refinable, answers to increasingly complex questions. However, the method is an external object (of human construction) that only, and arguably asymptotically, mediates the relationship between the question and the answer. Everything that comes before the question and after the answer is mediated by a human consciousness undeniably shaped by social, cultural, economic and mental forces.

Even the industry that we associate with modern science – composed of people who trained to be scientists over at least 15 years of education, then went on to instruct and/or study in research institutes, universities and laboratories, being required to teach a fixed number of classes, publish a minimum number of papers and accrue citations, and/or produce X graduate students, while drafting proposals and applying for grants, participating in workshops and conferences, editing journals, possibly administering scientific work and consulting on policy – is steeped in human needs and aspirations, and is even designed to make room for them. But many of us non-scientists are frequently and successfully tempted to treat the act of becoming a scientist as an act of transformation: characterised by an instant in time when a person changes into something else, a higher creature of sorts, the way a larva enters a magical chrysalis and exits a butterfly.

But for a man to become a scientist has never meant the shedding of his identity or social stature; ultimately, to become a scientist is to terminate at some quasi-arbitrary moment the slow inculcation of well-founded knowledge crafted to serve a profitable industry. There is a science we know as simply the moment of discovery: it is the less problematic of the two kinds. The other, in the 21st century, is also funding, networking, negotiating, lobbying, travelling, fighting, communicating, introspecting and, inescapably, some suffering. Otherwise, scientific knowledge – one of the ultimate products of the modern scientific enterprise – wouldn’t be as well-organised, accessible and uplifting as it is today.

But it would be silly to think that in the process of constructing this world-machine of sorts, we baked in the best of us, locked out the worst of us, and threw the key away. Instead, like all human endeavour, science evolves with us. While it may from time to time present opportunities to realise one or two ideals, it remains for the most part a deep and truthful reflection of ourselves. This assertion isn’t morally polarised, however; as they say, it is what it is – and this is precisely why we must acknowledge failures in the practice of science instead of sweeping them under the rug.

One male scientist choosing more uninhibitedly to call his observation “unprecedented” than a female scientist might have been encouraged, among other things, by the peculiarities of a gendered scientific labour force and scientific enterprise, but many male scientists indulging just as freely in their evaluatory fantasies, such as they are, indicates a systemic corruption that transcends (but does not escape) science. The same goes, as in another recent example, for the view that science is self-correcting. It is not, because people are not, and they need to be pushed to be. In March 2019, for example, researchers uncovered at least 58 papers published in a six-week period whose authors had switched their desired outcomes between the start and end of their respective experiments to report positive, and to avoid reporting negative, results. When the researchers wrote to the authors as well as the editors of the journals that had published the problematic papers, most of them denied there was an issue and refused to accept modifications.

Again, the scientific literature, people!

GM: confronting contradictions

There was a rash of articles published online recently – such as this one – about how the adult human mind, when confronted with information that contradicts its existing beliefs, does not reorganise what it knows but rejects the information’s truthfulness itself. During political conversations, this aspect of how we think and learn is bound to influence both the way opposing parties argue and the effects of propaganda on people. However, this notion’s impact seems to me to be more dire w.r.t. the issue of genetically modified (GM) crops.

Even when confronted with evidence in support of GM crops from the scientific literature, anti-GM activists reflexively take recourse in the deficiencies inherent in the scientific method, even if the deficiencies themselves are well-known.

In the specific example of GM mustard, there is no clear answer: the variant developed by Deepak Pental & co. has lower yield than some non-GM varieties but higher pest-resistance and is easier to breed. As a result, any single discussion of GM mustard’s eligibility to be a food crop (it hasn’t been released into the market yet) should address its pros and cons together instead of singling out its cons.

It would seem anti-GM activists are aware of this pressure because whenever scientists raise the pros of GM mustard, the activists’ first, and often last, line of reasoning is to quote yet other studies. They are in turn rebutted by more studies, and the back and forth goes on until the entire debate becomes hinged on disagreements over minutiae. Granted, allowing bad GM crops to be commercialised can have deadly consequences. But this is also true of a score of other enterprises in which we are happy to go along with approximations. Why the selective outrage?

It can’t be that farmer suicides touch a nerve because they are driven not just by crop failure but also by crop insurance, grain storage/distribution and pricing indices (such as the differences between rural CPI and MSP). Estimating these three factors is a task ridden with inaccuracies, many ill-supported assumptions and, frequently, corruption. However, we don’t seem to have raged against them with as much intensity as we have against GM mustard. We should have, because of what Harish Damodaran eloquently expressed in The Indian Express on June 1:

Why is there so much opposition to a technology developed, after all, by Indian scientists in the public sector? Yes, the original patent for the [Barnase-Barstar-Bar hybridisation] system was filed by Plant Genetics Systems (now part of Bayer CropScience), but the CGMCP scientists improved upon it, for which they obtained patents (three US, two Canadian, one European Union and Australian each). Yet, we see no value in their work. The opponents — from the so-called Left or the Right — haven’t even bothered to visit the CGMCP, most accessibly located in Delhi University’s South Campus, while taking time out for anti-GMO jamborees in Brussels and The Hague. All this opposition is reflective of a unique Us and Them syndrome. For “us”, nothing but the latest would do. But farmers will have no right to grow GM mustard and assess its performance on the field.

The compulsion to constantly reject one study for another and our hypocritical stand on the ownership of GM crops together suggest that the pro/anti-GM debate is going to be settled by neither of these tactics. They are both the effects of a common flaw: ideological stubbornness. Even I – being pro-GM – am inclined to consign some farmers’ opposition to GM mustard to fear-mongering by activists. Sometimes I can refute something easily, but at other times I struggle to change my mind even when the facts are evident. Anyway, while I can’t think of what it is that we can do to make ourselves less stubborn (each to her own, perhaps?), I do think it’s important we stay aware of our biases’ impact on our public conversations.

PS: If my post seems one-sided, addressing the behaviour of only anti-GM groups, one reason is that anti-GM expression in the mainstream as well as social media overshadows pro-GM expression. I’m also biased, of course.

Featured image credit: WikimediaImages/pixabay.

Podile, plagiarism, politics

On 17 January, Vemula hung himself, saying in his suicide note, “my birth is my fatal accident.” His death has rocked academia, with unabated protests on the Hyderabad campus and elsewhere. Even before the incident, Tandon and others openly referred to Appa Rao Podile, the university’s vice chancellor, as the famed institution’s first political appointee. Appa Rao has since left the university on indefinite leave.

This is from a February 3, 2016, blog post on Scientific American by Apoorva Mandavilli. I quote this to answer the question I’ve been asked throughout the day by different people: “Why did you not publish your piece on three of Appa Rao’s papers containing plagiarised content earlier or later?” (The link to my piece, which appeared on The Wire, is here.) Appa Rao, as Mandavilli writes, is the university’s first VC to be appointed via the political route. In fact, according to The Times of India, he once campaigned for the Telugu Desam Party.

My piece was put together over three or four days – from the time I found out about the issues in Appa Rao’s papers to when I had all the information necessary to put my article together. Finally, though its publication date was postponed by a day thanks to the release of the Panama Papers, nothing else was taken into account apart from checking if The Wire had done due diligence before hitting the ‘Publish’ button. Having said all this, I ask: if Appa Rao is the first politically appointed VC at the University of Hyderabad, how can anything he does not be examined through a political lens?

Two of Alan Turing’s WW-II papers are now in the public domain

The Wire
May 21, 2015

A scientific paper written by Alan Turing, the brilliant computer scientist who cracked the Enigma code during the Second World War and bolstered Britain’s war efforts, was recently declassified by the British government and uploaded to the arXiv pre-print server. The paper’s entitled ‘The Applications of Probability to Cryptography’. It has Turing bringing to bear a style of reasoning that is absent in today’s statistics-heavy technical literature. It is both didactic and meticulous, and provides great insight into how Turing explored the cryptographic problems he was confronted with.

Consider:

When the whole evidence about some event is taken into account it may be extremely difficult to estimate the probability of the event, even very approximately, and it may be better to form an estimate based on a part of the evidence, so that the probability may be more easily calculated. This happens in cryptography in a very obvious way. The whole evidence when we are trying to solve a cipher is the complete traffic, and the events in question are the different possible keys, and functions of the keys. Unless the traffic is very small indeed the theoretical answer to the problem “What are the probabilities of the various keys?” will be of the form “The key . . . has a probability differing almost imperceptibly from 1 (certainty) and the other keys are virtually impossible”. But a direct attempt to determine these probabilities would obviously not be a practical method.
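Put in modern terms – and this restatement is mine, not an equation from Turing’s paper – what he is describing is Bayesian inference over the candidate keys, with the ‘whole evidence’ being the complete intercepted traffic:

P(k \mid E) \;=\; \frac{P(E \mid k)\, P(k)}{\sum_{k'} P(E \mid k')\, P(k')}

where E is the evidence and k ranges over the possible keys. With enough traffic, this posterior differs “almost imperceptibly from 1” for the true key and is virtually zero for the rest; Turing’s point is that evaluating it over the complete traffic is impractical, so one scores keys on a smaller, tractable part of the evidence instead.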

Here and there, he also admits he’s making guesses – some quite in the air and others not so much – of the sort that are inadmissible in the modern era of scientific publishing, where demands on researchers to be exact have driven many to fabricate results and fake conclusions. At one point, Turing writes, “This judgement is not entirely a guess; some rather insecure mathematical reasoning has gone into it”, prompting the popular statistician Andrew Gelman to quip on his blog: “He’s so goddamn reasonable. He’s everything I aspire to.”

The paper was uploaded to arXiv on May 18 together with another called ‘The Statistics of Repetitions’, both accompanied by editor’s notes that focused on what it was like to prepare manuscripts “at a time when typographical errors were corrected by hand, and mathematical expression handwritten into spaces left in the text”. The papers can be found here and here.

Alan Turing’s claims to fame are many, ranging from deciphering the Enigma code used by the Nazis for encrypted communication, to defining the hypothetical Turing machine that’s influential in studies of computing, to predicting oscillating chemical reactions that were observed about a decade later. He was also gay at a time when homosexuality was a crime in the UK, and was chemically castrated when he refused to be sent to prison for fear he’d have to discontinue his work. He died of cyanide poisoning in 1954, just shy of 42 years old. In 2009, then British Prime Minister Gordon Brown issued an apology for the way Turing had been treated in his lifetime. Queen Elizabeth II pardoned him posthumously in 2013.

Most recently, he was brought to life in the blockbuster movie ‘The Imitation Game’ (2014), where he was played by Benedict Cumberbatch.