Will this blog be online a hundred years from today?

For almost two weeks now, we at The Wire have been dealing with a complaint that someone from Maharashtra lodged against us with Amazon Web Services (AWS), our sites’ host, for allegedly copying one paragraph in one article sans consent from a source the complainant allegedly owns, thus violating AWS’s terms of use and becoming liable – if found guilty – to have the offending webpage taken down. The paragraph is not plagiarised (I edited and published it), but the alleged source of the ‘original’ material is shady, and there’s reason to believe a deeper malice could be at work, as I’ve explained in an article for The Wire.

The matter is still unresolved: the AWS abuse team has been emailing us almost every day asking us to tell them what we’ve done to ‘address’ the complaint, ignoring the proof we sent them showing that the article couldn’t possibly have been plagiarised (we shared the Google Doc on which the article was composed and edited from scratch, with date and timestamps). The abuse team remains unsatisfied and would simply like us to act, whatever that means. From my point of view, it seems like AWS doesn’t have the room to consider that the complaint could be baseless. I also think that any organisation that doesn’t know how to deal with editorial complaints, or doesn’t want to, shouldn’t be receiving them in the first place. Otherwise, you have a situation in which an unknown private entity can allege to a tech company that one of its clients has violated the Digital Millennium Copyright Act (DMCA) – the overreaching American legal instrument that is the principal blunt weapon in this episode – forcing the tech company to bear down on the client to make the problem go away, without pausing for a moment to think if it’s been conned into becoming an agent of harassment. But why would it, considering so many tech companies registered in the US actually benefit from the overreaching character of the DMCA?

(It’s doubly ridiculous when these agents are Indians based in India, who may not be aware of the full story of the DMCA but are required by their contracts of employment to enforce it.)

The awfulness of this entire episode, still ongoing, strikes, to my mind, at the heart of questions about who gets to access the internet and how. The AWS abuse team has told us on more than one occasion that if the matter isn’t resolved to AWS’s satisfaction, they will have to remove the offending webpage from their servers. Obviously we can find a new host, but whichever host we find, the problem remains: one of the many mediators of our access to the internet, starting from the internet service provider, is the entity hosting the websites we wish to visit. And not just any entity but a predominantly American one, and therefore obligated to enforce the terms of the DMCA. I don’t have to point to any numbers to claim, safely, that a vast majority of internet traffic today pings the servers of websites hosted by AWS, Google Cloud Platform and Microsoft Azure. Recently, Tim Bray blogged about what the consequences might look like if the biggest of AWS’s 24 datacentre “regions” – called simply us-east-1 – went offline. It would be an unmitigated catastrophe.

My blog has been hosted with/on WordPress for 13 years now and I’ve seen a lot of competing platforms come and go.[1] (My very first blog was hosted by Xanga before I moved, a few months later, to WordPress.) A lot of people who like to talk or blog about blogging have expressed dissatisfaction with how some platforms “don’t talk to each other”, that they’re fans of the Quiet Web[2] or that static sites are the way to go for their speed, security and controllability. But to me, all these concerns pale compared to the question of whether a platform will actually stay online. One alternative I’ve been referred to is micro.blog – it looks nice and has an agenda that some bloggers seem to love, but will it stay online? I don’t know. It’s easier for me to believe a) that WordPress will stay online because it has been online for 15 years now – which is a long time in the Internet Universe – and because it has been both profitable and conscientious about what it does; and b) that AWS will stay online because its market capitalisation and revenue mean it’s just too big to fail at this point (as Bray has also written). Heck, of all the blogging platforms that have come and gone, one of the longest-lived has been Google’s Blogger. Google clearly didn’t spend much time on it after a point but Blogger is still around, as are the bloggers who continue to publish there. And to my mind the ‘Persistent Web’ – a place that convinces you that it’s going to be around for a long time – is a better place to be (see ref. 2).

[1] One of the platforms that I was really sad to see die was Posterous, which Twitter bought from the guys who built it and then killed it. These guys subsequently created Posthaven, and pledged that it would never get bought or be killed as long as at least one blogger paid to use it – except the guys have added no new features since 2017, nor updated the blog. Tumblr is pretty much a ghost town now, even though it’s been bought by the company behind WordPress.com, and even though it attracted a great deal of negative attention for its erotic blogs. Typed, made by a company that became profitable by building apps for Apple devices, was ridiculously short-lived. Silvrback’s founder sold it a few years ago to some mid-level management professor who’s modified it – badly – back to the early 2000s. Svbtle is still around, and has a pledge like Posthaven’s, but is both extremely minimal in terms of its features and fairly opaque about how it’s doing as a company; also, it appears to be hosted on AWS. Medium’s mood is just awful and the company doesn’t seem trustworthy either, especially if you’re particular – as I am – about not throwing in with enterprises that treat editorial people badly. This is also why I’m put off Ghost, whose maker John O’Nolan has seemed quite full of himself on occasion. Ghost itself is a great product, although it started off as a blogging company only to change direction to become a publishing company, leaving WordPress – which it sought to usurp as every blogger’s platform of choice – to dominate the blogging space. Squarespace is the sole long-lived, equally legitimate alternative to WordPress, but it doesn’t offer a self-hosted version. I could go on.

[2] Brian Koberlein writes here that he defines the Quiet Web thus: “Exclude any page that has ads. Remove them if they use Google Analytics or Google Fonts. Remove them if they use scripts or trackers. It’s a hard filter that blocks the most popular sites. Forget YouTube, Facebook, Reddit, or Twitter. Forget the major news sites. So what remains? … Most personal websites don’t pass the test. They are either ad-driven or managed on platforms like Blogger or WordPress. But the quiet personal sites are diverse and interesting.” I think he overlooks a huge part of the internet here, comprising websites that self-host WordPress instead of using it on WordPress.com. Many of the features he ascribes to the Quiet Web can be built with WordPress – including this site you’re reading. As tech blogger John Gruber has written, WordPress shouldn’t be blamed for many of its users’ awful design choices. (Edited at 8:43 am on October 27, 2021, to attribute that comment to Gruber instead of Zeldman.)

So when someone asks if my blog will be online and accessible a hundred years from now, I’d like the answer to be ‘yes’. And while I don’t like it, AWS is going to remain one of the options to make that happen with little effort on my part. I’m not a programmer except in the tinkering sense, and I still struggle to understand how websites really work (esp. beyond the application layer). Pertinently, this means I’d much rather host my blog, which is invaluable to me, with an entity that knows what it’s doing than try to cobble something together myself that could well break or be exploited a day later. And this in turn is why I’m really going to stick with WordPress, which continues to be an excellent alternative to every other similar option, with my fingers firmly crossed that the people managing it continue to do so the way they’re doing it currently.

Broken clocks during the pandemic

Proponents of conspiracy theories during the pandemic, at least in India, appear to be like broken clocks: they are right by coincidence, without the right body of evidence to back their claims. Two of the most read articles published by The Wire Science in the last 15 months have been the fact-checks of Luc Montagnier’s comments on the two occasions he spoke up in the French press. On the first occasion, he said the novel coronavirus couldn’t have evolved naturally; on the second, he insisted mass vaccination was a big mistake. The context in which Montagnier published his remarks evolved considerably between the two events, and it tells an important story.

When Montagnier said in April 2020 that the virus was lab-made, the virus’s spread was just beginning to accelerate in India, Europe and the US, and the proponents of the lab-leak hypothesis to explain the virus’s origins had few listeners and were consigned firmly to the margins of popular discourse on the subject. In this environment, Montagnier’s comments stuck out like a sore thumb, and were easily dismissed.

But when Montagnier said in May 2021 that mass vaccination is a mistake, the context was quite different: in the intervening period, Nicholas Wade had published his article on why we couldn’t dismiss the lab-leak hypothesis so quickly; the WHO’s missteps were more widely known; China’s COVID-19 outbreak had come completely under control (actually or for all appearances); many vaccine-manufacturers’ immoral and/or unethical business practices had come to light; more people were familiar with the concept and properties of viral strains; the WHO had filed its controversial report on the possible circumstances of the virus’s origins in China; etc. As a result, speaking now, Montagnier wasn’t so quickly dismissed. Instead, he was, to many observers, the man who had got it right the first time, was brave enough to stick his neck out in support of an unpopular idea, and was speaking up yet again.

The problem here is that Luc Montagnier is a broken clock – in the way even broken clocks are right twice a day: not because they actually tell the time but because the time is coincidentally what the clock face is stuck at. On both occasions, the conclusions of Montagnier’s comments coincided with what conspiracists have been going on about since the pandemic’s start, but on both occasions, his reasoning was wrong. The same has been true of many other claims made during the pandemic. People have said things that have turned out to be true – but they themselves have still been wrong, because their particular reasons for believing those things to be true were wrong.

That is, unless you can say why you’re right, you’re not right. Unless you can explain why the time is what it is, you’re not a clock!

Montagnier’s case also illuminates a problem with soothsaying: if you wish to be a prophet, it is in your best interests to make as many predictions as possible – to increase the odds of reality coinciding with at least one prediction in time. And when such a coincidence does happen, it doesn’t mean the prophet was right; it means they weren’t wrong. There is a big difference between these positions, a difference that becomes pronounced when the conspiratorially-minded start incorporating every article published anywhere, from The Wire Science to The Daily Guardian, into their narratives of choice.

As the lab-leak hypothesis moved from the fringes of society to the centre, and as its proponents came mistakenly to conflate possibility with likelihood (i.e. zoonotic spillover and a lab leak are both valid hypotheses for the virus’s origins but they aren’t equally likely to be true), the conspiratorial proponents of the lab-leak hypothesis (the ones given to claiming Chinese scientists engineered the pathogen as a weapon, etc.) have steadily woven imaginary threads between the hypothesis and Indian scientists who opposed Covaxin’s approval, the Congress leaders who “mooted” vaccine hesitancy in their constituencies, scientists who made predictions that came to be wrong, even vaccines that were later found to have rare side-effects restricted to certain demographic groups.

The passage of time is notable here. I think adherents of lab-leak conspiracies are motivated by an overarching theory born entirely of speculation, not evidence, and then pick and choose from events to build the case that the theory is true. I say ‘overarching’ because, to the adherents, the theory is already fully formed and true, and pieces of it become visible to observers as and when the corresponding events play out. This could explain why time is immaterial to them. You and I know that Shahid Jameel and Gagandeep Kang cast doubt on Covaxin’s approval (and not on Covaxin itself) after we had become aware that Covaxin’s phase 3 clinical trials were only just getting started in December, and before Covishield’s side-effects in Europe and the US came to light (with the attendant misreporting). We know that at the time Luc Montagnier said the novel coronavirus was made in a lab, last year, we didn’t know nearly enough about the structural biology underlying the virus’s behaviour; we do now.

The order of events matters: we went from ignorance to knowledge, from knowing to knowing more, from thinking one thing to – in the face of new information – thinking another. But the conspiracy-theorists and their ideas lie outside of time: the order of events doesn’t matter; instead, to these people, 2021, 2022, 2023, etc. are preordained. They seem to be simply waiting for the coincidences to roll around.

An awareness of the time dimension (so to speak), or more accurately of the arrow of time, leads straightforwardly to the proper practice of science in our day-to-day affairs as well. As I said, unless you can say why you’re right, you’re not right. This is why effects lie in the future of causes, and why theories lie in the causal future of evidence. What we can say to be true at this moment depends entirely on what we know at this moment. If we presume what we can say at this moment to be true will always be true, we become guilty of dragging our theory into the causal history of the evidence – simply because we are saying that the theory will come true given enough time in which evidence can accrue.

This protocol (of sorts) to verify the truth of claims isn’t restricted to the philosophy of science, even if it finds powerful articulation there: a scientific theory isn’t true if it isn’t falsifiable outside its domain of application. It is equally legitimate and necessary in the daily practice of science and its methods, on Twitter and Facebook, in WhatsApp groups, every time your father, your cousin or your grand-uncle begins a question with “If the lab-leak hypothesis isn’t true…”.

The Wire Science is hiring

Location: Bengaluru or New Delhi

The Wire Science is looking for a sub-editor to conceptualise, edit and produce high-quality news articles and features in a digital newsroom.

Requirements

  • Good facility with the English language
  • Excellent copy-editing skills
  • A strong news sense
  • A strong interest in new scientific findings
  • Ability to read scientific papers
  • Familiarity with concepts related to the scientific method and scientific publishing
  • Familiarity with popular social media platforms and their features
  • Familiarity with the WordPress content management system (CMS)
  • Ability to handle data (obtaining data, sorting and cleaning datasets, using tools like Flourish to visualise)
  • Strong reasoning skills
  • 1-3 years’ work experience
  • Optional: have a background in science or engineering

Responsibilities

  • Edit articles according to The Wire Science‘s requirements, within tight deadlines
  • Make editorial decisions in reasonable time and communicate them constructively
  • Liaise with our reporters and freelancers, and work together to produce stories
  • Work with The Wire Science‘s editor to develop ideas for stories
  • Compose short news stories
  • Work on multimedia rendering of published stories (i.e. convert text stories to audio/video stories)
  • Work with the tech and audience engagement teams to help produce and implement features

Salary will be competitive.

Dalit, Adivasi, OBC and minority candidates are encouraged to apply.

If you’re interested, please write to Vasudevan Mukunth at science@thewire.in. Mention you’re applying for The Wire Science sub-editor position in the subject line of your email. In addition to attaching your resumé or CV, please include a short cover letter in the email’s body describing why you think you should be considered.

If your application is shortlisted, we will contact you for a written test followed by an interview.

A Q&A about my job and science journalism

A couple of weeks ago, some students from a university in South India got in touch to ask a few questions about my job and about science communication. The correspondence was entirely over email, and I’m pasting it in full below (with permission). I’ve edited a few parts in one of two ways – to make myself clearer or to hide sensitive information – and removed one question because its purpose was clarificatory.

1) What does your role as a science editor look like day to day?

My day as science editor begins at around 7 am. I start off by catching up on the day’s headlines and other news, especially all the major newspapers and social media channels. I also handle a part of The Wire Science‘s social media presence, so I schedule some posts in the first hour.

Then, from 8 am onwards, I begin going through the publishing schedule – which is a document I prepare on the previous evening, listing all the articles that writers are expected to file on that day, as well as what I need to edit/publish and in which position on the homepage. At 9.30 am, my colleagues and I get on a conference call to discuss the day’s top stories and to hear from our reporters on which stories they will be pursuing that day (and any stories we might be chasing ourselves). The call lasts for about an hour.

From 10.30-11 am onwards, I edit articles, reply to emails, commission new articles, discuss potential story ideas with some reporters, scientists and my colleagues, check on the news cycle every now and then, make sure the site is running smoothly, discuss changes or tweaks to be made to the front-end with our tech team, and keep an eye on my finances (how much I’ve commissioned for, who I need to pay, payment deadlines, pending allocations, etc.).

All of this ends at about 4.30 pm. I close my laptop at that point but I continue to have work until 6 pm or so, mostly in the form of emails and maybe some calls. The last thing I do is prepare the publishing schedule for the next day. Then I shut shop.

2) With leading global newspapers restructuring the copy desk, what are the changes the Indian newspapers have made in the copy desk after the internet boom?

I’m not entirely familiar with the most recent changes because I stopped working with a print establishment six years ago. When I was part of the editorial team at The Hindu, the most significant change related to the advent of the internet had less to do with the copy desk per se and more to do with the business model. At least the latter seemed more pressing to me.

But this said, in my view there is a noticeable difference between how one might write for a newspaper and for the web. So a more efficient copy-editing team has to be able to handle both styles, as well as be able to edit copy to optimise for audience engagement and readability both online and offline.

3) Indian publications are infamous for mistakes in the copy. Is this a result of competition for breaking news or a lack of knack for editing?

This is a question I have been asking myself since I started working. I think a part of the answer you’re looking for lies in the first statement of your question. Indian copy-editors are “infamous for mistakes” – but mistakes according to whom?

The English language came to India from outside, in different ways; it is not homegrown. British colonists brought it here, so English took root as the language of administration. English is the de facto language worldwide for the conduct of science, so scientists have to learn it. Similarly, there are other ways in which the use of English has been rendered useful and important and necessary. English wasn’t all these things in and of itself, not without its colonial underpinnings.

So today, in India, English is – among other things – the language you learn to be employable, especially with MNCs and the like. And because of its historical relationships, English is taught only in certain schools, schools that typically have mostly students from upper-caste/upper-class families. English is also spoken only by certain groups of people, who may wish to keep it to themselves as a class symbol, etc. I’m speaking very broadly here. My point is that English is reserved typically for people who can afford it, both financially and socio-culturally. Not everyone speaks ‘good’ English (as defined by one particular lexicon or whatever), nor can they be expected to.

So what you may see as mistakes in the copy may just be a product of people not being fluent in English, and composing sentences in ways other than you might as a result. India has a contested relationship with English and that should only be expected at the level of newsrooms as well.

However, if your question had to do with carelessness among copy-editors – I don’t know if that is a very general problem (nor do I know what the issues might be in a newsroom publishing in an Indian language). Yes, in many establishments, the management doesn’t pay as much attention to the quality of writing as it should, perhaps in an effort to cut costs. And in such cases, there is a significant quality cost.

But again, we should ask ourselves as to whom that affects. If a poorly edited article is impossible to read or uses words and ideas carelessly, or twists facts, that is just bad. But if a poorly composed article is able to get its points across without misrepresenting anyone, whom does that affect? No one, in my opinion, so that is okay. (It could also be the case that the person whose work you’re editing sees the way they write as a political act of sorts, and if you think such an issue might be in play, it becomes important to discuss it with them.)

Of course, the matter of getting one’s point across is very subjective, and as a news organisation we must ensure the article is edited to the extent that there can be no confusion whatsoever – and edited that much more carefully if it’s about sensitive issues, like the results of a scientific study. And at the same time we must also stick to a word limit and think about audience engagement.

My job as the editor is to ensure that people are understood, but in order to help them be understood better and better, I must be aware of my own privileges and keep subtracting them from the editorial equation (in my personal case: my proficiency with the English language, which includes many Americanisms and Britishisms). I can’t impose my voice on my writers in the name of helping them. So there is a fine line here that editors need to tread carefully.

4) What are the key points that a science editor should keep in mind while dealing with copy?

Aside from the points I raised in my previous answer, there are some issues that are specific to being a good science editor. I don’t claim to be good (that is for others to say) – but based on what I have seen in the pages of other publications, I would only say that not every editor can be a science editor without some specific training first. This is because there are some things that are specific to science as an enterprise, as a social affair, that are not immediately apparent to people who don’t have a background in science.

For example, the most common issue I see is in the way scientific papers are reported – as if they are the last word on that topic. Many people, including many journalists, seem to think that if a scientific study has found coffee cures cancer, then it must be that coffee cures cancer, period. But every scientific paper is limited by the context in which the experiment was conducted, by the limits of what we already know, etc.

I have heard some people define science as a pursuit of the truth, but in reality it’s a sort of opposite – science is a way to subtract uncertainty. Imagine shining a torch around a room as you look for something, except the torch can only find things that you don’t want, so you can throw them away. Then you turn on the lights. Papers are frequently wrong and/or are updated to yield new results. This seldom means the previous paper was fraudulent or badly done; it’s just the way science works. And this perspective on science can help you think through what a science editor’s job is as well.

Another thing that’s important to know is that science progresses in incremental fashion and that the more sensational results are either extremely unlikely or simply misunderstood.

If you are keen on plumbing deeper depths, you could also consider questions about where authority comes from and how it is constructed in a narrative, the importance of indeterminate knowledge-states, the pros and cons of scientism, what constitutes scientific knowledge, how scientific publishing works, etc.

A science editor has to know all these things and ensure that in the process of running a newsroom or editing a publication, they don’t misuse, misconstrue or misrepresent scientific work and scientists. And in this process, I think it’s important for a science editor to not be considered to be subservient to the interests of science or scientists. Editors have their own goals, and more broadly speaking science communication in all forms needs to be seen and addressed in its own right – as an entity that doesn’t owe anything to science or scientists, per se.

5) In a country where press freedom is often sacrificed, how does one deal with political pieces, especially when there is proof against a matter concerning the government?

I’m not sure what you mean by “proof against a matter concerning the government.” But in my view, the likelihood of different outcomes depends on the business model. If, for example, you the publisher make a lot of money from a hotshot industrialist and his company, then obviously you are going to tread carefully when handling stories about that person or the company. How you make your money dictates who you are ultimately answerable to. If you make your money by selling newspapers to your readers, or collecting donations from them like The Wire does, you are answerable to your readers.

In this case, if we are handling a story in which the government is implicated in a bad way, we will do our due diligence and publish the story. This ‘due diligence’ is important: you need to be sure you have the requisite proof, that all parts of the story are reliable and verifiable, that you have documentary evidence of your claims, and that you have given the implicated party a chance to defend themselves (e.g. by being quoted in the story).

This said, absolute press freedom is not so simple to achieve. It doesn’t just need brave editors and reporters. It also needs institutions that will protect journalists’ rights and freedoms, and also shield them reliably from harm or malice. If the courts are not likely to uphold a journalist’s rights or if the police refuse proper protection when the threat of physical violence is apparent, blaming journalists for “sacrificing” press freedom is ignorant. There is a risk-benefit analysis worth having here, if only to remember that while the benefit of a free press is immense, the risks shouldn’t be taken lightly.

6) Research papers are lengthy and editors have deadlines. How do you make sure to communicate information with the right context for a wider audience?

Often the quickest way to achieve this is to pick your paper and take it to an independent scientist working in the same field. These independent comments are important for the story. But specific to your question, these scientists – if they have the time and are so inclined – can often also help you understand the paper’s contents properly, and point out potential issues, flaws, caveats, etc. These inputs can help you compose your story faster.

I would also say that if you are an editor looking for an article on a newly published research paper, you would be better off commissioning a reporter who is familiar, to whatever extent, with that topic. Obviously if you assign a business reporter to cover a paper about nanofluidic biosensors, the end result is going to be somewhere between iffy and disastrous. So to make sure the story has got its context right, I would begin by assigning the right reporter and making sure they’ve got comments from independent scientists in their copy.

7) What are some of the major challenges faced by science communicators and reporters in India?

This is a very important question, and I can’t hope to answer it concisely or even completely. In January this year, the office of the Principal Scientific Advisor to the Government of India organised a meeting with a couple dozen science journalists and communicators from around India. I was one of the attendees. Many of the issues we discussed, which would also be answers to your question, are described here.

If, for the purpose of your assignment, you would like me to pick one – I would go with the fact that science journalism, and science communication more broadly, is not widely acknowledged as an enterprise in its own right. As a result, many people don’t see the value in what science journalists do. A second and closely related issue is that scientists often don’t respond on time, even if they respond at all. I’m not sure of the extent to which this is an etiquette issue. But by calling it an etiquette issue, I also don’t want to overlook the possibility that some scientists don’t respond because they don’t think science journalism is important.

I was invited to attend the Young Investigators’ Meeting in Guwahati in March 2019. There, I met a big bunch of young scientists who really didn’t know why science journalism exists or what its purpose is. One of them seemed to think that since scientific papers pass through peer review and are published in journals, science journalists are wasting their time by attempting to discuss the contents of those papers with a general audience. This is an unnecessary barrier to my work – but it persists, so I must constantly work around or over it.

8) What are the consequences if a research paper has been misreported?

The consequence depends on the type and scope of misreporting. If you have consulted an independent scientist in the course of your reporting, you give yourself a good chance of avoiding reporting mistakes.

But of course mistakes do slip through. And with an online publication such as The Wire – if a published article is found to have a mistake, we usually correct the mistake once it has been pointed out to us, along with a clarification at the bottom of the article acknowledging the issue and recording the time at which the change was made. If you write an article that is printed and is later found to have a mistake, the newspaper will typically issue an erratum (a small note correcting a mistake) the next day.

If an article is found to have a really glaring mistake after it is published – and I mean an absolute howler – the article could be taken down or retracted from the newspaper’s record along with an explanation. But this rarely happens.

9) In many ways, copy editing disconnects you from your voice. Does it hamper your creativity as a writer?

It’s hard to find room for one’s voice in a news publication. About nine-tenths of the time, each of us is working on news copy, in which a voice is neither expected nor able to add much value of its own. This said, when there is room to express oneself more, to write in one’s voice, so to speak, copy-editing doesn’t have to remove it entirely.

Working with voices is a tricky thing. When writers pitch or write articles in which their voices are likely to show up, I always ask them beforehand as to what they intend to express. This intention is important because it helps me edit the article accordingly (or decide whether to edit it at all). The writer’s voice is part of this negotiation. Like I said before, my job as the editor is to make sure my writers convey their points clearly and effectively. And if I find that their voice conflicts with the message or vice versa, I will discuss it with them. It’s a very contested process and I don’t know if there is a black-and-white answer to your question.

It’s always possible, of course, that a bad editor will simply remodel your work to suit their needs without checking with you. But short of that, it’s a negotiation.

Ads on The Wire Science

Sometime this week, but quite likely tomorrow, advertisements will begin appearing on The Wire Science. The Wire‘s, and by extension The Wire Science‘s, principal source of funds is donations from our readers. We also run ads as a way to supplement this revenue; they’re especially handy to make up small shortfalls in monthly donations. Even so, many of these ads look quite ugly – individually, often with a garish choice of colours, but more so all together, by the very fact that they’re advertisements, representing a business model often rightly blamed for the dilution of good journalism published on the internet.

But I offer all of these opinions as caveats because I’m quite looking forward to having ads on The Wire Science. At least one reason must be obvious: while The Wire‘s success itself, for being an influential and widely read, respected and shared publication that runs almost entirely on readers’ donations, is inspiring, The Wire Science as a niche publication focusing on science, health and the environment (in its specific way) has a long way to go before it can be fully reader funded. This is okay if only because it’s just six months old – and The Wire got to its current pride of place after more than four years, with six major sections and millions of loyal readers.

As things stand, The Wire Science receives its funds as a grant of sorts from The Wire (technically, it’s a section with a subdomain). We don’t yet have a section-wise breakdown of where on the site people donate from, so while The Wire Science also solicits donations from readers (at the bottom of every article), it’s perhaps best to assume it doesn’t bring in much. Against this background, the fact that The Wire Science will run ads from this week is worth celebrating for two reasons: 1. that it’s already a publication where ads are expected to bring in a not insubstantial amount of money, and 2. that a part of this money will be reinvested in The Wire Science.

I’m particularly excited about reason no. 1. Yes, ads suck, but I think that’s truer in the specific context of ads being the principal source of funds – when editors are subordinated to business managers and editorial decisions serve the bottomline. But our editorial standards won’t be diluted by the presence of ads, because ads will make up only a small part of our revenue mix. (I admit that psychologically it’s going to take some adjusting.) The Wire Science is already accommodated in The Wire’s current outlay, which means ad revenue is opportunistic – and an opportunity in itself to commission an extra story now and then, get more readers to the site and have a fraction of them donate.

I hope you’ll be able to see it the same way, and skip the ad-blocker if you can. 🙂

Eight years

On June 1 last year, I wrote:

Today, I complete seven years of trying to piece together a picture of what journalism is and where I fit in.

Today, I begin my ninth year as a journalist. I’m happy to report I’m not so confused this time round, if only because in the intervening time, two things have taken shape that have allowed me to channel my efforts and aspirations better, leaving less room for at least some types of uncertainty.

The first is The Wire Science, which was born as an idea around August 2019 and launched as a separate website in February 2020. From The Wire‘s point of view, the vision backing the product is “to build a constituency for science journalism – of contributors as well as readers – and drive a science journalism ecosystem.”

For me, this is in addition an opportunity to publish high-quality science writing that breaks away from the instrumental narratives that dominate most journalistic science pieces in India today.

The second thing that took shape was our readers’ and supporters’ appreciation for The Wire‘s work in general. I like to think we’re slowly breaking even on this front, indicating that we’re doing something right.

On these notes of focus, progress and hope – even though the last 12 months have been terrible in many ways – I must say I do look forward to the next 12 months. I’m sure lots of things are going to go wrong, just as they’ve been going wrong, but for once it also feels like there are going to be meaningful opportunities to do something about them.

For coronavirus claims, there is a world between true and false

In high school, you must have learnt about Boolean algebra, possibly the most fascinating kind of algebra for its deceptive ease and simplicity. But thanks to its role in the foundations of computer science, Boolean algebra – at least as we learnt it in school – is fixated on ‘true’ and ‘false’ states, not on the state of ‘don’t know’ that falls in between. This state may not have many applications as regards the functioning of logic gates but in the real world, it is quite important, especially when the truth threatens to be spun out of control.

Amitabh Bachchan recently published a video in which he delivered a monologue claiming that when a fly alights on human faeces containing traces of the new coronavirus, flies off and then alights on some food, the food could also be contaminated by the same virus. The Wire Science commissioned a fact-check from Dr Deepak Natarajan, a reputed (and thankfully opinionated) cardiologist in New Delhi. In his straightforward article, Dr Natarajan presents evidence from peer-reviewed papers to argue that while we know the new coronavirus does enter the faeces of an infected person, we don’t know anything about whether the virus remains viable, or capable of precipitating an infection. Second, we know nothing of the participation of flies either.

The thing to remember here is that, during a panic – or in a pre-panic situation that constantly threatens to devolve into a panic – society as such has an unusually high uptake capacity for information that confirms its biases, irrespective of whether that information is true. This property, so to speak, amplifies the importance of ‘not knowing’.

Thanks to scientism, there is a common impression among many experts and most non-experts that science has, or could have, the answers to all questions that could ever be asked. So when a scientist says she does not know something, there is a pronounced tendency among some groups of people – particularly, if not entirely, those who may not be scientistic themselves but believe science itself is scientistic – to assume the lack of an answer means the absence of an answer. That is, to think “If the scientist does not have an answer, then the science does not have an answer”, rather than “If the scientist does not have an answer, then the science does not have an answer yet” or even “If the scientist does not have an answer yet, she could have an answer later“.

This response at a time of panic or pre-panic forces almost all information to be classified as either ‘true’ or ‘false’, precluding the agency science still retains to move towards a ‘true’ or ‘false’ conclusion and rendering the information’s truth-value a foregone conclusion. That is, we need evidence to say if something is true – but we also need to understand that saying something is ‘not true’ without outright saying it is ‘false’ is an important state of the truth itself.

Acknowledging ‘don’t know’ also forces the claimant to be more accountable. Here is one oversimplified but nonetheless illustrative example: when only ‘true’ and ‘false’ exist, any new bit of information has a 50% chance of being in one bin or the other. But when ‘not true/false’ or ‘don’t know’ is in the picture, new information has only a 33% chance of assuming any one particular truth value. Further, the only truth value based on which people should be allowed to claim something is true is ‘true’. ‘False’ has never been good enough, but ‘don’t know’ is not good enough either – which means that before we subject a claim to a test, it has a 66% chance of being ‘not true’.
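
To make the arithmetic concrete, here is a minimal sketch in Python – my own illustration, not anything from the example above – that treats ‘don’t know’ as a third truth value (in the spirit of Kleene’s three-valued logic) and assumes, as the example does, that every truth value is equally likely before a claim is tested:

```python
from fractions import Fraction

# Two-valued logic: a claim is either True or False.
TWO_VALUED = [True, False]

# Three-valued logic: True, False, or None standing in for 'don't know'.
THREE_VALUED = [True, False, None]

def chance_of(value, truth_values):
    """Probability of landing on one truth value, assuming all values are equally likely."""
    return Fraction(truth_values.count(value), len(truth_values))

print(chance_of(True, TWO_VALUED))        # 1/2 – 50% when only 'true' and 'false' exist
print(chance_of(True, THREE_VALUED))      # 1/3 – 33% once 'don't know' is in the picture
print(1 - chance_of(True, THREE_VALUED))  # 2/3 – a 66% chance of being 'not true' before any test
```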

Amitabh Bachchan’s mistake was to conflate ‘don’t know’ and ‘true’ without considering the possibility of ‘not true’; he has thus ended up exposing his millions of followers on Twitter to claims that are decidedly not true. As Dr Natarajan said, silence has never been more golden.

Another controversy, another round of blaming preprints

On February 1, Anand Ranganathan, the molecular biologist more popular as a columnist for Swarajya, amplified a new preprint paper from scientists at IIT Delhi that (purportedly) claims the genome of the Wuhan coronavirus (2019 nCoV) appears to contain some sequences also found in the human immunodeficiency virus but not in any other coronaviruses. Ranganathan also chose to magnify the preprint paper’s claim that the sequences’ presence was “non-fortuitous”.

To be fair, the IIT Delhi group did not properly qualify what they meant by this term, but that wouldn’t exculpate Ranganathan and others who followed him: to first amplify, with alarmist language, a claim that did not deserve such treatment, and then, once he discovered his mistake, to wonder out loud whether such “non-peer reviewed studies” about “fast-moving, in-public-eye domains” should be published before scientific journals have subjected them to peer-review.

https://twitter.com/ARanganathan72/status/1223444298034630656
https://twitter.com/ARanganathan72/status/1223446546328326144
https://twitter.com/ARanganathan72/status/1223463647143505920

The more conservative scientist is likely to find ample room here to revive the claim that preprint papers only promote shoddy journalism, and that preprint papers that are part of the biomedical literature should be abolished entirely. This is bullshit.

The ‘print’ in ‘preprint’ refers to the act of a traditional journal printing a paper for publication after peer-review. A paper is designated a ‘preprint’ if it hasn’t yet undergone peer-review, whether or not it has been submitted to a scientific journal for consideration. To quote from an article championing the use of preprints during a medical emergency, by three of the six cofounders of medRxiv, the preprints repository for the biomedical literature:

The advantages of preprints are that scientists can post them rapidly and receive feedback from their peers quickly, sometimes almost instantaneously. They also keep other scientists informed about what their colleagues are doing and build on that work. Preprints are archived in a way that they can be referenced and will always be available online. As the science evolves, newer versions of the paper can be posted, with older historical versions remaining available, including any associated comments made on them.

In this regard, Ranganathan’s ringing the alarm bells (with language like “oh my god”) the first time he tweeted the link to the preprint paper without sufficiently evaluating the attendant science was his decision, and not prompted by the paper’s status as a preprint. Second, the bioRxiv preprint repository where the IIT Delhi document showed up has a comments section, and it was brimming with discussion within minutes of the paper being uploaded. More broadly, preprint repositories are equipped to accommodate peer-review. So if anyone had looked in the comments section before tweeting, they wouldn’t have had reason to jump the gun.

Third, and most important: peer-review is not fool-proof. Instead, it is a legacy method employed by scientific journals to filter legitimate from illegitimate research and, more recently, higher quality from lower quality research (using ‘quality’ from the journals’ oft-twisted points of view, not as an objective standard of any kind).

This framing supports three important takeaways from this little scandal.

A. Much like preprint repositories, peer-reviewed journals also regularly publish rubbish. (Axiomatically, just as conventional journals also regularly publish the outcomes of good science, so do preprint repositories; in the case of 2019 nCoV alone, bioRxiv, medRxiv and SSRN together published at least 30 legitimate and noteworthy research articles.) It is just that conventional scientific journals conduct the peer-review before publication and preprint repositories (and research-discussion platforms like PubPeer), after. And, in fact, conducting the review after allows it to be a continuous process able to respond to new information, and not a one-time event that culminates with the act of printing the paper.

But notably, preprint repositories can recreate journals’ ability to closely control the review process and ensure only experts’ comments are in the fray by enrolling a team of voluntary curators. The arXiv preprint server has been successfully using a similar team to carefully eliminate manuscripts advancing pseudoscientific claims. So, as such, it makes more sense to ensure people are familiar with the preprint and post-publication review paradigm than to take advantage of their confusion and call for preprint papers to be eliminated altogether.

B. Those who support the idea that preprint papers are dangerous, and argue that peer-review is a better way to protect against unsupported claims, are by proxy advocating for the persistence of a knowledge hegemony. Peer-review is opaque, sustained by unpaid and overworked labour, and performs the same function that an open discussion often does at larger scale and with greater transparency. Indeed, the transparency represents the most important difference: since peer-review has traditionally been the demesne of journals, supporting peer-review is tantamount to designating journals as the sole and unquestionable arbiters of what knowledge enters the public domain and what doesn’t.

(Here’s one example of how such gatekeeping can have tragic consequences for society.)

C. Given these safeguards and perspectives, and as I have written before, bad journalists and bad comments will be bad irrespective of the window through which an idea has presented itself in the public domain. There is a way to cover different types of stories, and the decision to abdicate one’s responsibility to think carefully about the implications of what one is writing can never have a causal relationship with the subject matter. The Times of India and the Daily Mail will continue to publicise every new paper discussing whatever coffee, chocolate and/or wine does to the heart, and The Hindu and The Wire Science will publicise research published as preprint papers because we know how to be careful and which risks to protect ourselves against.

By extension, ‘reputable’ scientific journals that use pre-publication peer-review will continue to publish many papers that will someday be retracted.

An ongoing scandal concerning spider biologist Jonathan Pruitt offers a useful parable – that journals don’t publish bad science only out of wilful negligence or because of poor peer-review, but that such failures still do well to highlight the shortcomings of the latter. A string of papers based on work that Pruitt led was found to contain implausible data in support of some significant conclusions. Dan Bolnick, the editor of The American Naturalist, which became the first journal to retract Pruitt’s papers that it had published, wrote on his blog on January 30:

I want to emphasise that regardless of the root cause of the data problems (error or intent), these people are victims who have been harmed by trusting data that they themselves did not generate. Having spent days sifting through these data files I can also attest to the fact that the suspect patterns are often non-obvious, so we should not be blaming these victims for failing to see something that requires significant effort to uncover by examining the data in ways that are not standard for any of this. … The associate editor [who Bolnick tasked with checking more of Pruitt’s papers] went as far back as digging into some of Pruitt’s PhD work, when he was a student with Susan Riechert at the University of Tennessee Knoxville. Similar problems were identified in those data… Seeking an explanation, I [emailed and then called] his PhD mentor, Susan Riechert, to discuss the biology of the spiders, his data collection habits, and his integrity. She was shocked, and disturbed, and surprised. That someone who knew him so well for many years could be unaware of this problem (and its extent), highlights for me how reasonable it is that the rest of us could be caught unaware.

Why should we expect peer-review – or any kind of review, for that matter – to be better? The only thing we can do is be honest, transparent and reflexive.