Sci-Hub isn’t just for scientists

Quite a few reporters from other countries have reached out to me, directly or indirectly, to ask about scientists to whom they can speak about how important Sci-Hub is to their work.

This attention to Sci-Hub is commendable, against the backdrop of the case in the Delhi High Court, filed by a consortium of three ‘legacy’ publishers of scientific papers, to have access to the website cut off in India. There has been a groundswell of support for Sci-Hub in India, to no one’s surprise, considering the exorbitant paywalls that legacy publishers have erected in front of the papers they publish. Before Sci-Hub, it was nearly impossible to access these papers outside university libraries, and the libraries themselves paid through the nose to keep up their journal subscriptions. But as in drug development, the development of scientific knowledge also happens mostly on government money, so legacy publishers effectively charge people twice: first when taxes fund the research behind the papers and again when readers have to pay to get past the paywalls. The prices are also somewhat arbitrary, often far removed from the costs publishers incur to publish each paper and/or maintain their websites.

All this said, I think one more demographic is often missing from this conversation about the importance of Sci-Hub, and its absence unfairly limits that conversation to scientists. This is the community of science writers, reporters, editors, etc. I have used Sci-Hub regularly since 2013 – to identify papers I can report on, to write about cool scientific work on my blog and to select data-heavy papers whose findings I attempt to replicate by writing code of my own. We must also highlight Sci-Hub’s benefits for journalists, if only to remember that science can empower in more ways than one – including by providing the means to test the validity of knowledge and reduce uncertainty, by letting people learn the nature of facts and expertise based on what is considered valid or legitimate, and by broadening access to the tools of science and the methods of proof beyond those whose careers depend on them.


Middle fingers to the NYT and NYer

On April 18, celebrity journalist Ronan Farrow tweeted that he’d “spent two years digging” into the inside story of Pegasus, the spyware whose use by governments around the world – including that of India – to spy on members of civil society, their political opponents and their dissenters was reported by an international collaboration that included The Wire. Yet Farrow credits the “Pegasus Project” only once in his story, and even then only to say that its reporting “reinforced the links between NSO Group and anti-democratic states” – mentioning nothing of what the project’s journalists uncovered, probably to avoid admitting that his own piece overlaps significantly with theirs – even as his own piece is cast as a revelatory investigation. Tell me, Mr Farrow, when you dug and dug, did you literally go underground? Or is this another instance of your tendency to keep half the spotlight on yourself when your stories are published?

This is the second instance just this week of an influential American publication re-reporting something one or more outlets in the “Orient” had already published – in both cases a substantial amount of time earlier – while making no mention that it was simply following up. But worse, the New York Times, the second offender, whose Stephanie Nolen and Karan Deep Singh reported on Amruta Byatnal’s report in Devex two weeks later and based on the same sources, wrote the story as if it were breaking news. (The story: India wanted the WHO to delay the release of a report by 10 years because it said India had at least four times as many deaths during the COVID-19 pandemic as its official record claimed.)

To make matters worse, India’s Union health ministry (in a government in which Prime Minister Narendra Modi calls all the shots) responded to the New York Times story but not to Devex (nor to The Wire Science’s re-reporting, based on comments from other sources and with credit to Byatnal and Devex). This BJP government and its ministers like to claim that they’re better than the West on one occasion and that India needs to overcome its awe of the West on another, yet when Western publications (re)report developments discovered by journalists working through the minefield that is India’s landscape of stories, the ministers turn into meerkats.


For the journalists in between who first broke the stories, it’s a double whammy: American outlets that will brazenly steal their ideas and obfuscate memories of their initiative and the Indian government that will treat them as if they don’t exist.


MIT develops thermo-PV cell with 40% efficiency

Researchers at MIT have developed a heat engine that can convert heat to electricity with 40% efficiency. Unlike traditional heat engines – a common example is the internal combustion engine inside a car – this device doesn’t have any moving parts. It has also been designed to work with a heat source at a temperature of 1,900° to 2,400° C. Effectively, it’s like a solar cell optimised to work with photons from vastly hotter sources – although its efficiency still sets it apart. If you know the history, you’ll understand why 40% is a big deal. And if you know a bit of optics and some materials science, you’ll understand how this device could be an important part of the world’s efforts to decarbonise its power sources. But first the history.

We’ve known how to build heat engines for almost two millennia. They were first built to convert heat, generated by burning a fuel, into mechanical energy – so they’ve typically had moving parts. For example, the internal combustion engine combusts petrol or diesel and harnesses the energy produced to move a piston. However, the engine can’t convert all the heat into mechanical work: once the piston has moved, the leftover heat can’t be fed back into the cycle without ‘giving back’ the work just extracted, which would nullify the engine’s purpose. So once the piston has been moved, the engine dumps the waste heat and begins the next cycle of heat extraction from more fuel. (In the parlance of thermodynamics, the origin of the heat is called the source and its eventual resting place is called the sink.)

The inevitability of this waste heat keeps the heat engine’s efficiency from ever reaching 100% – and the efficiency is dragged down further by the mechanical losses implicit in the moving parts (the piston, in this case). In 1824, the French engineer and physicist Sadi Carnot derived the formula for the maximum possible efficiency of a heat engine that works in this way. (The formula also assumes that the engine is reversible – i.e. that it could be run backwards, pumping heat from the colder sink to the hotter source.) The number spit out by this formula is called the Carnot efficiency. No heat engine can have an energy efficiency greater than its Carnot efficiency. The internal combustion engines of today achieve a real-world efficiency of around 37%, well short of their Carnot limit. A steam generator at a large power plant can go up to 51%. Against this background, the heat engine that the MIT team has developed has a celebration-worthy efficiency of 40%.
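The Carnot formula itself is simple – the efficiency is 1 − T_sink/T_source, with both temperatures in kelvin – and easy to check against the emitter temperatures quoted above. A quick back-of-envelope sketch of my own (the ~25° C sink temperature is my assumption, not a figure from the MIT paper):

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float = 25.0) -> float:
    """Maximum efficiency of a reversible heat engine; temperatures in °C."""
    t_hot = t_hot_c + 273.15    # convert to kelvin
    t_cold = t_cold_c + 273.15
    return 1 - t_cold / t_hot

# Carnot limits across the TPV emitter's temperature range, with a ~25 °C sink
for t in (1900, 2400):
    print(f"{t} °C source: Carnot limit ≈ {carnot_efficiency(t):.0%}")
```

The theoretical ceiling works out to roughly 86-89%, so the measured 40% is well short of the Carnot limit – but remarkable for a device with no moving parts.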

The other notable thing about it is the amount of heat with which it can operate. Two potential applications of the new device come immediately to mind: using the waste heat from something that operates at 1,900-2,400° C, and drawing heat from something that stores energy at those temperatures. There aren’t many entities in the world that maintain a temperature of 1,900-2,400° C while also dumping waste heat. Work on the device caught my attention after I spotted a press release from MIT. The release described one application that combined both possibilities in the form of a thermal battery system. Here, heat from the Sun is concentrated (using lenses and mirrors) in graphite blocks located in a highly insulated chamber. When the need arises, the insulation can be removed to a suitable extent for the graphite to lose some heat, which the new device then converts to electricity.

On Twitter, user Scott Leibrand (@ScottLeibrand) also pointed me to a similar technology called FIRES – short for ‘Firebrick Resistance-Heated Energy Storage’, proposed by MIT researchers in 2018. According to a paper they wrote, it “stores electricity as … high-temperature heat (1000–1700 °C) in ceramic firebrick, and discharges it as a hot airstream to either heat industrial plants in place of fossil fuels, or regenerate electricity in a power plant.” They add that “traditional insulation” could limit heat leakage from the firebricks to less than 3% per day and estimate a storage cost of $10/kWh – “substantially less expensive than batteries”. This is where the new device could shine, or better yet enable a complete power-production system: by converting heat deliberately leaked from the graphite blocks or firebricks to electricity, at 40% efficiency. Even given that radiative heat transfer is more efficient at higher temperatures, this is impressive – all the more so since such energy storage options are also geared for the long term.
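The ‘3% per day’ leakage figure from the FIRES paper compounds in the obvious way. A small sketch of my own to see what that rate implies for storage over days and weeks (the paper itself doesn’t present this calculation):

```python
def heat_retained(days: int, daily_loss: float = 0.03) -> float:
    """Fraction of stored heat remaining after `days`, assuming the daily
    leakage compounds (i.e. each day loses 3% of what is left)."""
    return (1 - daily_loss) ** days

for d in (1, 7, 30):
    print(f"after {d:2d} day(s): {heat_retained(d):.0%} of the heat remains")
```

Even with “traditional insulation”, then, roughly four-fifths of the stored heat survives a week – though only about 40% survives a month.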

Let’s also take a peek at how the device works. It’s called a thermophotovoltaic (TPV) cell. The “photovoltaic” in the name indicates that it uses the photovoltaic effect to create an electric current. This is closely related to the photoelectric effect: in both cases, an incoming photon knocks out an electron in the material, creating a voltage that then supports an electric current. In the photoelectric effect, the electron is knocked clean out of the material; in the photovoltaic effect, it stays within the material and can be recaptured. Next: in order to achieve the high efficiency, the research team wrote in its paper that it did three things. It’s a bunch of big words but they have straightforward implications, as I explain, so don’t back down.

1. “The usage of higher bandgap materials in combination with emitter temperatures between 1,900 and 2,400 °C” – The band gap refers to the energy difference between a material’s valence band and its conduction band. In semiconductors, for example, when electrons in the valence band are imparted enough energy, they can jump across the band gap into the conduction band, where they can flow through the material, conducting electricity. The same thing happens in the TPV cell, where incoming photons can ‘kick’ electrons into the material’s conduction band if they have the right amount of energy. Because the photon source is a very hot object, the photons are bound to lie in the near-infrared, each carrying around 1-1.5 electron-volts (eV). So the corresponding TPV material also needs to have a band gap of 1-1.5 eV. This brings us to the second point.
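That 1-1.5 eV figure can be sanity-checked with two textbook relations: Wien’s displacement law for the peak emission wavelength of a hot body, and E = hc/λ for the photon energy at that wavelength. A rough sketch of my own (it assumes a blackbody emitter and uses rounded constants; it is not from the paper):

```python
# Rounded physical constants
H = 6.626e-34   # Planck constant, J·s
C = 2.998e8     # speed of light, m/s
B = 2.898e-3    # Wien's displacement constant, m·K
EV = 1.602e-19  # joules per electron-volt

def peak_photon_energy_ev(temp_c: float) -> float:
    """Photon energy (eV) at the blackbody emission peak of a source at temp_c °C."""
    temp_k = temp_c + 273.15
    wavelength = B / temp_k          # Wien's law: peak wavelength in metres
    return H * C / wavelength / EV   # E = hc/λ, converted to eV

for t in (1900, 2400):
    print(f"{t} °C emitter: peak photons carry ~{peak_photon_energy_ev(t):.2f} eV")
```

The peaks land at roughly 0.9-1.1 eV, consistent with the ~1-1.5 eV band gaps the team targeted (the blackbody spectrum’s high-energy tail extends well above the peak).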

2. “High-performance multi-junction architectures with bandgap tunability enabled by high-quality metamorphic epitaxy” – Architecture refers to the configuration of the cell’s physical, electrical and chemical components, and epitaxy refers to the way in which the cell is made. In the new TPV cell, the MIT team used a multi-junction architecture that allowed the device to ‘accept’ photons of a range of wavelengths (corresponding to the temperature range). This is important because incoming photons can have one of two effects: either kick out an electron or heat up the material. The latter is undesirable and should be avoided, so the multi-junction setup is designed to usefully absorb as many photons as possible. A related issue is that the power output per unit area of an object radiating heat scales with the fourth power of its temperature. That is, if its temperature increases by a factor of x, its power output per unit area will increase by a factor of x^4. Since the heat source of the TPV cell is so hot, it will have a high power output, again privileging the multi-junction architecture. The epitaxy is not interesting to me, so I’m skipping it. But I should note that cells like this one aren’t ubiquitous because making them is a highly intricate process.
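The fourth-power scaling invoked above is the Stefan-Boltzmann law: a blackbody radiates σT⁴ watts per unit area. A quick illustrative calculation of my own for the two ends of the emitter range (again assuming an ideal blackbody):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m²·K⁴)

def radiant_power(temp_c: float) -> float:
    """Power radiated per unit area (W/m²) by a blackbody at temp_c °C."""
    return SIGMA * (temp_c + 273.15) ** 4

p_low, p_high = radiant_power(1900), radiant_power(2400)
print(f"1,900 °C: {p_low / 1e6:.2f} MW/m²; 2,400 °C: {p_high / 1e6:.2f} MW/m²; "
      f"ratio ≈ {p_high / p_low:.1f}×")
```

The hotter end of the range delivers roughly 2.9 MW/m² against about 1.3 MW/m² at the cooler end – some 2.3-times more power from a temperature only ~23% higher in kelvin terms.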

3. “The integration of a highly reflective back surface reflector (BSR) for band-edge filtering” – The MIT press release explains this part clearly: “The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold” – the BSR. “The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.”

While it seems obvious that technology like this will play an important part in humankind’s future, particularly given the attractiveness of maintaining a long-term energy store as well as the use of a higher-efficiency heat engine, the economics matter greatly. I don’t know how much the new TPV cell will cost, especially since it isn’t being mass-produced yet; in addition, the design of the thermal battery system will determine how many square feet of TPV cells will be required, which in turn will affect the cells’ design as well as the economics of the overall facility. This said, the fact that the system as a whole will have so few moving parts, as well as the availability of both sunlight and graphite or firebricks, or even molten silicon, which has a high heat capacity, keep the allure of MIT’s high-temperature TPVs alive.

Featured image: A thermophotovoltaic cell (size 1 cm x 1 cm) mounted on a heat sink designed to measure the TPV cell efficiency. To measure the efficiency, the cell is exposed to an emitter and simultaneous measurements of electric power and heat flow through the device are taken. Caption and credit: Felice Frankel/MIT, CC BY-NC-ND.


At last, physicists report finding the ‘fourth sign’ of superconductivity

Using an advanced investigative technique, researchers at Stanford University have found that cuprate superconductors – which become superconducting at higher temperatures than their better-known conventional counterparts – transition into this exotic state in a different way. The discovery provides new insights into the way cuprate superconductors work and eases the path to discovering a room-temperature superconductor one day.

A superconductor is a material that can transport an electric current with zero resistance. The most well-known and also better understood superconductors are certain metallic alloys. They transition from their ‘normal’ resistive state to the superconducting state when their temperature is brought to a very low value, typically a few degrees above absolute zero.

The theory that explains the microscopic changes that occur as the material transitions is called Bardeen-Cooper-Schrieffer (BCS) theory. As the material crosses its threshold temperature, called the critical temperature, BCS theory predicts four signatures of superconductivity. If these four signatures occur, we can be sure that the material has become superconducting.

First, the material’s resistivity collapses and its electrons begin to flow without any resistance through the bulk – the electronic effect.

Second, the material expels all magnetic fields within its bulk – the magnetic (a.k.a. Meissner) effect.

A magnet levitating above a high-temperature superconductor, thanks to the Meissner effect. Credit: Mai-Linh Doan/Wikimedia Commons, CC BY-SA 3.0

Third, the amount of heat required to excite electrons to an arbitrarily higher energy is called the electronic specific heat. This number is lower for superconducting electrons than for non-superconducting electrons – but it increases as the material is warmed, only to drop abruptly to the non-superconducting value at the critical temperature. This is the effect on the material’s thermodynamic behaviour.

Fourth, while the energies of the electrons in the non-superconducting state have a variety of values, in the superconducting state some energy levels become unattainable. This shows up as a gap in a chart mapping the energy values. This is the spectroscopic effect. (The prefix ‘spectro-‘ refers to anything that can assume a continuous series of values, on a spectrum.)

Conventional superconductors are called so simply because scientists discovered them first and they defined the convention: among other things, they transition from their non-superconducting to superconducting states at very low temperatures. Their unconventional counterparts are the high-temperature superconductors, discovered from 1986 onwards, some of which transition at temperatures greater than 77 K – the boiling point of liquid nitrogen. And when they do, physicists have thus far observed the corresponding electronic, magnetic and thermodynamic effects – but not the spectroscopic one.

A new study, published on January 26, 2022, has offered to complete this record. And in so doing, the researchers have uncovered new information about how these materials transition into their superconducting states: it is not the way low-temperature superconductors do.

The research team, at Stanford, reportedly did this by studying the thermodynamic effect and connecting it to the material’s spectroscopic effect.

The deeper problem with zeroing in on the spectroscopic effect in high-temperature superconductors is that an electron energy gap shows up before the transition, when the material is not yet a superconductor, and persists into the superconducting phase.

First, recall that at the critical temperature, the electronic specific heat stops increasing and drops suddenly to the non-superconducting value. The specific heat is directly related to the amount of entropy in the system (energy in the system that can’t be harnessed to perform work). The entropy is in turn related to the spectral function – an equation that dictates which energy states the electrons can and can’t occupy. So by studying changes in the specific heat, the researchers can understand the spectroscopic effect.

Second, to study the specific heat, the researchers used a technique called angle-resolved photo-emission spectroscopy (ARPES). These are big words but they have a simple meaning. Photo-emission spectroscopy refers to a technique in which energy-loaded photons are shot into a target material, where they knock out those electrons that they have the energy for. Based on the energies of the electrons knocked out, their position and their momenta, scientists can piece together the properties of the electrons inside the material.

ARPES takes this a step further by also recording the angle at which the electrons are knocked out of the material. This provides an insight into another property of the superconductor. Specifically, another way in which cuprates differ from conventional superconductors is the way in which their electrons pair up. In the cuprates, the pairs break rotational symmetry, such that the energy required to break up a pair is not equal in all directions.

This affects the way the thermodynamic and spectral effects look in the data. For example, photons fired at certain angles will knock out more electrons from the material than photons incoming at other angles.

The angle-specific measurements of the specific-heat coefficient (y-axis) versus the temperature (x-axis).

Taking all this into account, the researchers reported that a cuprate superconductor called Bi-2212 (bismuth strontium calcium copper oxide) transitions to becoming a superconductor in two steps – unlike the single-step transition of low-temperature superconductors.

According to BCS theory, the electrons in a conventional superconductor are encouraged to overcome their mutual repulsion and bind to each other in pairs when two conditions are met: the material’s lattice – the grid of atomic nuclei – has a vibrational energy of a certain frequency and the material’s temperature is lowered. These electron pairs then move around the material like a fluid of zero viscosity, thus giving rise to superconductivity.

The Stanford team found that in Bi-2212, the electrons pair up with each other at around 120 K, but condense into the fluid-like state only at around 77 K. The former gives rise to an energy gap – i.e. the spectroscopic effect – even as the superconducting behaviour itself arises only at the 77-K mark, when the pairs condense.

A small sample of Bi-2212. The side is 1 mm long. Credit: James Slezak, Cornell Laboratory of Atomic and Solid State Physics, CC BY-SA 3.0

There are two distinct feats here: finding the spectroscopic effect and finding the two-step transition. Both – but the first more so – were the product of technological advancements. The researchers obtained their Bi-2212 samples, created with specific chemical compositions so as to help analyse the ARPES data, from their collaborators in Japan, and then studied them with two instruments capable of performing ARPES studies at Stanford: an ultraviolet laser and the Stanford Synchrotron Radiation Lightsource.

Makoto Hashimoto, a physicist at Stanford and one of the study’s authors, said in a press statement: “Recent improvements in the overall performance of those instruments were an important factor in obtaining these high-quality results. They allowed us to measure the energy of the ejected electrons with more precision, stability and consistency.”

The second finding, of the two-step transition, is important foremost because it is new knowledge of the way cuprate superconductors ‘work’ and because it tells physicists that they will have to achieve two things – instead of just one, as in the case of conventional, low-temperature superconductors – if they want to recreate the same effects in a different material.

As Zhi-Xun Shen, the researcher who led the study at Stanford, told Physics World, “This knowledge will ultimately help us make better superconductors in the future.”

Featured image: A schematic illustration of an ARPES setup. On the left is the head-on view of the manipulator holding the sample and at the centre is the side-on view. On the right is an electron energy analyser. Credit: Ponor/Wikimedia Commons, CC BY-SA 4.0.


Anonymity in journalism and a conflict of ethics

I wrote the following essay at the invitation of a journal in December 2020. (This was the first draft. There were additional drafts that incorporated feedback from a few editors.) It couldn’t be published because I had to back out of the commission owing to limitations of time and health. I formally withdrew my submission on April 11, 2022, and am publishing it in full below.

Anonymity in journalism and a conflict of ethics

Tiger’s dilemma

I once knew a person, whom I will call Tiger, who worked with the Government of India. Tiger was in a privileged position within the government – not very removed from the upper echelons, in fact – and had substantial influence on policies and programmes in their domain. (Tiger was not a member of any political party.) Tiger’s work was also commendable: their leadership from within the state had improved the working conditions of and opportunities for people in the corresponding fields, so much so that Tiger was generally well-regarded by peers and colleagues around the country. Tiger had also produced high-quality work in their domain, which I mention here to indicate Tiger’s all-round excellence.

But while Tiger ascended through government ranks, the Government of India itself was becoming more detestable – feeding communal discontentment, promoting pseudoscience, advancing crony capitalism and arresting or harassing dissidents. At various points in time, the actions and words of ministers and senior party leaders outright conflicted with the work and the spirit that Tiger and their department stood for – yet Tiger never spoke a word against the state or the party. The more objectionable the government’s actions grew, the more conspicuous Tiger’s refusal to object became.

I used to have trouble judging Tiger’s inaction because I had trouble settling a contest between two ethical loci: values versus outcomes. The question here was that, in the face of a dire threat, such as a vengeful government, how much could I ask of my compatriots? It is undeniably crucial to join protests on the streets and demonstrate the strength of numbers – but if the government almost always responds by having police tear-gas protesters or jail a few and keep them there on trumped-up charges under draconian laws for months on end, it becomes morally painful to insist that people join protests. I might wither under the demand of condemning anyone, but especially the less privileged, to such fates. (The more-privileged of course can and should be expected to do more, and fear the consequences of state viciousness less.)

If Tiger had spoken up against the prime minister or any of the other offending ministers, Tiger would have lost their position within the government, could in fact have become persona non grata in the state’s eyes, and become earmarked for further disparagement. As symbols go, speaking up against an errant government is a powerful one – especially when it originates from a person like Tiger. However, speaking up will still only be a symbol, and not an outcome. If Tiger had stayed silent to continue to retain their influential place within the government, there is a chance that Tiger’s department may have continued its good work. The implication here is that outcomes trump values.

Then again, I presume here that the power of symbols is predictable or even finite in any way, or that they are always inferior to action on the ground, so to speak. This need not be true. For example, if Tiger had spoken up, their peers could have been motivated to speak up as well, avalanching over time into a coordinated, collectivised refusal to cooperate with government initiatives that required their support. It is a remote possibility but it exists; more importantly, it is not for me to dismiss. And it is at least just as tempting to believe values trump outcomes, or certainly complement them.

Now, depending on which relationship is true – values over outcomes or vice versa – we still have to contend with the same defining question before we can draw a line between whom to forgive and whom to punish. Put another way, when confronted with deadly force, how much can you ask of your compatriots? There can’t be shame in bending like grasses against a punishing wind, but at the same time someone somewhere must grow a spine. Then again, not everyone may draw the line between these two sides at the same place. This is useful context to consider issues surrounding anonymity and pseudonymity in journalism today.

(Edit, April 11, 2022: I harboured a more charitable view of Tiger and Tiger’s work at the time I wrote this essay, in December 2020. I’m much less forgiving today and, considering the depths to which the Indian government’s pandemic response plunged, believe they ought to have spoken up on multiple occasions but chose not to.)

Anonymity in journalism

Photo by brotiN biswaS

Every now and then, The Wire and The Wire Science receive requests from authors to not have their names attached to their articles. In 2020, The Wire Science, which I edit, published at least three articles without a name or under a pseudonym. Anonymity as such has been common for much longer vis-à-vis government officials and experts being quoted saying sensitive things, and individuals whose stories are worth sharing but whose identities are not. It is nearly impossible to regulate journalism, without ‘breaking’ it, from anywhere but the inside. As evasive as this sounds, what is in the public interest is often too fragile to survive the same accountability and transparency we demand of governments, or even what the law offers to protect. So the channels that compose and transport such information should be able to be as private as individual liberties and ethical obligations allow.

Anonymity is, as a matter of principle, possible, and journalists (should) have the liberty, and also the integrity, to determine who deserves it. It may help to view anonymity as a duty instead of as a right. For example, we have all come across many stories this year in which reporters quoted unnamed healthcare workers and government officials to uncover important details of the Government of India’s response to the country’s COVID-19 epidemic. Without presuming to know the nature of the relationships between these ‘sources’ and the respective reporters, we can say they all likely share Tiger’s (erstwhile) dilemma: they are on the frontline and they are needed there, but if they speak up and have their identities known, they may lose their ability to stay there.

The state of defence reporting in India could offer an important contrast. Unlike health (although this could be changing), India’s defence has always been shrouded in secrecy, especially on matters of nuclear weapons, terrorist plots, military installations, etc. Not too long ago, one defence reporter began citing unnamed sources to write up a series of articles about a new chapter of terrorist activities in India’s north. A mutual colleague at the time told me he was unsettled by the series: while unnamed sources are not new, the colleague explained, this reporter almost never named anyone – except perhaps those making banal statements.

Many health-related institutions and activities in India need to abide by the requirements of the Right to Information Act, but defence has few such obligations. In such cases, there is no way for the consumers of journalism – the people at large – to ascertain the legitimacy of such reports and in fact have no option but to trust the reporter. But this doesn’t mean the reporter can do what they wish; there are some simple safeguards to prevent mistakes. One as ubiquitous as it is effective is to allow an offended party in the story to defend itself, with some caveats.

A notable example of such an incident from the last decade was the 2014 Rolling Stone investigation about an alleged incident of rape on the University of Virginia campus. The reporter had trusted her source and hid her identity in the article, using only the mononym ‘Jackie’. Jackie had alleged that she had been raped by a group of men during a fraternity party. However, other reporters subsequently noticed a series of inconsistencies that quickly snowballed into the alarming revelation that Jackie had probably fabricated the incident, and Rolling Stone had missed it. In this case, Rolling Stone itself claimed to have been duped, but managing editor Will Dana’s note to readers published after a formal investigation had wound up contains a telling passage:

“In trying to be sensitive to the unfair shame and humiliation many women feel after a sexual assault, we made a judgment – the kind of judgment reporters and editors make every day. We should have not made this agreement with Jackie and we should have worked harder to convince her that the truth would have been better served by getting the other side of the story.”

Another ‘defence’ is rooted in news literacy: as a reader, try when you can to consider multiple accounts of a common story, as reported by multiple outlets, and look for at least one independently verifiable detail. There must be something, but if there isn’t, consider it a signal that the story is at best located halfway between truth and fiction, awaiting judgment. Fortunately (in a way), science, environment and health stories frequently pass this test – or at least they used to. While an intrepid Business Standard reporter might have tracked down a crucial detail by speaking to an expert who wished to remain unnamed, someone at The Wire or The Hindu, or an enterprising freelance journalist, would soon be able to get someone else on the record, or find a document in the public domain attesting to the truth of the claim.

Identity as privilege

A protester holds up a placard at a gathering in support of the Indian farmers’ agitation, Washington, DC, March 29, 2021. Credit: Gayatri Malhotra/Unsplash

I use the past tense because, since 2014, the Bharatiya Janata Party (BJP) – which formed the national government then – has been vilifying any part of science that threatens the mythical history the party has sought to construct for itself and for the nation. The BJP is the ideological disciple of the Rashtriya Swayamsevak Sangh and the Vishwa Hindu Parishad, and since the BJP’s ascent, members of groups affiliated with these organisations have murdered at least three anti-superstition activists and others have disrupted many a gathering of scholars, even as senior ministers in government have embarked on a campaign to erode scientific temper, appropriate R&D activities into the party’s communal programme and degrade or destabilise the scope for research that is guided by researchers’ interests, in favour of that of bureaucrats.

Under the party-friendly vice-chancellorship of M. Jagadesh Kumar, the Jawaharlal Nehru University in New Delhi has slid from being a national jewel to being blanketed in misplaced suspicions of secessionist activity. In January, students affiliated with the BJP’s student-politics wing went on a violent spree within the JNU campus, assaulting students and damaging university property, while Kumar did nothing to stop them. In November, well-known professors of the university’s school of physical sciences alleged that Kumar was intervening in unlawful ways in the school’s administration. Moushumi Basu, secretary of the teachers’ association, called the incident a first, since many faculty members had assumed Kumar wouldn’t interfere with the school of physical sciences, being a physical-sciences teacher himself.

(Edit, April 11, 2022: Kumar was succeeded in February 2022 by Santishree Pandit, and at the end of the first week of April, members of the Akhil Bharatiya Vidyarthi Parishad assaulted JNU students on campus with stones over cooking non-vegetarian food on the occasion of Ram Navami.)

Shortly before India’s COVID-19 epidemic really bloomed, the Union government revoked the licence of the Manipal Institute of Virology to use foreign money to support its stellar, but in India insufficiently supported, research on viruses, on charges that remain unclear. The party’s government has confronted many other institutes with similar fates – triggering a chilling effect among scientists and pushing them further into their ivory towers.

In January 2020, I wrote about the unsettling case of a BJP functionary who had shot off an email asking university and institution heads to find out which of their students and faculty members had signed a letter condemning the Citizenship (Amendment) Act 2019. I discovered in the course of my reporting two details useful to understand the reasonable place of anonymous authorship in journalism. First, a researcher at one of the IISERs told me that the board of governors of their institute seemed to be amenable to the argument that since the institute receives funds via the education ministry (formerly the human resource development ministry), it does not enjoy complete autonomy. Second, while the Central Civil Services (Conduct) Rules 1964 do prevent employees of centrally funded institutions, including universities and research facilities, from commenting negatively on the government, they are vague at best about whether employees can protest on issues concerning their rights as citizens of the country.

These two conditions together imply that state-funded practitioners of scientific activities – from government hospital coroners to spokespersons of billion-dollar research facilities, from PhD scholars to chaired professors – can be arbitrarily denied opportunities to engage as civilians on important issues concerning all people, even as their rights on paper seem straightforward.

But even under unimaginable pressure to conform, I have found that many of India’s young scientists are still willing to – even insistent on – speaking up, joining public protests, writing and circulating forthright letters, championing democratic and socialist programmes, and tipping off journalists like myself to stories that need to be told. This makes my job as a journalist much easier, but I can’t treat their courage as permission to take advantage. They are still faced with threats whose full magnitude they may comprehend only later, or may be unaware of methods that don’t require them to endanger their lives or careers.

Earlier, postdoctoral scholars and young scientists may have been more wary than anything else of rubbing senior scientists the wrong way by, say, voicing concerns about a department or institute in the latter’s charge. Today, the biggest danger facing them is indefinite jail time, police brutality and avoidance by institutes that may wish to stay on the party’s good side. (And this is speaking only of the more privileged male scientists; others have only had it increasingly worse.)

Once again: how much can we ask of our compatriots? How much in particular can we ask of those who have reason to complain even as they stand to lose the most – the Dalits, the women, transgender people, the poor, the Adivasi, the non-English non-Hindi speakers, environmentalists, healthcare workers, migrant labourers, graveyard and crematorium operators, manual scavengers, the Muslims, Christians and members of other minority denominations, farmers and agricultural traders, cattle-rearers, and indeed just about anyone who is not male, rich, Brahmin? All of these people have stories worth sharing, but whose identities have been increasingly isolated, stigmatised and undermined. All of these people, including the young scientists as well, thus deserve to be quoted or published anonymously or pseudonymously – or their views may never be heard.

Paying the price of fiction

Assorted colour masks. Photo: Hitesh Choudhary

There are limitations, of course, and this is where ethical and responsible journalism can help. It is hard to trust an anonymous Twitter user issuing scandalous statements about a celebrity, and even harder to trust an anonymous writer laying claim to the credibility that comes with identifying as a scientist yet making unsubstantiated claims about other scientists – as necessary as such a tactic may seem to be. The safest and most responsible way forward is for a ‘source’ to work with a journalist such that the journalist tells the story, with the source supplying one set of quotes. This way, the source’s account will enjoy the benefit of being located in a journalistic narrative, in the company of other viewpoints, before it is broadcast. The journalist’s fundamental role here is to rescue doubts about one’s rights from the grey areas they occupy in the overlap between India’s laws and the wider political context.

However, it is often also necessary to let scientists, researchers, professors, doctors and others say what they need to themselves, so that they may bring to bear the full weight of their authority as well as the attitudes they don as topical experts. There is certainly a difference between writing about Pushpa Mittra Bhargava’s statements on one hand and allowing Pushpa Mittra Bhargava to express himself directly on the other. Another example, one that doesn’t appeal to celebrity culture (such as it is in the halls of science!), is to let a relatively unknown but surely qualified epidemiologist write a thousand words in the style and voice of their choice about, say, the BJP’s attempts to communalise the epidemic. The message here is contained within the article’s arguments as well as in the writer’s credentials – but again, not necessarily in the writer’s religious or ethnic identity. Or, as the case may be, in their identity as a powerless young scientist.

Ultimately, the most defensible request for anonymity is the one backed by evidence of reasonable risk of injury – physical or otherwise – and the BJP government has been steadily increasing this risk since 2014. Then again, none of this means those who have already received licence to write anonymously or pseudonymously also have licence to shoot their mouths off. Journalists have a responsibility to be as selective as they reasonably can in identifying those who deserve to have their names hidden – with at least two editors signing off on the request instead of the commissioning editor alone, for example – and those who are selected should be reminded that the protection they have received is only for the performance of a necessary duty. Anonymity or even pseudonymity introduces one fiction into the narrative, and all fictions, no matter how trivial, are antithetical to narratives that offer important knowledge as well as a demonstration of what good journalism is capable of. So it is important not to see this device as a reason for the journalist to invent more excuses to leave out or obfuscate yet other details in the name of fear or privacy. In fact, the inclusion of one fiction should force every other detail in the narrative to be that much more self-evidently true.

Though some authors may not like it, the decision to grant anonymity must also be balanced with the importance and uniqueness of the article in question. While anonymity may grant a writer the freedom to not pull their punches, the privilege also foists more responsibility on the editor to ensure the privilege is being granted for something that is in the public interest and that can’t be obtained through any other means. One particular nuance is important here: the author should convince the editor that they are compelled to speak up. Anonymity shouldn’t be the only reason the article is being written. Otherwise, anonymity or pseudonymity will easily become excuses to fire from behind the publication’s shoulders. This may seem like a crude calculus but it also lies firmly in the realm of due diligence.

We may not be able to ask too much of our compatriots, but it is necessary to make sure the threats that face them are real and that they will not attempt to gain unfair advantages. In addition, the language must at all points be civil and devoid of polemic; every claim and hypothesis must be substantiated to the extent possible; if the author has had email or telephone conversations with other people, the call records and reporting notes must be preserved; and the author can’t say anything substantial that does not require their identity to be hidden. The reporter or the editor should include in the article the specific reason as to why anonymity has been granted. Finally, the commissioning editor reserves the right to back out of the arrangement anytime they become unsure. This condition simply reflects the author’s responsibility to convince the editor of the need for anonymity, even if specific details may never make it to the copy.

At the same time, in times as fraught as ours, it may be unreasonable to expect reporters and editors to never make a mistake, even one of Rolling Stone proportions (although I admit the Columbia University report on Rolling Stone’s mistakes is unequivocal in its assessment that the magazine made no mistakes it couldn’t have avoided). The straightforward checks that journalists employ to weed out as many mistakes as possible can never be 100% perfect, particularly during a pandemic of a new virus. Some mistakes can be found out only in hindsight, such as when one needs to prove a negative, or when a journalist caught between the views of two accomplished scientists realises a mistake only later.

Instead, we should expect those who make mistakes to be prompt, honest and reflexive about them, especially smaller organisations that can’t yet afford independent fact-checkers. A period in which anonymous authorship is becoming more necessary, irrespective of its ad hoc moral validity, ought also to be a period in which newsroom managers and editors treat mistakes not as cardinal sins but as opportunities to strengthen the compact with their readers. One simple first step is to acknowledge post-publication corrections and modifications with a note plus a timestamp. Because let’s face it – journalists are duty-bound to navigate the same doubts, ambiguities and fears that also punctuate their stories.


Better nuclear fusion – thanks to math from biology

There’s an interesting new study, published on February 23, 2022, that discusses a way to make nuclear fusion devices called stellarators more efficient by applying equations borrowed from systems biology.

The Wikipedia article about stellarators is surprisingly well-written; I’ve often found that I’ve had to bring my undergraduate engineering lessons to bear to understand the physics articles. Not here. Let me quote at length from the sections describing why physicists need stellarators, which also serves to explain how these machines work.

Heating a gas increases the energy of the particles within it, so by heating a gas into hundreds of millions of degrees, the majority of the particles within it reach the energy required to fuse. … Because the energy released by the fusion reaction is much greater than what it takes to start it, even a small number of reactions can heat surrounding fuel until it fuses as well. In 1944, Enrico Fermi calculated the deuterium-tritium reaction would be self-sustaining at about 50,000,000º C.

Materials heated beyond a few tens of thousand degrees ionize into their electrons and nuclei, producing a gas-like state of matter known as plasma. According to the ideal gas law, like any hot gas, plasma has an internal pressure and thus wants to expand. For a fusion reactor, the challenge is to keep the plasma contained. In a magnetic field, the electrons and nuclei orbit around the magnetic field lines, confining them to the area defined by the field.

A simple confinement system can be made by placing a tube inside the open core of a solenoid.

A solenoid is a wire in the shape of a spring. When an electric current is passed through the wire, it generates a magnetic field running through the centre.

The tube can be evacuated and then filled with the requisite gas and heated until it becomes a plasma. The plasma naturally wants to expand outwards to the walls of the tube, as well as move along it, towards the ends. The solenoid creates magnetic field lines running down the center of the tube, and the plasma particles orbit these lines, preventing their motion towards the sides. Unfortunately, this arrangement would not confine the plasma along the length of the tube, and the plasma would be free to flow out the ends.

The obvious solution to this problem is to bend the tube around into a torus (a ring or donut) shape.

A nuclear fusion reactor of this shape is called a tokamak.

Motion towards the sides remains constrained as before, and while the particles remain free to move along the lines, in this case, they will simply circulate around the long axis of the tube. But, as Fermi pointed out, when the solenoid is bent into a ring, the electrical windings would be closer together on the inside than the outside. This would lead to an uneven field across the tube, and the fuel will slowly drift out of the center. Since the electrons and ions would drift in opposite directions, this would lead to a charge separation and electrostatic forces that would eventually overwhelm the magnetic force. Some additional force needs to counteract this drift, providing long-term confinement.

[Lyman] Spitzer’s key concept in the stellarator design is that the drift that Fermi noted could be canceled out through the physical arrangement of the vacuum tube. In a torus, particles on the inside edge of the tube, where the field was stronger, would drift up. … However, if the particle were made to alternate between the inside and outside of the tube, the drifts would alternate between up and down and would cancel out. The cancellation is not perfect, leaving some net drift, but basic calculations suggested drift would be lowered enough to confine plasma long enough to heat it sufficiently.

These calculations are not simple because this is how a stellarator can look:

The coil system (blue), plasma (yellow) and a magnetic field line (green) at the Wendelstein 7-X plasma experiment under construction at the Max-Planck-Institut für Plasmaphysik, Greifswald, Germany. Credit: Max-Planck Institut für Plasmaphysik

When a stellarator is operating and nuclear fusion reactions are underway, impurities accumulate in the plasma. These include ions that have formed but which can’t fuse with other particles, and atoms that have entered the plasma from the reactor lining. These pollutants are typically found at the outer layer.

An additional device called a diverter is used to remove them. The heavy ions that form in the reactor plasma are also called ‘fusion ash’, and the diverter is the ashtray.

It works like a pencil sharpener. The graphite is the plasma and the blade is the diverter. It scrapes off the wood around the graphite until the latter is fully exposed and clean. But accomplishing this inside a stellarator is easier said than done.

In the image above, let’s isolate just the plasma (yellow stuff), slice a small section of it and look at it from the side. Depending on the shape of the stellarator, it will probably look like a vertical ellipse, an elongated egg – a blob, basically. By adjusting the magnetic field near the bottom of the stellarator, operators can change the shape of the plasma there to pinch off its bottom, making the overall shape more like an inverted droplet.

The shapes of an egg and an inverted droplet. Note that the shapes are illustrative and aren’t exact representations of the shape of the plasma. Credit: Good Ware/Flaticon

At the bottom-most point, called the X-point, the magnetic field lines shaping the plasma intersect with each other. More precisely, some magnetic field lines intersect each other, while others move towards each other without fully criss-crossing and instead make contact with the surface of the reactor. (In the image below, the boundary between these two layers of the plasma is called the separatrix.)

Diverter plates are installed near this crossover point to ‘drain’ the plasma moving along the non-intersecting fields.

An illustration showing the effect of the diverter coils on the plasma, the X-point, the layer of plasma that will be ‘scraped off’, and the diverter plates at the bottom – in the Joint European Torus, a plasma physics experiment at the Culham Centre for Fusion Energy, Oxfordshire. Credit: EUROfusion 2016/United Kingdom Atomic Energy Authority
Note the placement and shape of the diverter coils and their effect on the shape of the plasma, at the Joint European Torus. Credit: Focus On: JET/Matthew Banks, EFDA JET

In the new study, physicists addressed the problem of diverter overheating. The heat removed at the diverter is considered to be ‘waste’ and not a part of the fusion reactor’s output. The primary purpose here is to take away the impure plasma, so the cooler it is, the longer the diverter will be able to operate without replacement.

The researchers used the Large Helical Device in Gifu, Japan, to conduct their tests. It is the world’s second-largest stellarator (the largest is the Wendelstein 7-X). Their solution was to stop heating the plasma just before it hit the diverter plates, in order to allow the ions and electrons to recombine into atoms. The energy of the combined atom is lower than that of the free ions and electrons, so less heat reaches the diverter plates.

How to achieve this cooling? There were different options, but the physicists resorted to arranging additional magnetic coils around the stellarator such that, just before the plasma hit the diverter, its periphery would detach into a smaller blob that, being separated from the overall plasma, could cool. These smaller blobs are called magnetic islands.

When they ran tests with the Large Helical Device, they found that the diverter removed heat from the plasma chamber in short bursts, instead of continuously. They interpreted this to mean the magnetic islands didn’t exist in a steady state but attached and detached from the plasma at a regular frequency. The physicists also found that they could model the rate of attachment using the so-called predator-prey equations.

These are the famous Lotka-Volterra equations. They describe how the populations of two species – one predator and one prey – vary over time. Say we have a small ecosystem in which crows feed on worms. As they do, the crow population increases, but due to overfeeding, the population of worms dwindles. This forces the crow population to shrink as well. But once there are fewer crows around, the number of worms increases again, which then allows more crows to feed on worms and become more populous. And so the cycle goes.

A plot showing the varying populations of predator and prey, as predicted by the Lotka-Volterra equations. Plot: Ian Alexander and Krishnavedala/Wikimedia Commons, CC BY-SA 4.0
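These cycles can be reproduced numerically. Below is a minimal sketch – not from the paper, with illustrative parameter values – that integrates the Lotka-Volterra equations with a fixed-step Runge-Kutta scheme, using worms as prey and crows as predators:

```python
# Lotka-Volterra ('predator-prey') equations, integrated with RK4.
# dx/dt = a*x - b*x*y  (prey: worms)
# dy/dt = c*x*y - d*y  (predators: crows)
# All parameter values here are illustrative, not taken from the paper.

def lotka_volterra(prey, pred, a=1.0, b=0.5, c=0.5, d=2.0):
    return a * prey - b * prey * pred, c * prey * pred - d * pred

def rk4_step(x, y, dt):
    # One fourth-order Runge-Kutta step for the coupled system.
    k1 = lotka_volterra(x, y)
    k2 = lotka_volterra(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = lotka_volterra(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = lotka_volterra(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

x, y = 6.0, 2.0          # initial worm and crow populations (arbitrary units)
dt, steps = 0.01, 2000   # 20 time units, enough for several full cycles
trajectory = [(x, y)]
for _ in range(steps):
    x, y = rk4_step(x, y, dt)
    trajectory.append((x, y))

# Both populations oscillate: they rise and fall but never go extinct.
print(round(x, 3), round(y, 3))
```

The same mechanics carry over to the stellarator: swap the two populations for the magnetic island size and the bootstrap current, and the closed cycles of the plot above become the attach-detach oscillations the physicists measured.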

Similarly, the researchers found that the Lotka-Volterra equations (with some adjustments) could model the attachment frequency if they assumed the magnetic islands to be the predators and an electric current in the plasma to be the prey. This current is the product of electrons moving around in the plasma, which the authors call a “bootstrap current”.

When the strength of the bootstrap current increases, the magnetic island expands. At the same time, the confining magnetic field resists the expansion, forcing the current to dwindle. This allows the island to shrink as well, receding from the field. But then this allows the bootstrap current to increase once more to expand the island. And so the cycle goes.

Competitive relation between magnetic island and localised plasma current derived with the predator-prey model. Increased current (bottom left) enhances the magnetic island. In turn, electric resistivity increases, which reduces the current (bottom right). Eventually, the magnetic island shrinks, which leads to reduction of the electric resistivity and increase of the current. Caption and credit: National Institute for Fusion Science, Japan

The researchers reported in their paper that while they observed a frequency of 40 Hz (i.e. 40 times per second) in the Large Helical Device, the equations on paper predicted a frequency of around 20 Hz. They have nonetheless interpreted this to mean there is “qualitative agreement” between their idea and their observation. They also wrote that they expect the numbers to align once they fine-tune their math to account for various other specifics of the stellarator’s operation.

They eventually aim to find a way to control the attachment rate so that the diverters can operate for as long as possible – and at the same time take away as much ‘useless’ energy from the plasma as possible.

I also think that, ultimately, it’s a lovely union of physics, mathematics, biology and engineering. This is thanks in part to the Lotka-Volterra equations, which are a specific form of the Kolmogorov model. This is a framework of equations and principles that describes the evolution of a stochastic process in time. A stochastic process is simply one that depends on variables whose values change randomly.

In 1931, the Soviet mathematician Andrei Kolmogorov described two kinds of stochastic processes. In 1949, the Croatian-American mathematician William Feller described them thus:

… the “purely discontinuous” type of … process: in a small time interval there is an overwhelming probability that the state will remain unchanged; however, if it changes, the change may be radical.

… a “purely continuous” process … there it is certain that some change will occur in any time interval, however small; only, here it is certain that the changes during small time intervals will be also small.

Kolmogorov derived a pair of ‘forward’ and ‘backward’ equations for each type of stochastic process, depending on the direction of evolution we need to understand. Together, these four equations have been adapted to a diverse array of fields and applications – including quantum mechanics, financial options and biochemical dynamics.
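To make Feller’s distinction concrete – this is an illustrative sketch, not something from the article – here are the two kinds of processes simulated side by side: a Poisson-style jump process (‘purely discontinuous’) and a random walk approximating Brownian motion (‘purely continuous’):

```python
# Two of Kolmogorov's process types, simulated with Python's stdlib.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def poisson_path(rate, dt, steps):
    """'Purely discontinuous': in each small interval there is an
    overwhelming probability of no change; when change comes, it's a jump."""
    state, path = 0, [0]
    for _ in range(steps):
        if random.random() < rate * dt:  # rare event: a +1 jump
            state += 1
        path.append(state)
    return path

def brownian_path(sigma, dt, steps):
    """'Purely continuous': some small Gaussian change occurs in
    every interval, however small the interval."""
    state, path = 0.0, [0.0]
    for _ in range(steps):
        state += random.gauss(0.0, sigma * dt ** 0.5)
        path.append(state)
    return path

jumps = poisson_path(rate=1.0, dt=0.01, steps=1000)
walk = brownian_path(sigma=1.0, dt=0.01, steps=1000)
print(jumps[-1], round(walk[-1], 3))
```

The jump path stays flat for long stretches and then changes radically (by a whole unit), while the walk changes a little at every single step – exactly the dichotomy Feller described.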

Featured image: Inside the Large Helical Device stellarator. Credit: Justin Ruckman, Infinite Machine/Wikimedia Commons, CC BY 2.0.

Scicomm Science

‘Aatmanirbharta through science’

The Week magazine distinguished itself last year by picking Indian Council of Medical Research chief Balram Bhargava as its ‘person of the year’ for 2021. And now, ahead of National Science Day tomorrow, The Week has conducted an “exclusive” interview with science minister Jitendra Singh. Long story short, it’s rubbish.

I discovered the term ‘Gish gallop’ in a 2013 blog post by David Gorski, in which he wrote about the danger of acquiescing to cranks’ requests for experts to debate them on a public stage. While such invitations may appear to legitimate experts to be an opportunity to settle the matter once and for all, it never works that way: the stage and the debate become platforms on which the cranks can spew their bullshit, in the name of having the right to do so in the limited context of the event, and use the inevitably imperfect rebuttal – limited by time and other resources – as a way to legitimise some or all of their claims. (Also read in this context: ‘No, I Will Not Debate You’.)

One particular tactic to which cranks resort in these circumstances is, Gorski wrote, “to Gish gallop”: to flood their rhetoric with new terms, claims, arguments, etc. with little regard for their relevance or accuracy, in an effort to inundate their opponents with too many points on which to push back.

In their ‘interview’, with the help of kowtowing questions and zero push-back, The Week has allowed Jitendra Singh to Gish gallop. In this case, however, instead of Singh drawing credibility from his ‘opponent’ being an expert who couldn’t effectively refute his contentions, he derives his upper hand from his interlocutor being a well-known, once-reputed magazine – and, covertly, from its (possibly enforced) supineness.

The penultimate question is the best, to me: “Yet, India’s good work gets shadowed by pseudoscience utterances. Somehow, your government has not been able to quieten the mumbo jumbo.” Dear interviewer, the government itself is the origin of a lot of the mumbo jumbo. Any question that isn’t founded on that truth will always ignore the problem, and will not elicit a solution.

Overall, the interview is a press release worded in the form of a Q&A, with a healthy chance that the opportunity to publish it was dangled in front of The Week in exchange for soft questions. Yet its headline may be accurate in a way the magazine didn’t intend: this government is going to achieve its mythical goal of perfect ‘Aatmanirbharta’ only by boring a hole through science, reason and common sense.

Happy national science day!

Featured image: Jitendra Singh, May 2014. Photo edited (see original here). Credit: Press Information Bureau/GoI, GODL – India.


For colours, dunk clay in water

It’s not exactly that simple, but it’s still a relatively simple way to produce a lovely palette of colours. Researchers from Norway and Germany have reported that when a synthetic clay called Na-fluorohectorite is suspended in water, the material separates out in thin nanosheets – i.e. nanometre-thick layers of Na-fluorohectorite separated by water. And these sheets produce structural colours.

The colour of an object is the frequency of light that reaches our eyes after the object has absorbed all the other frequencies. For example, if you have a green-coloured bottle in front of you in a well-lit room, you see that it’s green because the bottle has absorbed all the other frequencies in the visible light, leaving only the green frequency to reach your eyes. Structural colours, on the other hand, are produced when the structure of an object manipulates the incoming light to pronounce some frequencies and screen others.

When light enters between the Na-fluorohectorite layers in water, it bounces between the layers even as some beams of light interfere with other beams. The layer’s final colours are the colours that survive these interactions.

The amazing thing here is that class 10 physics allows you to glean some useful insights. As the researchers wrote in their paper, “The constructive interference of white light from individual nanosheets is described by the Bragg-Snell’s law”. The equation for this law:

2d(n² − sin²θ)^1/2 = mλ

d is the distance between the nanosheet layers. θ is the angle of observation of the layers. m is a positive integer, the order of the reflection. λ is the wavelength of the light “enhanced by constructive interference”, according to the paper.
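As a quick sanity check – the numbers here are illustrative, not from the paper – the Bragg-Snell relation can be evaluated directly for the enhanced wavelength:

```python
# Solving the Bragg-Snell relation 2*d*sqrt(n**2 - sin(theta)**2) = m*lambda
# for the wavelength enhanced by constructive interference.
import math

def enhanced_wavelength(d_nm, n, theta_deg, m=1):
    """Wavelength (nm) constructively enhanced by layers spaced d_nm apart,
    in a medium of refractive index n, viewed at angle theta_deg, order m."""
    theta = math.radians(theta_deg)
    return 2 * d_nm * math.sqrt(n ** 2 - math.sin(theta) ** 2) / m

# Layers ~200 nm apart in water (n ≈ 1.33), viewed head-on:
lam = enhanced_wavelength(d_nm=200, n=1.33, theta_deg=0)
print(round(lam))  # → 532, i.e. 2 × 200 × 1.33 nm: green light
```

Increase the viewing angle θ and the enhanced wavelength shrinks, which is exactly the angle dependence behind iridescence discussed next.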

When the colour visible changes according to the angle of observation, θ, the phenomenon is called iridescence. However, the researchers found that Na-fluorohectorite layers were non-iridescent, i.e. the colour of each layer looked the same from different angles of observation. They attributed this to bends and wrinkles in the nanosheets, and to turbostratic organisation: the layers are slightly rotated relative to each other.

Similarly, the effective refractive index of a medium made of two distinct materials – as experienced by the light passing through it – is given by this equation:

n = (n₁²Φ₁ + n₂²Φ₂)^1/2

n₁ is the refractive index of one material and Φ₁ is the amount of that material in the overall setup (by volume). So also for n₂ and Φ₂.

Taking both equations together: by controlling the values of n and d, the researchers could control the colour of light that survives its interaction with the water-clay composite. In fact, as we’ll see later, the volume of clay suspended in the water is very low (around 1% at a time), so the effective refractive index can be approximated as the refractive index of water – around 1.33. So with n fixed, the researchers would only have to change d – the distance between the layers – to change the structural colours that the clay produced!
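As a rough numerical sketch of that logic – the clay’s refractive index of 1.55 is an assumed value, and none of these numbers are from the paper – the two equations can be combined to estimate the spacing needed for a given colour:

```python
# Combining the effective-index and Bragg-Snell relations to estimate
# the layer spacing d that yields a chosen colour. Illustrative values only.
import math

def effective_index(n1, phi1, n2, phi2):
    """Volume-weighted effective refractive index of a two-component medium:
    n = sqrt(n1^2 * phi1 + n2^2 * phi2)."""
    return math.sqrt(n1 ** 2 * phi1 + n2 ** 2 * phi2)

def spacing_for_wavelength(lam_nm, n, theta_deg=0.0, m=1):
    """Layer spacing d (nm) that enhances wavelength lam_nm at order m,
    by inverting the Bragg-Snell relation."""
    theta = math.radians(theta_deg)
    return m * lam_nm / (2 * math.sqrt(n ** 2 - math.sin(theta) ** 2))

# 1% clay (assumed n ≈ 1.55) suspended in 99% water (n ≈ 1.33):
n_eff = effective_index(1.55, 0.01, 1.33, 0.99)
print(round(n_eff, 3))                            # ≈ 1.332: water dominates
print(round(spacing_for_wavelength(650, n_eff)))  # spacing (nm) for red light
```

The first print confirms the approximation in the text: at 1% clay, the effective index is within a fraction of a percent of plain water’s 1.33, so the spacing d is effectively the only knob left to turn.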

Here’s a short video of the team’s efforts:

DOI: 10.1126/sciadv.abl8147

The researchers found that some white light still survives and dulls the colours on display, which is why they’ve used a dark substrate (in the background). It absorbs the white light, accentuating the other colours.

This is a simple workaround – but it’s also inefficient and limits the applications of their discovery. So they found another way. The researchers dunked Na-fluorohectorite in water along with atoms of caesium. Within “seconds to minutes”, the Na-fluorohectorite formed double sheets – two layers of Na-fluorohectorite sandwiched together by a thin layer of caesium atoms. And these double layers produced bright colours.

Difference in colour brightness between the single- and the double-layer nanosheets. DOI: 10.1126/sciadv.abl8147

The double layers form so rapidly because of a phenomenon called osmotic swelling. The surfaces of the Na-fluorohectorite single-layers are negatively charged. The caesium ion is positively charged, and gets attracted to these surfaces. If two layers, called L1 and L2, are closer to each other than to other layers, then the concentration of caesium ions between these two layers will be significantly higher than in the rest of the water. This prevents the water from entering the gap between L1 and L2, and allows them to practically stick to each other.

The percentages denote the amount of Na-fluorohectorite by volume. Ignore the orders. DOI: 10.1126/sciadv.abl8147

There’s more: the researchers also found that they could change the colours by adding or removing water. This is wonderfully simple, but also to be expected. The separation between two nanosheets – i.e. between L1 and L2 – is affected by the concentration of caesium ions in the water. So if you add more water, the concentration of ions drops, the separation increases and the colour changes.
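The chain of reasoning above – more water, lower ion concentration, larger separation, different colour – can be sketched numerically. Two assumptions here that are mine, not the paper's: that the sheet separation tracks the electrostatic (Debye) screening length, which scales as 1/√(ion concentration), and that the reflected wavelength follows the first-order condition λ = 2nd.

```python
# Hedged sketch of the dilution effect. Assumptions (not from the paper):
# the sheet separation d scales with the Debye screening length, i.e. as
# 1 / sqrt(ion concentration), and the reflected wavelength is lambda = 2 * n * d.

import math

N_WATER = 1.333  # refractive index of the (mostly water) suspension

def separation_after_dilution(d_nm, dilution_factor):
    """New separation when the water volume is multiplied by dilution_factor.

    The ion concentration drops by that factor, so the screening length –
    and, by assumption, the separation – grows as sqrt(dilution_factor)."""
    return d_nm * math.sqrt(dilution_factor)

def wavelength(d_nm, n=N_WATER):
    """First-order reflected wavelength (nm) for separation d_nm."""
    return 2 * n * d_nm

d0 = 200.0                               # starting separation, nm
d1 = separation_after_dilution(d0, 2.0)  # doubling the water: d grows to ~283 nm
print(round(wavelength(d0), 1), "->", round(wavelength(d1), 1))
```

Under these assumptions, doubling the water shifts the reflected colour from green towards red – the direction of change the researchers describe, even if the exact scaling in the experiment may differ.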

An edited excerpt from the paper’s discussion section, on the findings’ implications:

Because of the sustainability and abundance of clay minerals, the present system carries considerable potential for upscaled applications in various areas ranging from pigments in cosmetics and health applications to windows and tiles. The results and understanding obtained here on synthetic clays should be transferred to natural clays, where vermiculite … presents itself as the most suitable candidate for upscaling the concept presented here. … our results could break new ground when embedding appropriate amounts of these clay nanolayers into transparent but otherwise mechanically weak matrices, providing structural coloration, mechanical strength, and tunable stability at the same time.

DOI: 10.1126/sciadv.abl8147

Featured image credit: Tim Mossholder/Pexels.


PTI, celebrating scientists, and class/caste

SpaceX announced a day or two ago that the crew of its upcoming Polaris Dawn mission will include a space operations engineer at the company named Anna Menon. As if on cue, PTI published a report on February 15 under the headline: “SpaceX engineer Anna Menon to be among crew of new space mission”. I’ve been a science journalist for almost a decade now and I’ve always seen PTI publish reports pegged on the fact that a scientist in the news for some reason has an Indian last name.

In my view, it’s always tricky to celebrate scientists for whatever they’ve done by starting from their nationality. Consider the case of Har Gobind Khorana, whose birth centenary we marked recently. Khorana was born in Multan in pre-independence India in 1922, and studied up to his master’s degree in the country until 1945. Around 1950, he returned to India for a brief period in search of a job. He didn’t succeed, but fortunately received a scholarship to return to the UK, where he had completed his PhD. After that Khorana was never based in India, and continued his work in the UK, Canada and the US.

He won a Nobel Prize in 1968; India conferred the Padma Vibhushan on him in 1969, and India's Department of Biotechnology floated a scholarship in his name in 2007 (together with the University of Wisconsin and the India-US S&T Forum). I'm glad to celebrate Khorana for his scientific work, or his reputation as a teacher, but how do I celebrate Khorana because he was born in India? Where is the celebration-worthy thing in that?

To compare, it’s easy for me to celebrate Satyendra Nath Bose for his science as well as his nationality because Bose studied and worked in India throughout his life (including at the University of Dhaka in the early 1920s), so his work is a reflection of his education in India and his struggles to succeed, such as they were, in India. An even better example here would be that of Meghnad Saha, who struggled professionally and financially to make his mark on stellar astrophysics. But Khorana completed a part of his studies in India and a part abroad and worked entirely abroad. When I celebrate his work because he was Indian, I’m participating in an exercise that has no meaning – or does in the limited, pernicious sense of one’s privileges.

The same goes for Anna Menon, and her partner Anil Menon, a flight surgeon whom NASA selected to be a part of its astronaut crew earlier this year. According to Anil's Wikipedia page, he was in India for a year in 2000; other than that, he studied and worked in the US from start to today. I couldn't find much about Anna's background online, except that her last name before she married Anil in 2016 was Wilhelm, that she was studying in the US by the fourth grade and completed her bachelor's and master's degrees there, and that there is nothing other than her partner's part-Indian heritage (the other part is Ukrainian) to suggest she has a significant India connection.

So celebrating Anna Menon by sticking her name in a headline makes little sense. It’s not like PTI has been reporting on her work over time for it to single her out in the headline now. The agency should just have said “SpaceX announces astronaut crew for pioneering Polaris Dawn mission” or “With SpaceX draft, Anna Menon could beat her partner Anil to space”. There’s so much worth celebrating here, but gravitating towards the ‘Menon’ will lead you astray.

This in turn gives rise to a question about one's means, and in turn one's class/caste. Historically as well as today, both the chance to leave the country to study, work and live abroad and the chance to conduct good work and have it noticed have typically accrued to upper-caste, upper-class people – Saha's example again comes to mind. Such chances have also been stacked against people of genders other than cis-male.

When we talk about a scientist who did good work in India, we automatically talk about the outcomes of privileges that they enjoy. Similarly, when we talk of a scientist doing good work in a different country, we also talk about implicit caste/class advantage in India, the country of origin, that allowed them to depart and advantages they subsequently came into at their destination.

But when we place people who are doing something noteworthy in the spotlight for no reason other than because they have Indian last names, we are celebrating nothing except this lopsided availability of paths to success (broadly defined) – without critiquing the implied barriers to finding similar success within India itself.

We need to think more critically about who we are celebrating and why: if there is no greater reason than that they have had a parent or a family rooted in India, the story must be dropped. If there is a greater reason, that should define the headline, the peg, etc. And if possible the author should also accommodate a comment or two about specific privileges not available to most scientists and which might have made the difference in this case.

This post benefited from valuable feedback from Jahnavi Sen.


‘Steps in the right direction’ are not enough

This is a step in the right direction, and the government needs to do more.

You often read articles that have this sentence, typically authored by experts who are writing about some new initiative of the Indian government. These articles are very easy to find after the government has made a slew of announcements – such as during the Union budget presentation.

These articles have the following structure, on average: introducing the announcement, a brief description of what the announcement is about, comments about its desirability, and finally what the government should do to improve (often the bottom 50% of the article).

There was a time when such articles could have been understood to be suggestions to the government. Some news publications like The Hindu and Indian Express have traditionally prided themselves on counting influential lawmakers among the readers of their op-ed pages and editorials. But almost no one could think this is still the case, at least vis-à-vis the national government.

The one in power since 2014, headed by Prime Minister Narendra Modi, has always done only what it wants, frequently (and perhaps deliberately, if its actions during the COVID-19 pandemic are anything to go by) to the exclusion of expert advice. And this government has launched many schemes, programmes, missions, etc. that are steps in the right direction, and that’s it. They have almost never become better with time, and certainly not because bona fide experts demanded it.

Some examples: Ayushman Bharat, KISAN, Swachh Bharat, Mudra Yojana and ‘Smart Cities’ (too many instances to cite). Most of these initiatives have been defined by lofty, even utopian, goals but lack the rigorous, accountable and integral implementation that these goals warrant. As such, the government’s PR and troll machineries simply spin the ministers’ announcements at the time they are made for media fodder, and move on.

To be sure, the government has some other initiatives it has worked hard to implement properly, such as ‘Make in India’ and the GST – a courtesy it has reserved for activities that contribute directly to industrialisation and economic growth, reflected in the fact that such growth has come in fits and starts, and has been limited to richer sections of society.

So at this time, to laud “steps in the right direction” followed by suggestions to improve such initiatives is worse than a mistake: it is to flaunt an intentional ignorance of the government’s track record.

Instead, an article would be better if it didn’t give the government the benefit of the doubt, and criticised it for starting off on a weak note or for celebrating too soon.

Apart from making suggestions to the government, such articles have served another purpose: to alert their readers, the people, to what needs to happen for the initiatives in question to be deemed successful. So the experts writing them could also consider pegging their statements on this purpose – that is, telling their readers what components an initiative lacks and why it would therefore be premature to hope it will do good.