My political views

Daily writing prompt
How have your political views changed over time?

When I first had any views at all, I think I was in the second year of my engineering studies, in 2007, and decided I was a right-winger. Of course I understood very little of what that meant at the time, but I had some inkling. Among other things, the education my fellow students and I received repeatedly told us that the state was too slow at getting things done, that what was possible (according to engineers) was far ahead of what the people on the ground actually got, and that we had to prize private innovation. And, of course, that getting rich was a completely innocent and harmless desire. We neither had nor received much social grounding: work was work, and we excelled if we did as we were told, with the only difference between us being how well we did those things.

The situation really changed for me when I joined the Asian College of Journalism in 2012, where the political environment was almost entirely the opposite, with leftist and (in hindsight) more progressive ideas doing the rounds. But more than the school itself, I made a friend there who put me in touch with Thomas and the local writers’ group he’d co-founded, Them Pretentious Basterds. They were a wonderful lot. More than persuading me to change my political beliefs, I credit them with teaching me to think intelligently about political issues. As at ACJ, they were closely aligned ideologically, but all of them were fervent debaters, too, on matters both trivial and significant. Since then, I don’t think I’ve associated myself with any one particular ideology. If you really pushed me, though, I’d probably say I’m a social democrat.

In case it matters, here are my positions in full:

1. Economically, I’m left of centre. I don’t trust markets more than the state. I wish to curb corporate power, tax the richest most, strengthen labour protections, and protect local industry. I do think privatisation can improve quality but not in education or healthcare.

2. I want state-guaranteed healthcare, largely state-funded education, and am okay with inequality only if there’s also perfect social mobility. I support cash transfers to the poor and heavy subsidies for essentials.

3. I want a neutral, non-privileging state on religion, reject legal enforcement of traditional gender/family roles, and support affirmative action, robust free speech with reasonable limits, strong protections for migrants, and full LGBTQ+ rights.

4. I believe traditions have some legitimate legal weight but only within a framework that’s otherwise very protective of individual rights.

5. I don’t want environmental rules to be relaxed to accommodate economic growth and I expect rich countries to do more. At the same time, I’m willing to prioritise better access to energy even if it means using more fossil fuels for now. I moderately support curbs on individual consumption. I reject carbon pricing as a tool of climate change mitigation.

6. I’m against harsh punishments and the death penalty. I think national security laws and pre-trial detention are overused. I’m very much in favour of protests and strongly opposed to internet or social-media shutdowns for the state to maintain ‘order’.

7. I’m somewhat open to expanded powers of surveillance. However, I want well-defined and well-articulated limits and strong civil rights. I’m wary of the state policing content.

8. I want courts and regulators to be able to block or reshape government decisions and prefer slow consensus over muscular leaders. I favour strict limits on how political parties can raise and use money, with complete and timely transparency. I want public media to be insulated from both state and corporate capture and support decentralisation to states or cities. I’m moderately confident that elections reflect the popular will.

9. I want international law and multilateral bodies to meaningfully constrain states. I favour heavy public investments in science and digital infrastructure and strong regulation of technology companies, especially on matters of user data and platform use.

10. I want intellectual property rights to be relaxed for medicines, green technologies, and basic knowledge goods.

All this said, I still fondly remember what a troll on Twitter once called me: “a Marxist in the garb of a science educator”.

Recently, when India’s prime minister’s office decided to feature the country’s new Chenab railway bridge in Jammu and Kashmir on its invitation cards for Independence Day 2025, I wrote in The Hindu about the political ideas embedded in the practice of (civil and mechanical) engineering, especially of the uncritical variety, i.e. the kind the country’s middle class has famously exercised for several decades now as a means of class mobility alone. In the process, however, right-wing ideologies have come to see in the profession a secular ideal, exemplified by its exponents sticking to doing as they’re told.

Heat capacity

Someone asked me recently to name the thing I’ve been most grateful for in 2025.

After giving it some thought, I realised it had to be the heat capacity of water. And not just for 2025.

Tea is my warm beverage of choice, and my favourite version is with Tata tea leaves, lots of water, an ampoule of milk (just to absorb the tannins), crushed ginger, and a certain brand of chai masala.

The specific heat capacity of water is 4,184 joules per kilogram per degree Celsius.

This means you need to supply 4,184 joules to raise the temperature of 1 kg of water by 1º C. It’s why water takes quite a bit of time to come to a boil on the stove. Most of us just don’t notice because we rarely have to bother with bringing other things to a boil. If we did, we’d probably see them become hotter much faster.
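Here’s a minimal sketch of that arithmetic in Python, just to put numbers on it. The kettle power, the amount of water, and the starting temperature are assumptions I’ve picked for illustration, not measurements.

```python
# Back-of-the-envelope: how much energy and time it takes to bring a small
# pot of water to a boil. All inputs below are illustrative assumptions.

SPECIFIC_HEAT_WATER = 4184   # joules per kilogram per degree Celsius

mass_kg = 0.5                # about two cups of water
delta_t_c = 100 - 25         # heating from 25 °C to the boil
power_w = 1500               # an assumed 1.5 kW stove or kettle element

energy_j = SPECIFIC_HEAT_WATER * mass_kg * delta_t_c   # Q = m * c * ΔT
time_s = energy_j / power_w                            # ignoring heat losses

print(f"Energy needed: {energy_j / 1000:.0f} kJ")      # ~157 kJ
print(f"Time at {power_w} W: {time_s / 60:.1f} min")   # ~1.7 min
```

In practice it takes longer, of course, because the pot and the air steal some of the heat; the point is just how much energy the water itself soaks up.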

Things with a lower heat capacity include coconut oil, neon, aluminium, diamond, and uranium. (Please don’t try boiling any of them at home.)

All that water makes a big difference to when I can have my tea.

I can brew it when I feel like, pour it into a mug and close it with a plate, finish my shower, dry off, and come pick it up. It’ll still be just as hot.

I can feel its fervent warmth seep slowly into my palms when I hold the mug on a cold morning.

I can make a mugful and savour it over half an hour as I think, and it’ll be almost at its hottest throughout.

Of course, anything else with that much water — including coffee — will take its time cooling down. But for me that kind of heat and persistence have become synonymous with tea. I don’t find it with anything else I consume as regularly.

My tea lasts a long time. It waits for me to finish making my point between sips. It doesn’t interrupt.

It reminds me of my current pair of jeans pants. They’re 13 years old. I bought them when I graduated from journalism school. Save for a small tear at the bottom of the right leg, they’re in perfect condition. I used the pair before them for six years.

Like these jeans, my tea reminds me to consume well, efficiently, to make sure things last for as long as they can be made to be.

It allows me to make my point, yes, but it also teaches me to take it slow, to think things through.

For 2026, I wish us both lots of good tea.

Joel Mokyr, Gita Chadha, Lawrence Krauss, Joseph Vijay

All that thinking about Joel Mokyr and his prescription to support society’s intellectual elite in order to ensure technological progress took me back to a talk Gita Chadha delivered in 2020, and to a dilemma I’d had at the time involving Lawrence Krauss. Chadha’s proposed resolution to it could in fact settle another matter I’ve been considering of late, involving Tamil actor-politician Joseph Vijay.

But first a recap. Gita Chadha is a sociologist and author, a professor at Azim Premji University, and an honorary senior fellow at the NCBS Archives. Her 2020 talk was titled ‘Exploring the idea of ‘Scientific Genius’ and its consequences’.

Lawrence M. Krauss is a cosmologist, sceptic, and author, former chair of the Board of Sponsors of the Bulletin of the Atomic Scientists, an alleged sexual predator, and a known associate (and defender) of convicted child-sex offender Jeffrey Epstein. In 2020 he was a year away from publishing a book entitled The Physics of Climate Change, combining two topics of great interest to me, but I wasn’t sure if I should read it. On the one hand there was the wisdom about separating the scholarship from the scholar but on the other I didn’t want to fill out Krauss’s wallet — or that of his publisher, who was trading on Krauss’s reputation — nor heighten the relevance of his book.

Finally, I’ve been thinking about Vijay more than I might’ve been if not for my friend, who’s a fan of Vijay the actor but not the politician. Whenever I express displeasure over her support for Vijay’s acting, she asks me to separate his films from Vijay the person, and politician. I haven’t been convinced. Is Vijay a good actor? I, like my friend, think so. My opinion of Vijay the politician declined however following the crowding disaster in Karur on September 27: while his party’s cadre whipped up a crowd whose size greatly exceeded what the location could safely hold, Vijay made a bad situation worse by first insisting on conducting a roadshow and then arriving late.

Today, I firmly believe separating the work from the person doesn’t make sense when the work itself produces the person’s power.

I first realised this when contemplating Krauss. When I asked during her talk how we can separate the scholarship from the scholar, Chadha among other things said, “We need to critically start engaging with how the social location of a scholar impacts the kind of work that they do.” Her point was that, rather than consider whether knowledge remains usable once the person who originated it is revealed to have been unethical, we must remember prestige is never innocent, because it changes what institutions and audiences are prepared to excuse.

Broadly speaking, when society puts specific academics “on pedestals”, their eminence and the grant money they bring in become ways to excuse their harm. This is how people like Krauss were able to conduct themselves the way they did. Their work wasn’t just their contribution to the scientific knowledge of all humankind; it was also the reason for their universities to close ranks around them, in ways that the individuals also condoned, until the allegations became too inconvenient to ignore. The scholar benefited from what the scholarship was and the scholarship benefited from who the scholar was.

So a good response isn’t to pretend that it’s possible to cleanly separate the art from the artist but to pay attention to how the work builds social capital for the individual and to keep the individual — and the institutions within which they operate — from wielding that capital as a shield. Thus we must scrutinise Krauss, we must scrutinise his defenders, and we must ask ourselves why we uphold his scholarship above that of others.

(Note: We don’t have to read Krauss’s books, however. This is different from, say, the fact that we have to use Feynman diagrams in theoretical physics even as Richard Feynman was a misogynist and a creep. It doesn’t have to be one at the expense of the other; it can, and perhaps should, be both. I myself eventually decided to not read Krauss’s book: not because he defended Epstein but because I wanted to spend that time and attention on something completely new to me. I asked some friends for recommendations and that’s how I read When the Whales Leave by Yuri Rytkheu.)

The same rationale also clarified the problem I’d had with my friend’s suggestion that I separate Vijay’s work as an actor from Vijay himself. For starters: sure, an actor can play a role well and thus be deemed a good actor, but I think the sort of roles they pick, to the exclusion of others, ought to matter just as much to their reputation. And the parts he’s picked to play over the last decade or so have all been those of preachy alpha-males touting conservative views on women’s reproductive rights, male attitudes towards women, and retributive justice, among other things. It’s also no coincidence that these morals genuflect smoothly to the pro-populist parts of his political messaging.

Similarly, Vijay’s alpha-male roles that I dislike aren’t just fictions: they’re part of the public persona that Vijay has deliberately converted into his newfound political authority. Once a ‘star’ enters electoral politics, “watching for entertainment” is hard to separate from participating in and enabling a machinery that’s generating legitimacy for the ‘star’. The tickets sold, the number of streams, the rallies attended, and the number of fans mobilised all help to manufacture the claim that Vijay has a mandate at all. As with Krauss, participation has increased, and continues to increase, his material power and relevance, and it paves the way for him to claim, and probably receive, immunity from the consequences of inflicting social harm.

But where the case of Vijay diverges from that of Krauss is that the former presents much less of a dilemma. When a person goes from cinema to electoral politics, separating their work from their personal identity is practically indefensible because the political leader himself is the vehicle of the power that he has cultivated through his film work. That is to say, the art and the artist are the same entity because the art fuels the artist’s social standing and the artist’s social standing fuels his particular kind of art.

Mokyr hearts Nobel Prizes

I don’t like Joel Mokyr’s history of progress and have written about that before. I also have a longer analysis and explanation of my issues coming soon in The Hindu. On December 8 I got more occasion to critique his thinking, over the Nobel lecture he delivered in Stockholm after receiving a prize that has applied to his inchoate history of the European Enlightenment a sheen of credibility I (and others) don’t think it deserves. In his lecture (transcribed in full here), Mokyr said:

I have argued at great length that these four conditions [incentives for elite innovators; a competitive “market for ideas”; talented people having the freedom to go where they like; a somewhat ‘activist state’] held increasingly in Europe between 1450 and 1750, the three centuries leading up to the industrial revolution. That is the kind of environment that led increasingly to incredibly creative innovative people: Baruch Spinoza, David Hume, James Watt, Adam Smith, Antoine Lavoisier, and Leonhard Euler, [something indecipherable], Ludwig van Beethoven. These are all people who came up with brand new ideas in an environment that supported them, if not perfectly at least far better than anything in the past.

The question is: do these conditions hold today? I would put it this way: the incentives in propositional knowledge in science are still there and they’re stronger and larger and more pervasive than ever. The market for ideas today provides unprecedented rewards and incentives to successful intellectual innovators, particularly in science. We have hundreds of thousands of people who work in intellectual endeavours, most of them (but not all) in universities. So what we do is we offer them what most of these people need more than anything else, which is financial security, which is tenure and of course in research institutions you get things like named chairs. Then we have rewards and of course there’s a whole pyramid of rewards at which the Nobel Prize and the Abel Prize presumably stand at the very peak, but there are many many other rewards, memberships in academies, and prizes for the best papers and the best books, and honorary degrees.

The funny thing is these incentives are cheap relative to the benefits that these people bestow on humankind, and that is I think a critical thing. Of course, in addition to all that there’s name recognition, fame through mass media, and, perhaps most important, these things lead to peer recognition: many academics really want to be respected by their peers, by other people like themselves, and of course in addition for a very few you know there’s lots of money to be made if they work in the right fields and get it right. Now, most of these incentives were already noticeable in about 1500, but in some ways the 20th century has done far better than anybody before.

In short, Joel Mokyr treats rewards like the Nobel Prize as a low-cost “pyramid” of incentives that helps the “upper tail” of society generate ideas. However, the Nobel Prizes aren’t only incentives for innovation: they also exemplify how modern societies manage credit and legitimacy, which are themselves social relations, in ways that shape what innovation looks like and who benefits. The irony is that the Nobel Prizes are part of the same system of social relations Mokyr underplays in his theory of progress — and a good example of his blindspot vis-à-vis the history of Europe’s progress.

According to Mokyr, ideas originate in the minds of an intellectual elite — his “upper tail” of society — and society’s job is to reward them. The diffusion of ideas is secondary in his framing. However, scientists, social science scholars, and historians of science have all critiqued the fact that the Nobel Prizes systematically individualise what’s almost always distributed work and sideline the scientific labour of laboratory managers, technicians, instrument-makers, graduate students, maintenance staff, supply chains, and of course state procurement. The Prizes advance a picture of “elite incentives” working to advance science when in fact the enterprise is predicated equally, if not more, on questions of status and hierarchy, particularly on admission, patronage, language, funding, and geopolitics.

And in their turn the Prizes impose inefficiencies of their own. As we’ve seen before with the story of Brian Keating, they can reorganise research agendas — and more broadly they fetishise problems in science that can be solved and easily be verified to have been solved by a small number of people ‘first’.

Later in his lecture Mokyr further says:

… when technology changes, institutions have to adapt in various ways, and the problem is that they are usually slow to adapt. It takes decades until various parliamentary committees and political forces agree that some form of regulation or some form of control is necessary. And what evolutionary theory suggests is that adaptation to a changing environment is quite possible provided the changes in the technological environment are not too fast and not too abrupt. A sudden discontinuous shock will lead to mass extinctions and catastrophe; we know this is true from evolutionary history.

So there’s some concern that the acceleration in the rate of technological change in recent decades cannot be matched by institutional adaptation. What’s more, the acceleration implied by the last 10 years’ advances in AI and similar fields [suggests where] this is going to be a problem. Certain technological inventions have led to political polarisations, through social media for instance. This is something we haven’t fully solved and it’s not clear that we will be able to.

TL;DR: technology often advances faster than institutions adapt and politics, misinformation, nationalism, and xenophobia threaten the conditions for progress.

But then wouldn’t this mean that incentives for the elites are the easy part? That is, Nobel Prizes and other incentives like them don’t fix these problems; in fact they may even distract from them by implying the main thing is simply to keep “geniuses” motivated.

Tracking down energetic cosmic rays

Once in a while, nature runs experiments that no human lab can match. Ultra-high-energy cosmic rays are a good example.

Cosmic rays are fast, high-energy particles from space that strike Earth’s atmosphere with energies far beyond what even the most powerful particle accelerators can routinely create. Most are the nuclei of atoms such as hydrogen or helium; a smaller fraction are electrons.

When a cosmic-ray particle hits the atmosphere, it triggers a large ‘shower’ of particles spread over many square kilometres. (One such shower may have caused a serious error in a computer onboard an Airbus A320 aircraft on October 30, causing it to suddenly drop 100 feet and injuring several people onboard.) By observing and measuring these showers, scientists hope to probe two big questions at once: what kinds of cosmic events accelerate matter to such extreme energies, and what happens to particle interactions at energies we can’t otherwise test.

The Pierre Auger Observatory in Argentina is one of the world’s main instruments for this work. Its latest results, published in Physical Review Letters on December 9, focus on a curiously simple idea: does the energy spectrum of these cosmic rays — i.e. how many particles arrive at each energy — look the same from every direction of the sky?

In the new study, the Pierre Auger Collaboration analysed data with three notable features. First, the Observatory recorded the spectrum above 2.5 EeV. One EeV is 10¹⁸ electron volts (eV), the unit of energy used for subatomic particles. Second, it recorded this spectrum across a wide declination range, from +44.8º to −90º. In this range, +90º is the celestial north pole and −90º is the celestial south pole. And third, the analysis included around 3.1 lakh events collected between 2004 and 2022.
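To get a sense of how much energy that is, here’s a quick conversion in Python. The eV-to-joule factor is the standard one; the roughly 7 TeV figure for a single LHC proton is a round number I’m using for comparison, not something from the paper.

```python
# Convert the study's lower energy cutoff (2.5 EeV) into everyday units and
# compare it with the energy of a single proton in the LHC's beams.

EV_TO_JOULE = 1.602e-19          # standard conversion factor

threshold_ev = 2.5e18            # 2.5 EeV, the spectrum's lower cutoff here
threshold_j = threshold_ev * EV_TO_JOULE

lhc_proton_ev = 7e12             # roughly 7 TeV per proton (assumed round figure)

print(f"2.5 EeV is about {threshold_j:.2f} J")                             # ~0.4 J
print(f"That is ~{threshold_ev / lhc_proton_ev:.0e} times an LHC proton")  # ~4e+05
```

That 0.4 joules is a macroscopic amount of energy carried by a single subatomic particle, which is part of why these events are so interesting.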

The direction a cosmic ray arrives from carries information about its origins. If, say, a handful of galaxies or starburst regions are responsible for the highest-energy cosmic rays, then the spectrum should show a bump in that part of the sky, and nowhere else.

Understanding the direction from which the most powerful cosmic rays come has become more important since the Collaboration found evidence before that these directions aren’t perfectly uniform. Above 8 EeV, for instance, the Observatory has reported a modest but clear imbalance across the sky. It has also sensed a similarly modest link between extremely energetic cosmic rays and specific parts of space.

Against this backdrop, the new study is part of a larger effort to elevate ultra-high-energy cosmic rays from a curiosity into a way to map the location of ‘cosmic accelerators’ in the nearby universe.

This is like the trajectory that neutrino astronomy has taken as well. For much of the 20th century, neutrinos frustrated physicists: they knew the particles were the second-most abundant in the universe, yet the particles remained extremely difficult to catch and study. Because neutrinos carry no electric charge and interact only weakly with matter, they pass through stars and magnetic fields almost untouched, making them unusually honest witnesses to violent places in the universe. However, physicists could take advantage of that only if they could build suitable detectors. And step by step that’s what happened. Today, experiments like IceCube in Antarctica realise neutrino astronomy: a way to study the universe using neutrinos. (Francis Halzen’s long push for this detector is why he’s been awarded the APS Medal for 2026.)

Cosmic rays stand on the cusp of a similar opportunity. To this end, the Pierre Auger Collaboration had to determine whether the spectrum is dependent on or independent of direction. A direction-independent spectrum would push the field towards models in which many broadly similar cosmic sources produce high-energy cosmic rays; a direction-dependent spectrum would do the opposite.

The new result was firm: across declinations from −90° to +44.8°, the team didn’t find a meaningful change in the spectrum’s shape.

Cosmic ray researchers read the energy spectrum as a sort of forensic record. Over many decades, experiments have shown that the spectrum doesn’t fall off smoothly. If you plotted it on a graph, in other words, you wouldn’t see a single smooth curve. Instead you’d see the curve bending and changing how steep it is in places (see image below). These bumps reflect changes in the sources of cosmic rays, in the chemical makeup of the particles (protons v. nuclei), and in how cosmic rays lose energy as they travel through intergalactic space before reaching Earth.
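To make that ‘bending curve’ concrete, here’s a toy sketch of a flux that follows a power law whose slope changes at a few break energies. The break points and spectral indices below are placeholders chosen for illustration, not the measured values.

```python
# A toy broken power law: the flux falls with energy, but the steepness
# (spectral index) changes at each 'break', producing bends like the knee,
# ankle, and instep described in the text. All numbers are illustrative.

def toy_flux(energy_ev: float) -> float:
    """Piecewise power-law flux in arbitrary units."""
    breaks = [
        (3e15, 2.7),          # up to a knee-like break: shallower slope
        (5e18, 3.1),          # up to an ankle-like break: steeper
        (1e19, 2.6),          # up to an instep-like break: shallower again
        (float("inf"), 3.0),  # beyond the last break: steepens once more
    ]
    flux, last_edge = 1.0, 1e12   # normalise the curve at 10^12 eV
    for edge, index in breaks:
        segment_top = min(energy_ev, edge)
        flux *= (segment_top / last_edge) ** (-index)
        if energy_ev <= edge:
            return flux
        last_edge = edge
    return flux

for e in (1e15, 1e17, 1e19, 1e20):
    print(f"E = {e:.0e} eV -> relative flux {toy_flux(e):.3e}")
```

Plotting such a function on log-log axes produces the kind of kinked curve the image caption below describes.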

The Collaboration’s new paper framed its findings in the context of two recent developments.

First, above 8 EeV, cosmic rays aren’t arriving in perfectly random directions around Earth. Instead, roughly one half of the sky supplies about 6% more such cosmic rays than the other. Physicists have interpreted this to mean the distribution is being shaped by large-scale structure in the nearby universe, by magnetic fields, or by both.

Second, the Collaboration previously identified evidence for an ‘instep’ feature near 10 EeV.

A plot of the cosmic ray flux (y-axis) versus the particles’ energy. The red dots show the rough ‘locations’ of the knee, ankle, and instep, from top to bottom in that order. Credit: Sven Lafebre (CC BY-SA)

If you look at the energy curve (shown above), you’ll notice a shift in slope at three points. Top to bottom, they’re called the ‘knee’, the ‘ankle’, and the ‘instep’. At each of these points, physicists believe, the set of physical effects producing the cosmic rays changes.

(LHAASO, a large detector array in China built to catch the particle showers made by very energetic gamma rays, recently reported signs that microquasar jets — where a stellar-mass black hole pulls in gas from a companion star and emits fast beams of radiation — could be accelerating cosmic rays to near the knee part of the spectrum.)

A spectrum whose shape changes depending on the direction is a way to connect these two aspects. If the instep is due to a small number of nearby, unusually strong sources, you might expect it to show up more strongly in the part of the sky where those sources are located. If the instep is a generic feature produced by many broadly similar sources, it should appear in essentially the same way across the sky (after accounting for the modest unevenness of the dipole). In the new study, the Collaboration tried to make that distinction sharper.

The Pierre Auger Observatory detects showers of particles in the atmosphere with an array of water tanks spread over a large area. Showers arriving near-vertically and those arriving at an angle closer to the horizon behave differently because Earth’s magnetic field distorts their paths. So the analysis used different methods to reconstruct the cosmic rays based on their showers for angles up to 60º (vertical) and from 60º to 80º (inclined). Scientists inferred the energy in the rays based on the properties of the particles in the shower.

In the declination range −90º to +44.8º, the Observatory found the spectrum didn’t vary significantly with declination, whether or not the dipole was accounted for. In other words, once the Collaboration accounted for the non-uniform intensity across the sky, the spectrum’s shape didn’t change with direction.

The second major result was the ‘instep’, with the findings reinforcing previous evidence for this feature.

Now, if the instep were mainly caused by a small number of nearby sources, it would be reasonable to expect the spectrum to change with declination. But the study found the spectrum to be indistinguishable across declinations. This, per the Collaboration, disfavours the possibility of a small number of nearby sources contributing ultra-high-energy cosmic rays. Instead, the spectrum’s key features could be set by many sources and/or effects acting together.

The paper also suggested that the spectrum steepening near the instep could be due to a change in what the cosmic rays are made of: from lighter nuclei like helium to heavier nuclei like carbon and oxygen. If this bend in the curve is really due to a change in the cosmic rays’ composition, rather than in their sources, then cosmic rays coming from all directions should have this feature. And this is what the Pierre Auger Collaboration has reported: the spectrum’s shape doesn’t change by direction.

According to the paper’s authors, because the spectrum looks the same in different parts of the sky, the next clues to cracking the mystery of cosmic rays’ origins need to come from tests that measure their composition more clearly, helping to explain the instep.

Featured image: A surface detector tank of the Pierre Auger Observatory in 2007. Credit: Public domain.

Acting and speaking

Daily writing prompt
Have you ever performed on stage or given a speech?

I performed several times on stage in school, up to when I was 11 years old. I studied for around four years at a school in Tumkur where drama was part of the curriculum. Every year we’d have a playwright come over with a story, usually involving kings and conflict, and they’d cast us for various roles and have us rehearse for months. I was almost always one of the ‘extras’ but I didn’t mind: it was just good to be a part of it. We also got to do a lot of craft work in the process as we prepared the props and materials for the set, often involving bamboo, after school hours. Finally we’d put up a show for the whole town to watch. (Tumkur was a town then!) I still remember we had a particularly great time the year Velu Saravanan directed us. There was a lot of laughter, even if I don’t remember what the story was. It was a wonderful period.

I generally don’t like giving speeches and talks because I’m not convinced that that’s the most useful way to share what I know with others and vice versa. I’ve received quite a few opportunities and invitations in my professional years. On the few occasions I’ve accepted, I’ve insisted on being seated with the ‘audience’ and converting the chance to an AMA, where anyone can ask me anything. That way I can be sure what I say is useful for at least a few people.

So you use AI to write…

You’re probably using AI to write. Both ChatGPT and Google AI Studio prefer to construct their sentences in specific and characteristic ways, and anyone who’s been a commissioning editor for at least a few years will find the signs of their influence hard to miss, even if you personally think it’s undetectable.

You’re probably going to continue using AI to write. There’s nothing I can do about that. In fact, as AI models improve, my ability to prove you used AI to write some text will only decline. And as a commissioning editor focusing on articles about science, health, and environment research, I’m also not entirely at liberty to stop working with a writer if I notice AI-written text in their reports. I can only warn them. If I threaten to stop, I’m certain that one day I’ll have to impose the ultimate sanction, and after that I’ll have to look for new writers from a pool that’s already quite small. That’s a bad proposition for me.

To be sure, I realise I’m in a difficult position. People, especially those without a good grasp of English writing as well as those without the means to attain that grasp, should have the opportunity to be understood as precisely as they wish to be among English-speaking readers. If immaculate writing and grammar also allow their opinions to reach audiences that shunned them before for being too difficult to understand, all the more reason, no?

This has been something of an issue with the way The New Yorker wields its grammar rulebook, with its commas, hyphens, accents, and umlauts in just the right places. As the author Kyle Paoletta argued in a 2017 critique in The Baffler:

For The New Yorker, a copy editor’s responsibility to avoid altering the substance of a writer’s prose is twisted into an utter disinterest in content writ large. … Content must be subordinated—thoughtfully, of course!—to the grammatical superstructure applied to it. Not only does this attitude treat the reader as somewhat dim, it allows the copy editor to establish a position of privilege over the writer. …

[Famed NYer copy-editor Mary] Norris frets over whether or not some of James Salter’s signature descriptive formulations (a “stunning, wide smile,” a “thin, burgundy dress”) rely on misused commas. When she solicits an explanation, he answers, “I sometimes ignore the rules about commas… Punctuation is for clarity and also emphasis, but I also feel that, if the writing warrants it, punctuation can contribute to the music and rhythm of the sentences.” Norris begrudgingly accepts this defence, but apparently only because a writer of no lesser stature than Salter is making it. Even in defeat, Norris, as the tribune of The New Yorker’s style, is cast as a grammatical arbiter that must be appealed to by even the most legendary writers.

I shouldn’t stand in judgment of when and how a writer wishes to wield the English language as they understand and have adopted it. But with AI in the picture, that could also mean trusting the writer to a degree that overlooks whether they’ve used AI. Put another way: if it’s levelling the playing field for you, it’s getting rid of another sign of authenticity for me.

Perhaps the larger point is that as long as you make sure your own writing improves, we’re on the right track. To be clear: whose English should be improved? I’m saying that’s the contributor’s call. Because if they send a poorly written piece that makes good points, some of what I’m going to be doing to their piece is what AI will be doing as well (and what tools like Grammarly have already been doing). Second, as AI use becomes more sophisticated, and I become less able to tell original and AI-composed copies apart, I’ll just have to accept at some point that there’s nothing I can do. In that context, all I can say is: “Use it to improve your English at least”.

Or perhaps, in the words of Northern Illinois University professor David J. Gunkel, “LLMs may well signal the end of the author, but this isn’t a loss to be lamented. … these machines can be liberating: they free both writers and readers from the authoritarian control and influence of this thing we call the ‘author’.” For better or for worse, however, I don’t see how a journalistic publication can adopt this line wholesale.

* * *

As a commissioning editor in science journalism specifically, I’ve come across many people who have useful and clever things to say but whose English isn’t good enough. And from where I’m sitting, using AI to make their ideas clearer in English only seems like a good thing… That’s why I say it levels the playing field. When you do that in general, though, the people who already have a skill because they trained to acquire it lose their advantages, while the new entrants lose their disadvantages. The former is of course unfair.

In the same vein, while I know AI tools have been enormously useful for data journalists, asking a programmer what they feel about that would likely elicit the same sort of reaction I and many of my peers have vis-à-vis people writing well without having studied and/or trained to do so. That would make the data journalists’ use problematic as well by virtue of depriving a trained programmer of their advantages in the job market, no?

Short of the world’s Big AI companies coming together and deciding to stamp their products’ outputs with some kind of inalienable watermark, we have to deal with the inevitability of being unable to tell AI-made and human-made content apart.

Yes, AI is hurting what it means to be human, at least in terms of our creative expression. But another way to frame all this is that the AI tools we have represent the near-total of human knowledge collected on the internet. Isn’t it wonderful that we can dip into that pool and come away as more skilled, more knowledgeable, and perhaps better persons ourselves?

Science communication, insofar as it focuses on communicating what scientists have found, might become redundant and low-value, at least among those who can afford these subscriptions (whose prices will fall even as access widens), because people can feed a model a paper and ask for an explanation. Hallucinations will always be there, but it’s possible they could become less likely. (And it seems to me somewhat precarious to found our own futures as journalists on the persistence of hallucinations.) Instead, where we can cut ahead, and perhaps stay ahead, is by focusing on a journalism of new ideas, arguments, conversations, etc. And there the playing field will be less about whether you can write better and more about what and how you’ve learnt to think about the world and its people.

We all agree democratisation is good. Earlier, and to a large degree still, education facilitated that. Now the entry of AI models is short-circuiting that route, and it’s presenting some of us with significant disadvantages. Yes, there are many employers who will try to take advantage of this state of affairs to bleed their workforce and take the short way out. But it remains that — setting aside the AI models coming up with BS and equivocating on highly politicised issues — the models are also democratising knowledge. Our challenge is to square that fact (which can’t be un-facted in future) with preserving the advantages for those that ‘deserve’ it.

As if mirroring the question about who gets to decide who should improve their English, who gets to decide who gets to have advantages?

AI slop clears peer-review

Here’s an image from a paper that was published by Nature Scientific Reports on November 19 and retracted on December 5:

This paper made it through peer review at the journal. Let that sink in for a moment. Perhaps the reviewers wanted to stick it to the editors. Then again how the image made its way past the editors is also a mystery.

Nature Scientific Reports has had several problems before, enumerated on its Wikipedia page. It’s a ‘megajournal’ in the vein of PLOS One and follows the gold OA model, with an article processing charge of “£2190.00/$2690.00/€2390.00”.

Worlds between theory and experiment

Once Isaac Newton showed that a single gravitational law plus his rules of dynamics could reproduce the planetary orbits that Johannes Kepler had described, explain tides on Earth, and predict that a comet that had passed by once would return again, physicists considered Newtonian mechanics and gravitation to have been completely validated. After these successful tests, they didn’t wait to test every other possible prediction of Newton’s ideas before they considered them to be legitimate.

When Jean Perrin and others carefully measured Brownian motion and extracted Avogadro’s number in the early 20th century, they helped cement the kinetic theory of gases and the statistical mechanics that Ludwig Boltzmann and Josiah Willard Gibbs had developed. As with Newtonian mechanics, physicists didn’t require every single consequence of kinetic theory to be rechecked from scratch. They considered it all to be fully and equally legitimate from then on.

Similarly, in 1886-1889, Heinrich Hertz produced and detected electromagnetic waves in the laboratory, measured their speed and other physical properties, and showed that they behaved exactly as James Clerk Maxwell had predicted based on his (famous) equations. Hertz’s experiments didn’t test every possible configuration of charges and fields that Maxwell’s equations allowed, yet what they did test and confirm sufficed to convince all physicists that Maxwell’s theory could be treated as the correct classical theory of electromagnetism.

In all these cases, a theory won broad acceptance after scientists validated only a small (yet robust) subset of its predictions. They didn’t have to validate every single prediction in distinct experiments.

However, there are many ideas in high-energy particle physics that physicists insist on testing anew, even though they are derived from theoretical constructs that have already been tested to extreme precision. Why go to this trouble?

“High-energy particle physics” is a four-word label for something you’ve likely already heard of: the physics of the search for the subatomic particles like the Higgs boson and the efforts to identify their properties.

In this enterprise, many scientific ideas follow from theories that have been validated by very large amounts of experimental data. Yet physicists want to test them at every single step because of the way such theories are built and the way unknown effects can hide inside their structures.

The overarching theory that governs particle physicists is called, simply, the Standard Model. It’s a quantum field theory, i.e. a theory that combines the precepts of quantum mechanics and special relativity*. Because the Standard Model is set up in this way, it makes predictions about the relations between different observable quantities, e.g. the mass of a subatomic particle called the W boson with a parameter that’s related to the decay of other particles called muons. Some of these relations connect measured quantities with others that have not yet been probed, e.g. the mass of the muon with the rate at which Higgs bosons decay to pairs of muons. (Yes, it’s all convoluted.) These ‘extra’ relations often depend on assumptions that go beyond the domains that experiments have already explored. New particles and new interactions between them can change particular parts of the structure while leaving other parts nearly unchanged.

(* Quantum field theory gives physicists a single, internally consistent framework in which they can impose both the rules of quantum theory and the requirements of special relativity, such as that information or matter can’t travel faster than light and that our spacetime conserves energy and momentum together, for example. However, quantum field theory does not unify quantum theory with general relativity; that’s the monumental and still unfinished purpose of the quantum gravity problem.)

For a more intricate example, consider the gauge sector of the Standard Model, i.e. the parts of the Model involving the gluons, W and Z bosons, and photons, their properties, and their interactions with other particles. The gauge sector has been thoroughly tested in experiments and is well-understood. Now, the gauge sector also interacts with the Higgs sector, and the Higgs sector interacts with other sectors. The result is new possibilities involving the properties of the Higgs boson, their implications for the gauge sector, and so on that — even if physicists have tested the gauge sector — need to be tested separately. The reason is that none of these possibilities follow directly from the basic principles of the gauge sector.

The search for ‘new physics’ also drives this attitude. ‘New physics’ refers to measurable entities and physical phenomena that lie beyond what the Standard Model can currently describe. For instance, most physicists believe a substance called dark matter exists (in order to explain some anomalous observations about the universe), but they haven’t been able to confirm what kind of particles dark matter is made of. One popular proposal is that dark matter is made of hitherto unknown entities called weakly interacting massive particles (WIMPs). The Standard Model in its contemporary form doesn’t have room for WIMPs, so the search for WIMPs is a search for new physics.

Physicists have also proposed many ways to ‘extend’ the Standard Model to accommodate new kinds of particles that ‘repair’ the cracks in reality left by the existing crop of particles. Some of these extensions predict changes to the Model that are most pronounced in sectors that are currently poorly pinned down by existing data. This means even a sizeable deviation from the Model’s structure in this sector would still be compatible with all current measurements. This is another important reason physicists want to collect more data and with ever-greater precision.

Earlier experience also plays an important role. Physicists may make some assumptions because they seem safe at the time, yet data collected over the following decades might reveal that they were mistaken. For instance, physicists believed neutrinos, like photons, didn’t have mass, because that idea was consistent with many existing datasets. Yet dedicated experiments contradicted this belief (and won their leaders the 2015 physics Nobel Prize).

(Aside: High-energy particle physics uses large machines called particle colliders to coerce subatomic particles into configurations where they interact with each other, then collect data of those interactions. Operating these instruments demands hundreds of people working together, using sophisticated technologies and substantial computing resources. Because the instruments are so expensive, these collaborations aim to collect as much data as possible, then maximise the amount of information they extract from each dataset.)

Thus, when a theory like the Standard Model predicts a specific process, that process becomes a thing to test. But even if the prediction seems simple or obvious, actually measuring it can still rule out whole families of rival theories offering to explain the same process. It also sharpens physicists’ estimates of the theory’s basic parameters, which then makes other predictions more precise and helps plan the next round of experiments. This is why, in high-energy physics, even predictions that follow from other, well-tested parts of a theory are expected to face experimental tests of their own. Each successful test can reduce the space for new physics to hide in — or in fact could reveal it.

A study published in Physical Review Letters on December 3 showcases a new and apt example of testing predictions made by a theory some of whose other parts have already survived testing. Tests at the Large Hadron Collider (LHC) — the world’s largest, most powerful particle collider — had until recently only weakly constrained the Higgs boson’s interaction with second-generation leptons (a particle type that includes muons). The new study provides strong, direct evidence for this coupling and significantly narrows that gap.

The LHC operates by accelerating two beams of protons in opposite directions to nearly the speed of light and smashing them head on. Its operation is divided into segments called ‘runs’. Between runs, the collaboration that manages the machine conducts maintenance and repair work and, sometimes, upgrades its detectors.

One of the LHC’s most prominent detectors is named ATLAS. To probe the interactions between Higgs bosons and leptons, the ATLAS collaboration collected and analysed data from the LHC’s run 2 and run 3. The motivation was to obtain direct evidence for Higgs bosons’ coupling to muons and to measure its strength. And in the December 3 paper, the collaboration reported that the coupling parameters were consistent with the Standard Model’s predictions.

So that’s one more patch of the Standard Model that has passed a test, and one more door to ‘new physics’ that has closed a little more.


Featured image: A view of the Large Hadron Collider inside its tunnel. Credit: CERN.

Robbing NISAR to pay ISRO

A.K. Anil Kumar, the director of ISRO’s Telemetry, Tracking, and Command Network (a.k.a. ISTRAC), has reportedly made some seriously misleading comments as part of his convocation address at a Maharishi University in Lucknow.* Kumar’s speech begins at the 1:38:10 mark in this video (hat-tip to Pradx):

A poorly written article in The Free Press Journal (which I couldn’t find online) has amplified Kumar’s claims without understanding that the two satellites Kumar was seemingly talking about are actually one: the NASA-ISRO Synthetic Aperture Radar (NISAR), developed jointly by the US and Indian space agencies. The article carries an image of NISAR but doesn’t caption it as such.

The article makes several dubious claims:

  • That the “satellite” can forecast earthquakes,
  • That NISAR can capture subsurface images of Earth, including of underground formations,
  • That India’s “satellite” didn’t require a 12-metre-long antenna the way NASA’s “satellite” did, and
  • That ISRO’s “satellite” was built at one-tenth of the cost of NASA’s “satellite”

To be clear, an ISRO satellite that can forecast earthquakes or image subsurface features and which the organisation built and launched for Rs 1,000 crore does not exist. What actually exists is NISAR, a part of which ISRO built. The claims are almost spiteful because they purport to come from a senior ISRO official whose work likely benefited from the ISRO-NASA collaboration and because he ought to have known better than to mislead.

NISAR is a dual-frequency (DF) synthetic aperture radar (SAR). The ‘DF’ bit means the satellite captures data on two radar frequencies, L-band and S-band. To quote from a piece I wrote for The Hindu on July 27:

At the time the two space organisations agreed to build NISAR, NASA and ISRO decided each body would contribute equivalent‑scale hardware, expertise, and funding. … [ISRO] supplied the I‑3K spacecraft bus, the platform that houses the controls to handle command and data, propulsion, and attitude, plus 4 kW of solar power. The same package also included the entire S‑band radar electronics, a high‑rate Ka‑band telecom subsystem, and a gimballed high‑gain antenna.

‘SAR’ refers to a remote-sensing technique in which a small antenna moves along a path while using a computer to combine the data it captures along the way, thus mimicking a much larger antenna. NISAR uses a 12-metre mesh antenna plus a reflector for this purpose. Both the S-band and L-band radars use it to perform their functions. As a result of using the SAR technique, the two radars onboard NISAR are able to produce high-resolution images of Earth’s surface irrespective of daylight or cloud cover and support studies of ground deformation, ice sheets, forests, and the oceans.
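To see why the synthetic aperture matters, here’s a back-of-the-envelope sketch in Python. The 12-metre antenna size comes from the text above; the slant range and band wavelengths are round numbers I’ve assumed, and the two formulas are textbook approximations rather than NISAR’s actual design performance.

```python
# Compare the azimuth (along-track) resolution of a plain 'real aperture'
# radar with the textbook synthetic-aperture result. For a real aperture the
# resolution worsens with range and wavelength; for a SAR it is roughly half
# the physical antenna length, independent of wavelength, which echoes the
# point Karen St. Germain makes in the interview excerpt further below.

antenna_length_m = 12.0      # NISAR's mesh antenna, per the text above
slant_range_m = 750e3        # assumed distance to the ground, roughly the orbit altitude
l_band_wavelength_m = 0.24   # roughly 1.25 GHz (assumed round figure)
s_band_wavelength_m = 0.09   # roughly 3.2 GHz (assumed round figure)

def real_aperture_resolution_m(wavelength_m: float) -> float:
    # Beamwidth ~ wavelength / antenna length, projected onto the ground.
    return slant_range_m * wavelength_m / antenna_length_m

def synthetic_aperture_resolution_m() -> float:
    # Classic stripmap SAR approximation: about half the antenna length.
    return antenna_length_m / 2

for band, wavelength in (("L-band", l_band_wavelength_m), ("S-band", s_band_wavelength_m)):
    print(f"{band}: real aperture ~{real_aperture_resolution_m(wavelength) / 1000:.0f} km, "
          f"synthetic aperture ~{synthetic_aperture_resolution_m():.0f} m")
```

The absolute numbers will differ from the real mission’s, but the contrast is the point: kilometre-scale blur versus metre-scale detail from the same antenna.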

In this regard, for The Free Press Journal to claim NISAR “didn’t require India to install a separate 12-metre antenna, unlike NASA” gives the impression that ISRO’s S-band radar didn’t need the antenna. This is wrong: it does need the antenna. That NASA was the agency to build and deploy it on NISAR comes down to the terms of the collaboration agreement, which specified that ISRO would provide the spacecraft bus, the S-band radar (and its attendant components), and the launch vehicle while NASA would take care of everything else. This is the same reason why ISRO’s contributions to NISAR amounted to around Rs 980 crore — which Kumar rounded up to Rs 1,000 crore — whereas NASA’s cost was around Rs 10,000 crore.

The antenna is in fact an engineering marvel. Without it, ISRO’s S-band radar wouldn’t be so performant and its data wouldn’t be so useful for decision-making in research and disaster management. On the day ISRO launched NISAR, on July 30 this year, I got to interview Karen St. Germain, the director of the Earth Science Division at the Science Mission Directorate at NASA. Here’s an excerpt from the interview about the antenna:

Both the L-band and the S-band radars use the same reflector. Since S-band has a shorter wavelength than the L-band, does this create any trade-offs in either L-band or S-band performance?

It doesn’t. And the reason for that is because this is a synthetic aperture radar. It creates its spatial resolution as it moves along. Each radar is taking snapshots as it moves along. You know, to get this kind of centimetre level fidelity and the kind of spatial resolution we’re achieving, if you were to use a solid antenna, it would have to be five miles long. Just like when you’re talking about a camera, if you want to be able to get high fidelity, you need a big lens. Same idea. But we can’t deploy an antenna that big. So what we do is we build up image after image after image to get that resolution. And because of this technique, it’s actually independent of wavelength. It works the same for S- and for L-bands. The only thing that’s a little different is because the antenna feeds for the L-band and the S-band can’t physically occupy the same space, they have to be next to each other and that means there’s a slight difference in the way their pulses reflect off the antenna. There’s that positioning difference, and that we can correct for.

Could you tell us a little bit more about that slight difference?

Karen St. Germain:
It’s the way a reflector works. You would ideally want to put the feed at the focal point of the reflector. But when you have two feeds, you can’t do that. So they’re slightly offset. That means they illuminate the reflector just slightly differently. The alignment is just a little bit different. The team optimised the design to minimise that difference and to make it so that they could correct it in post-processing.

And even with all these abilities, we (i.e. people everywhere) currently don’t know enough to be able to forecast earthquakes. What we can do today is issue early warnings and prepare probabilistic forecasts over longer periods of time. That is, for instance, we can say “there’s a 20% chance of a quake of magnitude 8 or more occurring in the Himalaya in the next century” and we have the means to alert people in an area tens of seconds before an earthquake’s strong shaking reaches them. We can’t say “there will be an earthquake in Chennai at 3 pm tomorrow”.

The question for The Free Press Journal is thus what role a satellite can essay in this landscape. In a statement in 2021, ISRO had said “NISAR would provide a means of disentangling highly spatial and temporally complex processes ranging from ecosystem disturbances to ice sheet collapses and natural hazards including earthquakes, tsunamis, volcanoes and landslides.” This means NISAR will help scientists better piece together the intricate processes implicated in earthquakes — processes that are distributed over some area and happen over some time. Neither NISAR nor the S-band radar alone can forecast earthquakes.

On a related note, the L-band (1,000-2,000 MHz) and S-band (2,000-4,000 MHz) radar frequencies do overlap with the frequencies used in ground-penetrating radar (10-4,000 MHz). However, the lower the frequency, the further underground an electromagnetic wave can penetrate (while keeping the resolution fixed). Scientists have documented a ceiling of around 100 MHz for deep geological profiling, which is far from either of NISAR’s radars. Even the L-band radar, which has lower frequency than the S-band, can at best penetrate a few metres underground if the surface is extremely dry, like in a desert, or if the surroundings are made of water ice. What both radars can penetrate very well is cloud cover, heavy rain, and vegetation.
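As a rough illustration of the frequency point, here’s a small sketch that uses the rule of thumb that penetration depth in a lossy medium falls off roughly as one over the square root of the frequency. The band frequencies are approximate, the rule of thumb is a simplification, and absolute depths depend heavily on how dry the ground is, which is why the ‘few metres’ figure above is the one to trust.

```python
# Relative penetration depths, using the rough 1/sqrt(frequency) scaling for
# lossy media. This only compares frequencies; it says nothing about absolute
# depths, which depend on the soil's moisture and conductivity.

from math import sqrt

REFERENCE_MHZ = 100  # the ~100 MHz ceiling cited for deep geological profiling

bands = [
    ("Ground-penetrating radar (100 MHz)", 100),
    ("NISAR L-band (~1,250 MHz)", 1250),
    ("NISAR S-band (~3,200 MHz)", 3200),
]

for name, frequency_mhz in bands:
    relative_depth = sqrt(REFERENCE_MHZ / frequency_mhz)
    print(f"{name}: ~{relative_depth:.2f}x the depth reached at 100 MHz")
```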

The ISRO-NASA collaboration that built NISAR was a wonderful thing, one the agencies need to replicate in future. It continued their less formalised engagements from before, and both host countries, India and the USA, continue to accrue its benefits in the satellite observation and remote-sensing domains. For Kumar to call the cost component into question the way he did, followed by The Free Press Journal’s shoddy coverage of his remarks, does no favours to the prospect of space literacy in the country.

* I updated this post at 7.45 pm on December 2, 2025, to make it clear that all but one of the objectionable claims were made by The Free Press Journal in its article; the exception was the cost comparison, which Kumar did make.