Climate change and the coastline paradox

A friend recently told me about a tool called climate.you that shows “temperature change, over land and sea”, at all points on the earth’s surface in a bid “to show how warming is already affecting people everywhere”. You can enter the name of your city or town and find out how the local conditions have changed. Based on interactions with some scientists who have written on climate modelling for The Wire and The Hindu, however, I’d come to be wary of projections for scales smaller than a whole region, especially for a data-poor country like India. But after the chat, I wondered if my position was outdated — and learnt that it was. So here goes an update.

Climate change is fundamentally global in its drivers but its effects operate across all scales, from continental changes in rainfall patterns down to local phenomena like coastal erosion. This said, a confusion about the phenomenon’s ability to operate at different scales often arises from the way scientists model it.

Global climate models collect data on atmospheric and oceanic parameters and simulate them on a grid whose cells are 50-200 km across, maybe more, which is a very coarse spatial resolution. When you render this grid on a screen, there’s a value for every pixel and, given the cell size, that pixel represents a regional average rather than a precise local forecast for the place at that pixel. But this doesn’t mean the model is wrong at that pixel; it just means it’s not designed to predict the consequences at that level.

(For example, if the RMC Chennai station says it’s 32 C right now, it’s hard to know what the relative contributions of land use, radiation from built structures, heat transported by local winds, and regional warming to that figure are. The temperature may also be sensitive to other factors we’ve deemed inconsequential, such as the amount of dust in the air around the station or traffic outside. A common way out of this seeming intractability, beyond quality control measures at the station itself, is to collect data for several years and check which temperature trends hold up and which ones fall away.)

The scale question is reminiscent of the coastline paradox: no well-defined landmass has a coast of well-defined length, yet the coast exists at all points along the edge of the landmass. This weirdness arises because the length depends on the scale at which you measure it. If you look at the India map zoomed out to 1:10,000,000 — like on Google Maps on your laptop screen — the coast shows some features but smooths over others because your laptop’s screen doesn’t have enough pixels to capture those smaller than a particular size. If you zoomed in further, say to 1:1,000,000, you’d find more features because there are now more pixels available for the same stretch of coast, and smaller variations in its shape show up. If you zoomed in further, even smaller variations would show up, and so on.

Credit: Google Maps

Hat-tip to Sambavi P. for the fillip.
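To make this ruler-dependence concrete, here’s a toy sketch. The curve below is entirely synthetic (a sum of sine waves at ever finer scales standing in for a jagged coast, not real coastline data), and the ‘ruler’ method simply walks along the curve in fixed-length steps: the shorter the ruler, the more wiggles it picks up and the longer the measured ‘coast’ gets.

```python
import math

# A toy 'coastline': a jagged curve sampled as (x, y) points. Synthetic
# data, purely to illustrate how measured length depends on scale.
def jagged_coast(n=4097):
    pts = []
    for i in range(n):
        x = i / (n - 1)
        # Sine waves at ever finer scales mimic coastal roughness
        y = sum(math.sin(2 ** k * math.pi * x) / 2 ** k for k in range(1, 10))
        pts.append((x, y))
    return pts

def ruler_length(pts, ruler):
    """Walk the curve with a fixed 'ruler': from the current point, jump
    to the first sampled point at least `ruler` away and add up the
    straight-line steps."""
    total, i = 0.0, 0
    while i < len(pts) - 1:
        j = i + 1
        while j < len(pts) - 1 and math.dist(pts[i], pts[j]) < ruler:
            j += 1
        total += math.dist(pts[i], pts[j])
        i = j
    return total

coast = jagged_coast()
for ruler in (0.5, 0.1, 0.02, 0.004):
    print(f"ruler {ruler:>5}: measured length {ruler_length(coast, ruler):.2f}")
```

Each smaller ruler reveals wiggles the previous ruler stepped straight over, so the total keeps climbing; a real coastline behaves the same way down to far finer scales.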


The climate signals that models are sensitive to are similarly, but partly, a function of the scale at which various instruments record those signals. In both cases, aggregating data from different scales to prepare a region-wide projection actually smooths over complexity rather than capturing it. There’s also no one ‘correct’ resolution and it’s not reasonable to expect a model prepared for one resolution to be equally accurate or certain at another.

This said, the analogy is only partly apt because geography and climate change behave differently at ever smaller scales. First, while features on the coast become more numerous, and thus its length ever greater, as you keep zooming in, there are no signals relevant to climate change below a particular floor. Which means if climate change manifests as, say, a higher local tide level, for that parameter there’s no ‘zooming in’ beyond that point. Second, as you zoom in, climate signals become less messy whereas geographic signals become messier.

In fact, scientists have developed a technique called downscaling whereby they use a combination of statistical and dynamical methods to translate a model’s coarser outputs into finer projections. Obviously this isn’t a lossless exercise — you can’t get more information without paying a cost — and downscaling from one scale to the immediately next one ‘below’ adds some uncertainty. Which means a downscaled local projection carries the errors implicit in the global model plus the errors introduced by the downscaling method. Ultimately, the projection for a particular pixel exists: it’s just laden with uncertainty.
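To give a flavour of the statistical side of this, here’s a minimal sketch with entirely invented numbers: it fits a straight line from a coarse grid cell’s historical temperatures to a station’s observed temperatures over a ‘training’ period, then applies that line to a model projection. Real downscaling (quantile mapping, dynamical methods, etc.) is far more involved than this.

```python
# Minimal statistical-downscaling sketch (all numbers invented).
coarse_hist = [28.0, 29.5, 31.0, 30.2, 27.8, 32.1]   # model, training years
station_hist = [30.1, 31.8, 33.6, 32.5, 29.9, 34.8]  # observed, same years

# Ordinary least-squares fit of station temperature against the
# coarse grid-cell temperature
n = len(coarse_hist)
mx = sum(coarse_hist) / n
my = sum(station_hist) / n
slope = sum((x - mx) * (y - my) for x, y in zip(coarse_hist, station_hist)) \
        / sum((x - mx) ** 2 for x in coarse_hist)
intercept = my - slope * mx

def downscale(coarse_value):
    """Translate a coarse grid-cell value into a station-level estimate."""
    return intercept + slope * coarse_value

projected_coarse = 33.0  # the model's future value for the grid cell
print(f"downscaled station estimate: {downscale(projected_coarse):.1f} C")
```

Note how the uncertainty stacks up exactly as described above: the estimate inherits whatever error sits in `projected_coarse` plus the error of the fitted line itself.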

Now, while a model need not have a good projection for a particular pixel, that’s not synonymous with the data collected from that pixel being irrelevant or a non-signal for climate change. Local measurements like tide gauge records, weather station temperatures, regional snow measurements, etc. all contain climate signals at very fine spatial scales. In other words, being sceptical of a hyperlocal projection is reasonable but being sceptical of a local observation demands a higher bar.

For example, say a modeller feeds data about a city’s population density, road networks, and growth trends into a model of a city and tries to predict the congestion in your neighbourhood in one year. This effort is only going to be as good as the model’s assumptions. Now, say your neighbour leaves for work at 8 am every day for five years and tracks her commute time. This data has its own limitations — perhaps foremost that its patterns can’t be easily generalised — but while the model might excel at predicting how the city as a whole will change, your neighbour’s experience is a better predictor of how your neighbourhood in particular will.

In effect, it makes sense to be wary of sub-regional climate indicators derived purely from global model outputs without proper downscaling or local validation but to extend that cynicism towards observed data or even properly validated regional models would be to throw the baby out with the bathwater.

Posted in Scicomm, Science

I, Head-bumper

Daily writing prompt
What is one word that describes you?

Tall.

I’m tall for India, around 6’3”. My height has recently been on my mind. India is not a good place for tall people. The public infrastructure is geared towards shorter people — the average Indian adult male and female are 5’5” and 4’8”, respectively — and the lack of space is most pronounced for me when I travel in buses: if I stand, I almost always bump my head into the upper railing when it drives over a bump, and have to keep my legs at obtuse angles to the seat in front when I sit. Plus there are the various seating areas and ceilings in government offices, banks, and subways, various doorways, and the occasional loose cable dangling from a tree, which is a threat to everyone but to which I’m more easily exposed because to get to my head it needs to dangle less.

The issue got on my mind recently because I’d been shopping for shoes. The average Indian adult is at least 10″ shorter than I am, which means they have smaller feet as well, so most brands that sell shoes in India max out at size UK 10. I, however, have size UK 12 feet, with a wide toe box for good measure. Together with the availability of suitable designs (I prefer darker tones), that has more often than not meant Reebok for my floaters and ASICS for my shoes — neither of which is cheap. I used to take a loutish kind of pride in being able to intimidate some people in college with my height; I’m glad I’ve since given up that kind of thinking, in small part because India reminds you that being taller in this country largely means you just bump your head more often.

My folks, of course, love the fact that I can reach for things they’ve stowed in the attic without needing a ladder.

Posted in Life notes

Spotting fakes by looking at them

On March 10, the Supreme Court said a balance has to be struck between warding against misinformation online and protecting citizens’ right to free speech. The context was the Centre’s attempts to defend the 2023 IT Rules: when the comedian Kunal Kamra asked who would decide if online content is “fake or misleading”, the Centre said, “When we see it, we know it is fake”.

The case has been led by Kamra, the Editors Guild of India, and other petitioners and in its course the Bombay High Court and the Supreme Court have been asked to weigh the constitutionality of a “Fact-Check Unit” (FCU) mandated by the national government. The petitioners have argued that giving the government the power to flag “fake, false or misleading” information will have a chilling effect on free speech and that the provisions turn the state into a judge in its own cause. The Centre’s defence — “know it when we see it” — is, as both history and data science show, a recipe for disaster.

Solicitor general Tushar Mehta, who offered the defence on the Centre’s behalf, is likely to know that the line echoes a chaotic chapter in American legal history. In the 1964 case Jacobellis v. Ohio, US Supreme Court Justice Potter Stewart had to define obscenity. But frustrated by the lack of a precise legal definition, he famously wrote in his concurrence: “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it.”

The legacy of this little statement was a big mess. If a Supreme Court Justice couldn’t articulate a clear standard, it was clearly folly to expect local police and juries to do so — more so since what passed for art in Manhattan could lead to a prison sentence in rural Georgia. The immediate result was different federal circuit courts applying different tests, creating a patchwork of legal outcomes across the country. Inevitably, the situation descended into the absurd: throughout the late 1960s, US Supreme Court Justices regularly screened films to determine if they were “obscene” in a projection room at the Court, literally deciding the law based on their own instincts and physiological reactions.

By 1973, the Court realised this was unworkable. In Miller v. California, it established a three-part framework known since as the Miller test. It asked whether the average person, applying “contemporary community standards”, would find that the work appeals to the prurient interest; whether it depicts sexual conduct in a “patently offensive” way; and whether it lacks serious literary, artistic, political, or scientific value (a.k.a. the SLAPS test).

But since even this framework lacked a universal standard, the potential for harm persisted. For instance, because “community standards” were local, federal prosecutors in the 1980s and 1990s began a practice called jurisdiction shopping: they would carefully prosecute distributors in the most conservative parts of the country for material that was actually sold nationwide. The practice then forced businesses to calibrate their content to the most restrictive local market in the country in order to avoid jail time — a sort of regression to the most conservative position.

The “know it when I see it” heuristic ultimately became meaningless with the coming of the internet, which allowed content producers to be located in California even as their content is served in Alabama, thus confusing the notion of ‘community’ and the resulting community standards. Federal prosecutors were eventually forced to abandon most obscenity cases altogether and shift their focus to child exploitation, which is prohibited regardless of location or community.

India’s proposed FCU threatens to play through this same history of failures. And it will begin as a patchwork of censorship that will depend on who’s looking at a screen when a certain clip is playing.

But the fact is nobody has to know it just by seeing it. Data science and international regulations today offer testable ways to identify misinformation.

One option is automated fact-checking that uses large databases of verified information. Instead of an official simply declaring a claim false, a system can check whether the statement connects to any documented policy decisions or records. If a viral post claims that “the government has banned P”, the system can scan policy documents, gazette notifications, and other reliable databases to check whether such a decision appears anywhere. If no record exists, the claim can be flagged and labelled as unsupported. The machine need not be all that intelligent as the bigger point here is to ensure the verdict can be traced to evidence available in the public domain.

Concretely, for the claim “the government has banned P”, the algorithm would calculate the shortest ‘logical path’ between the nodes for “government”, “banned”, and “P” across all known policy documents. If no such path exists, the system flags the information with a low truth-value score. This provides a quantifiable metric, moves the conversation from “I think this is fake” to “the data shows no factual connection for this claim”, and could even spare the people staffing censorship teams at social media companies considerable psychological harm. The government making the algorithm open-source — as it should, considering it will be in service of the public — would also add another layer of integrity.
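A toy version of this ‘shortest logical path’ check could be a breadth-first search over a knowledge graph. Everything below is made up for illustration: the node names, the ‘documents’, and the graph itself are invented, and a real system would build the graph from actual gazette notifications and policy records.

```python
from collections import deque

# Toy knowledge graph: entity -> entities it is linked to in some
# document. All nodes and links here are invented for illustration.
graph = {
    "government": {"notification-2024-17", "budget-2024"},
    "notification-2024-17": {"government", "banned", "substance Q"},
    "budget-2024": {"government"},
    "banned": {"notification-2024-17", "substance Q"},
    "substance Q": {"notification-2024-17", "banned"},
    "P": set(),  # no document links 'P' to anything
}

def shortest_path(start, goal):
    """Breadth-first search; returns the shortest path as a list of
    nodes, or None if no documented connection exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# 'The government banned substance Q': supported by a documented path
print(shortest_path("government", "substance Q"))
# 'The government banned P': no path, so the claim is flagged as unsupported
print(shortest_path("government", "P"))
```

The verdict is traceable either way: a path points to the documents that support the claim, and the absence of one is itself checkable by anyone with the same graph.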

Another option is to look at the European Union’s Digital Services Act, which — instead of deciding whether every individual post is true or false — has regulators ask whether a stream of information poses a systemic risk to public health, security or democratic debate. Platforms are then required to monitor patterns like how quickly a claim spreads and whether coordinated networks of accounts (e.g. bots) are pushing the same message. So the focus here is not on the content of a single post but to examine the behaviour of the information as it moves through the network itself.
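As a crude illustration of such behavioural monitoring, one could flag a claim that spreads unusually fast or whose spread is dominated by a few accounts. All the data and thresholds below are invented; real platforms use far richer signals than these two.

```python
from collections import Counter

# Toy data: (hour posted, account id) for every post pushing one claim
posts = [
    (0, "a1"), (0, "a2"), (1, "a1"), (1, "a1"), (1, "a3"),
    (2, "a1"), (2, "a2"), (2, "a2"), (2, "a1"), (2, "a1"),
]

def looks_coordinated(posts, rate_threshold=3.0, dominance_threshold=0.5):
    """Flag when posts-per-hour is unusually high or when a single
    account produces most of the spread."""
    hours = [h for h, _ in posts]
    rate = len(posts) / (max(hours) - min(hours) + 1)  # posts per hour
    top_share = Counter(a for _, a in posts).most_common(1)[0][1] / len(posts)
    return rate > rate_threshold or top_share > dominance_threshold

print(looks_coordinated(posts))  # prints: True (fast spread, one dominant account)
```

The point of the exercise matches the DSA’s approach: the function never reads the content of any post, only the shape of its spread.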

The Centre’s current argument, however, ignores these tools and doubles down on a standard that has failed every time it has been applied, chiefly because it creates a legal landscape in which no one knows the rules until they have already broken them. Unless of course this is the Centre’s aim.

Posted in Analysis

The little things

Tungsten diboride (WB2) is extraordinarily stiff and resistant to deformation, and scientists have long suspected it could be a superhard material, meaning it scores at least 40 gigapascals (GPa) on a hardness test. This is important because diamond, the hardest natural material on Earth, scores 70-100 GPa but is so expensive that industries often turn to cubic boron nitride (45-60 GPa) as a substitute in tools to cut metals and ceramics. Another superhard material in the repertoire could be a good thing.

The problem is WB2 is also brittle: like glass, it is hard to scratch but easy to shatter. This is because its atoms are so strongly bonded to each other that the bulk crystal would sooner fracture than a few bonds yield. So researchers at the Southern University of Science and Technology, Shenzhen, doped WB2 with rhenium, a rare metal whose atoms have one more electron than tungsten atoms. This extra electron changes the way atoms pack together inside the crystal, coaxing the tungsten and rhenium atoms into leaving behind vacancies in the grid of atoms in a specific, repeating pattern.

These vacancies were arranged in ordered pairs along particular planes inside the crystal, allowing the atoms to ‘glide’ past each other when they were stressed, effectively allowing the crystal to bend a little to absorb pressure rather than bottle it up and break catastrophically later. The team measured this version of WB2 to have a hardness of 40 GPa, and for good measure the rhenium also raised the temperature at which the crystal oxidised by 700° C.

By removing a few atoms, the scientists effectively made the material a lot less brittle and better able to withstand heat. Little things like this are a reminder that what we know to be true at one scale or context does not necessarily hold true at all scales and contexts. And this is as true as an allegory as it is a scientific fact.

Social media platforms as well as TV news in India are rife these days with unfounded speculation and unsubstantiated claims, many of which extrapolate from small pools of information without a modicum of good faith or introspection as to whether what we already know may not suffice to describe or explain the things happening around us. I am for people using AI models in some enterprises but, as with cryptocurrencies, most of their more visible users have pressed them into the service of newfangled Ponzi schemes and scams and, curiously, into mouthing off about ‘revitalising’ physics research, so to speak, without stopping to think about what they do not know. Perhaps more importantly, to combine two famous lines associated with Richard Feynman and Freeman Dyson, there is always more room in all directions, and more, as Philip Warren Anderson wrote in 1972, is different. The BS is often manifest as accounts on X.com purporting to have ‘solved’ quantum gravity or to have resolved open questions in particle physics, which may be a scam as well insofar as they come off as efforts to privatise such research and feed it into the hype cycles of venture capitalists.

On a less dismaying note, new forms of organisation do not emerge because the fundamental laws of nature change but because, as with superhard tungsten diboride, groups of things can act together in ways that their individual members do not or because they have been exposed to new environments we have yet to encounter them in. There are other possibilities, too. For instance, these days I am regularly surprised by what scientists are finding out about things animals are capable of. Jane Goodall found chimpanzees use tools, and now it seems so can some fish, birds, and cows. I also understand that there are forms of emergence to be found in the study of societies, religions, history, and art. While it is obvious that the source of surprise always seems to be us not knowing enough while going in thinking we do, the prescient words of Anderson from that 1972 essay come to mind (let it be known that I will never tire of quoting him at length):

The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws the less relevance they seem to have to the very real problems of the rest of science, much less to those of society.

The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other. That is, it seems to me that one may array the sciences roughly linearly in a hierarchy, according to the idea: The elementary entities of science X obey the laws of science Y.

The arrogance of the particle physicist and his intensive research may be behind us (the discoverer of the positron said “the rest is chemistry”), but we have yet to recover from that of some molecular biologists, who seem determined to try to reduce everything about the human organism to “only” chemistry, from the common cold and all mental disease to the religious instinct. Surely there are more levels of organization between human ethology and DNA than there are between DNA and quantum electrodynamics, and each level can require a whole new conceptual structure.

In closing, I offer two examples from economics of what I hope to have said. Marx said that quantitative differences become qualitative ones, but a dialogue in Paris in the 1920’s sums it up even more clearly:

FITZGERALD: The rich are different from us.

HEMINGWAY: Yes, they have more money.

Posted in Culture, Scicomm, Science

The pleasures of rewatching

Daily writing prompt
What movies or TV series have you watched more than 5 times?

I’m afraid the answer is F.R.I.E.N.D.S. My sister and I watched it growing up, then rewatched it, then re-rewatched it, and now I have it playing in the background as I work. I have some problems with it but I’ve realised that memories of watching the show are in my head intertwined with good times with my family simply because they happened very close to each other — the same evening, for example — so thinking of one has often meant thinking of the other. I suppose I also like that all its seasons end on a happy note, which sometimes strains credulity but I think that’s been a small price to pay, these days, for some laughter and the knowledge that it will all end well.

As for films: I’ve watched several more than five times, but the ones I’ve rewatched the most are compilations of the actors Vadivelu and Goundamani. I’m not sure how familiar people beyond India are with how comedy scenes exist in our films. Watch a dozen or so featuring the best comic actors of Tamil cinema and you will realise they’re drop-ins — little skits or vignettes that an actor and his crew will have crafted to be connected loosely or strongly to the film’s narrative at large but which can often still be removed without much consequence.

This has also allowed studios, network operators, and others to compile scenes featuring a single actor across films into a single YouTube video, often a few hours long. And I’ve rewatched those featuring the talents of Vadivelu and Goundamani (separately) several hundred times. In fact, I’d wager there isn’t a scene featuring Vadivelu I haven’t watched, and that would likely go for most Tamilians. I also know his lines by heart and they haven’t become more boring with repetition. I only know the names of a few of the films, but in Vadivelu’s case everyone knows that doesn’t matter. His scenes often stand on their own.

Posted in Culture, Op-eds

On mathematics and reputation

From ‘Formalizing the stability of the two Higgs doublet model potential into Lean: identifying an error in the literature’, uploaded to arXiv on March 9, 2026:

Firstly, we believe this to be the first time a non-trivial error in a research level physics paper has been identified through the process of formal verification. … Secondly, this was one of the first research-level papers where formalization was attempted, and it was not chosen with the intention of finding an error, but rather because we thought the process of formalization would be easy and the likelihood of an error was low. From this one could make the worrying extrapolation that there are many such errors in the physics literature. It is also a strong motive for making formal verification the gold standard for physics papers.

This is interesting because I haven’t heard of published theoretical physics papers being flawed in the same way I’ve read about such papers in, say, psychology or behavioural economics. I am of course extrapolating from my own small knowledge base: theoretical physicists may find this surprising, although I haven’t seen signs of that. To the contrary, in fact.

I recently found out more about Lean and formalisation (for my piece in The Hindu about mathematicians’ efforts to formalise and then verify Maryna Viazovska’s work that won her the Fields Medal in 2022). Formalisation is the task of translating ‘human’ proofs of maths problems into the language of a machine in great detail; the language here is Lean. Conlon’s point is that if Lean could find errors in a relatively simple calculation, worse likely awaits in path integrals, which are the foundation of almost all modern quantum theory but also quite messy.
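For a taste of what formalisation looks like, here’s a toy Lean 4 fragment of my own (not from the paper; `Nat.add_comm` is a real core lemma, the theorem names are mine). The first proof is complete, while the second leaves a hole with `sorry`, which Lean flags loudly; in a prose proof, a gap like this can sit unnoticed for decades.

```lean
-- A complete, machine-checked proof: every step is justified.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A 'proof' with a hole: `sorry` is Lean's placeholder for an unproved
-- step, and the compiler warns about it until the gap is actually filled.
theorem has_a_gap (a b : Nat) : a * b = b * a := by
  sorry
```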

In the course of reading more about this, I came across a curious December 2024 post on the Xena Project (of “mathematicians learning Lean by doing”) blog by Imperial College London professor of pure mathematics Kevin Buzzard. He wrote that when some mathematicians were using Lean to check some advanced maths proofs, Lean found that a particular step in a 1965 paper by a mathematician named Norbert Roby appeared to be wrong. It mattered because a branch of mathematics called crystalline cohomology, which had been built on Roby’s work and used by many mathematicians since the 1970s, technically had a gap in its foundations.

But nobody thought the math was actually wrong, only that the written proof had a hole in it. However, in formal mathematics, a result being ‘probably fine’ isn’t good enough; as Buzzard put it, “you have to actually fix it” rather than rest on the idea that it’s fixable. At some point, the noted American mathematician Brian Conrad caught wind of the mistake and, after looking into it, found a fix: he knew that a different proof of the same result existed in the appendix of a book by Pierre Berthelot and Arthur Ogus. So a crisis was averted. But then came the twist: Buzzard later had lunch with Ogus and gleefully told him how his work had saved the day. Ogus’s response: “Oh, that appendix has several errors in it. But I think I know how to fix them.”

Buzzard also mentioned one defence of mathematical ideas that had problems at their foundations that struck me, too — that “crystalline cohomology has been used so much since the 1970s that if there were a problem with it, it would have come to light a long time ago” — but UniDistance Suisse mathematics professor David Loeffler countered it in the comments saying the risk is that mathematicians might be placing too much stock in the idea that something can be fixed when it really may not be that way, and could even be “collectively wrong in their estimation”. How could this be possible?

I’m learning the answers have to do with how mathematicians work together. For instance, when they productively use some framework, like crystalline cohomology, for decades to generate papers and careers, they also create an enormous social pressure rooted in the idea that the framework works. They also assume — reasonably — that the framework has already survived the questions that could have revealed flaws in its foundations. But as Buzzard’s and Loeffler’s exchange shows, while such a question has likely already come along, there’s no guarantee that it has. Moreover, even if there’s a flaw, it might show itself only in particular applications, and the rest of the time the framework can appear to be ‘functioning’ as expected.

(This to me also speaks to the unsuitability of using buildings or similar structures in the real, physical world as metaphors to communicate the nature of such frameworks — at least not without also drawing on, say, the idea that those particular applications don’t stress the framework’s specific weak points.)

Then of course there’s the problem with how mathematicians build on each other’s work, which is a problem with peer review as well. The norm is to verify that argument B follows in valid ways from argument A, not to check whether argument A is itself valid, nor to derive argument B from scratch. This way an error in one old paper can spread through citations for many years, with each subsequent mathematician correctly reasoning from a flawed starting point.

This is reminiscent of the Schön scandal in the 2000s. The German physicist Jan Hendrik Schön fabricated data on organic superconductors and field-effect transistors but by the time his fraud came to light, other groups had begun building on his results, including designing experiments premised on his findings being real. So when Schön was ultimately exposed, a not insubstantial chunk of the field had to be unwound. For an even more dramatic example from history: the HeLa cells from Henrietta Lacks are extraordinarily robust and, it turned out, had silently contaminated a large fraction of cell cultures in labs worldwide from the 1950s onward. So researchers who believed they were studying prostate cancer cells, breast cancer cells or other lines were in many cases actually studying HeLa cells. But unlike the case with zombie citations, in both cases the ‘secondary’ scientists had no idea they were studying a wrong thing.

I used these examples because it seems the ways in which mathematicians fail could be the same ways in which scientists more broadly fail: due to common blindspots, foundational assumptions that nobody thinks to double-check (or which they assume others have checked), social structures that reward ‘progress’ more than other processes like auditing, and, overall, by underestimating the importance of social forces to the way scientists and mathematicians organise and share their work. Lean et al. are, or formalisation more broadly is, thus forcing mathematicians to look past these assumptions and, as the arXiv paper’s author Joseph Tooby-Smith wrote, that they’re already finding gaps in fundamental ideas suggests the foundations of rational inquiry may be somewhat less certain than a discipline’s reputation alone might imply.

Posted in Science

Measuring science stories

Google News picks up on science stories that many outlets are covering. Its reasoning is that the more outlets publish a particular story, the more reader interest the story has. However, the flaw here is that news outlets don’t evaluate all kinds of science developments on an equal footing, nor do they always focus on reader interest. (Indeed, news outlets often don’t select stories for reader interest in the first place; they select stories for the reasons described below, then work in the reader interest.)

Outlets focus on those that they can understand or which they can cover for a lower cost. The former is almost always a major development — which is rare in science, as research is fundamentally incremental — or a finding that has been misreported in a university press release or in fact at the journal itself, e.g. if the paper title is itself oversimplified.

The latter — findings that can be covered at a lower cost — are typically simple, whose significance or wonderfulness is easy to communicate, e.g. “astronomers produce the largest image ever taken of the heart of the Milky Way” or “with lunar missions looming, scientists grow chickpeas in ‘moon dirt’”.

Altogether, the science stories the press has focused on have systematically avoided more involved topics or ideas, those that can’t be communicated easily, and those that require some expense (e.g. the services of a veteran reporter or freelancer, of a graphics team, etc.). Since Google News, and Google Discover by extension — which is also driven by what readers are interested in — drive a lot of traffic to news websites, and page views and unique users remain the metrics of choice at these websites, Google News/Discover also aggregate and create a preference among publishers for relatively uncomplicated science stories and ideas.

Which means when we pursue stories that are complicated or interesting in a way that allows us to tell a unique story, we shouldn’t expect them to draw readers from Google News/Discover nor focus on page views or unique users. Instead, we’re better off focusing on readers’ average time on page.

Posted in Analysis, Scicomm

Review: ‘Decolonial Keywords’ (2026)

Everyone who knows me knows that my intellectual coordinates are defined by scientific ideas, even when they’re about sociology or the humanities. This is why I found a new book, Decolonial Keywords: South Asian Thoughts and Attitudes, edited by anthropologists Renny Thomas and Sasanka Perera, so compelling. The book has 30 chapters written by 33 people, each one exploring the oft-hidden colonial undertones of words in everyday Indian English, and by extension documenting how deceptively treacherous the task of decolonialising the things the words refer to is — and many of them intersect with science in practice.

Indeed my own entry point into this book was half my general interest in Renny’s work, which to an amateur historian of science like me has been constantly insightful, and half my long-standing frustrations with how India and the Indian state commemorate science. On the occasion of National Science Day, which is today, I had an op-ed published in The Hindu on February 26 on why decolonialising science in India also requires Indians to “de-Nobelise” science, including shedding their fondness for individual geniuses in favour of the collective labour that science actually needs to function. Excerpt:

The keywords … clarify what a de-Nobelised imagination of science, paralleling the decolonisation of science, would require. It would force India to ask how Indians produce the thing called ‘recognition’ — through discoveries and papers as much as by institutions that sort labour into celebrated and hidden.

National Science Day, then, should not simply reproduce a Nobel-shaped story about genius and external validation. It should become an annual day of discussion of what counts as science, including the work of technicians, field staff, nurses, lab attendants, data collectors, and others whose labour is essential to make new knowledge but is rarely commemorated.

Good scientific practice requires us to regularly recalibrate our instruments to make sure they haven’t become less precise. Language, Decolonial Keywords shows, works the same way, and we need to constantly recalibrate it for the same reasons.

For example, a mind accustomed to scientists’ oft-universalist claims will find the book unsettling because of how consistently it exposes such universalism to be a hoax. In her chapter, Centre for the Study of Developing Societies political theorist Prathama Banerjee explores the idea of “shunya”. The global history of mathematics celebrates this entity, commonly equated with zero, as India’s gift to the world — a numerical placeholder that liberated mathematics from physically counting objects and eventually made calculus and modern computing possible. But if you keep reading, you’ll find that “shunya” was originally a profound ontological concept in Buddhist philosophy, an expression of emptiness and the absence of a permanent ‘self’. And that when modern mathematics extracted the concept, it discarded the philosophical attachments, effectively stripping the word of its ability to critique social hierarchies like caste, which in fact banks on the illusion of a permanent ‘self’.

In addition to the book’s chapters on ‘jugaad’, ‘poromboke’, and ‘laboratory’, which I tried to explore in my piece, the same theme is also on display in the chapter on “Igu”, the shaman of the Idu Mishmi people in Arunachal Pradesh, especially the tension between Western scientific taxonomy and indigenous ecological networks, written by Ambika Aiyadurai and Razzeko Delley, and the chapter on “Adivasiyat” by Roshan Praveen Xalxo.

Under the gaze of either modern medicine or conservation biology, a shaman comes across as a psychological curiosity and indigenous land rights as a consequence of politics. However, as Aiyadurai, Delley, and Xalxo set out, the words “Igu” and “Adivasiyat” really evoke a “multispecies world” or a “multibeing cosmos” — recalling the writing of anthropologist Anna Tsing in 2013 — where rivers and spirits participate in making and maintaining the ecological network. And we don’t have to abandon the scientific method to recognise that these indigenous vocabularies offer a sophisticated and importantly localised understanding of an environmental balance that the technocratic and extractivist models of the modern Indian state are themselves abandoning.

My natural scepticism sometimes (and only sometimes) flares up when I find the word “decolonial” because too often these days, and almost always in certain political contexts, “decolonialising science” in India has become a Trojan horse for right-wing nativism, where mythological allegories are retrofitted as ‘ancient’ quantum physics and surgery. But to their credit, Thomas and Perera and the chapters’ various authors are acutely aware of this danger and make honest attempts to sidestep it. For example, Harshana Rambukwella’s chapter on “Chinthanaya”, the Sinhala term for “thought” or “indigenous epistemology”, is careful to separate its origins as an anti-colonial concept from how the island country’s majoritarian nationalists weaponised it during the COVID-19 pandemic to push some medical professionals to promote one charlatan’s “divine syrup” as a cure.

Decolonial Keywords is a dense book steeped in the theoretical frameworks of history, sociology, anthropology, and linguistics. The chapters dealing with the literary nuances of medieval poetry and the exact etymological roots of regional dialects in particular require quite a bit of patience — but the intellectual payoff is guaranteed. It’s also nice to have critical work like Decolonial Keywords that presents morsels of analysis and perspectives on a variety of topics, because critical work in this field generally arrives as an entire book on a single topic.

Posted in Analysis, Culture, Science

Would a quantum battery charge faster than a classical one?

A quantum battery is a system that stores energy and whose working parts are quantum systems, such as atoms, ions, spins, superconducting circuits or quantum dots, so the processes of storing and extracting energy are governed by quantum mechanics.

Imagine you have a row of toy boxes. Each box can either be empty or have one object inside (like a toy). Your job is to fill the boxes as fast as you can using a machine that can put toys into boxes.

When a qubit is in its low-energy state, it’s like an empty box. When it is in its higher-energy state, it’s like a full box. When all the qubits go from low- to high-energy, the whole setup stores some energy.

Now, if you have many boxes, scientists have found that you can fill them faster using a quantum trick, instead of filling each box one by one.

Say you have 12 boxes on a table. You point at box 1 and put in a toy. Then you point at box 2 and put in a toy. And so on. You can try to do it quickly but you’re still basically doing one box at a time.

Scientists recently reported an experiment that this simple tale is a metaphor for. It consisted of 12 qubits (short for ‘quantum bits’ — the smallest logical pieces of a quantum computer).

One arm of the experiment drove each qubit locally, i.e. each one got its own little push. The study called this the classical baseline.

Now, imagine you have a different machine that, instead of filling one box at a time, fills two neighbouring boxes together in a single move.

So it does something like fill boxes 1 and 2 together, then fill boxes 2 and 3 together, then fill boxes 3 and 4 together, and so on.

The machine still isn’t filling all 12 at once but because it can create pairs of fills together, the filling can become more collective, like a wave of filling through the row.

In the experiment, the scientists used a special kind of interaction where two excitations are created together on neighbouring qubits. As a result these two neighbours tended to flip together, going from empty-empty to full-full.

(To achieve this, the team used a technique called parametric modulation.)
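To make the pair-flip idea concrete, here’s a minimal numerical sketch in Python — with a toy Hamiltonian I’ve assumed for illustration, not the experiment’s actual one — of two qubits coupled so that the empty-empty state talks directly to the full-full state:

```python
import numpy as np

# Toy "pair-creation" coupling g(|11><00| + |00><11|), the kind of term
# parametric modulation can engineer. Basis order: |00>, |01>, |10>, |11>.
g = 1.0
H = np.zeros((4, 4), dtype=complex)
H[3, 0] = H[0, 3] = g  # couples empty-empty directly to full-full

def evolve(psi0, t):
    """Evolve a state under H for time t (hbar = 1) via eigendecomposition."""
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U @ psi0

psi0 = np.array([1, 0, 0, 0], dtype=complex)  # both qubits empty
psi = evolve(psi0, np.pi / (2 * g))           # a quarter oscillation period

probs = np.abs(psi) ** 2
# All the population lands in |11>: the boxes fill together, never one alone.
print(np.round(probs, 6))  # → [0. 0. 0. 1.]
```

Note how the in-between states |01> and |10> never get any population: with this coupling the two neighbours can only flip as a pair.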

Now, say two kids, A and B, are filling boxes in these two different ways. 

You’re trying to check which kid fills all the boxes faster.

If Kid B, who’s using the pair filling technique, only wins because their tool is stronger, that’s not interesting. The interesting claim is that even with fair tools, the pair-filling technique can store energy faster.

In a quantum device, not all stored energy is equally extractable as useful work. Instead the study uses a standard concept called ergotropy, which is the part of the energy that you can, in principle, extract as useful work with allowed operations.

For our metaphor, you can treat it as the amount of real charge you put in the boxes.
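Ergotropy has a standard textbook recipe: compare the state’s energy with that of its “passive” rearrangement, in which the largest populations sit in the lowest energy levels. A minimal sketch of that recipe (a generic implementation, not the study’s code):

```python
import numpy as np

def ergotropy(rho, H):
    """Energy of rho minus the energy of its passive state.

    The passive state pairs rho's largest populations with H's lowest
    energy levels, so no further work can be extracted unitarily.
    """
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]  # populations, descending
    eps = np.sort(np.linalg.eigvalsh(H))        # energies, ascending
    passive_energy = np.sum(r * eps)
    return np.real(np.trace(rho @ H)) - passive_energy

# One qubit with excited-state energy 1:
H = np.diag([0.0, 1.0])
rho_charged = np.diag([0.0, 1.0])  # fully excited: all energy extractable
rho_mixed = np.diag([0.5, 0.5])    # maximally mixed: nothing extractable
print(ergotropy(rho_charged, H))   # → 1.0
print(ergotropy(rho_mixed, H))     # → 0.0
```

The maximally mixed example is the “hot but useless” case: the box holds energy on average, but none of it counts as real charge you can take back out.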

Then the scientists calculated the average charging power, i.e. how much useful energy got stored per unit time.

They did this for batteries of different sizes: 2 boxes, 3 boxes, … up to 12 boxes.

They found that the pair-filling, i.e. quantum, method could achieve a higher optimal charging power than the classical baseline, and that the advantage tended to grow as the number of qubits increased.

They also reported that the optimal charging time window was very short, on the order of tenths of a microsecond.

This means Kid B has a short interval in which they can fill boxes very efficiently, and that interval stays around the same short length once there are several boxes.

But the scientists don’t just say their quantum way is faster. They also show that it’s faster for the reason they claim.

They measured the correlations between neighbours — i.e. whether excitations (or full boxes) appeared together more often than they’d expect if each box was independent.

In the classical way, they expected a neutral value, meaning no togetherness. For the quantum way, they expected more togetherness during the burst of charging.

They reported evidence consistent with the latter: in the quantum way, the neighbour-neighbour correlation indicator showed more paired behaviour in the same short window when the charging power peaked.
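As a toy illustration of what such a check looks like — a generic statistical sketch, not the study’s actual estimator — you can simulate measurement records for the two filling styles and compute the neighbour-neighbour correlator:

```python
import numpy as np

rng = np.random.default_rng(0)
shots = 100_000
p = 0.5  # probability a box ends up full

# Independent ("classical") filling: each box flips on its own.
a_ind = rng.random(shots) < p
b_ind = rng.random(shots) < p

# Pair filling: neighbours flip together, so their outcomes are shared.
both = rng.random(shots) < p
a_pair, b_pair = both, both

def connected_correlator(a, b):
    """<n1 n2> - <n1><n2>: zero for independent boxes, positive for paired."""
    return np.mean(a & b) - np.mean(a) * np.mean(b)

print(round(connected_correlator(a_ind, b_ind), 3))    # ≈ 0.0
print(round(connected_correlator(a_pair, b_pair), 3))  # ≈ 0.25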

So where is the quantumness that provides this advantage?

In the normal world, a box is either empty or not empty. It’s one or the other.

In the quantum world, an object can also be in a special in-between condition — and not just because you don’t know what’s in the box. It’s a real physical kind of in-between that scientists call coherence.

When many quantum objects interact, they can also become linked in a way that makes their joint state not just the equivalent of ‘each box has its own toy’ but of ‘the whole set is described together’. This is called entanglement.

The study tried to show that during charging, the system wasn’t merely populating its excited states: it was also creating coherence and entanglement. A purely classical process can’t do this; quantumness must have been involved.

The scientists did this by measuring how many qubits were excited versus how many weren’t. Then they measured the total usable stored energy (ergotropy). Whatever was left after subtracting the plain part was the quantum-like part.

Finally, they checked whether the qubits were becoming entangled with each other, instead of acting independently. They did this by collecting measurement data, computing a quantity that has a clear rule, then saying from that whether the qubits could be entangled.

For instance, if the rule says no unlinked system can score above 10 on this test. So if the scientists measured 12, the system has to be entangled.

The scientists effectively showed that if they design the tool correctly and compare them fairly, the quantum way could flip more qubits than the classical way for up to 12 qubits, and not just by flipping the qubits one after the other faster.

So what’s the big deal?

I don’t know. It’s just fascinating.

The study was published in Physical Review Letters on February 9.

Featured image: A visual representation of the ‘quantum battery’ used in the study. It’s encoded in a 16-qubit lattice, 12 of which were activated for the experiment. Credit: arXiv:2602.08610v1.

Posted in Scicomm | Tagged , , , , , , | Leave a comment

Epstein or otherwise, Silicon Valley’s techno-elite know what they’re doing

The science writer Philip Ball has described “nerd tunnel vision” as the rationalisation scientists who maintained ties with Jeffrey Epstein after his 2008 conviction for soliciting underage sex offered, hinting at something more calculated than just oversight. “Nerd tunnel vision is a defining feature of much of the Edge discourse,” Ball wrote, referring to Epstein consort John’s Brockman’s salon for “Third Culture” intellectualism: “moral obtuseness; a determination to win the argument rather than to listen and ponder; a tendency to fabulate improbable futures from narrow ‘rational’ logic; ignorance of and contempt for other ways of seeing the world.”

Ball is in effect describing not a people that’s unaware or who failed to notice but a people who deliberately, with eyes wide open, chose what matters to them — from Lawrence Krauss, Marvin Minsky, and Robert Trivers to Joichi Ito and Peter Thiel. This isn’t naïveté so much as sophisticated actors making sophisticated calculations about what they can get away with.

As Epstein’s ties to more and more scientists, technologists, and venture capitalists has become apparent, there’s also a diagnosis doing the rounds that Silicon Valley’s techno-elites, a.k.a. the “tech-bros”, are simply mistaken in their embrace of topics purportedly close to Epstein’s heart, including transhumanism, longevity research, and what increasingly looks like repackaged eugenics. This diagnosis flatters us — the diagnosticians — by positioning us as the clear-eyed ones who saw the cautionary tales from a bygone era for what they were. But the science-bros and techno-libertarian elite (or TLE for short) saw them too, and proceeded to run a different cost-benefit analysis from what others did.

To see why, it’s important to see first that the story Silicon Valley tells about itself — of garage startups and disruption — obscures a more troubling, if also equally deliberate, genealogy. Computer scientist Timnit Gebru and philosopher Émile Torres coined the label ‘TESCREAL’ as a critical construct to describe an overlapping cluster of ideologies, many of which sank roots in the 1990s, that Silicon Valley embraces today: transhumanism, extropianism, singularitarianism, cosmism, (Bay Area internet) rationalism, effective altruism, and longtermism.

Extropianism and organised transhumanism have been adjacent to the Bay Area for a while, with newsletters, salons, and institutes linking human enhancement and “self-transformation” to an explicitly technologist ethos liberated from worrying about limits, whether material or social. This worldview fit neatly with a Valley culture already comfortable with narratives of radical innovation and libertarian politics. In the 2000s, singularitarian ideas also moved from niche futurist conclaves into mainstream tech discourse via high-profile evangelists and Silicon Valley institutions. The 2010s saw rationalist and effective altruist networks overlap with AI labs, venture capital, and philanthropy, specialising in translating moral philosophy and speculative technical futures into funding priorities and institutional agendas. By the 2020s, once frontier AI became the Valley’s central product, these once-semi-separate strands of thought started to resemble a unified milieu.

This setup also has a prehistory that further complicates any argument that TLEs have just been stumbling in the dark when they make choices we won’t. One part of the prehistory is the mid-20th-century scientific elitism that shaded into eugenics. William Shockley helped invent the first semiconductors and transistors and is a  foundational figure tied to Silicon Valley’s early industrial formation, and later became publicly associated with racist and eugenicist claims while at Stanford University. While Shockley’s views were on the fringe even then, transhumanism’s closest antecedent is Anglo-American eugenics (the term was first used by British eugenicist Julian Huxley). So when Nick Bostrom — a central figure in these movements and whose work at Oxford University’s Future of Humanity Institute has explicitly noted its pull on Silicon Valley elites and funders — was revealed in 2023 to have sent emails in 1996 stating his belief in racial differences in intelligence, should we treat this as an aberration or as a data point in a larger pattern?

A second wave of the prehistory, from the late 1980s to the 2000s, is the Silicon Valley’s own flavour of transhumanism, especially in the form of cryonics, what it called “morphological freedom”, and brain-computer futures. The Extropy Institute was an early node in this movement and its idioms fit Silicon Valley’s entrepreneurial culture of working without constraints. By the mid- to late-2000s, ‘singularity’ — the hypothetical moment in future when AI surpasses human intelligence and triggers rapid, uncontrollable technological growth — also became a popular rallying point. The third and final wave kicked on in the 2010s and is still surging: from just talking about living forever, the tech bros set up labs, biotech pipelines, and ecosystems for “consumer biohacking”, emblematised by Alphabet’s Calico Labs. In the 2020s, finally, conversations about reproduction and eugenics moved from being fringe rhetoric to ‘gray-zone’ products and venture-backed firms.

Today there are companies marketing expanded embryo screening not just for severe disease risks but for probabilistic traits, including — controversially — cognitive outcomes. One October 2024 investigation by The Guardian described a US startup selling embryo screening framed around gains in IQ and “liberal eugenics” concerns, essentially making dubious genetic advantages selectable for those who could pay. A November 2025 report in the Wall Street Journal described another San Francisco startup pursuing embryo gene-editing research despite legal prohibitions, backed by prominent tech investors and looking for permissive jurisdictions.

None of these are decisions born of being unaware, to be sure. The TLEs may believe what appears dystopian through the moral lens of the 2020s could become normalised or even celebrated by society of the 2040s. Or they understand these are cautionary tales but believe the aspects warranting caution are either exaggerated or can be managed with better execution. Many of these figures are also staunch materialists and techno-determinists who don’t harbour the humanistic assumptions underlying most science fiction writing. When Aldous Huxley warns them about a society engineering away suffering using pleasure, surveillance, control, and punishment, they may genuinely see that as solving a problem rather than creating one. This is because the caution depends on valuing things like struggle, authenticity, and inefficiency, of which they’re usually dismissive. Which is why Mark Zuckerberg spending $10 billion to attempt to create the ‘Metaverse’ isn’t a failure of imagination but the success of a different imagination.

Some TLEs also recognise exactly where this leads and view the resulting instability, disruption, and concentration of power as features rather than bugs because periods of chaos create opportunities for those positioned to profit from it. They might also believe that by being aware of the cautionary tales they’ve inoculated themselves against the specific failure modes the tales came with: “We’ve read 1984 so obviously we won’t make those mistakes.” This is of course hubris but importantly it’s not ignorance.

Which brings us back to Jeffrey Epstein and the scientists who orbited him: his connections, the TESCREAL ideologies, investments in longevity and embryo selection startups,  the pronatalist conferences, the eugenicist discourse — none of them was a separate issue. They’re just different manifestations of the same underlying orientation: to treat human ‘limits’ as engineering problems, then fund private bets to overcome them, with little regard for what social harms they accrue along the way.

Critics have also argued that these philosophies have encouraged TLEs to shift attention away from solving present humanitarian issues and towards speculative futures. This appeal works on Silicon Valley elites who fund institutes dedicated to this thinking because it allows them to frame their anxieties about death, intelligence, biological limits, and control as moral imperatives that transcend democratic deliberation. The longtermism of MacAskill of which Musk is so fond contextualises efforts in terms of billions of humans not yet born: how convenient, then, that those billions can’t vote, can’t organise, and can’t contradict the projections made on their behalf.

The pitfall in believing the TLEs are making the choices they are simply because they don’t know what we know is that it excuses us from confronting the possibility that they’ve concluded that the engineered, surveilled, controlled, stratified future they envision is in fact the entire point. Perhaps they’ve decided that when history is written by the posthuman victors, today’s cautionary tales will look like Luddite panic. Perhaps they’ve calculated that by the time the negative externalities become undeniable, they’ll already have captured enough of the gains to insulate themselves from the consequences. Or maybe they genuinely believe they’re doing good, that longer lifespans and enhanced intelligence and space colonies are moral imperatives, that anyone who can’t see this is simply thinking too small. In which case we’re not dealing with cynicism but with a totalising ideology that has convinced itself it holds the keys to human flourishing, and the fact that this ideology concentrates benefits among people who already have the most power is treated as a happy coincidence.

That doesn’t mean we must ban all life extension research or stop developing AI — but we should stop treating these as purely technical pursuits and instead recognise that every choice about what to study is also a choice about what kind of future we want and who gets to decide. We should insist that technological sovereignty isn’t just the capacity to build things but the capacity to deliberate together about whether we should build them at all. And, finally, we should stop giving these actors the benefit of the doubt. They’re not naïve. They’re not mistaken. They understand perfectly well what they’re building and they’re building it anyway.

Posted in Analysis, Culture | Tagged , , , , , , , , , , , , , , | Leave a comment