Tungsten diboride (WB2) is extraordinarily stiff and resistant to deformation, and scientists have long suspected it could be a superhard material, meaning it scores at least 40 gigapascals (GPa) on a hardness test. This is important because diamond, the hardest natural material on Earth, scores 70-100 GPa but is so expensive that industries often turn to cubic boron nitride (45-60 GPa) as a substitute in tools to cut metals and ceramics. Another superhard material in the repertoire could be a good thing. The problem is WB2 is also brittle: like glass, it is hard to scratch but easy to shatter. This is because its atoms are so strongly bonded to each other that the bulk crystal would sooner fracture than let a few bonds yield. So researchers at the Southern University of Science and Technology in Shenzhen doped WB2 with rhenium, a rare metal whose atoms have one more electron each than tungsten atoms do. This extra electron changes the way atoms pack together inside a crystal, coaxing the tungsten and rhenium atoms into leaving behind vacancies in the grid of atoms in a specific, repeating pattern. These vacancies were arranged in ordered pairs along particular planes inside the crystal, which allowed the atoms to ‘glide’ past each other when they were stressed, effectively allowing the crystal to bend a little to absorb pressure rather than bottle it up and break catastrophically later. The team measured this version of WB2 to have a hardness of 40 GPa, and for good measure the rhenium also increased the temperature at which the crystal oxidised by 700° C.
By removing a few atoms the scientists effectively made the material a lot less brittle and better able to withstand heat. Little things like this are a reminder that what we know to be true at one scale or context does not necessarily hold true at all scales and contexts. And this is as true as an allegory as it is a scientific fact. Social media platforms as well as TV news in India are rife these days with unfounded speculation and unsubstantiated claims, many of which extrapolate from small pools of information without a modicum of good faith or introspection as to whether what we already know may not suffice to describe or explain the things happening around us. I am for people using AI models in some enterprises but, as with cryptocurrencies, most of their more visible users have pressed them into the service of newfangled Ponzi schemes and scams and, curiously, to mouth off about ‘revitalising’ physics research, so to speak, without stopping to think about what they do not know and, perhaps more importantly, the possibility that, to combine two famous lines associated with Richard Feynman and Freeman Dyson, there is always more room in all directions and more — as Philip Warren Anderson wrote in 1972 — is different. The BS is often manifest as accounts on X.com purporting to have ‘solved’ quantum gravity or to have resolved open questions in particle physics, which may be a scam as well insofar as they come off as efforts to privatise such research and have it enter the hype cycles of venture capitalists.
On a less dismaying note, new forms of organisation do not emerge because the fundamental laws of nature change but because, as with superhard tungsten diboride, groups of things can act together in ways that their individual members do not or because they have been exposed to new environments we have yet to encounter them in. There are other possibilities, too. For instance, these days I am regularly surprised by what scientists are finding out about things animals are capable of. Jane Goodall found chimpanzees use tools, and now it seems so can some fish, birds, and cows. I also understand that there are forms of emergence to be found in the study of societies, religions, history, and art. While it is obvious that the source of surprise always seems to be us not knowing enough while going in thinking we do, the prescient words of Anderson from that 1972 essay come to mind (let it be known that I will never tire of quoting him at length):
The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws the less relevance they seem to have to the very real problems of the rest of science, much less to those of society.
The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other. That is, it seems to me that one may array the sciences roughly linearly in a hierarchy, according to the idea: The elementary entities of science X obey the laws of science Y.
…
The arrogance of the particle physicist and his intensive research may be behind us (the discoverer of the positron said “the rest is chemistry”), but we have yet to recover from that of some molecular biologists, who seem determined to try to reduce everything about the human organism to “only” chemistry, from the common cold and all mental disease to the religious instinct. Surely there are more levels of organization between human ethology and DNA than there are between DNA and quantum electrodynamics, and each level can require a whole new conceptual structure.
In closing, I offer two examples from economics of what I hope to have said. Marx said that quantitative differences become qualitative ones, but a dialogue in Paris in the 1920’s sums it up even more clearly:
I’m afraid the answer is F.R.I.E.N.D.S. My sister and I watched it growing up, then rewatched it, then re-rewatched it, and now I have it playing in the background as I work. I have some problems with it but I’ve realised that memories of watching the show are in my head intertwined with good times with my family simply because they happened very close to each other — the same evening, for example — so thinking of one has often meant thinking of the other. I suppose I also like that all its seasons end on a happy note, which sometimes strains credulity but I think that’s been a small price to pay, these days, for some laughter and the knowledge that it will all end well.
As for films: I’ve watched several more than five times, but the ones I’ve rewatched the most are compilations of the actors Vadivelu and Goundamani. I’m not sure how familiar people beyond India are with how comedy scenes exist in our films. Watch a dozen or so featuring the best comic actors of Tamil cinema and you will realise they’re drop-ins — little skits or vignettes that an actor and his crew will have crafted to be connected loosely or strongly to the film’s narrative at large but which can often still be removed without much consequence.
This has also allowed studios, network operators, and others to compile scenes featuring a single actor across films into a single YouTube video, often a few hours long. And I’ve rewatched those featuring the talents of Vadivelu and Goundamani (separately) several hundred times. In fact, I’d wager there isn’t a scene featuring Vadivelu I haven’t watched, and that would likely go for most Tamilians. I also know his lines by heart and they haven’t become more boring with repetition. I only know the names of a few of the films, but in Vadivelu’s case everyone knows that doesn’t matter. His scenes often stand on their own.
Firstly, we believe this to be the first time a non-trivial error in a research level physics paper has been identified through the process of formal verification. … Secondly, this was one of the first research-level papers where formalization was attempted, and it was not chosen with the intention of finding an error, but rather because we thought the process of formalization would be easy and the likelihood of an error was low. From this one could make the worrying extrapolation that there are many such errors in the physics literature. It is also a strong motive for making formal verification the gold standard for physics papers.
This is interesting because I haven’t heard of published theoretical physics papers being flawed in the same way I’ve read about such papers in, say, psychology or behavioural economics. I am of course extrapolating from my admittedly small knowledge base: theoretical physicists may find this surprising, although I haven’t seen signs of that. To the contrary, in fact.
Wait till mathematicians and automated proof checkers hear about path integrals https://t.co/WuEPCCDabC
I recently found out more about Lean and formalisation (for my piece in The Hindu about mathematicians’ efforts to formalise and then verify Maryna Viazovska’s work that won her the Fields Medal in 2022). Formalisation is the task of translating ‘human’ proofs of maths problems into the language of a machine in great detail; the language here is Lean. Conlon’s point is that if Lean found errors in a relatively simple calculation, it’d be in for worse in the face of path integrals, which are the foundation of almost all modern quantum theory but also quite messy.
In the course of reading more about this, I came across a curious December 2024 post on the Xena Project (of “mathematicians learning Lean by doing”) blog by Imperial College London professor of pure mathematics Kevin Buzzard. He wrote that when some mathematicians were using Lean to check some advanced maths proofs, Lean found that a particular step in a 1965 paper by a mathematician named Norbert Roby appeared to be wrong. It mattered because a branch of mathematics called crystalline cohomology, which had been built on Roby’s work and used by many mathematicians since the 1970s, technically had a gap in its foundations.
But nobody thought the math was actually wrong, only that the written proof had a hole in it. However, in formal mathematics, a result being ‘probably fine’ isn’t good enough; as Buzzard put it, “you have to actually fix it” rather than rest on the idea that it’s fixable. At some point, the noted American mathematician Brian Conrad caught wind of the mistake and, after looking into it, found a fix: he knew that a different proof of the same result existed in the appendix of a book by Pierre Berthelot and Arthur Ogus. So a crisis was averted. But then came the twist: Buzzard later had lunch with Ogus and gleefully told him how his work had saved the day. Ogus’s response: “Oh, that appendix has several errors in it. But I think I know how to fix them.”
Buzzard also mentioned one defence of mathematical ideas that had problems at their foundations that struck me, too — that “crystalline cohomology has been used so much since the 1970s that if there were a problem with it, it would have come to light a long time ago” — but UniDistance Suisse mathematics professor David Loeffler countered it in the comments saying the risk is that mathematicians might be placing too much stock in the idea that something can be fixed when it really may not be that way, and could even be “collectively wrong in their estimation”. How could this be possible?
I’m learning the answers have to do with how mathematicians work together. For instance, when they productively use some framework, like crystalline cohomology, for decades to generate papers and careers, they also create an enormous social pressure rooted in the idea that the framework works. They also assume — reasonably — that the framework has already survived the questions that could have revealed flaws in its foundations. But as Buzzard’s and Loeffler’s exchange shows, it’s likely that such a question has already come along, but there’s no guarantee. Moreover, even if there’s a flaw, it might show itself only in particular applications, and the rest of the time the framework can appear to be ‘functioning’ as expected.
(To me this also speaks to the unsuitability of using buildings or similar structures in the real, physical world as metaphors to communicate the nature of such frameworks — at least without also drawing on, say, the idea that those particular applications don’t stress the framework’s specific weak points.)
Then of course there’s the problem with how mathematicians build on each other’s work, which is a problem with peer review as well. The norm is to verify that argument B follows in valid ways from argument A, not to check whether argument A is itself valid, nor to derive argument B from scratch. This way an error in one old paper can spread through citations for many years, with each subsequent mathematician correctly reasoning from a flawed starting point.
This is reminiscent of the Schön scandal in the 2000s. The German physicist Jan Hendrik Schön fabricated data on organic superconductors and field-effect transistors but by the time his fraud came to light, other groups had begun building on his results, including designing experiments premised on his findings being real. So when Schön was ultimately exposed, a not insubstantial chunk of the field had to be unwound. For an even more dramatic example from history: the HeLa cells from Henrietta Lacks are extraordinarily robust and, it turned out, had silently contaminated a large fraction of cell cultures in labs worldwide from the 1950s onward. So researchers who believed they were studying prostate cancer cells, breast cancer cells or other lines were in many cases actually studying HeLa cells. But unlike the case with zombie citations, in both cases the ‘secondary’ scientists had no idea they were studying the wrong thing.
I used these examples because it seems the ways in which mathematicians fail could be the same ways in which scientists more broadly fail: due to common blindspots, foundational assumptions that nobody thinks to double-check (or which they assume others have checked), social structures that reward ‘progress’ more than other processes like auditing, and, overall, by underestimating the importance of social forces to the way scientists and mathematicians organise and share their work. Lean et al., or formalisation more broadly, are thus forcing mathematicians to look past these assumptions — and, as the arXiv paper’s author Joseph Tooby-Smith wrote, the fact that they’re already finding gaps in fundamental ideas suggests the foundations of rational inquiry may be somewhat less certain than a discipline’s reputation alone might imply.
Google News picks up on science stories that many outlets are covering. Its reasoning is that the more outlets publish a particular story, the more reader interest the story has. However, the flaw here is that news outlets don’t evaluate all kinds of science developments on an equal footing, nor do they always focus on reader interest. (The latter especially, since news outlets often don’t select stories for reader interest; instead they select stories for the reasons described below, then work in the reader interest.)
Outlets focus on those that they can understand or which they can cover for a lower cost. The former is almost always a major development — which is rare in science, as research is fundamentally incremental — or a finding that has been misreported in a university press release or in fact at the journal itself, e.g. if the paper title is itself oversimplified.
The latter — findings that can be covered at a lower cost — are typically simple, whose significance or wonderfulness is easy to communicate, e.g. “astronomers produce the largest image ever taken of the heart of the Milky Way” or “with lunar missions looming, scientists grow chickpeas in ‘moon dirt’”.
Altogether, the science stories the press has focused on have systematically avoided more involved topics or ideas, those that can’t be communicated easily, and those that require some expense (e.g. the services of a veteran reporter or freelancer, of a graphics team, etc.). Since Google News, and Google Discover by extension — which is also driven by what readers are interested in — drive a lot of traffic to news websites, and page views and unique users remain the metrics of choice at these websites, Google News/Discover also aggregate and create a preference among publishers for relatively uncomplicated science stories and ideas.
Which means when we pursue stories that are complicated or interesting in a way that allows us to tell a unique story, we shouldn’t expect it to draw readers from Google News/Discover nor focus on page views or unique views. Instead, we’re better off focusing on readers’ average time on page.
Everyone who knows me knows that my intellectual coordinates are defined by scientific ideas, even when they’re about sociology or the humanities. This is why I found a new book, Decolonial Keywords: South Asian Thoughts and Attitudes, edited by anthropologists Renny Thomas and Sasanka Perera, so compelling. The book has 30 chapters written by 33 people, each one exploring the oft-hidden colonial undertones of words in everyday Indian English, and by extension documenting how deceptively treacherous the task of decolonialising the things the words refer to is — and many of them intersect with science in practice.
Indeed my own entry point into this book was half my general interest in Renny’s work, which to an amateur historian of science like me has been constantly insightful, and half my long-standing frustrations with how India and the Indian state commemorate science. On the occasion of National Science Day, which is today, I had an op-ed published in The Hindu on February 26 on why decolonialising science in India also requires Indians to “de-Nobelise” science, including shedding their fondness for individual geniuses in favour of the collective labour that science actually needs to function. Excerpt:
The keywords … clarify what a de-Nobelised imagination of science, paralleling the decolonisation of science, would require. It would force India to ask how Indians produce the thing called ‘recognition’ — through discoveries and papers as much as by institutions that sort labour into celebrated and hidden.
National Science Day, then, should not simply reproduce a Nobel-shaped story about genius and external validation. It should become an annual day of discussion of what counts as science, including the work of technicians, field staff, nurses, lab attendants, data collectors, and others whose labour is essential to make new knowledge but is rarely commemorated.
Good scientific practice requires us to regularly recalibrate the instruments to make sure they haven’t become less precise. Language, Decolonial Keywords shows, is the same way and we need to constantly recalibrate it for the same reasons.
For example, a mind accustomed to scientists’ oft-universalist claims will find the book unsettling because of how consistently it exposes such universalism to be a hoax. In her chapter, Centre for the Study of Developing Societies political theorist Prathama Banerjee has explored the idea of “shunya”. The global history of mathematics celebrates this entity, commonly equated to the entity called zero, as India’s gift to the world — a numerical placeholder that liberated mathematics from physically counting objects and eventually made calculus and modern computing possible. But if you keep reading, you’ll find that “shunya” was originally a profound ontological concept in Buddhist philosophy, an expression of emptiness and the absence of a permanent ‘self’. And that when modern mathematics extracted the concept, it discarded the philosophical attachments, effectively stripping the word of its ability to critique social hierarchies like caste, which in fact banks on the illusion of a permanent ‘self’.
In addition to the book’s chapters on ‘jugaad’, ‘poromboke’, and ‘laboratory’, which I tried to explore in my piece, the same theme is also on display in the chapter on “Igu”, the shaman of the Idu Mishmi people in Arunachal Pradesh, especially the tension between Western scientific taxonomy and indigenous ecological networks, written by Ambika Aiyadurai and Razzeko Delley, and the chapter on “Adivasiyat” by Roshan Praveen Xalxo.
Under the gaze of either modern medicine or conservation biology, a shaman comes across as a psychological curiosity and indigenous land rights as a consequence of politics. However, as Aiyadurai, Delley, and Xalxo set out, the words “Igu” and “Adivasiyat” really recall a “multispecies world” or a “multibeing cosmos” — recalling the writing of anthropologist Anna Tsing in 2013 — where rivers and spirits participate in making and maintaining the ecological network. And we don’t have to abdicate the scientific method to recognise that these indigenous vocabularies offer a sophisticated and importantly localised understanding of an environmental balance that the technocratic and extractivist models of the modern Indian state are themselves abdicating.
My natural scepticism sometimes (and only sometimes) flares up when I find the word “decolonial” because too often these days, and almost always in certain political contexts, “decolonialising science” in the contemporary Indian context has become a Trojan horse for right-wing nativism, where mythological allegories are retrofitted as ‘ancient’ quantum physics and surgery. But to their credit, Thomas and Perera and the chapters’ various authors are acutely aware of and make honest attempts to sidestep this danger. For example Harshana Rambukwella’s chapter on “Chinthanaya”, the Sinhala term for “thought” or “indigenous epistemology”, is careful to separate its origins as an anti-colonial concept from how the island country’s majoritarian nationalists weaponised it during the COVID-19 pandemic to push some medical professionals to promote one charlatan’s “divine syrup” as a cure.
Decolonial Keywords is a dense book steeped in the theoretical frameworks of history, sociology, anthropology, and linguistics. The chapters dealing with the literary nuances of medieval poetry and the exact etymological roots of regional dialects in particular require quite a bit of patience — but the intellectual payoff is guaranteed. It’s also nice to have critical work like Decolonial Keywords that presents morsels of analysis and perspectives on a variety of topics because work in this field generally takes the form of an entire book on a single topic.
A quantum battery is a system that stores energy and whose working parts are quantum systems, such as atoms, ions, spins, superconducting circuits or quantum dots, so the processes of storing and extracting energy are governed by quantum mechanics.
Imagine you have a row of toy boxes. Each box can either be empty or have one object inside (like a toy). Your job is to fill the boxes as fast as you can using a machine that can put toys into boxes.
When a qubit is in its low-energy state, it’s like an empty box. When it is in its higher-energy state, it’s like a full box. When all the qubits go from low- to high-energy, the whole setup stores some energy.
Now, if you have many boxes, scientists have found that you can fill them faster using a quantum trick, instead of filling each box one by one.
Say you have 12 boxes on a table. You point at box 1 and put in a toy. Then you point at box 2 and put in a toy. And so on. You can try to do it quickly but you’re still basically doing one box at a time.
Scientists recently reported an experiment that this simple tale is a metaphor for. It consisted of 12 qubits (short for ‘quantum bits’ — the smallest logical pieces of a quantum computer).
One part of the experiment was to drive each qubit locally, i.e. each one gets its own little push. The study called this the classical baseline.
Now, imagine you have a different machine that, instead of filling one box at a time, fills two neighbouring boxes together in a single move.
So it does something like fill boxes 1 and 2 together, then fill boxes 2 and 3 together, then fill boxes 3 and 4 together, and so on.
The machine still isn’t filling all 12 at once but because it can create pairs of fills together, the filling can become more collective, like a wave of filling through the row.
In the experiment, the scientists used a special kind of interaction where two excitations are created together on neighbouring qubits. As a result these two neighbours tended to flip together, going from empty-empty to full-full.
(To achieve this, the team used a technique called parametric modulation.)
Now, say two kids, A and B, are filling boxes in these two different ways.
You’re trying to check which kid fills all the boxes fastest.
If Kid B, who’s using the pair filling technique, only wins because their tool is stronger, that’s not interesting. The interesting claim is that even with fair tools, the pair-filling technique can store energy faster.
In a quantum device, not all stored energy is equally extractable as useful work. Instead the study uses a standard concept called ergotropy, which is the part of the energy that you can, in principle, extract as useful work with allowed operations.
For our metaphor, you can treat it as the amount of real charge you put in the boxes.
Then the scientists calculated the average charging power, i.e. how much useful energy got stored per unit time.
They did this for batteries of different sizes: 2 boxes, 3 boxes, … up to 12 boxes.
They found that the pair-filling, i.e. quantum, method could achieve higher charging power than the classical baseline and that the advantage tended to grow as the number of qubits increased.
They also reported that the optimal charging time window was very short, on the order of tenths of a microsecond.
This means Kid B has a short interval in which they can fill boxes very efficiently, and that interval stays around the same short length once there are several boxes.
But the scientists don’t just say their quantum way is faster. They also show that it’s faster for the reason they claim.
They measured the correlations between neighbours — i.e. whether excitations (or full boxes) appeared together more often than they’d expect if each box was independent.
In the classical way, they expected a neutral value, meaning no togetherness. For the quantum way, they expected more togetherness during the burst of charging.
They reported evidence consistent with the latter: in the quantum way, the neighbour-neighbour correlation indicator showed more paired behaviour in the same short window when the charging power peaked.
So where is the quantumness that provides this advantage?
In the normal world, a box is either empty or not empty. It’s one or the other.
In the quantum world, an object can also be in a special in-between condition — and not just because you don’t know what’s in the box. It’s a real physical kind of in-between that scientists call coherence.
When many quantum objects interact, they can also become linked in a way that makes their joint state not just the equivalent of ‘each box has its own toy’ but of ‘the whole set is described together’. This is called entanglement.
The study tried to show that during charging, the system wasn’t merely populating its excited states: it was also creating coherence and entanglement. A purely classical process can’t do this; quantumness must have been involved.
The scientists did this by measuring how many qubits were excited versus how many weren’t. Then they measured the total usable stored energy (ergotropy). Whatever was left after subtracting the plain part was the quantum-like part.
Finally, they checked whether the qubits were becoming entangled with each other, instead of acting independently. They did this by collecting measurement data, computing a quantity that has a clear rule, then saying from that whether the qubits could be entangled.
For instance, if the rule says no unlinked system can score above 10 on this test and the scientists measured 12, the system has to be entangled.
The scientists effectively showed that if they design the tools correctly and compare them fairly, the quantum way could charge up to 12 qubits faster than the classical way, and not just by flipping the qubits one after the other more quickly.
Featured image: A visual representation of the ‘quantum battery’ used in the study. It’s encoded in a 16-qubit lattice, 12 of which were activated for the experiment. Credit: arXiv:2602.08610v1.
The science writer Philip Ball has described “nerd tunnel vision” as the rationalisation offered by scientists who maintained ties with Jeffrey Epstein after his 2008 conviction for soliciting underage sex, hinting at something more calculated than just oversight. “Nerd tunnel vision is a defining feature of much of the Edge discourse,” Ball wrote, referring to Epstein consort John Brockman’s salon for “Third Culture” intellectualism: “moral obtuseness; a determination to win the argument rather than to listen and ponder; a tendency to fabulate improbable futures from narrow ‘rational’ logic; ignorance of and contempt for other ways of seeing the world.”
Ball is in effect describing not people who are unaware or who failed to notice but people who deliberately, with eyes wide open, chose what matters to them — from Lawrence Krauss, Marvin Minsky, and Robert Trivers to Joichi Ito and Peter Thiel. This isn’t naïveté so much as sophisticated actors making sophisticated calculations about what they can get away with.
As Epstein’s ties to more and more scientists, technologists, and venture capitalists have become apparent, there’s also a diagnosis doing the rounds that Silicon Valley’s techno-elites, a.k.a. the “tech-bros”, are simply mistaken in their embrace of topics purportedly close to Epstein’s heart, including transhumanism, longevity research, and what increasingly looks like repackaged eugenics. This diagnosis flatters us — the diagnosticians — by positioning us as the clear-eyed ones who saw the cautionary tales from a bygone era for what they were. But the science-bros and techno-libertarian elite (or TLE for short) saw them too, and proceeded to run a different cost-benefit analysis from what others did.
To see why, it’s important to see first that the story Silicon Valley tells about itself — of garage startups and disruption — obscures a more troubling, if also equally deliberate, genealogy. Computer scientist Timnit Gebru and philosopher Émile Torres coined the label ‘TESCREAL’ as a critical construct to describe an overlapping cluster of ideologies, many of which sank roots in the 1990s, that Silicon Valley embraces today: transhumanism, extropianism, singularitarianism, cosmism, (Bay Area internet) rationalism, effective altruism, and longtermism.
Extropianism and organised transhumanism have been adjacent to the Bay Area for a while, with newsletters, salons, and institutes linking human enhancement and “self-transformation” to an explicitly technologist ethos liberated from worrying about limits, whether material or social. This worldview fit neatly with a Valley culture already comfortable with narratives of radical innovation and libertarian politics. In the 2000s, singularitarian ideas also moved from niche futurist conclaves into mainstream tech discourse via high-profile evangelists and Silicon Valley institutions. The 2010s saw rationalist and effective altruist networks overlap with AI labs, venture capital, and philanthropy, specialising in translating moral philosophy and speculative technical futures into funding priorities and institutional agendas. By the 2020s, once frontier AI became the Valley’s central product, these once-semi-separate strands of thought started to resemble a unified milieu.
This setup also has a prehistory that further complicates any argument that TLEs have just been stumbling in the dark when they make choices we won’t. One part of the prehistory is the mid-20th-century scientific elitism that shaded into eugenics. William Shockley helped invent the transistor and is a foundational figure tied to Silicon Valley’s early industrial formation, and later became publicly associated with racist and eugenicist claims while at Stanford University. While Shockley’s views were on the fringe even then, transhumanism’s closest antecedent is Anglo-American eugenics (the term ‘transhumanism’ was first used by the British eugenicist Julian Huxley). So when Nick Bostrom — a central figure in these movements, whose work at Oxford University’s Future of Humanity Institute has exerted an explicit pull on Silicon Valley elites and funders — was revealed in 2023 to have sent emails in 1996 stating his belief in racial differences in intelligence, should we treat this as an aberration or as a data point in a larger pattern?
A second wave of the prehistory, from the late 1980s to the 2000s, is Silicon Valley’s own flavour of transhumanism, especially in the form of cryonics, what it called “morphological freedom”, and brain-computer futures. The Extropy Institute was an early node in this movement and its idioms fit Silicon Valley’s entrepreneurial culture of working without constraints. By the mid- to late-2000s, ‘singularity’ — the hypothetical moment in the future when AI surpasses human intelligence and triggers rapid, uncontrollable technological growth — also became a popular rallying point. The third and final wave kicked off in the 2010s and is still surging: from just talking about living forever, the tech bros moved to setting up labs, biotech pipelines, and ecosystems for “consumer biohacking”, emblematised by Alphabet’s Calico Labs. In the 2020s, finally, conversations about reproduction and eugenics moved from being fringe rhetoric to ‘gray-zone’ products and venture-backed firms.
Today there are companies marketing expanded embryo screening not just for severe disease risks but for probabilistic traits, including — controversially — cognitive outcomes. One October 2024 investigation by The Guardian described a US startup selling embryo screening framed around gains in IQ and “liberal eugenics” concerns, essentially making dubious genetic advantages selectable for those who could pay. A November 2025 report in the Wall Street Journal described another San Francisco startup pursuing embryo gene-editing research despite legal prohibitions, backed by prominent tech investors and looking for permissive jurisdictions.
None of these are decisions born of being unaware, to be sure. The TLEs may believe what appears dystopian through the moral lens of the 2020s could become normalised or even celebrated by society of the 2040s. Or they understand these are cautionary tales but believe the aspects warranting caution are either exaggerated or can be managed with better execution. Many of these figures are also staunch materialists and techno-determinists who don’t harbour the humanistic assumptions underlying most science fiction writing. When Aldous Huxley warns them about a society engineering away suffering using pleasure, surveillance, control, and punishment, they may genuinely see that as solving a problem rather than creating one. This is because the caution depends on valuing things like struggle, authenticity, and inefficiency, of which they’re usually dismissive. Which is why Mark Zuckerberg spending $10 billion to attempt to create the ‘Metaverse’ isn’t a failure of imagination but the success of a different imagination.
Some TLEs also recognise exactly where this leads and view the resulting instability, disruption, and concentration of power as features rather than bugs because periods of chaos create opportunities for those positioned to profit from it. They might also believe that by being aware of the cautionary tales they’ve inoculated themselves against the specific failure modes the tales came with: “We’ve read 1984 so obviously we won’t make those mistakes.” This is of course hubris but importantly it’s not ignorance.
Which brings us back to Jeffrey Epstein and the scientists who orbited him: his connections, the TESCREAL ideologies, investments in longevity and embryo selection startups, the pronatalist conferences, the eugenicist discourse — none of them was a separate issue. They’re just different manifestations of the same underlying orientation: to treat human ‘limits’ as engineering problems, then fund private bets to overcome them, with little regard for what social harms they accrue along the way.
Critics have also argued that these philosophies have encouraged TLEs to shift attention away from solving present humanitarian issues and towards speculative futures. This appeal works on Silicon Valley elites who fund institutes dedicated to this thinking because it allows them to frame their anxieties about death, intelligence, biological limits, and control as moral imperatives that transcend democratic deliberation. The longtermism of William MacAskill, of which Elon Musk is so fond, contextualises efforts in terms of billions of humans not yet born: how convenient, then, that those billions can’t vote, can’t organise, and can’t contradict the projections made on their behalf.
The pitfall in believing the TLEs are making the choices they are simply because they don’t know what we know is that it excuses us from confronting the possibility that they’ve concluded that the engineered, surveilled, controlled, stratified future they envision is in fact the entire point. Perhaps they’ve decided that when history is written by the posthuman victors, today’s cautionary tales will look like Luddite panic. Perhaps they’ve calculated that by the time the negative externalities become undeniable, they’ll already have captured enough of the gains to insulate themselves from the consequences. Or maybe they genuinely believe they’re doing good, that longer lifespans and enhanced intelligence and space colonies are moral imperatives, that anyone who can’t see this is simply thinking too small. In which case we’re not dealing with cynicism but with a totalising ideology that has convinced itself it holds the keys to human flourishing, and the fact that this ideology concentrates benefits among people who already have the most power is treated as a happy coincidence.
That doesn’t mean we must ban all life extension research or stop developing AI — but we should stop treating these as purely technical pursuits and instead recognise that every choice about what to study is also a choice about what kind of future we want and who gets to decide. We should insist that technological sovereignty isn’t just the capacity to build things but the capacity to deliberate together about whether we should build them at all. And, finally, we should stop giving these actors the benefit of the doubt. They’re not naïve. They’re not mistaken. They understand perfectly well what they’re building and they’re building it anyway.
The nucleus of the thorium-229 isotope has a special property: it has an excited state that’s incredibly close in energy to its ground state. The existence of such a state, called an isomer, is remarkable because when nuclei get excited, they normally need enormous amounts of energy — hundreds of thousands or even millions of electron volts (eV). But the Th-229 nucleus’s excited state is only about 8.4 eV above its ground state. This is really small by nuclear standards and, importantly, it means light can excite the nucleus into this energy level.
This in turn matters because scientists have developed very precise atomic clocks over the last few decades that work by using lasers to excite electrons in atoms and measure the frequency of the light required to do this. These clocks are so accurate that they’re used for GPS, keeping time on the internet, and in fundamental physics experiments. But they also have a limitation: electrons are relatively easy to disturb, so a stray external electric or magnetic field can shift their energy levels slightly but enough to make the entire clock less stable.
Nuclei on the other hand are much smaller and are buried deep inside the atom, shielded by the electron cloud from the world beyond. So a nuclear clock based on a nuclear transition would potentially be much more stable and accurate than even the best atomic clocks.
The Th-229 isomer is the only known nuclear transition that’s low enough in energy for scientists to realistically build a laser to drive. In fact they have been trying to make a nuclear clock based on this transition for years now. Recently, two research groups finally managed to create this transition using lasers and determined that the wavelength of light needed is 148.4 nm. This is in the vacuum ultraviolet range — i.e. ultraviolet light with a very short wavelength. Such light gets absorbed by air, so these experiments need to operate in a vacuum. Thus the name.
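A quick back-of-the-envelope check (my own numbers, using only the figures quoted above): converting 148.4 nm back into photon energy with E = hc/λ gives about 8.35 eV, consistent with the roughly 8.4 eV energy of the isomer.

```python
# Convert the reported 148.4 nm wavelength into photon energy, E = h*c/wavelength
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron volt

wavelength = 148.4e-9              # metres
energy = h * c / wavelength / eV   # photon energy in eV

print(f"{energy:.2f} eV")  # ~8.35 eV, matching the isomer energy quoted above
```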
But here’s the catch: the laser sources that these research groups used to excite the transition were pulsed lasers, which means they only produced light in very short bursts, lasting just a few nanoseconds each.
When you have such short pulses, the light inherently has a broad range of frequencies mixed together. Scientists say the linewidth is several gigahertz wide. But the natural linewidth of the Th-229 isomer transition is very narrow, only about 60 microhertz. That’s a difference of more than 13 orders of magnitude. It’s like trying to measure something with a 1-m-long stick when you need precision down to the width of a single atom. Nuclear clocks demand a much more stable laser with a really narrow linewidth — ideally continuous rather than pulsed.
In a paper published in Physical Review Applied on February 11, researchers from Tsinghua University and the Chinese Academy of Sciences have proposed a way to generate continuous-wave vacuum ultraviolet laser light at exactly 148.4 nm, with a very narrow linewidth, using a process called four-wave mixing.
Four-wave mixing is a nonlinear optical process. Normally, when light passes through a material, the different colours of light don’t affect each other. But if you have intense enough light and the right kind of material, you can get nonlinear effects, i.e. where multiple photons of light interact with atoms in the material to create new photons at other frequencies.
In four-wave mixing, you take three laser beams and send them through such a special medium. If everything is set up just right, they will combine to create a fourth beam at a new frequency. And the frequency of this new beam will be the sum of the frequencies of the three input beams.
The authors have proposed using cadmium vapour as the mixing medium. They picked cadmium because it has many properties that make it perfect for this job. First, it has electronic transitions that can be exploited to make the nonlinear process very efficient. Specifically, the team plans to use a two-photon resonance, meaning two of the input laser beams will have frequencies that, when added together, will exactly match the energy needed to excite cadmium atoms to a particular excited state. This resonance will greatly enhance the efficiency of the process. Second, lasers at the required input wavelengths, 375 nm and 710 nm, are readily available.
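Here’s a small sanity check of how the wavelengths add up, assuming (as the description above suggests) that the output photon’s energy is the sum of two 375 nm photons and one 710 nm photon. Frequencies add in four-wave mixing, so reciprocal wavelengths add:

```python
# Sum-frequency estimate: 1/lambda_out = 2/lambda_1 + 1/lambda_2
lambda_1 = 375e-9  # metres; the two-photon-resonant beam, used twice
lambda_2 = 710e-9  # metres; the third input beam

lambda_out = 1 / (2 / lambda_1 + 1 / lambda_2)
print(f"{lambda_out * 1e9:.1f} nm")  # ~148.3 nm, close to the 148.4 nm target
```

In practice the input wavelengths would be tuned slightly so the output lands exactly on the nuclear transition.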
The two previous studies also used four-wave mixing but ended up with pulsed laser light because they used xenon as the mixing medium. Xenon is a generic choice because it can produce light across a wide range of wavelengths. If researchers are exploring and don’t know exactly what wavelength they need, or if they want light of different wavelengths, xenon is great. On the flip side, it isn’t particularly suited to generating 148.4 nm light: it can do so only if researchers supply the input light at enormous power.
Pulsed lasers help with this requirement using a trick. Imagine you have a water hose: if water flows out continuously at a steady rate, you might get a gentle stream, but if you put your thumb over the end and suddenly release it, you get a powerful jet that can spray much farther even when the total amount of water per minute is the same. Pulsed lasers work like this: at the brief moment when the laser emits light, the intensity is very high even though the average power is low. And four-wave mixing is much more efficient with this intense light — enough to generate enough vacuum ultraviolet light to detect the nuclear transition.
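To put illustrative numbers on that (my own, not the paper’s): a laser that emits 1 millijoule in a 5-nanosecond pulse, ten times a second, has an average power of just 10 milliwatts but a peak power of around 200 kilowatts during each pulse.

```python
# Illustrative pulsed-laser numbers (not from the paper)
pulse_energy = 1e-3     # joules per pulse
pulse_duration = 5e-9   # seconds (5 ns)
rep_rate = 10           # pulses per second

peak_power = pulse_energy / pulse_duration  # power during the pulse: ~2e5 W
avg_power = pulse_energy * rep_rate         # power averaged over time: ~0.01 W

print(f"Peak: {peak_power:.0f} W, average: {avg_power:.2f} W")
```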
To this end, the paper went into considerable technical detail about calculating how efficient the process would be with cadmium vapour, including assessing the element’s atomic structure. The authors also calculated something called the nonlinear susceptibility, which describes how strongly the cadmium atoms would respond to the light.
They also had to worry about phase-matching. For the four-wave mixing process to work efficiently, the different light waves need to stay synchronised as they travel through the medium. This is tricky because different wavelengths of light travel at slightly different speeds through cadmium vapour (a phenomenon called dispersion). However, the authors showed that carefully controlling the temperature of the vapour and tightly focusing the laser beams could result in good phase-matching.
Overall, their calculations suggested that with input laser powers of 3 W at 375 nm and 6 W at 710 nm — both very achievable using current technology — they could generate more than 30 µW of vacuum ultraviolet light at 148.4 nm. While 30 µW may not sound like much, it’s actually a lot for spectroscopy experiments. More importantly, because this is a continuous-wave process rather than a pulsed process, and because it’s essentially just a frequency multiplication of stable input lasers, the output light should have a very narrow linewidth. The team estimated it could be below 1 kHz, which is orders of magnitude better than the pulsed sources currently in use.
A narrow linewidth is so important because then scientists can observe something called Rabi oscillations in the nuclear transition. This is when you can coherently drive the nucleus back and forth between its ground state and excited state, which is essential to build a nuclear clock. The researchers showed that with their proposed laser system, the linewidth would be narrow enough to observe these oscillations, opening the door to much more precise measurements of the Th-229 transition and eventually to building an actual working nuclear clock.
Such a clock could have applications beyond just timekeeping. The Th-229 transition is particularly sensitive to changes in fundamental constants of nature, so it could be used to test whether these constants actually stay constant over time; scientists could also use it to search for certain types of dark matter. The proposed laser system thus represents a crucial technological step towards all these applications.
One of the advertisements during the ongoing T20 cricket World Cup on Star Sports India has been for Sprite, the carbonated beverage from the Coca-Cola Company. In the ad, it’s a hot day, two people are irritated by the heat and humidity, and they beat it by taking a swig of chilled lime-flavoured Sprite.
It should be obvious by now but in case it isn’t — in fact the manufacturer and the advertiser are either unaware of this or they know but don’t care — a sugary carbonated beverage is a terrible thing to consume on a hot, humid day in order to feel better.
The chill alone can feel quite relieving. However, carbonation doesn’t meaningfully improve hydration and in some people causes bloating and burping and/or induces a full feeling that can prevent the person from drinking other fluids, especially water. Ingesting carbonated fluids can also worsen heat-stress or nausea.
The sugar of course makes it all worse. This is Sprite’s sugar content according to Coca-Cola:
A large quantity of sugar — not unlike the amount in a 200 ml bottle or larger — can for many people slow the rate at which the stomach empties and exacerbate thirst, especially if you drink a lot at once. If you’re sweating heavily already, a sugary drink sans enough electrolytes is far from ideal for replacing what you lose.
When pushed on such unhelpful advertisements, these manufacturers, advertisers, and promoters have typically replied saying their food products’ contents are within the FSSAI limits. They’re right — but the FSSAI’s limits are based on the contents being safe assuming all other conditions are ideal. They’re not based on you consuming Sprite on a hot and humid day.
On Monday night, I kid you not, I dreamt of the Birch and Swinnerton-Dyer conjecture. It was only by name, a fleeting mention in a heated conversation I was having with a friend. I’m not sure who spoke it or why.
When I woke up, I looked it up, and found that it’s one of the Millennium Prize problems — one of seven unsolved mathematical problems for each of whose correct solutions the Clay Mathematics Institute offers an award of $1 million.
I’m vaguely familiar with these problems’ names, and the substance of only three, so after the dream, I resolved to understand the conjecture and why it remains unsolved. Here goes.
Let’s start at high-school maths.
The equation y = 2x + 1 is a straight line on a graph.
For any given value of x, there’s only one corresponding value for y.
Similarly, in high school, you’d have learnt that the equation for a circle is: x² + y² = 1.
If you look for points on this circle where x and y are fractions, i.e. where they’re rational, you’ll find plenty.
For example, (⅗, ⅘) is such a point on the circle because (⅗)² + (⅘)² = 1.
The Birch and Swinnerton-Dyer conjecture is about elliptic curves rather than circles.
Despite the name, these curves aren’t ellipses. An elliptic curve is defined by an equation that looks like this:
y² = x³ + Ax + B
Let’s say A = -1 and B = 1. The equation becomes: y² = x³ – x + 1
If you plot this equation on a graph, you’ll get a smooth, flowing curve.
Mathematicians are obsessed with finding the rational points on these curves, i.e. points where both x and y are fractions.
For some elliptic curves, there are only a few rational points. For other elliptic curves, there are infinitely many.
The question is: how can we tell, just by looking at the equation, how many rational points it has?
A fascinating property of elliptic curves is that you can add points together.
If you take two rational points on the curve, called P and Q, draw a line through them, and see where that line hits the curve a third time, that third point — after reflecting it across the x-axis — will also be a rational point.
Mathematicians call this point P + Q.
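For a quick worked example (my own arithmetic, using the curve from earlier), take P = (0, 1) and Q = (1, 1), two rational points on y² = x³ – x + 1. The line through them is y = 1. Substituting y = 1 into the curve’s equation gives 1 = x³ – x + 1, i.e. x³ – x = 0, whose solutions are x = 0, 1, and -1. The first two give back P and Q; the third intersection is (-1, 1). Reflecting it across the x-axis gives P + Q = (-1, -1), which is indeed another rational point on the curve.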
This addition operation is what gives an elliptic curve its ‘rank’.
If a curve has rank = 0, there are only a finite number of rational points on the curve. You can add them all day but you’ll keep finding the same few spots again and again.
If a curve has rank ≥ 1, it has infinitely many rational points. You can generate them by adding the rational points together to travel all over the curve.
The Birch and Swinnerton-Dyer conjecture is an attempt to calculate this rank using a completely different part of maths.
To solve a difficult problem, mathematicians often try a simpler version first.
For example, in order to calculate the rank of an elliptic curve, mathematicians looked for solutions in modular arithmetic.
Consider a clock, whose numbers are modulo 12. In normal counting, 10 + 5 = 15. But on a clock, 10 + 5 = 3. This is because once the count hits 12, it resets. Since 10 + 5 = 10 + 2 + 3 = 12 + 3, you’re left with 3.
This is what modulo 12 means.
You can do the same thing with an elliptic curve equation.
You pick a prime number p (like 2, 3, 5, 7, 11…) and ask: how many integer solutions are there if we only care about the remainder when divided by p?
For instance, let’s use the elliptic curve y² = x³ – x + 1 with p = 5.
We want to find all solutions (x, y) where the values of x are picked from the set {0, 1, 2, 3, 4} — since these are the possible remainders when divided by 5 — and the equation holds modulo 5.
This means:
1. Pick a value of x from {0, 1, 2, 3, 4}
2. Calculate y² = x³ – x + 1 using normal arithmetic
3. Find the remainder when you divide that result (y²) by 5
4. Now find a y from {0, 1, 2, 3, 4} such that y² has that same remainder when divided by 5
So let’s check each possible value of x:
x = 0, so y² = 1. Is there a y in {0, 1, 2, 3, 4} whose square equals 1 mod 5? Yes: y = 1 or 4.
x = 1, so y² = 1. Is there a y in {0, 1, 2, 3, 4} whose square equals 1 mod 5? Yes: y = 1 or 4.
x = 2, so y² = 7. Is there a y in {0, 1, 2, 3, 4} whose square equals 7 mod 5? No, none.
x = 3, so y² = 25. Is there a y in {0, 1, 2, 3, 4} whose square equals 25 mod 5? Yes: y = 0.
x = 4, so y² = 61. Is there a y in {0, 1, 2, 3, 4} whose square equals 61 mod 5? Yes: y = 1 or 4.
So when p = 5, the elliptic curve y² = x³ – x + 1 had seven solutions.
Now, let Np be the number of solutions for a specific prime p. Because there are only p possible values for x and y in this scenario, finding Np is easy.
Let’s use the same example.
Since we’re working with modulo 5, both x and y can only be from {0, 1, 2, 3, 4}. That’s only five possible values each.
And for each x, we only had to check at most five values of y. That’s at most 25 checks in all — which is very easy for a computer.
Studying the curve modulo p, for many different values of p, yields information about the original curve over the rational numbers.
Specifically, finding all the rational points on the curve y² = x³ – x + 1, e.g. (0,1), (1,1), (-1,-1), etc., is extremely difficult. There could be infinitely many and they could involve large numerators and denominators.
But for each prime p, counting how many solutions exist modulo p is easy: you just need to check all p² possibilities.
Notice also that for any given p, there are around p solutions on average.
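Here’s a small Python sketch of that brute-force count (my own code, just to illustrate the procedure) for the curve y² = x³ – x + 1:

```python
def count_solutions(p, A=-1, B=1):
    """Count pairs (x, y) with y^2 = x^3 + A*x + B (mod p)."""
    count = 0
    for x in range(p):
        rhs = (x**3 + A * x + B) % p
        for y in range(p):
            if (y * y) % p == rhs:
                count += 1
    return count

for p in [5, 7, 11, 13, 17, 19]:
    print(p, count_solutions(p))

# For p = 5 this prints 7, matching the hand count above. For every prime the
# total stays close to p; the deviation is known to be at most about 2*sqrt(p).
```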
The number of solutions for each prime contains information about the rank of the elliptic curve.
This connection happens via the L-function.
In the 1960s, Bryan Birch and Peter Swinnerton-Dyer had a radical idea. They wondered if the number of solutions Np for various values of p could reveal the rank of the curve.
They used the curve’s L-function, written L(E, s), to hold this information. This is a complex function built using all the Np values for every prime number p.
If a curve has many rational points, i.e. a high rank, we’d expect its Np values to be high as well. If the curve has few rational points, the Np values should also be low.
L(E, s) is a function of the variable s.
Birch and Swinnerton-Dyer used a computer — then a room-sized machine called EDSAC 2 at the University of Cambridge — to calculate these values.
They noticed a stunning pattern.
Recall that for a given p, there are around p solutions on average.
If Np > p, the curve was said to have more solutions than average for that prime.
If Np < p, the curve was said to have fewer solutions than average for that prime.
Birch and Swinnerton-Dyer checked what happened when they multiplied these results together for thousands of primes. Their product looked like this:
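$$\prod_{p \le X} \frac{N_p}{p}$$

The product runs over all the primes p up to some cutoff X, and each factor compares the number of solutions modulo p against the ‘average’ value p. Their data suggested this product grows roughly like a constant times (log X)^r, where r is the rank.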
In words, this formula asks: across all the prime numbers up to a certain limit X, is the elliptic curve consistently producing more solutions than average or fewer?
When they plotted this formula on a graph, they noticed a clear divergence based on the rank of the curve.
If a curve had only a finite number of rational points, the product fluctuated a bit but remained relatively small and stable.
If the curve had infinite rational points, the product started to grow. The more primes they included in the calculation, the larger the product became.
Here’s a visual.
In the top graph, the blue curve has rank 0, so you see the product fluctuate but stay relatively small and bounded. The red curve has rank 1, so the product grows significantly larger.
The bottom graph shows the same curves on a logarithmic scale, revealing the pattern over a larger range of values. The blue curve stays relatively flat with small oscillations while the red curve continues to surge upwards.
Overall, Birch and Swinnerton-Dyer noticed that curves with finite rational points, i.e. rank 0, had a relatively bounded product. And curves with infinite rational points, i.e. rank ≥ 1, had a boundless product.
Ergo, higher rank means faster growth.
The product that Birch and Swinnerton-Dyer computed is closely related to the L-function.
How?
For each prime number p, they defined a variable ap = p + 1 – Np
ap measures how Np differs from the expected value p + 1.
If Np = p + 1, then ap = 0, i.e. it’s exactly average.
If Np > p + 1, then ap < 0, i.e. there are more solutions than average.
If Np < p + 1, then ap > 0, i.e. there are fewer solutions than average.
The L-function makes use of the ap value thus:
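$$L(E, s) = \prod_{p} \frac{1}{1 - a_p \, p^{-s} + p^{1 - 2s}}$$

Each prime contributes one factor built from its ap value (at the handful of ‘bad’ primes, where the curve degenerates, the factor is slightly simpler), so the single function L(E, s) packages together every Np count.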
In sum, the behaviour of the product as X grows is mathematically related to whether L(E, s) has a zero at s = 1.
If you plug s = 1 into the L-function and get 0, the corresponding elliptic curve E should have infinitely many rational points.
And the more closely the L-function hugs zero around s = 1, the higher the rank of the elliptic curve E.
Thus, Birch and Swinnerton-Dyer conjectured: the rank of an elliptic curve is equal to the order of the zero of its L-function at s = 1.
When a function equals zero at some point, the ‘order’ says how strongly it touches zero.
If the order is 0, the function doesn’t actually equal 0 at that point. If the order is 1, the function crosses through 0 normally. If the order is 2, the function touches 0 and bounces back (e.g. y = x² at x = 0). If the order is 3 or more, the function hugs zero closely before leaving.
If the function L(E, s) has a zero of order r at s = 1, it means:
L(1) = 0
L′(1) = 0 (the first derivative is also zero)
L″(1) = 0 (the second derivative is also zero)
… continuing through the (r-1)th derivative
But L^(r)(1) ≠ 0 (the r-th derivative is not zero)
The conjecture states that this order r equals the rank of the elliptic curve.
So if the L-function has a zero of order 2 at s = 1, the curve should have rank 2 — meaning it has infinitely many rational points that can be generated from 2 independent base points (like P and Q earlier).
While the rank is generally the most interesting part of the conjecture, the full version goes further to provide an exact formula for how the function behaves when s = 1.
Here’s the conjecture in mathematical terms:
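$$\lim_{s \to 1} \frac{L(E, s)}{(s - 1)^{r}} = \frac{\Omega_E \cdot \mathrm{Reg}(E) \cdot |Ш(E)| \cdot \prod_p c_p}{|E(\mathbb{Q})_{\mathrm{tors}}|^{2}}$$

Here r is the rank of the curve and E(ℚ)_tors is its finite set of ‘torsion’ rational points; the remaining quantities are described below.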
The terms on the right side represent different properties of the curve:
Reg(E) — called the regulator, it measures how spread out the rational points are
Ш(E) — the Shafarevich-Tate group, which measures how much the curve ‘cheats’ by having solutions that look real but aren’t (this is a very hard part to calculate)
Ω_E and the cp factors — quantities related to the shape and size of the curve.
In effect, one side of the conjecture is about analysis: it’s concerned with the behaviour of the L-function at s = 1.
The other side is about algebra and geometry: it involves the rank of the elliptic curve and arithmetic quantities like the regulator.
Mathematically, these are such different types of objects that proving they’re always equal is extraordinarily difficult.
There’s currently no algorithm that’s guaranteed to find the rank of an arbitrary elliptic curve.
Mathematicians can find some rational points and make educated guesses but proving “that’s all of the points” or that “these points will generate all the rest” is very difficult.
The L-function is defined as an infinite product over all prime numbers.
Proving that it even converges to a particular value or that it behaves in a predictable way requires some heavy-duty mathematics.
While mathematicians know that counting the number of solutions an elliptic curve equation has modulo p can determine the structure of rational solutions, they don’t know why.
This is called the local to global principle and it’s an unsolved problem in its own right.
Mathematicians have proven the conjecture for specific families of elliptic curves — but proving it for all possible elliptic curves requires many techniques that mathematicians don’t even possess.
It’s like finding that the number of ways you can rearrange furniture in your house is secretly determined by the prime factorisation of your door number. You could check millions of houses and see the pattern holds, but why would such different things be related?
And how do you prove that this must always be true?
This is why the Birch and Swinnerton-Dyer conjecture remains unsolved.
Bryan Birch (left) and Peter Swinnerton-Dyer. Credit: William Stein and Renate Schmid
Elliptic curves are a backbone of modern security. They’re used to secure websites, cryptocurrency transactions, app-based messaging, and so forth.
Remember that ‘adding’ two rational points P and Q could lead you to a third rational point R? Elliptic curve cryptography exploits this fact.
Choose a public elliptic curve, i.e. an elliptic curve whose equation is public, and a point G on it.
Pick a random secret number k — your private key.
Compute k.G, i.e. add G to itself k times. Let’s call the result Q. This is your public key.
As with all cryptography, you can share the public key (Q) but you must protect the private key (k).
Given G and Q, the task of finding k is called the elliptic curve discrete logarithm problem.
Even extremely powerful computers struggle to crack it. There’s no known efficient algorithm to solve it.
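Here’s a minimal Python sketch of the key-generation step described above, on a tiny textbook curve (y² = x³ + 2x + 2 over the integers mod 17, with base point G = (5, 1)). Real systems do the same arithmetic with primes hundreds of bits long, which is what makes the discrete logarithm problem intractable.

```python
import secrets

# Toy public parameters: the curve y^2 = x^3 + 2x + 2 over GF(17), base point G.
# Real ECC uses the same arithmetic over primes that are hundreds of bits long.
p, a = 17, 2
G = (5, 1)

def ec_add(P, Q):
    """Add two points on the curve; None stands for the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) is the point at infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # slope of the tangent
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # slope of the chord
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(k, P):
    """Compute k.P by repeated doubling and adding."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, P)
        P = ec_add(P, P)
        k >>= 1
    return result

k = secrets.randbelow(18) + 1  # private key; this toy group has 19 points, so k runs 1..18
Q = scalar_mult(k, G)          # public key
print("private k =", k, "public Q =", Q)
```

Recovering k from G and Q in this toy example takes a moment of trial and error; with a 256-bit prime it is believed to be computationally infeasible.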
This is why understanding the distribution of rational points on elliptic curves is the foundation of how we’re keeping secrets in the digital age.
The same difficulty that makes the conjecture so hard to solve is what makes elliptic curve cryptography secure.
Mathematicians have proven the conjecture for when the rank is 0 or 1 and only for certain curves. For rank 2 or higher and for all curves, the Birch and Swinnerton-Dyer conjecture remains one of the greatest unsolved problems in mathematics.