Non-ergodicity and diversity

Ergodicity is the condition wherein a sample is representative of the whole vis-a-vis some statistical parameter. An ergodic system is one that visits all possible states of its existence as it evolves. Axiomatically, a non-ergodic system is one that does not. Stuart A. Kauffman, a scientist at the University of Calgary, wrote on Edge a year ago:

… the evolution of life in our biosphere is profoundly “non-ergodic” and historical. The universe will not create all possible life forms. This, together with heritable variation, is the substantial basis for Darwin, without yet specifying the means of heritable variation, whose basis Darwin did not know.

This is a very elegant description of history, one that employs a dynamism commonly encountered in physics and in the language of physics. If the past encapsulated everything that could ever happen, it would be an uninteresting object of study because its peculiarities would all cancel out, leaving a statistical flatland in their wake. Instead, if the past contained only a specific set of events connected to each other in unique ways – i.e. exhibiting a distinctly uncommon variation – then it becomes worthy of study: why is it what it is and not something else? As Kauffman says, “Non-ergodicity gives us history.”

Though today I know that the concept is called ‘non-ergodicity’, I encountered its truth in a different context many years ago, when I wrote an article, published in Quartz, about how Venus could harbour life, and how that should encourage us to look for life on Titan as well. I had quoted the following lines from a 2004 paper to strengthen my point:

The universe of chemical possibilities is huge. For example, the number of different proteins 100 amino acids long, built from combinations of the natural 20 amino acids, is larger than the number of atoms in the cosmos. Life on Earth certainly did not have time to sample all possible sequences to find the best. What exists in modern Terran life must therefore reflect some contingencies, chance events in history that led to one choice over another, whether or not the choice was optimal.

Somehow, and fortunately, these lines have stayed with me to this day four years on, and I hope and believe they will for longer. They present a simple message whose humility seems only to grow with time. They suggest that even life on Earth may not be the best (e.g. most efficient) it can be after billions of years of evolution. Imagine the number of evolutionary states that the whole universe has available to sample – the staggeringly large product of all the biospheres on all the planets in all the time…
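The arithmetic behind the quoted passage is easy to check. A quick sketch, assuming the common estimate of roughly 10^80 atoms in the observable universe (the estimate is mine, not the paper’s):

```python
import math

# Number of distinct proteins 100 amino acids long, built from the
# 20 natural amino acids, versus an assumed ~10^80 atoms in the cosmos.
n_proteins = 20 ** 100   # exact, thanks to Python's arbitrary-precision ints
n_atoms = 10 ** 80       # rough standard estimate (an assumption here)

# 20^100 = 10^(100 * log10(20)), i.e. about 10^130
print(f"20^100 ≈ 10^{100 * math.log10(20):.0f}")
print(n_proteins > n_atoms)   # the sequence space dwarfs the atom count
```

Even this tiny corner of chemistry – fixed-length proteins from the standard alphabet – outstrips the material universe by some fifty orders of magnitude, which is the paper’s point about contingency.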

The search for a ‘perfect lifeform’ is not a useful way to qualify humankind’s quest. Against such cosmic non-ergodicity, every single alien species we discover could, and likely will, stand for its own world of contingencies just as peoples of different cultures on Earth do. Perhaps then our xenophobia will finally become meaningless.

A new discrimination

An article in KurzweilAI begins,

Plants could soon provide our electricity.

Why would anyone take this seriously? More than excitement, this line rouses a discerning reader to suspicion, and that suspicion is bound to centre on the word “soon”, implying the near future, imminently. I’m not sure which timescales people have in mind, but I’m sure we can agree 10 years sounds reasonable here. Will plants power your home in 10 years? Heck, in 50 years? It is stupendously unlikely. The suggestion itself – as embodied in that line – is disingenuous because it 1) overestimates feasibility at scale and 2) underestimates the amount of conviction, work and coordination it will take to dislodge the fossil-fuel, nuclear and renewable energy industries.

Indeed, the line that “plants could soon provide our electricity” begins to make sense only when its words are assessed individually instead of being beheld with the seductive possibilities the whole sentence offers. Could? Of course, they already do through the technology described in the article, called Plant-e. Plants? I don’t see why not; they are batteries of sorts, too. Provide? Plants are terrestrial, ubiquitous, very accessible, well understood and seldom dangerous. Our? Who else’s is it, eh. Electricity? Again, Plant-e has demonstrated this already, in the Netherlands, where it was pioneered. But cognise the sentence as a whole and you’re left with gibberish.

The article then claims:

An experimental 15 square meter model can produce enough energy to power a computer notebook. Plant-e is working on a system for large scale electricity production in existing green areas like wetlands and rice paddy fields. … “On a bigger scale it’s possible to produce rice and electricity at the same time, and in that way combine food and energy production.”

The emphasised bit (my doing) sounds off: it implies a couple of dozen kilowatts at best, whereas the article’s last line says, “In the future, bio-electricity from plants could produce as much as 3.2 watts per square meter of plant growth.” Either way, a solar panel with a tenth of the surface area produces about 250 W – comparable to the first claim and improving, and far better per unit area than the second claim. People around the world are already concerned that the world may not have enough nickel, cadmium and lithium to build the batteries to store this energy, and may not have enough land to build all the solar cells necessary to “provide our electricity”.
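The comparison is back-of-envelope but easy to make explicit. Assuming a typical 250 W panel of about 1.5 m² – a tenth of the 15 m² test bed; both the panel rating and its area are my assumptions, not figures from the article:

```python
# Areal power density: Plant-e's projection vs a typical solar panel.
# All inputs are the article's figures or rough assumptions, not measurements.
plant_e_density = 3.2        # W/m^2, the article's projected figure
solar_power = 250.0          # W, typical panel rating (assumption)
solar_area = 1.5             # m^2, a tenth of the 15 m^2 test bed (assumption)

solar_density = solar_power / solar_area   # W/m^2
print(f"solar: {solar_density:.0f} W/m^2, plant-e: {plant_e_density} W/m^2")
print(f"ratio: ~{solar_density / plant_e_density:.0f}x in favour of solar")
```

Under these assumptions solar comes out more than an order of magnitude denser per square metre, before even considering intraday variability or soil conditions.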

In this scenario, why should anyone give a fuck about Plant-e as an alternative worth one’s time? It is interesting and exciting that scientists were able to create this technology but its billing as a reasonable substitute for the commonly known sources of energy, and “soon”, suggests that this is certainly hype, and that the people behind this announcement seem to be okay with disguising an elitist solution as a sustainable one.

Second, said billing also suggests that there is – less certainly, but plausibly – a misguided, white-skinned belief at work here: that, notwithstanding details about intraday variability of power generation, soil conditions and such, agriculture and power consumption in the Netherlands are similar to those elsewhere in the world. But the social, economic and technological gap between these endeavours as they happen in Northwest Europe and Southeast Asia is so large as to suggest the article’s authors either have no clue about the socioeconomics of electric power or are at ease with wilfully disregarding it.

Announcements like this don’t harm anyone but they certainly offend the sensibilities of those forced to grow, grow, grow while on the brink of the worst of climate change. It is crucial that we keep innovating, finding new, better, more considerate ways of surviving impending disasters as well as reducing our deleterious footprint on this planet. Let us do this without suggesting that a nascent, untested (at scale) and currently infeasible technology may provide a crucial part of the answer where numerous other governments have failed.

Through this exercise, let us also awaken our minds to a new form of discrimination in the Anthropocene epoch – lazy, short-sighted, selfish thinking – and call it out.

Preprints don’t promote confusion – so taking them away won't fix anything

In response to my Twitter thread against Tom Sheldon’s anti-preprints article in Nature, I received more responses in support of Sheldon’s view than I expected. So I wrote an extended takedown for my blog and, of course, The Wire, pasted below.

In 1969, Franz J. Ingelfinger articulated a now-famous rule named after him in an attempt to keep the New England Journal of Medicine (NEJM), which he edited at the time, in a position to give its readers original and fully vetted research. Ingelfinger stated that the NEJM wouldn’t consider publishing a paper if it had already been publicised before submission or had been submitted to another journal at the same time. The Ingelfinger rule symbolised a journal’s attempt to recognise its true purpose and reorganise its way of functioning to stay true to it.

Would we say this is true of all scientific journals? In fact, what is a scientific journal’s actual purpose? First, it performs peer review, by getting the submissions it receives scrutinised by a panel of independent experts to determine the study’s veracity. Second, the journal publicises research. Third, it creates and maintains a record of the section of the scientific literature it is responsible for. In the historical context, these three functions have been dominant. In a more modern, economic and functional sense, scientific journals are also tasked with making profits, improving their impact metrics and making research more accessible.

As it happens, peer review is no longer the exclusive domain of the journal – nor is it considered to be an infallible institution. Second, journals still play an important part in publicising research, especially via embargoes that create hype, pointing journalists towards papers that they might otherwise not have noticed, as well as preparing and distributing press releases, multimedia assets, etc. Of course, there are still some flaws here. And third, the final responsibility of maintaining the scientific record continues to belong to the journal.

Too much breathing space

Pressures on the first two fronts are forcing journals to stay relevant in newer ways. A big source of such pressure is the availability of preprints – i.e. manuscripts of papers made available by their authors in the public domain before they have been peer-reviewed.

Preprint repositories like arXiv and bioRxiv have risen in prominence over the last few years, especially the former. They are run by groups of scientists – like volunteers pruning the garden of Wikipedia – who ensure that formatting and publishing requirements are met, remove questionable manuscripts and generally – as they say – keep things going. Scientific journals typically justify their access costs by claiming that they have to spend on peer review and printing. Preprints evade this problem because they are free to access online and are not peer-reviewed the way ‘published’ papers are. In turn, the reader who wishes to read a preprint must bear this caveat in mind.

This week, the journal Nature published a (non-scientific) article headlined ‘Preprints could promote confusion and distortion’. Authored by Tom Sheldon, a senior press manager at the Science Media Centre, London, it advanced a strange idea: that bad science was published in the press because journalists did not have “enough time and breathing space” to evaluate it. While Sheldon then urges scientists “to be part of these debates – with their eyes open to how the media works”, the more forceful language elsewhere in the article suggests that preprints should go, and that that will fix the problem.

There are numerous questionable judgments embedded here. Principal among them is that embargoes are the best way to publicise research – and this may seem obvious from the journal’s point of view, because an embargo functions like a pair of blinders, keeping a journalist focused on a journal-approved story and reminding her that she must contact a scientist because a deadline is approaching, after which all publications will ‘break’ the story. Embargoes aren’t the norm, of course; the Ingelfinger rule only says that the journal will be responsible for ensuring that whatever it publishes is good to go.

But with a preprint, there are no deadlines; there are no pointers about which papers are good or bad; and there is no list of people to contact. The journal fears that the journalist will fumble, be overcome with agoraphobia and, as Sheldon writes, “rushing to be the first to do so … end up misleading millions, whether or not that was the intention of the authors.”

It is obvious that the Ingelfinger + embargo way of covering research will produce more legitimate reportage more often – but these rules are not the reasons why the papers are reported the way they are.

High-profile cases in which peer review failed to disqualify bad and/or wrong papers – and in which papers’ results entered the scientific canon only for replication studies to completely overturn them later – are proof that journals, together with the publishing culture in which they are embedded, aren’t exactly perfect.

Some scientists have even argued that embargoes should be done away with because the hype they create often misrepresents the modesty of the underlying science. Others focused their attention on universities, which often feed on the hype created by embargoes to pump out press releases making far-fetched claims about what the scientists on their payrolls have accomplished.

In turn, journalists have been finding that good journalism is the outcome only when good journalism is also the process. Give a good journalist a preprint to work with and the same level of due diligence will be applied. Plonk a bad journalist in front of an embargoed news release and a preprint, and you will get shoddy work both times. It is not as if journalists suspend their fact-checking process when they work with embargoed papers curated by journals and reinstate it when dealing with preprints. A publication that covers science well will quite likely cover other subjects with the same regard and sensitivity – not because of the Ingelfinger rule but because of the overall newsroom culture.

Last line of defence

Moreover, an infuriating presumption in the Nature article is that the preprint flows as if by magic from the repository where it was uploaded into the hands of the “millions” now misled by it. Indeed, though it is annoying that the phrasing makes no room for a functional journalist who can step in, write about the paper and arrange for it to be publicised, it is simply frustrating that the journalistic establishment remains invisible to Sheldon’s eye even when we’re talking about an extra-journal agent messing up along the way.

It is the product of this invisibility – rather, a choice to not acknowledge evident work – that suggests to the scientific journal that it must take responsibility for ensuring all that it publishes is good and right. As a pathway to accruing more relevance, this can only be good for the journal; however, it is also a way to accrue more power, so it must not be allowed to happen. This is ultimately why taking preprints away makes no sense: journals must share knowledge, not withhold it.

By taking preprints away from journalists, Sheldon proposes to force us to subsist on journal-fed knowledge – knowledge that is otherwise impossible to access for millions in the developing world, knowledge that is carefully curated according to the journal’s interests and, most of all, knowledge that pushes the idea that the journal knows what is best for us.

But journals are not the last line of defence, even though they would like to think so; journalists are. That is how journalism is structured, how it functions, how it is managed as a sector, how it is perceived as an industry. If journalists lose the ability to look beyond papers approved by a journal, we lose our ability to question the journal itself.

The only portion of the Nature article that elicits a real need for concern is when Sheldon refers to embargoes as a means of safeguarding novelty for news publishers. He quotes Tom Whipple, science editor of The Times, saying that it is impossible to compete with the BBC because the BBC’s army of reporters are able to pick up on news faster. The alternative, he implies, is to preserve embargoes because they keep the results of a paper new until a given deadline – letting journalists from publishers small and large cover it at the same time.

In fact, if it is reform that we are desperate for, this is the first of three courses of action: to keep removing such barriers and make access more equitable. The second is to fix university press releases. The third is to stop interrogating preprints and start questioning publishing practices. For example, is it not curious that both Nature and NEJM, as well as many other ‘prestigious’ titles, rank almost as highly on the impact index as they do on the retraction index?

Update: The following correction was made to the Nature article on July 25 (h/t @kikisandberg). I guess that’s that now.


A detector for electron ptychography

Anyone who writes about physics research must have a part of their headspace currently taken up by assessing a new and potentially groundbreaking claim out of the IISc: the discovery of superconductivity at ambient pressure and temperature in a silver nanostructure embedded in a matrix of gold. Although The Hindu has already reported it, I suspect there’s more to be said about the study than is visible at first glance. I hope peer review will help the dust settle a little, but we all know post-publication peer-review is where the real action is. Until then, other physics news beckons…

Room-temperature superconductivity aside, odds are you haven’t heard of ptychography. In the field of microscopy, ptychography is a solution to the so-called phase problem. When you take a selfie, the photographic sensor in your phone captures the intensity of light waves scattering off your face to produce a picture. In more sophisticated experiments, however, information about the intensity of light alone doesn’t suffice.

This is because light waves have another property called phase. When light scatters off your face, the phase change doesn’t embody any useful information about the selfie you’re taking. But if physicists are studying, say, atoms, then the phase change can tell them about the distribution of electrons around the nucleus. The phase problem comes to life when microscopes can’t capture phase information, leaving scientists with only a part of the picture.
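A toy numpy sketch makes the phase problem concrete: two very different signals can share exactly the same magnitude spectrum, so a detector that records only intensities cannot tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Take any signal and compute its Fourier transform.
signal = rng.standard_normal(64)
spectrum = np.fft.fft(signal)

# Build a second spectrum with the SAME magnitudes but scrambled phases,
# then transform it back into a signal.
phases = rng.uniform(0, 2 * np.pi, 64)
scrambled = np.abs(spectrum) * np.exp(1j * phases)
impostor = np.fft.ifft(scrambled).real  # generally complex; real part shown

# A detector records |spectrum|^2 - identical for the two spectra...
print(np.allclose(np.abs(spectrum), np.abs(scrambled)))  # True
# ...yet the underlying signals are completely different.
print(np.allclose(signal, impostor))                     # False
```

The lost phases are exactly what ptychography sets out to recover.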

Sadly, this constraint only exacerbates electron microscopy’s woes. Scientists in various fields use electron microscopy to elucidate structures of matter that are much smaller than the distances across which photons can act as probes. Thanks to their shorter wavelength, electrons are used to study the structure of proteins and the arrangement of atoms in solids, and even to aid the construction of complex nanostructured materials.

However, the technique’s usefulness in studying individual atoms is limited by how well scientists are able to focus the electron beams onto their samples. To achieve atomic-scale resolution, scientists use a technique called high-angle annular dark-field imaging (ADF), wherein the electrons are scattered at high angles off the sample to produce an incoherent image.

For ADF to work better, the electrons need to possess more momentum, so scientists typically use sophisticated lenses to adjust the electron beam while boosting the signal strength to take stronger readings. This is not desirable: if the object of study is fragile, the stronger beam can partially or fully disintegrate it. Thus, the high-angle ADF resolution for scanning transmission electron microscopy has been chained to the 0.05 nm mark, rising to 0.1 nm for more fragile structures.

Ptychography solved the phase problem for X-ray crystallography in 1969. The underlying technique is simple. When X-rays interact with a sample under study and return to a detector, the detector records a diffraction pattern that contains information about the sample’s shape.

In ptychography, scientists iteratively record the diffraction patterns obtained from different angles by changing the position of the illuminating beam, allowing them to compute the phase of the returning X-rays relative to each other. By repeating this process multiple times from various directions, scientists end up with data about the sample that they can reverse-process to extract the phase information.
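The iterative loop can be sketched in a few dozen lines of numpy. This is a minimal PIE-style reconstruction on a simulated phase object, with the illuminating probe assumed known; all sizes, the Gaussian probe and the scan step are illustrative choices, not anything from the papers discussed here:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, step = 32, 8, 2          # object size, probe size, scan step (heavy overlap)

# Ground truth: a pure-phase object, probed by a Gaussian illumination spot.
true_obj = np.exp(1j * 0.5 * rng.standard_normal((N, N)))
y, x = np.mgrid[:M, :M] - (M - 1) / 2
probe = np.exp(-(x**2 + y**2) / (M / 2) ** 2)

positions = [(r, c) for r in range(0, N - M + 1, step)
                    for c in range(0, N - M + 1, step)]

# "Measured" diffraction data: only intensities |FFT|^2 are recorded, no phase.
intensities = [np.abs(np.fft.fft2(probe * true_obj[r:r+M, c:c+M]))**2
               for r, c in positions]

def data_error(obj):
    # Misfit between measured and modelled diffraction magnitudes.
    return sum(np.sum((np.sqrt(I) -
                       np.abs(np.fft.fft2(probe * obj[r:r+M, c:c+M])))**2)
               for (r, c), I in zip(positions, intensities))

# Reconstruction: at each scan position, enforce the measured magnitudes in
# Fourier space, then feed the correction back into the object estimate.
obj = np.ones((N, N), dtype=complex)
err0 = data_error(obj)
for _ in range(30):
    for (r, c), I in zip(positions, intensities):
        psi = probe * obj[r:r+M, c:c+M]
        Psi = np.fft.fft2(psi)
        Psi = np.sqrt(I) * Psi / (np.abs(Psi) + 1e-12)   # magnitude constraint
        update = np.fft.ifft2(Psi) - psi
        obj[r:r+M, c:c+M] += np.conj(probe) / np.abs(probe).max()**2 * update

print(f"data misfit reduced to {data_error(obj) / err0:.1e} of its initial value")
```

The overlap between neighbouring illumination positions is what makes the phases recoverable: each patch of the object is constrained by several diffraction patterns at once.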

Ptychography couldn’t be brought to electron microscopy straightaway, however, because of a limitation inherent to the method. For it to work, the microscope has to measure the diffraction intensity values with equal precision in all the required directions. “However, as electron scattering form factors have a very strong angular dependence, the signal falls rapidly with scattering angle, requiring a detector with high dynamic range and sensitivity to exploit this information” (source).

In short, electron microscopy couldn’t work with ptychography because these detectors didn’t exist. As an interim solution, in 2004, researchers from the University of Sheffield developed an algorithm to fill in the gaps in the data.

Then, on July 18, researchers from the US reported that they had built just such a detector (preprint), which they called an “electron microscope pixel array detector” (EMPAD), and claimed that they had used it to retrieve images of a layer of molybdenum disulphide with a resolution of 0.4 Å. One image from their paper is particularly stunning: it shows the level of improvement ptychography brings to the table, leaving the previous “state of the art” resolution of 1 Å achieved by ADF in the dust.


The novelty here isn’t that the detector is finally among us. The same research group (+ some others) had announced that it had built the EMPAD in 2015, and claimed then that it could be used for better electron ptychography. What’s new now is that the group has demonstrated it.

a) Schematic of STEM imaging using the EMPAD. b) Schematic of the EMPAD physical structure. The pixelated sensor (blue) is bump-bonded pixel-by-pixel to the underlying signal processing chip (pink). Source:

According to their 2015 paper, the device

consists of a 500 µm thick silicon diode array bump-bonded pixel-by-pixel to an application-specific integrated circuit. The in-pixel circuitry provides a 1,000,000:1 dynamic range within a single frame, allowing the direct electron beam to be imaged while still maintaining single electron sensitivity. A 1.1 kHz framing rate enables rapid data collection and minimizes sample drift distortions while scanning.

For the molybdenum disulphide imaging test, the EMPAD had 128 x 128 pixels, operated in the 20-300 keV energy range, and possessed a dynamic range of 1,000,000-to-1 with a readout speed of 0.86 ms/frame. The scientists also modified the ptychographic reconstruction algorithm to work better with the detector.
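The quoted specs are internally consistent, as a quick check shows (treating the 0.86 ms readout as the full frame cycle, which is my simplifying assumption):

```python
# Cross-checking the EMPAD figures quoted above.
readout = 0.86e-3                    # s per frame (2018 imaging test)
frame_rate = 1 / readout             # frames per second
print(f"{frame_rate:.0f} Hz")        # ~1163 Hz, in line with the 1.1 kHz
                                     # framing rate of the 2015 paper

pixels = 128 * 128                   # 16,384 pixels per frame
pixel_rate = pixels * frame_rate     # pixel reads per second
print(f"~{pixel_rate / 1e6:.0f} million pixel reads/s")
```

Each of those reads must span the 1,000,000:1 dynamic range, which is what made such a detector hard to build in the first place.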

Redshift and eclipse

I am thoroughly dispirited. I had wanted to write today about how fascinating it is that we have validated Einstein’s theory of general relativity for the first time in an extreme environment: in the neighbourhood of a black hole. The test involved the detection of an effect called the gravitational redshift, whereby light climbing from a region of lower to higher gravitational potential appears redshifted. In other words, light seen moving from an area of stronger gravitational field to an area of weaker gravitational field appears to be redder than it actually is, if the observer is sufficiently far from the source of this field. The observation of this redshift is doubly fascinating because it is also an observation of time dilation in action.

Astronomers using the European Southern Observatory’s Very Large Telescope (VLT) took 26 years to make this check; it was completed and announced yesterday, July 25. The source of the gravitational potential was the black hole at the Milky Way’s centre, called Sagittarius A*, and the source of starlight was a stellar body known only as S2. Triply fascinating is the fact that the VLT observed S2 swinging by Sgr A* at a searing 25 million km/hr. Phew!
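The size of the effect is easy to estimate with the weak-field formula z ≈ GM/(rc²). The black-hole mass and pericentre distance below are rough literature values I am assuming for illustration, not numbers from the announcement:

```python
# Rough estimate of S2's gravitational redshift at pericentre.
# All inputs are approximate literature figures (assumptions).
G = 6.674e-11          # m^3 kg^-1 s^-2, gravitational constant
c = 3.0e8              # m/s, speed of light
M_sun = 1.989e30       # kg, solar mass
M = 4.0e6 * M_sun      # Sgr A* mass, ~4 million solar masses
r = 1.8e13             # m, S2's pericentre distance (~120 AU)

z = G * M / (r * c**2)            # weak-field gravitational redshift
print(f"z ≈ {z:.1e}")                                   # a few parts in 10^4
print(f"equivalent velocity ≈ {z * c / 1e3:.0f} km/s")  # ~100 km/s
```

A shift of roughly a hundred km/s equivalent is comfortably within the reach of the VLT’s spectrographs, which is why S2’s pericentre passage made such a clean test.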

But through all this, I am distressed because of an article I spotted a few minutes ago on NDTV’s website, about how we must not eat certain foods during a lunar eclipse – given the one set to happen tomorrow – because they could harm us. I thought we had been able to go a full day without a mainstream publication spreading pseudoscientific information about the eclipse, but here we are. I weep for many reasons; right now, I weep most of all not for the multitude of quacks we inhabit this country with but for Yash Pal. And I wish that, like S2, I could escape this nonsense at 3% of the speed of light when it becomes too much.

Silly arguments to restrict access to preprints

Tom Sheldon, a senior press manager at the Science Media Centre (SMC), London, had an interesting proposition – at least at first – published in Nature on July 24. The journal’s Twitter handle had tweeted it thus: “Do you think publishing on preprint servers is good or bad for science?” Though the question immediately set off alarm bells, I thought that like many news reports and magazine features these days, perhaps the tweet was desperate to get a reader interested in what was going to be a more nuanced argument. I was wrong.

Though the following line comes much farther down the article, it deftly encompasses its central – and misguided – animus: “How can we have preprints and support good journalism?” There is no contradiction here but, as we’ll see, there is a strong reflection of Nature‘s own tendency to publish ideas for their glamour.

As much as I’d welcome changes to the preprint ecology that would always make it easier for journalists to report on a paper, it’s not the preprint’s fault if a story is found to be wrong or misleading. Such a thing could only be because the journalist hasn’t done their due diligence – especially in mid-2018, when the pursuit of truth(s) has been gripped by post-truth, fake news, false balance, discussions on the “view from nowhere”, etc. For example, Sheldon writes,

Imagine early findings that seem to show climate change is natural or that a common vaccine is unsafe. Preprints on subjects such as those could, if they become a story that goes viral, end up misleading millions, whether or not that was the intention of the authors.

A lazy journalist will be lazy. A bad journalist will misrepresent a paper if they have to. A good journalist will stop to check, especially if they are aware that climate change carries a 95% consensus among scientists and that vaccines have often been misreported on in the recent past. Such awareness (what many in India would call ‘general knowledge’) can go a long way. It is why journalists are expected to consume the news as much as they would like to be involved in producing it. And preprints are not going to improve or worsen this situation.

Sheldon continues,

I’ll admit that we do not yet have examples of harm from such stories, but this is probably because — at the moment — only a tiny fraction of preprints cover health-related or controversial fields.

I’m not sure how true this is. More importantly, journalists constantly misrepresent even peer-reviewed papers, on health or other subjects. Quoting @avinashtn: “Andrew Wakefield‘s MMR vaccine ‘study’ was peer-reviewed, as was arsenic-based DNA. Nature is being disingenuous because preprints will hurt them.” Tell you what, I buy it because it is eminently possible.

It is also funny here that, in its pursuit to be seen as both selective and recognised as an identifier of paradigm-defining research, Nature often publishes research papers that are more spectacular than accurately representative of science as it is. To quote Björn Brembs, a professor of neurogenetics at the Universität Regensburg, Bavaria, and an important voice in the global open access movement, “Nature is among the group of journals which stand out as publishing the least reliable science” – elaborated here and here.

I also appreciate Sheldon’s reaching out (as in the excerpt below) to journalists, but I don’t get the part where it is implied that embargoes give journalists more time to prepare for a story than preprints do. How is that? And this is a problem only if someone is restricting access or preventing journalists from soliciting and publishing independent comment – which Sheldon admits to, in a different context, in his article. (Ivan Oransky writes in a just-published Embargo Watch post that this is called a “close-hold embargo”, used to encourage stenography.)

It is not enough to shrug and blame journalists, and it is unhelpful to dismiss those journalists who can accurately convey complex science to a mass audience. Scientists need to be part of these debates — with their eyes open to how the media works. Journalists do include appropriate caveats or even decide not to run a story when conclusions are tentative, but that happens only because they have been given enough time and breathing space to assess it. If the scientific community isn’t careful, preprints could take that resource away.

It would be best for everyone involved (although not the journals) if we set aside preprints and fixed university press releases instead. Peer-review doesn’t always point to good science and journals aren’t the only ones to perform it. Other scientists do it in the open through post-publication review and journalists do it by enlisting independent scrutiny.

If there is any concern that preprints are less legitimate than papers published by scientific journals post-peer-review, consider two things. Most often, peer review isn’t the first time a scientist shows their paper to an independent expert. And ‘updating’ preprints – at least on arXiv – is something scientists want to avoid; it may not be as bad as issuing a correction to or retracting a published paper, but it carries its own implications. As a result, it should be okay for scientists to issue press releases, or simply notifications, along with their preprints, and for organisations like the SMC, where Sheldon works, to make it easier for journalists to reach out to independent experts.

The part I found the most convincing about Sheldon’s argument is this:

Another risk is the inverse — and this one could matter more to some researchers. Under the preprint system, one intrepid journalist trawling the servers can break a story; by the time other reporters have noticed, it’s old news, and they can’t persuade their editors to publish.

There have been cases in which a preprint that garnered news stories got a second flurry of coverage when it was published in a journal. But generally, the rule is ‘it has to be new to be news’. Reacting to our blog, Tom Whipple, science editor of The Times in the United Kingdom, tweeted: “I’m not sure how to keep a newspaper in profitable existence that decides to give people news they’ve already read on the BBC.”

… but I have questions still. Dear editors, who are you competing with and why? Is the BBC getting everything right? Is the BBC even covering everything from all angles? I feel like there’s some context missing here. A cursory search on Google for Whipple’s comment only turned up Sheldon’s article in Nature. I doubt Whipple’s entire comment was that single line because, off the top of my head, it anticipates one of only two ways ahead: scale or go niche. The latter is much more effective as a strategy to take on the BBC with.

Edit, July 27: Whipple’s Twitter thread about BBC and embargoes in general is here. There’s also a Tom Chivers tweet in there that I largely agree with – as does Whipple – and which makes a point somewhat similar to mine, which is that even if journalists are finding it harder to compete with each other, taking away preprints isn’t going to fix anything. //

The bigger point is to not throw the baby out with the bathwater – to not push an issue that has demonstrably minimal late-downstream effects back upstream, where the given solution (of preprints) is working perfectly fine. But if you’re considering taking away preprints because incompetent journalists are screwing up, the problems as you’re perceiving them are going to get a lot worse – in ways too numerous, and too obvious, to delineate here. More access is always better.

The Tooth

I went to the most terrifying place in the world today: the dental clinic. I’d woken up this morning with a sharp pain under my right lower jaw and, soon enough, I realised it was time to get rid of the wisdom tooth – a divorce I’d been putting off for a few months for fear of the pain. I’d had five teeth extracted as a kid about 15 years ago, and the last of these teeth had been plucked out shortly after the local anaesthetic had stopped working. The trauma of that incident has stayed with me, and resurfaced in full glory this morning.

I made an appointment via Practo at a clinic nearby – the reviews seemed nice – and got there at 12 pm. I met with Dr B, who seemed really nice and didn’t offer any gratuitous advice when I told her I smoke. I liked her immediately. We took a quick X-ray and I was told that my wisdom tooth on the right had to go, and right away because an infection had developed around it. I told Dr B about my traumatic experience having teeth pulled. She promised me she’d keep it completely painless. And she did.

But where she failed – and where most doctors I think would fail – is in making her patient feel less dehumanised. As soon as the X-ray was taken, she began to confer with another doctor in the room in hushed tones about what was going on with my tooth. Their dialogue was speckled with strange terms, and I couldn’t tell the difference between when they were talking about my teeth and when they were talking about the shape of my jaw. But I surmised it wasn’t looking good.

I had to interject repeatedly to ask what the X-ray was showing. If I didn’t ask, they wouldn’t bother. Even when the extraction procedure was about to begin, I was asked to recline, various implements were thrust into my mouth and a nurse stood on standby. “If you want me to stop for any reason, just raise your hand,” Dr B said. Just as she was about to poke a pointy thing into my mouth, I raised my hand. She was surprised. I asked what it was they were going to do. She answered, and then it began.

I learnt later that my tooth’s roots were strong, so the damned thing had to be broken up first and then removed piece by piece. The procedure 15 years ago had involved just one implement – a tool I’ve always called the Motherfucker. Dr U had plunged it into my mouth, used it to wrangle with the misbehaving tooth and, after a few seconds, pulled it out. This time, with Dr B, the Motherfucker only showed up 45 minutes after we’d started, and in two avatars. Motherfucker I was the cow horns #23 forceps and Motherfucker II was somewhat like a lower anterior forceps.

We had started off at 12.10 pm with two syringes of a local anaesthetic, topping it off an hour later with a third. I was told that one side of my mouth would go numb. “You won’t feel any pain, you will just feel the pressure of my hands as I’m working,” Dr B said. But somewhere after the third syringe, I lost the ability to tell pain apart from pressure. I was lost in my head, flipping through scenes from old Tamil movies looking for anything with a dentist in it. Nothing. Annoyingly enough, the scene that showed up most vividly, and repeatedly, in my mind was Andy Serkis singing ‘Don’t hurt me’ to Martin Freeman in Black Panther.

Around 1.15 pm, Dr B stepped away from my face, shaking her head in exasperation. Her colleague came over to ask what had happened, while the nurse – who was also the cleaning lady at the clinic – stepped closer to peer into my mouth, a big smile on her face. Dr B said then, “This is a bone-cutting case.”


The fuck.

Did you just say?

As it is, I have very little idea about whatever is going on. The grotesque zircon-tipped tools passing in and out of my mouth aren’t helping me calm down. (One of them, called a Coupland’s elevator, is what I’m going to call the Little Motherfucker.) The doctor in general doesn’t feel compelled to tell her patient what it is that she’s doing, leaving me to guess for myself. And the one thing that’s said out loud, sans any prompt, is that I’m a “bone-cutting case”. Wonderful. Obviously, right then, I couldn’t stop thinking about The Bone Collector.

After a new set of tools had been assembled and Dr B bent down to inspect the tooth, I raised my hand. She looked at me, I smiled, she smiled back, and explained: “There’s a hard layer of bone around the tooth that I’ll have to cut before I can extract the tooth.” I nodded in satisfaction. The nurse swiftly introduced a suction pipe and began to drip saline solution from a needle onto the tooth, Dr B planted wads of cotton in my cheek and placed a bite block between my teeth on the left, and we got started again – this time, with a drill called Bone Cutter (by everyone). As it raged against my tooth with noises like R2-D2 being tortured, it felt like the industrial revolution was happening inside my skull, replete with the Kafkaesque style of oppression.

At the end of two hours, my tooth had been chipped into four pieces, each then scraped-and-plucked out in a bloody mess. I don’t know if I was billed for the gloves but I knew they had been changed thrice. Dr B’s hands were trembling as she sutured the wound. Once she was done, her colleague patted her on the back with a triumphant smile. “Well done,” she said, “you handled the case very well.” The case wasn’t pleased to hear this but was glad that it was over all the same.

Doctor-to-patient communication plays an important role in reminding physicians that their wards are people just like themselves. When it isn’t there, it signals that the doctor doesn’t think the patient needs to know. This in turn makes it harder for patients to make decisions and, more generally, to retain their sense of agency, because they don’t have the information necessary to act rationally (from the doctor’s POV). Another way this problem reared its head today was in the form of pain. Most of the time, Dr B would heed my raised hand and pause for a minute or so, but every now and then, when she was nearing the end of a step of action, my raised hand would only draw a “Just hang in there”. So the trauma from 15 years ago hangs in there, too.

The silver lining is that I will likely not have to undergo this hell again.

The surprises

“Nature is Lovecraftian” … is it? The literature of H.P. Lovecraft is freaky, at odds with the more conventional, less morally degenerate canon of English literary fiction, irrespective of the period from which the latter is selected. To say “nature is Lovecraftian” is to extend to zoologia the out-of-place characteristic we associate with Lovecraft’s characters and their plots. This is not fair. Many would think that to say “nature is Lovecraftian” is to profoundly underestimate what an animal is capable of, since underestimation is the source of Lovecraft’s surprise, and they would miss the point. Nature surprises us not because we usually expect very little from it but because we continually choose to obsess over the “more” normal, whatever that is, and sideline the existence of the “less”. It is not the surprise of underestimation – which fiction has always been better at mustering – but the surprise of ignorance.

Of course, given two sets N and A (for normal and abnormal behaviour), N will always be larger than A by definition. That is also how we would approach surprise: to find a member of A in N. One could argue that nature is indeed Lovecraftian because it also abides by the rule that N > A. However, I would disagree: we have no means to prove this because, beyond statistical considerations and even subjectivity, nature is herself the architect of the human sense of beauty. We can only find in nature that which we are already looking for; our composition of the sets N and A will be guided by nature’s hand. Instead, it might be more fruitful to escape the biases of the human condition – perhaps by taking recourse to the scientific method – and arrive at the inevitable conclusion that animals do what they have to do to get their meal, and that we frequently only encounter and remember those that go about their lives in routines that we approve of.

It is no mistake whatsoever to attempt to literalise nature in terms of anthropocentric qualities, but it might be one to liken our relationship with nature to the human psyche’s definitive relationship with Lovecraft’s stories. Perhaps… is nature Doylean?

‘Black Panther’ – Two thoughts

I watched Black Panther again today. Two things came to mind.

First: When, by the end of the film, T’Challa and Wakanda realise that they can’t keep their technology a secret anymore, it is – among many things – an act of taking charge of their nation’s narrative. By doing so, T’Challa and his advisers ensure that others may not tell Wakanda’s story in a way the state does not wish it to be told. This is valuable advice, especially for the Indian Space Research Organisation (ISRO). This organisation’s exploits are blown out of proportion more often than those of any other governmental institution in India, except perhaps the Army. However, ISRO’s dysfunctional public outreach enterprise has never raised a finger against those who would misrepresent its activities or intentions. It must do so, and take charge of the narrative so that those less informed don’t.

Second: In the first half of the film, Erik Stevens (later, N’Jadaka) casually reveals that he has spiked the coffee being sipped by the curator of a museum in London. In most English and Tamil films to date, the nitty-gritty of a heist is spelled out to the audience by featuring the scenes in which each step was performed. Black Panther doesn’t bother, partly because it is not a heist film but largely because it could bank on its audience to piece together what might have happened, for which it could thank all the heist films released thus far (esp. the Ocean’s trilogy). From my POV, the film used ‘tell’, not ‘show’ – which is, coming from a journalist, a bad way to write a story – to good effect. The first Tamil film I saw that was similarly innocuous about the details of its caper was Aayirathil Oruvan (‘One man in a thousand’), 2010.

Gallium’s dance

A wide swath of fundamental physics and chemistry is defined by the pursuit of the ground state. Since we began elucidating the structure of the atom in the 1910s, much of what we know about how particles, quasiparticles and molecules behave can be described by a desire among each of these entities to lose all the energy that they can in the given circumstances and simply exist with the bare minimum. This is why the ground state is both an illuminating and fascinating object of study: the former because it is what particles are always tending towards and the latter because it is the ultimate destiny of the ergic constituents of our world, symbolising a kind of particulate amor fati. The ground state is the home to which all matter seeks to return; by studying the home and the forces that keep it, we can explain to a large extent the nature of the things that want to return there.

“The ground state is interesting because small excitations above it are what we effectively mean by (quasi)particles,” says Madhusudhan Raman, a theoretical physicist. “That is, when you find the right variables in which to study small excitations of the ground state, you have understood your physical system perturbatively.”

For example, the electrons around an atomic nucleus are forbidden from occupying the same… the same what? “Might I suggest an analogy?” Raman butts in. “Electrons are like home-owners: they may live in the same town, or even on the same street, but no two electrons live together. That is the exclusion principle.” And all the electrons in an atom are concentric vis-a-vis the atomic nucleus. By asking why they would do this when they could all simply journey around the nucleus in an orbit that affords them the lowest energy possible, we come upon the work of Wolfgang Pauli, Paul Dirac, Enrico Fermi, among others. By wondering if other particles in other systems are subjugated similarly, we come upon the work of Satyendra Nath Bose and Albert Einstein, among others.

Why, fast-forward to 2012, when the Higgs boson was discovered because the unstable amount of energy it embodied ‘decayed’ into clumps of lighter, longer-lived and so more observable particles. If particles didn’t behave this way, the Large Hadron Collider would be completely useless – and we wouldn’t have had last week’s exciting blazar neutrino discovery either.


A choice example from the realm of physics is the superconductor, which – as we all know – is a material that can conduct an electric current with zero resistance. One way to explain this phenomenon is by taking recourse to BCS theory, which imagines the electrons in a superconductor to have joined up in specific circumstances to form so-called Cooper pairs, effectively getting transformed from being fermions of higher energy to bosons of lower energy. And bosons are exempt from Pauli’s exclusion principle, free to form a phase of matter called a condensate at low temperatures. This condensate sea of electrons is what conducts the electricity. “This, incidentally, is a good example of finding the right ground state,” Raman said.

Superconductors are comparable to time crystals, hypothetical crystals whose particulate constituents would be in motion in their ground state. The principal difference between them is that time crystals exhibit the spontaneous breaking of time-translation symmetry (as explained here), whereas superconductors don’t. However, superconductors are still cooler – not least for their befuddling variety and their involvement in kooky experiments to uncover anomalous quantum effects and ‘artificial’ particles.

Unfortunately, all these materials and their properties are very difficult to engineer and then observe in action. In most cases, the observation itself consists of watching numbers on a screen or reading pre-recorded data. Compare this to how exciting it would be to observe an object oscillating between either side of its ground state in a classical setting. Of course, this also would be hard to engineer because the object would have to act against gravity, which takes a lot of work, which in turn takes a lot of energy (think of Newton’s cradle). Perhaps it can be made to work if we went just a little smaller, to a scale where the object is heavy enough to be affected by gravity but also light enough to be affected by one of the other fundamental forces, preferably in the form of a controllable electrochemical reaction.

This is somewhat the case with the mercury heartbeat experiment. Place a drop of mercury in a small pool of acid with an iron nail at a short distance from the drop. The acid strips off electrons from the mercury atoms it comes in contact with, ionising them and forcing them to repel each other. This causes the mercury drop to flatten out – and make contact with the iron nail. The nail has enough negative electrochemical potential, i.e. functions as an anode, to deionise the mercury atoms and cause them to pull themselves together again thanks to surface tension. As a result, the drop de-flattens into a sphere-like shape, loses contact with the iron nail and starts the cycle all over again.

Last week, scientists from China announced that they’d done something similar with liquid gallium – but with more interesting effect. They filled a petri dish with sodium hydroxide, placed 50-150 microlitres of liquid gallium at its centre and then set up a graphite fence around it. The fence would act like the nail in the mercury experiment if it was positively charged with a DC current. When the dish was tilted slightly, the gallium flowed down towards the fence and came in contact. Its surface became electrified, i.e. gallium atoms lost electrons to become ions, and the surface tension vanished. As the drop spread out over the incline for more of it to come in contact with the fence, the amount of electrification also increased, eventually causing the liquid and the fence to repel each other so much that the former moved back up the incline – cutting contact, reacting with the base to regain electrons, restoring surface tension and flowing back down the incline.
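The cycle described above – electrification builds on contact, surface tension collapses, repulsion drives the drop back, the hydroxide bath restores the charge, gravity brings it down again – is, in essence, a relaxation oscillator. Here is a minimal toy sketch of that idea; all rates and thresholds are invented for illustration and are not taken from the Chinese group’s paper:

```python
# Toy two-state relaxation oscillator, loosely inspired by the gallium
# 'heartbeat'. Units and parameter values are arbitrary (illustrative only).

def simulate_heartbeat(steps=2000, dt=0.01, charge_rate=1.0,
                       discharge_rate=2.0, q_high=1.0, q_low=0.1):
    """Simulate charge-discharge cycles: the drop charges while touching
    the fence, retreats when repulsion wins, and discharges until gravity
    pulls it back down the incline."""
    q = 0.0              # surface charge on the drop (arbitrary units)
    touching = True      # drop starts in contact with the graphite fence
    beats = 0            # completed charge-discharge cycles
    trace = []           # charge history, one sample per step
    for _ in range(steps):
        if touching:
            q += charge_rate * dt       # electrification builds on contact
            if q >= q_high:             # repulsion overcomes the incline
                touching = False
        else:
            q -= discharge_rate * dt    # drop deionises, tension returns
            if q <= q_low:              # drop flows back down the incline
                touching = True
                beats += 1
        trace.append(q)
    return beats, trace

beats, trace = simulate_heartbeat()
```

The point the sketch makes is the same one the experiment makes: the oscillation frequency is set by the charging and discharging rates – here, by `charge_rate` (a stand-in for the applied DC voltage) and `discharge_rate` – so tuning the voltage tunes the heartbeat, which is what distinguishes the gallium system from the mercury one.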

It is an abusive relationship. The liquid gallium is forced to oscillate between being a droplet at the centre of the dish and being a pancake in contact with the fence, whereas all it would like to do is not have the incline and just lounge on its basic bed. Sadly, it is not going to have its paradise anytime soon because the Chinese team found that the gallium’s ‘heartbeat’ movement up and down the incline could be controlled by the amount of DC current supplied – whereas the mercury’s ‘heartbeat’ throb couldn’t be controlled by the iron nail. In their paper, the Chinese group writes,

A comparatively special feature of the gallium-based liquid oscillator is that the electrochemistry allows the beating to be activated or deactivated just using an applied DC voltage. … Without the applied voltage, the drop docks with the inner side of the electrode due to the electrode inclination. The voltage causes the drop to self-actuate and a stable periodic motion is obtained soon afterward. … The oscillations stop after the voltage is removed. The motion can end abruptly; although in some cases, slower irregular beats persist for a few more cycles after the voltage is removed, indicating stored charges on the drop. Despite some background mechanical vibrations in the apparatus, the liquid metal itself shows a behaviour that is self-correcting and self-regulating, governed by a well-defined characteristic frequency… This indicates that the phenomena occur at a steady-state frequency that is relatively robust against mechanical perturbations.


Numerous reports (listed below) have appeared on the web discussing the potential of this gallium pulse to power robot muscles of the future – which is to reduce the quasi-sublime beauty of what is happening here to the pithiness of a battery, and move on. But don’t move on just yet.

Source: Altmetric

The transmission of forces and the progression of a phenomenon happen faster in the quantum realm than in the classical one. Moreover, the phenomena also seem ‘cleaner’ in that there is no arbitrary, anthropogenic intervention apart from the preservation of certain state variables. For example, maintaining a cryogenic temperature is necessary for a superconductor to come to life – but the performance of superconductivity does not demand continuous human intervention in the form of, say, an iron nail or an electrified graphite fence.

Of course, superconductors are a cherry-picked example, possibly even a flawed one because the line between human intervention and the need to preserve a functionally conducive environment vanishes completely in many other examples. One is the discovery last year of Majorana modes in a topological superconductor: the material in question does not exist in nature, is almost impossible to create by accident and can only be built by an intelligent species. Given this, did the scientists discover the Majorana modes or did they invent them?

Classical examples, on the other hand, don’t present such conundra, at least not as often as their quantum counterparts do. When gravity lords over the other fundamental forces, it is easier to tell natural occurrences apart from synthesised ones – just as easily as one can differentiate between greatness and transcendentalism. To illustrate how, consider two objects that behave strangely at or near their ground states: the Cooper-pair condensate and the liquid gallium/sodium hydroxide/graphite ensemble. The condensate has zero viscosity and can keep flowing forever. There is here a deeper alteration of the substance’s nature, so much so that the essence of the sum is not the essence of its parts. But this is not so with liquid gallium, which in comparison is dramatic prose but prosaic nonetheless.

The implicit inferiority of the gallium and mercury examples is further borne out by the introduction of an incline and a nail. The petri dish had to be inclined at some angle to kickstart the experiment, the nail had to be positioned at a short distance from the droplet to jumpstart the throbbing. Such considerations are arbitrary – and they’re arbitrary because their precision is inconsequential. Perhaps the petri dish could be tilted at 45º if enough current is supplied to the graphite corral. Perhaps the iron nail could be placed 10 cm away from the mercury if there is enough mercury and enough electrolyte. However, a superconductor just won’t superconduct above its critical temperature, no matter how many electrons are available or what the shape of the material is.

There is a fragility that makes mathematical order easier to summon, and behold, out of the chaos of reality, a sort of principled existence that draws sharper lines between ground states and excited states in a way that gravity never aspires to in its stabler demesne. This is certainly one reason why we choose to be interested in quantum mechanics even as we keep our didactic metaphors in the classical domain. The quantum offers the asymptotically perfect realisation of natural beauty and the classical offers a crude grammar to translate between physics and aesthetics. The ground state, of course, is the pursuit that unites them both.

The Wire
July 21, 2018

That monthly reminder…

(I can speak only for myself here) I certainly seem to be needing a monthly reminder that to focus on pure science as a journalist is not in any way an abdication of one’s responsibilities as a citizen of India. The more forceful the reminder – i.e. the stronger the argument made – the longer it lingers in memory, in my consciousness, but these days its lifespan seems limited to 30 days at best.

Why is this reminder necessary? As people involved with an industry founded on the pursuit of truth, it’s important to know that what we’re doing (individually) is relevant and in the public interest. This compulsion frequently, and easily, supersedes personal interest.

This morning, I wanted to write about dynamic equilibrium in a droplet of liquid gallium trapped on a positively charged graphite ring. I thought it was cool – but there was the overwhelming sense that I could be spending my time and words better. And if you read the newspaper every day, you know that ‘better’ is applied science, science policy, administration, women in STEM, higher ed, public/private GERD, research misconduct, faculty hiring, IoEs, etc.

It’s very difficult to hold in your mind the importance of being interested in, and even focusing on, fundamental research when there is very little, if any, public dialogue or public interest in it.

If you broach it, there will be zero immediate validation. It will always be contested, by the people and many scientists alike. A debate like this may be good in the bigger scheme of things – but in the absence of any sort of go-to resource to top up your conviction with on this line of argument, support for non-applied science remains islanded, devoid of opportunities for consensus. In other words, there is NIL institutional motivation for writing about non-applied scientific research.

I’ve personally grown tired of resorting to complex arguments about research always paying off in the long run to convince people that it’s important. History and economics together make nuanced suggestions about the “right” course of action but their careful study is like the climate, whereas I’m talking about the weather here.

One kind of argument that works with Left liberals who say “we have finite resources and we should put them to best possible use” is to offend their intellectual desire to negate the Modi govt’s policies. So I reply: “we have finite resources because the govt isn’t investing enough, and you choosing to ‘spend it wisely’ is no different from buckling under pressure, preparing to legitimise govt underspending by letting it affect your actions”.

Obviously this isn’t an objectively good argument because it only works when the govt and one particular political class is vehemently at odds. Instead, what we need is an ‘all-weather’ argument that works irrespective of one’s moralities. In my (new) case, that argument is “BECAUSE IT’S FUCKING COOL!”

It’s clearly not the best argument, it’s not even independent of my morals, etc., but it’s the argument that I need to just work. And by all means it should, because what’s life without “wow”? I also realise my privilege here in that I’m a full-time science journalist with incredible freedom over what I write on. But somehow this acknowledgment feels similar to expecting someone to thank nutjobs for not lynching them to death.

Then again, I’m also uncomfortable with being given a responsibility to make people go “wow” all the time. I would edit the mandate to say – as @anilananth said – “You’re on science’s side”, and add “while ‘wow’ is good, it’s the road to ‘wow’ that’s really cool.”

I hope you’ll quickly see a meta-problem here. If you ask any journalist why covering politics is important, the minimum viable answer is “Because.” Ask them why writing about ‘heartbeats’ in gallium is important, and it’s never just “Because.” It’s always something longer, and deliverable in full only to someone who already professes interest and has time. In other words, I need a reason to write about non-applied science whose labour cost-of-rationalisation is comparable to that for politics or business journalism.


The evolution of doubt

On Twitter today, @thattai published a short thread about how framing the ‘debate’ about cellphone radiation harming biological tissue in terms of ionising versus non-ionising radiation is not a good idea, because even non-ionising radiation (called so for its inability to strip electrons from atoms) can precipitate biochemical effects by interacting with energy reservoirs in the body.

Although he tried to be extra careful, repeating throughout his thread that studies thus far have been inconclusive, his last tweet advised people against sleeping with their phones close to their heads. At this point, @avinashtn intervened, with interesting consequence. @avinashtn said that until scientists were able to come to a definitive (in a relative sense of the term) conclusion, guiding lay people towards a certain course of action was very risky – especially, in his view, if that course would cause them to fear cellphone radiation.

This is obviously a legitimate concern and a major part of the modern scientific zeitgeist: even when scientists have been able to reach consensus over the safety of X or Y product, people have feared that product because they have effectively been taught to do so. Examples include GMOs, vaccines and bisphenol A. The opposite is also true, such as with (some instances of) climate change communication, fats and – what has been my go-to case study thus far – cellphone radiation. That bad news spreads faster on social media doesn’t help.

However, in a time when entire sections of scientific inquiry are under scrutiny for their empirical methods and conclusions, scepticism threatens to transform into its more villainous avatar: cynicism. Constantly questioning whether X or Y is still safe is what will enable us to keep up with the times – but if scepticism curdles into cynicism, we will let doubt consume us instead of standing on the firmer footing of belief (and faith).

In other words, by advocating wariness – as @thattai wishes to do with cellphone radiation – or non-wariness – as @avinashtn wishes to do – we seem to be at a crossroads that will determine the level of public trust in science, particularly in the time of the replication crisis but also (rather more importantly) of shrinking research funding, weaker public institutions and diminishing instruments to ensure public accountability.

Extrapolating further, it will be interesting to explore how the rise of nationalism around the world has transformed the place of doubt in our daily lives. And even further: to ask what history can teach us about the place of inconclusiveness in society such that we can moderate its place in the public psyche and increase trust in science.

For now, I stand against being wary of cellphone radiation, but I hope a broader view of science and scepticism over space and time can provide a more substantial – and unavoidably nuanced – answer about what would be the better position to take.

Featured image: Straphangers on a train looking intently into their phones. Credit: Hugh Han/Unsplash.