Ergodicity is the condition wherein a sample is representative of the whole vis-a-vis some statistical parameter. An ergodic system is one that visits all possible states of its existence as it evolves. Axiomatically, a non-ergodic system is one that does not. Stuart A. Kauffman, a scientist at the University of Calgary, wrote on Edge a year ago:

… the evolution of life in our biosphere is profoundly “non-ergodic” and historical. The universe will not create all possible life forms. This, together with heritable variation, is the substantial basis for Darwin, without yet specifying the means of heritable variation, whose basis Darwin did not know.

This is a very elegant description of history that employs a dynamism one commonly encounters in physics and the language of physics. If the past encapsulated everything that could ever happen, it would be an uninteresting object of study because its peculiarities would all cancel out, leaving a statistical flatland in its wake. Instead, if the past contained only a specific set of events connected to each other in unique ways – i.e. exhibiting a distinctly uncommon variation – then it becomes worthy of study, as to why it is what it is and not something else. As Kauffman says, “Non-ergodicity gives us history.”
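To make the statistical idea concrete, here is a toy sketch of my own (not from Kauffman): in an ergodic process, the average along one long trajectory matches the average over the whole state space; in a non-ergodic one, the trajectory stays trapped in the region where it started, so its history tells you something the state space alone cannot.

```python
# Toy illustration (my own sketch, not Kauffman's): time averages for an
# ergodic vs a non-ergodic two-state process.
import random

def time_average(p_switch, steps=100_000, start=0):
    """Walk on states {0, 1}, switching with probability p_switch per step."""
    state, total = start, 0
    for _ in range(steps):
        if random.random() < p_switch:
            state = 1 - state
        total += state
    return total / steps

# Ergodic: the trajectory keeps visiting both states, so its time average
# (~0.5) matches the average over the full state space.
print("ergodic:    ", time_average(p_switch=0.1))

# Non-ergodic: the trajectory never leaves its starting state, so its time
# average (0.0) reflects its particular history, not the whole state space.
print("non-ergodic:", time_average(p_switch=0.0))
```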

Though today I know that the concept is called ‘non-ergodicity’, I encountered its truth in a different context many years ago, when I wrote an article that appeared in Quartz about how Venus could harbour life and how that should encourage us to look for life on Titan as well. I had quoted the following lines from a 2004 paper to strengthen my point:

The universe of chemical possibilities is huge. For example, the number of different proteins 100 amino acids long, built from combinations of the natural 20 amino acids, is larger than the number of atoms in the cosmos. Life on Earth certainly did not have time to sample all possible sequences to find the best. What exists in modern Terran life must therefore reflect some contingencies, chance events in history that led to one choice over another, whether or not the choice was optimal.

Somehow, and fortunately, these lines have stayed with me to this day four years on, and I hope and believe they will for longer. They present a simple message whose humility seems only to grow with time. They suggest that even life on Earth may not be the best (e.g. most efficient) it can be after billions of years of evolution. Imagine the number of evolutionary states that the whole universe has available to sample – the staggeringly large product of all the biospheres on all the planets in all the time…
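The arithmetic behind the 2004 paper’s claim is easy to check; the figure of roughly 10^80 atoms in the observable universe is the commonly quoted estimate I am assuming here.

```python
# Back-of-the-envelope check: the number of possible 100-residue proteins
# built from 20 amino acids, versus ~1e80 atoms in the observable universe
# (a commonly quoted rough estimate).
num_sequences = 20 ** 100
atoms_in_cosmos = 10 ** 80

print(f"possible sequences: ~1e{len(str(num_sequences)) - 1}")                      # ~1e130
print(f"sequences per atom: ~1e{len(str(num_sequences // atoms_in_cosmos)) - 1}")   # ~1e50
```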

The search for a ‘perfect lifeform’ is not a useful way to qualify humankind’s quest. Against such cosmic non-ergodicity, every single alien species we discover could, and likely will, stand for its own world of contingencies just as peoples of different cultures on Earth do. Perhaps then our xenophobia will finally become meaningless.


An article in KurzweilAI begins,

Plants could soon provide our electricity.

Why would anyone take this seriously? More than excitement, this line rouses a discerning reader to suspicion. That suspicion is bound to centre on the word “soon”, implying the near future, imminently. I’m not sure which timescales people have in mind but I’m sure we can agree 10 years sounds reasonable here. Will plants power your home in 10 years? Heck, in 50 years? It is stupendously unlikely. The suggestion itself – as embodied in that line – is disingenuous because it 1) overestimates feasibility at scale and 2) underestimates the amount of conviction, work and coordination it will take to dislodge the fossil-fuel, nuclear and renewable energy industries.

Indeed, the line that “plants could soon provide our electricity” begins to make sense only when its words are assessed individually instead of being beheld with the seductive possibilities the whole sentence offers. Could? Of course, they already do through the technology described in the article, called Plant-e. Plants? I don’t see why not; they are batteries of sorts, too. Provide? Plants are terrestrial, ubiquitous, very accessible, well understood and seldom dangerous. Our? Who else’s is it, eh. Electricity? Again, Plant-e has demonstrated this already, in the Netherlands, where it was pioneered. But cognise the sentence as a whole and you’re left with gibberish.

The article then claims:

An experimental 15 square meter model can produce enough energy to power a computer notebook. Plant-e is working on a system for large scale electricity production in existing green areas like wetlands and rice paddy fields. … “On a bigger scale it’s possible to produce rice and electricity at the same time, and in that way combine food and energy production.”

The emphasised bit (my doing) sounds off: it implies a couple dozen watts at best, whereas the article’s last line says, “In the future, bio-electricity from plants could produce as much as 3.2 watts per square meter of plant growth.” Either way, a solar panel with a tenth of the surface area produces about 250 W – several times the power from a tenth of the area, i.e. a power density one to two orders of magnitude higher. People around the world are already concerned that we may not have enough nickel, cadmium and lithium to build the batteries to store this energy and may not have enough land to build all the solar cells necessary to “provide our electricity”.
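To make the comparison concrete, here is a back-of-the-envelope sketch; the 25 W notebook draw and the 1.5 m² panel rated at 250 W are round figures I am assuming, not numbers from the article.

```python
# Rough power-density comparison. Assumed round figures: a notebook draws
# ~25 W, a standard ~1.5 m^2 rooftop solar panel is rated at ~250 W.
plant_e_demo   = 25 / 15    # W/m^2 implied by "15 m^2 powers a notebook"
plant_e_future = 3.2        # W/m^2, the article's own projection
solar          = 250 / 1.5  # W/m^2 for the assumed panel

print(f"Plant-e demo:       {plant_e_demo:.1f} W/m^2")
print(f"Plant-e projection: {plant_e_future:.1f} W/m^2")
print(f"Solar panel:        {solar:.0f} W/m^2")
print(f"Solar vs projected Plant-e: ~{solar / plant_e_future:.0f}x denser")
```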

In this scenario, why should anyone give a fuck about Plant-e as an alternative worth one’s time? It is interesting and exciting that scientists were able to create this technology but its billing as a reasonable substitute for the commonly known sources of energy, and “soon”, suggests that this is certainly hype, and that the people behind this announcement seem to be okay with disguising an elitist solution as a sustainable one.

Second, said billing also suggests – less certainly, but plausibly – that there is a misguided, white-skinned belief at work here: that, notwithstanding details about intraday variability of power generation, soil conditions and such, agriculture and power consumption in the Netherlands are both similar to those elsewhere in the world. But the social, economic and technological gap between these endeavours as they happen in Northwest Europe and Southeast Asia is so large as to suggest the article’s authors either have no clue about the socioeconomics of electric power or are at ease with wilfully disregarding it.

Announcements like this don’t harm anyone but they certainly offend the sensibilities of those forced to grow, grow, grow while on the brink of the worst of climate change. It is crucial that we keep innovating, finding new, better, more considerate ways of surviving impending disasters as well as reducing our deleterious footprint on this planet. Let us do this without suggesting that a nascent, untested (at scale) and currently infeasible technology may provide a crucial part of the answer where numerous other governments have failed.

Through this exercise, let us also awaken our minds to a new form of discrimination in the Anthropocene epoch – lazy, short-sighted, selfish thinking – and call it out.

In response to my Twitter thread against Tom Sheldon’s anti-preprints article in Nature, I received more responses in support of Sheldon’s view than I expected. So I wrote an extended takedown for my blog and, of course, The Wire, pasted below.


In 1969, Franz J. Ingelfinger articulated a now-famous rule named after him in an attempt to keep the New England Journal of Medicine (NEJM), which he edited at the time, in a position to give its readers original and fully vetted research. Ingelfinger stated that the NEJM wouldn’t consider publishing a paper if it had already been publicised before submission or had been submitted to another journal at the same time. The Ingelfinger rule symbolised a journal’s attempt to recognise its true purpose and reorganise its way of functioning to stay true to it.

Would we say this is true of all scientific journals? In fact, what is a scientific journal’s actual purpose? First, it performs peer review, by getting the submissions it receives scrutinised by a panel of independent experts to determine the study’s veracity. Second, the journal publicises research. Third, it creates and maintains a record of the section of the scientific literature it is responsible for. In the historical context, these three functions have been dominant. In a more modern, economic and functional sense, scientific journals are also tasked with making profits, improving their impact metrics and making research more accessible.

As it happens, peer review is no longer the exclusive domain of the journal – nor is it considered to be an infallible institution. Second, journals still play an important part in publicising research, especially via embargoes that create hype, pointing journalists towards papers that they might otherwise not have noticed, as well as preparing and distributing press releases, multimedia assets, etc. Of course, there are still some flaws here. And third, the final responsibility of maintaining the scientific record continues to belong to the journal.

Too much breathing space

Pressures on the first two fronts are forcing journals to stay relevant in newer ways. A big source of such pressure is the availability of preprints – i.e. manuscripts of papers made available by their authors in the public domain before they have been peer-reviewed.

Preprint repositories like arXiv and bioRxiv have risen in prominence over the last few years, especially the former. They are run by groups of scientists – like volunteers pruning the garden of Wikipedia – who ensure the formatting and publishing requirements are met, remove questionable manuscripts and generally – as they say – keep things going. Scientific journals typically justify their access cost by claiming that they have to spend it on peer review and printing. Preprints evade this problem because they are free to access online and are not peer-reviewed the way ‘published’ papers are. In turn, the reader who wishes to read a preprint must bear this caveat in mind.

This week, the journal Nature published a (non-scientific) article headlined ‘Preprints could promote confusion and distortion’. Authored by Tom Sheldon, a senior press manager at the Science Media Centre, London, it advanced a strange idea: that bad science was published in the press because journalists did not have “enough time and breathing space” to evaluate it. While Sheldon then urges scientists “to be part of these debates – with their eyes open to how the media works” – the more forceful language elsewhere in the article suggests that preprints should go and that that will fix the problem.

There are numerous questionable judgments embedded here. Principal among them is that embargoes are the best way to bring research to journalists – and this may seem obvious from the journal’s point of view because an embargo functions like a pair of blinders, keeping a journalist focused on a journal-approved story, and reminding her that she must contact a scientist because a deadline is approaching after which all publications will ‘break’ the story. Of course, embargoes aren’t the norm; the Ingelfinger rule says that the journal will be responsible for ensuring that whatever it publishes is good-to-go.

But with a preprint, there are no deadlines; there are no pointers about which papers are good or bad; and there is no list of people to contact. The journal fears that the journalist will fumble, be overcome with agoraphobia and, as Sheldon writes, “rushing to be the first to do so … end up misleading millions, whether or not that was the intention of the authors.”

It is obvious that the Ingelfinger + embargo way of covering research will produce more legitimate reportage more often – but these rules are not the reasons why the papers are reported the way they are.

High-profile cases in which peer review failed to disqualify bad and/or wrong papers – and cases in which papers’ results were included in the scientific canon only for replication studies to completely overturn them later – are proof that journals, together with the publishing culture in which they are embedded, aren’t exactly perfect.

Some scientists have even argued that embargoes should be done away with because the hype they create often misrepresents the modesty of the underlying science. Others focused their attention on universities, which often feed on the hype created by embargoes to pump out press releases making far-fetched claims about what the scientists on their payrolls have accomplished.

In turn, journalists have been finding that good journalism is the outcome only when good journalism is also the process. Give a good journalist a preprint to work with and the same level of due diligence will be applied. Plonk a bad journalist in front of an embargoed news release and a preprint, and you will only get shoddy work both times. It is not as if journalists suspend their fact-checking process when they work with embargoed papers curated by journals and reinstate it when dealing with preprints. A publication that covers science well will quite likely cover other subjects with the same regard and sensitivity not because of the Ingelfinger rule but because of the overall newsroom culture.

Last line of defence

Moreover, an infuriating presumption in the Nature article is that the preprint flows as if by magic from the repository where it was uploaded into the hands of the “millions” now misled by it. Indeed, though it is annoying that the phrasing makes no room for a functional journalist who can step in, write about the paper and arrange for it to be publicised, it is simply frustrating that the journalistic establishment remains invisible to Sheldon’s eye even when we’re talking about an extra-journal agent messing up along the way.

It is the product of this invisibility – rather, a choice to not acknowledge evident work – that suggests to the scientific journal that it must take responsibility for ensuring all that it publishes is good and right. As a pathway to accruing more relevance, this can only be good for the journal; however, it is also a way to accrue more power, so it must not be allowed to happen. This is ultimately why taking preprints away makes no sense: journals must share knowledge, not withhold it.

By taking preprints away from journalists, Sheldon proposes to force us to subsist on journal-fed knowledge – knowledge that is otherwise impossible to access for millions in the developing world, knowledge that is carefully curated according to the journal’s interests and, most of all, knowledge that pushes the idea that the journal knows what is best for us.

But journals are not the last line of defence, even though they would like to think so; journalists are. That is how journalism is structured, how it functions, how it is managed as a sector, how it is perceived as an industry. If we take away journalists’ ability to move beyond papers approved by a journal, we lose our ability to question the journal itself.

The only portion of the Nature article that elicits a real need for concern is when Sheldon refers to embargoes as a means of safeguarding novelty for news publishers. He quotes Tom Whipple, science editor of The Times, saying that it is impossible to compete with the BBC because the BBC’s army of reporters are able to pick up on news faster. The alternative, he implies, is to preserve embargoes because they keep the results of a paper new until a given deadline – letting journalists from publishers small and large cover it at the same time.

In fact, if it is reform that we are desperate for, this is the first of three courses of action: to keep removing the barriers instead, making access more equitable. The second is to fix university press releases. The third is to stop interrogating preprints and start questioning publishing practices. For example, is it not curious that both Nature and NEJM, as well as many other ‘prestigious’ titles, rank almost as highly on the impact index as they do on the retraction index?

Update: The following correction was made to the Nature article on July 25 (h/t @kikisandberg). I guess that’s that now.

[Screenshot of the correction appended to the Nature article]

Anyone who writes about physics research must have a part of their headspace currently taken up by assessing a new and potentially groundbreaking claim out of the IISc: the discovery of superconductivity at ambient pressure and temperature in a silver nanostructure embedded in a matrix of gold. Although The Hindu has already reported it, I suspect there’s more to be said about the study than is visible at first glance. I hope peer review will help the dust settle a little, but we all know post-publication peer-review is where the real action is. Until then, other physics news beckons…


Unlike room-temperature superconductivity, odds are you haven’t heard of ptychography. In the field of microscopy, ptychography is a solution to the so-called phase problem. When you take a selfie, the photographic sensor in your phone captures the intensity of light waves scattering off your face to produce a picture. In more sophisticated experiments, however, information about the intensity of light alone doesn’t suffice.

This is because light waves have another property called phase. When light scatters off your face, the phase change doesn’t embody any useful information about the selfie you’re taking. But if physicists are studying, say, atoms, then the phase change can tell them about the distribution of electrons around the nucleus. The phase problem comes to life when microscopes can’t capture phase information, leaving scientists with only a part of the picture.
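A toy sketch of my own makes the ‘lost’ information explicit: the wave’s amplitude is a complex number, and a detector that records only intensity throws away its complex argument, i.e. the phase.

```python
# The phase problem in miniature: a detector measures intensity, |amplitude|^2,
# which is identical for two waves that differ only in phase.
import cmath

wave_a = 1.0 * cmath.exp(1j * 0.3)  # magnitude 1, phase 0.3 rad
wave_b = 1.0 * cmath.exp(1j * 2.1)  # magnitude 1, phase 2.1 rad

print(abs(wave_a) ** 2)  # 1.0
print(abs(wave_b) ** 2)  # 1.0 -- same reading; the phase information is gone
```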

Sadly, this constraint only exacerbates electron microscopy’s woes. Scientists in various fields use electron microscopy to elucidate structures of matter that are much smaller than the distances across which photons can act as probes. Thanks to their shorter wavelength, electrons are used to study the structure of proteins and the arrangement of atoms in solids, and even to aid in the construction of complex nanostructured materials.
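The ‘shorter wavelength’ point is easy to quantify with the relativistic de Broglie relation; the 300 keV figure below is just a typical accelerating energy for such microscopes, assumed for illustration.

```python
# De Broglie wavelength of a 300 keV electron:
# lambda = h*c / sqrt(E_k^2 + 2 * E_k * m_e*c^2)
import math

hc    = 1239.84  # eV * nm
me_c2 = 511e3    # electron rest energy, eV
E_k   = 300e3    # kinetic energy, eV (typical for a transmission microscope)

wavelength_nm = hc / math.sqrt(E_k**2 + 2 * E_k * me_c2)
print(f"{wavelength_nm * 1000:.2f} pm")  # ~1.97 pm, vs ~400-700 nm for visible light
```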

However, the technique’s usefulness in studying individual atoms is limited by how well scientists are able to focus the electron beams onto their samples. To achieve atomic-scale resolution, scientists use a technique called high-angle annular dark-field imaging (ADF), wherein the electrons are scattered at high angles off the sample to produce an incoherent image.

For ADF to work better, the electrons need to possess more momentum, so scientists typically use sophisticated lenses to adjust the electron beam while they boost the signal strength to take stronger readings. This is not desirable: if the object of their study is fragile, the stronger beam can partially or fully disintegrate it. Thus, the high-angle ADF resolution for scanning transmission electron microscopy has been chained to the 0.05 nm mark, going up to 0.1 nm for more fragile structures.

Ptychography solved the phase problem for X-ray crystallography in 1969. The underlying technique is simple. When X-rays interact with a sample under study and return to a detector, the detector produces a diffraction pattern that contains information about the sample’s shape.

In ptychography, scientists iteratively record the diffraction pattern obtained from different angles by changing the position of the illuminating beam, allowing them to compute the phase of returning X-rays relative to each other. By repeating this process multiple times from various directions, scientists will have data about the sample that they can reverse-process to extract the phase information.
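For a feel of how the iterative reconstruction works, here is a one-dimensional toy in the spirit of the PIE/ePIE family of algorithms (my own sketch, not the code used in any of the papers discussed here): a probe is scanned over an object at overlapping positions, only the magnitudes of the diffraction patterns are kept as ‘measurements’, and a guess of the object is updated until its simulated patterns match those magnitudes.

```python
# A 1D toy of iterative ptychographic reconstruction, in the spirit of the
# PIE/ePIE algorithms (an illustrative sketch, not the papers' actual code).
# We "measure" only diffraction magnitudes at overlapping probe positions and
# recover the object's complex transmission -- including its phase.
import numpy as np

rng = np.random.default_rng(0)
N, probe_width, step = 128, 24, 6  # object size, probe size, scan step

# Ground-truth complex object: random amplitude and phase.
obj_true = (0.5 + 0.5 * rng.random(N)) * np.exp(1j * (rng.random(N) - 0.5))

# A localised probe and overlapping scan positions.
probe = np.zeros(N, dtype=complex)
probe[:probe_width] = np.hanning(probe_width)
positions = list(range(0, N - probe_width, step))

# "Measurements": far-field magnitudes only -- this is the phase problem.
measured = [np.abs(np.fft.fft(np.roll(probe, s) * obj_true)) for s in positions]

# Iterative reconstruction from a flat initial guess.
obj = np.ones(N, dtype=complex)
for _ in range(200):
    for s, mag in zip(positions, measured):
        p = np.roll(probe, s)
        exit_wave = p * obj
        F = np.fft.fft(exit_wave)
        F = mag * np.exp(1j * np.angle(F))  # impose measured magnitude, keep phase
        correction = np.fft.ifft(F) - exit_wave
        obj += np.conj(p) / (np.abs(p).max() ** 2) * correction  # ePIE-style update

# Compare in the well-illuminated region, up to a global phase factor.
illum = sum(np.abs(np.roll(probe, s)) ** 2 for s in positions)
mask = illum > 0.1 * illum.max()
c = np.vdot(obj[mask], obj_true[mask])
err = np.linalg.norm(obj[mask] * c / abs(c) - obj_true[mask]) / np.linalg.norm(obj_true[mask])
print(f"relative reconstruction error: {err:.3f}")
```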

Ptychography couldn’t be brought to electron microscopy straightaway, however, because of a limitation inherent to the method. For it to work, the microscope has to measure the diffraction intensity values with equal precision in all the required directions. “However, as electron scattering form factors have a very strong angular dependence, the signal falls rapidly with scattering angle, requiring a detector with high dynamic range and sensitivity to exploit this information” (source).

In short, electron microscopy couldn’t work with ptychography because these detectors didn’t exist. As an interim solution, in 2004, researchers from the University of Sheffield developed an algorithm to fill in the gaps in the data.

Then, on July 18, researchers from the US reported that they had built just such a detector (preprint), which they called an “electron microscope pixel array detector” (EMPAD), and claimed that they had used it to retrieve images of a layer of molybdenum disulphide with a resolution of 0.4 Å. One image from their paper is particularly stunning: it shows the level of improvement ptychography brings to the table, leaving the previous “state of the art” resolution of 1 Å achieved by ADF in the dust.

Source: https://arxiv.org/pdf/1801.04630.pdf

The novelty here isn’t that the detector is finally among us. The same research group (+ some others) had announced that it had built the EMPAD in 2015, and claimed then that it could be used for better electron ptychography. What’s new now is that the group has demonstrated it.

a) Schematic of STEM imaging using the EMPAD. b) Schematic of the EMPAD physical structure. The pixelated sensor (blue) is bump-bonded pixel-by-pixel to the underlying signal processing chip (pink). Source: https://arxiv.org/pdf/1511.03539.pdf

According to their 2015 paper, the device

consists of a 500 µm thick silicon diode array bump-bonded pixel-by-pixel to an application-specific integrated circuit. The in-pixel circuitry provides a 1,000,000:1 dynamic range within a single frame, allowing the direct electron beam to be imaged while still maintaining single electron sensitivity. A 1.1 kHz framing rate enables rapid data collection and minimizes sample drift distortions while scanning.

For the molybdenum disulphide imaging test, the EMPAD had 128 x 128 pixels, operated in the 20-300 keV energy range, possessed a dynamic range of 1,000,000-to-1 and had a readout speed of 0.86 ms/frame. The scientists also modified the ptychographic reconstruction algorithm to work better with the detector.

I am thoroughly dispirited. I had wanted to write today about how it is fascinating that we have validated Einstein’s theory of general relativity for the first time in an extreme environment: in the neighbourhood of a black hole. The test involved the detection of an effect called the gravitational redshift, whereby light moving from a region of lower to higher gravitational potential – i.e. climbing out of a gravitational well – appears redshifted. In other words, light seen moving from an area of stronger gravitational field to an area of weaker gravitational field appears to be redder than it actually is, if the observer is sufficiently far from the source of this field. The observation of this redshift is doubly fascinating because it is also an observation of time dilation in action.

It took the European Southern Observatory’s Very Large Telescope (VLT) 26 years of observations to make this check; it was completed and announced yesterday, July 25. The source of the gravitational potential was the black hole at the Milky Way’s centre, called Sagittarius A*, and the source of starlight was a stellar body known only as S2. Triply fascinating is the fact that the VLT observed S2 swinging by Sgr A* at a searing 25 million km/hr. Phew!
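It is worth putting a rough number on the effect. The sketch below combines the weak-field gravitational redshift with the transverse Doppler shift; the ~4 million solar masses for Sgr A* and the ~120 AU pericentre distance are round figures I am assuming, while the speed is the 25 million km/hr quoted above. The GRAVITY collaboration’s actual analysis is far more careful.

```python
# Rough size of the gravitational redshift + transverse Doppler shift for S2
# at closest approach to Sgr A*. Assumed round figures: ~4 million solar
# masses for the black hole, ~120 AU pericentre; speed as quoted (25e6 km/h).
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 3.0e8                # m/s
M = 4.0e6 * 1.989e30     # kg
r = 120 * 1.496e11       # m
v = 25e6 * 1000 / 3600   # m/s

z_grav    = G * M / (r * c**2)  # weak-field gravitational redshift
z_doppler = v**2 / (2 * c**2)   # transverse (time-dilation) Doppler term

print(f"gravitational: {z_grav:.1e}")     # ~3e-4
print(f"transverse:    {z_doppler:.1e}")  # ~3e-4
print(f"combined:      {z_grav + z_doppler:.1e} "
      f"(an apparent extra velocity of ~{(z_grav + z_doppler) * c / 1000:.0f} km/s)")
```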

But through this all, I am distressed because of an article I spotted a few minutes ago on NDTV’s website, about how we must not eat certain foods during a lunar eclipse – given the one set to happen tomorrow – because they could harm us. I thought we had been able to go a full day without a mainstream publication spreading pseudoscientific information about the eclipse, but here we are. I weep for many reasons; right now, I weep most of all not for the multitude of quacks we inhabit this country with but for Yash Pal. And I wish that, like S2, I could escape this nonsense at 3% of the speed of light when it becomes too much.

Tom Sheldon, a senior press manager at the Science Media Centre (SMC), London, had an interesting proposition – at least at first – published in Nature on July 24. The journal’s Twitter handle had tweeted it thus: “Do you think publishing on preprint servers is good or bad for science?” Though the question immediately set off alarm bells, I thought that like many news reports and magazine features these days, perhaps the tweet was desperate to get a reader interested in what was going to be a more nuanced argument. I was wrong.

Though the following line comes much farther down the article, it deftly encompasses its central – and misguided – animus: “How can we have preprints and support good journalism?” There is no contradiction here but, as we’ll see, there is a strong reflection of Nature‘s own tendency to publish ideas for their glamour.

As much as I’d welcome changes to the preprint ecology that would make it always easier for journalists to report on a paper, it’s not the preprint’s fault if a story is found to be wrong or misleading. Such a thing could only be because the journalist hasn’t done their due diligence, especially in mid-2018, when the pursuit of truth(s) has been gripped by post-truth, fake news, false balance, discussions on the “view from nowhere”, etc. For example, Sheldon writes,

Imagine early findings that seem to show climate change is natural or that a common vaccine is unsafe. Preprints on subjects such as those could, if they become a story that goes viral, end up misleading millions, whether or not that was the intention of the authors.

A lazy journalist will be lazy. A bad journalist will misrepresent a paper if they have to. A good journalist will stop to check, especially if they are aware that climate change carries a 95% consensus among scientists and that vaccines have often been misreported on in the recent past. Such awareness (what many in India would call ‘general knowledge’) can go a long way. It is why journalists are expected to consume the news as much as they would like to be involved in producing it. And preprints are not going to improve or worsen this situation.

Sheldon continues,

I’ll admit that we do not yet have examples of harm from such stories, but this is probably because — at the moment — only a tiny fraction of preprints cover health-related or controversial fields.

I’m not sure how true this is. More importantly, journalists constantly misrepresent even peer-reviewed papers, on health or other subjects. Quoting @avinashtn: “Andrew Wakefield‘s MMR vaccine ‘study’ was peer-reviewed, as was arsenic-based DNA. Nature is being disingenuous because preprints will hurt them.” Tell you what, I buy it because it is eminently possible.

It is also funny here that, in its pursuit of being seen as selective and recognised as an identifier of paradigm-defining research, Nature often publishes research papers that are more spectacular than accurately representative of science as it is. To quote Björn Brembs, a professor of neurogenetics at the Universität Regensburg, Bavaria, and an important voice in the global open access movement, “Nature is among the group of journals which stand out as publishing the least reliable science” – elaborated here and here.

I also appreciate Sheldon’s reaching out to journalists (as in the excerpt below) but I don’t get the part where it is implied that embargoes give journalists more time to prepare for a story than do preprints. How is that? And this is a problem only if someone is restricting access to, or preventing journalists from soliciting and publishing, independent comment – something Sheldon admits to, in a different context, in his article. (Ivan Oransky writes in a just-published Embargo Watch post that this is called a “close-hold embargo”, used to encourage stenography.)

It is not enough to shrug and blame journalists, and it is unhelpful to dismiss those journalists who can accurately convey complex science to a mass audience. Scientists need to be part of these debates — with their eyes open to how the media works. Journalists do include appropriate caveats or even decide not to run a story when conclusions are tentative, but that happens only because they have been given enough time and breathing space to assess it. If the scientific community isn’t careful, preprints could take that resource away.

It would be best for everyone involved (although not the journals) if we set aside preprints and fixed university press releases instead. Peer-review doesn’t always point to good science and journals aren’t the only ones to perform it. Other scientists do it in the open through post-publication review and journalists do it by enlisting independent scrutiny.

If there is any concern that preprints are less legitimate than papers published by scientific journals (post-peer-review): I think we can all agree that, most often, peer review isn’t the first time a scientist shows their paper to an independent expert, and that ‘updating’ preprints – at least on arXiv – is something scientists want to avoid; it may not be as bad as issuing a correction to or retracting a published paper but it carries its own implications. As a result, it should be okay for scientists to issue press releases, or simply just notifications, along with their preprints, and for organisations like the SMC, where Sheldon works, to make it easier for journalists to reach out to independent experts.

The part I found the most convincing about Sheldon’s argument is this:

Another risk is the inverse — and this one could matter more to some researchers. Under the preprint system, one intrepid journalist trawling the servers can break a story; by the time other reporters have noticed, it’s old news, and they can’t persuade their editors to publish.

There have been cases in which a preprint that garnered news stories got a second flurry of coverage when it was published in a journal. But generally, the rule is ‘it has to be new to be news’. Reacting to our blog, Tom Whipple, science editor of The Times in the United Kingdom, tweeted: “I’m not sure how to keep a newspaper in profitable existence that decides to give people news they’ve already read on the BBC.”

… but I have questions still. Dear editors, who are you competing with and why? Is the BBC getting everything right? Is the BBC even covering everything from all angles? I feel like there’s some context missing here. A cursory search on Google for Whipple’s comment only turned up Sheldon’s article in Nature. I doubt Whipple’s entire comment was that single line because, off the top of my head, it anticipates one of only two ways ahead: scale or go niche. The latter is much more effective as a strategy to take on the BBC with.

Edit, July 27: Whipple’s Twitter thread about BBC and embargoes in general is here. There’s also a Tom Chivers tweet in there that I largely agree with – as does Whipple – and which makes a point somewhat similar to mine, which is that even if journalists are finding it harder to compete with each other, taking away preprints isn’t going to fix anything. //

The bigger point is to not throw the baby out with the bathwater – to not push an issue that has demonstrably minimal late-downstream effects back upstream, where the given solution (of preprints) is working perfectly fine. But if you’re considering taking away preprints because incompetent journalists are screwing up, the problems as you’re perceiving them are going to get a lot worse – in ways too numerous, and too obvious, to delineate here. More access is always better.

I went to the most terrifying place in the world today: the dental clinic. I’d woken up this morning with a sharp pain under my right lower jaw and, soon enough, I realised it was time to get rid of the wisdom tooth – a divorce I’d been putting off for a few months for fear of the pain. I’d had five teeth extracted as a kid about 15 years ago, and the last of these teeth had been plucked out shortly after the local anaesthetic had stopped working. The trauma of that incident has stayed with me, and resurfaced in full glory this morning.

I made an appointment via Practo at a clinic nearby – the reviews seemed nice – and got there at 12 pm. I met with Dr B, who seemed really nice and didn’t offer any gratuitous advice when I told her I smoke. I liked her immediately. We took a quick X-ray and I was told that my wisdom tooth on the right had to go, and right away because an infection had developed around it. I told Dr B about my traumatic experience having teeth pulled. She promised me she’d keep it completely painless. And she did.

But where she failed – and where most doctors I think would fail – is in making her patient feel less dehumanised. As soon as the X-ray was taken, she began to confer with another doctor in the room in hushed tones about what was going on with my tooth. Their dialogue was speckled with strange terms, and I couldn’t tell the difference between when they were talking about my teeth and when they were talking about the shape of my jaw. But I surmised it wasn’t looking good.

I had to interject repeatedly to ask what the X-ray was showing. If I didn’t ask, they wouldn’t bother. Even when the extraction procedure was about to begin, I was asked to recline, various implements were thrust into my mouth and a nurse stood on standby. “If you want me to stop for any reason, just raise your hand,” Dr B said. Just as she was about to poke a pointy thing into my mouth, I raised my hand. She was surprised. I asked what it was that they were going to do. She answered, and then it began.

I learnt later that my tooth’s roots were strong, so the damned thing had to be broken up first and then removed piece by piece. The procedure 15 years ago had involved just one implement – a tool I’ve always called the Motherfucker. Dr U had plunged with it into my mouth, used it to wrangle with the misbehaving tooth and, after a few seconds, pulled it out. This time, with Dr B, the motherfucker only showed up 45 minutes after we’d started, and in two avatars. Motherfucker I was the cow horns #23 forceps and Motherfucker II was somewhat like a lower anterior forceps.

We had started off at 12.10 pm with two syringes of a local anaesthetic, topping it off an hour later with a third. I was told one side of my mouth would go numb. “You won’t feel any pain, you will just feel the pressure of my hands as I’m working,” Dr B said. But somewhere after the third syringe, I lost any ability to tell apart pain and pressure. I was lost in my head, flipping through scenes from old Tamil movies looking for anything with a dentist in it. Nothing. Annoyingly enough, the scene that showed up most vividly, and repeatedly, in my mind was Andy Serkis singing ‘Don’t hurt me’ to Martin Freeman, a scene from Black Panther.

Around 1.15 pm, Dr B stepped away from my face, shaking her head in exasperation. Her colleague stepped closer, asking what had happened, while the nurse – who was also the cleaning lady at the clinic – stepped closer to peer into my mouth, a big smile on her face. Dr B said then, “This is a bone-cutting case.”

What.

The fuck.

Did you just say?

As it is, I have very little idea about whatever is going on. The grotesque zircon-tipped tools passing in and out of my mouth aren’t helping me calm down. (One of them, called a Couplands elevator, is what I’m going to call the Little Motherfucker.) The doctor in general doesn’t feel compelled to tell her patient what it is that she’s doing, leaving me to guess for myself. And the one thing that’s said out loud, sans any prompt, is that I’m a “bone-cutting case”. Wonderful. Obviously, right then, I couldn’t stop thinking about The Bone Collector.

After a new set of tools had been assembled and Dr B bent down to inspect the tooth, I raised my hand. She looked at me, I smiled, she smiled back, and explained: “There’s a hard layer of bone around the tooth that I’ll have to cut before I can extract the tooth.” I nodded in satisfaction. The nurse swiftly introduced a suction pipe and began to drip saline solution from a needle onto the tooth, Dr B planted wads of cotton in my cheek and placed a bite block between my teeth on the left, and we got started again – this time, with a drill called Bone Cutter (by everyone). As it raged against my tooth with noises like R2-D2 being tortured, it felt like the industrial revolution was happening inside my skull, replete with the Kafkaesque style of oppression.

At the end of two hours, my tooth had been chipped into four pieces, each then scraped-and-plucked out in a bloody mess. I don’t know if I was billed for the gloves but I knew they had been changed thrice. Dr B’s hands were trembling as she sutured the wound. Once she was done, her colleague patted her on the back with a triumphant smile. “Well done,” she said, “you handled the case very well.” The case wasn’t pleased to hear this but was glad that it was over all the same.

Doctor-to-patient communication plays an important role in reminding physicians that their wards are people just like themselves. When it isn’t there, it signals that the doctor doesn’t think the patient needs to know. This in turn makes it harder for people to make decisions, and more generally retain their sense of agency because they don’t have the information necessary to act rationally (from the doctor’s POV). Another way this problem reared its head today was in the form of pain. Most of the time, Dr B would heed my raised hand and pause for a minute or so, but every now and then, when she was nearing the end of a step of action, my raised hand would only draw a “Just hang in there”. So the trauma from 15 years ago hangs in there, too.

The silver lining is that I will likely not have to undergo this hell again.

“Nature is Lovecraftian” … is it? The literature of H.P. Lovecraft is freaky, at odds with the more conventional, less morally degenerate canon of English literary fiction irrespective of the period from which the latter is selected. To say “nature is Lovecraftian” is to extend to zoologia the out-of-place characteristic we associate with Lovecraft’s characters and their plots. This is not fair. Many would think to say “nature is Lovecraftian” is to profoundly underestimate what an animal is capable of because underestimation is the source of Lovecraft’s surprise, and they would miss the point. Nature surprises us not because we usually expect very little from it but because we continually choose to obsess over the “more” normal, whatever that is, and sideline the existence of the “less”. It is not the surprise of underestimation – which fiction has always been better at mustering – but the surprise of ignorance.

Of course, given two sets N and A (for normal and abnormal behaviour), N will always be larger than A by definition. That is also how we would approach surprise: to find a member of A where we expected only N. One could argue that nature is indeed Lovecraftian because it also abides by the rule that N > A. However, I would disagree in that we have no means to prove this because, beyond statistical considerations and even subjectivity, nature is herself the architect of the human sense of beauty. We can only find in nature that which we are already looking for; our composition of the sets N and A will be guided by nature’s hand. Instead, it might be more fruitful to escape the biases of the human condition – perhaps by taking recourse through the scientific method – and arrive at the inevitable conclusion that animals do what they have to do to get their meal, and that we frequently only encounter and remember those that go about their lives in routines that we approve of.

It is no mistake whatsoever to attempt to literalise nature in terms of anthropocentric qualities, but it might be one to liken our relationship with nature to the human psyche’s definitive relationship with Lovecraft’s stories. Perhaps… is nature Doylean?

I watched Black Panther again today. Two things came to mind.

First: When by the end of the film T’Challa and Wakanda realise that they can’t keep their technology a secret anymore, it is – among many things – an act of taking charge of their nation’s narrative. By doing so, T’Challa and his advisers ensure that others may not tell Wakanda’s story in a way the state does not wish it to be told. This is valuable advice, especially for the Indian Space Research Organisation (ISRO). This organisation’s exploits are blown out of proportion more often than those of any other governmental institution in India, except perhaps the Army’s. However, ISRO’s dysfunctional public outreach enterprise has never raised a finger against those who would misrepresent its activities or intentions. It must do so, and take charge of the narrative so that those less informed don’t.

Second: In the first half of the film, Erik Stevens (later, N’Jadaka) casually reveals that he has spiked the coffee being sipped by the curator of a museum in London. In most English and Tamil films till date, the nitty-gritty of heists is spelled out to the audience by featuring the scenes in which each step of the heist was performed. However, Black Panther doesn’t bother, probably because it is not a heist film but largely because it could bank on its audience to piece together what might have happened, and for which it could thank all the heist films released thus far (esp. the Ocean’s trilogy). From my POV, the film used ‘tell’, not ‘show’ – which is, coming from a journalist, a bad way to write a story – to good effect. The first Tamil film I saw that was similarly innocuous about the details of its caper was Aayirathil Oruvan (‘One man in a thousand’), 2010.