Latest Posts


Exporting risk

I’m torn between admitting that our cynicism about scientists’ solutions for the pandemic is warranted and savouring the palliative effects of reading this Reuters report about what seems like nothing more than the benevolence of richer nations not wasting their vaccine doses:

Apart from all the other transgressions – rather, business-as-usual practices – that have transpired thus far, this is one more testimony that all those insistences of “we’re all in this together” were just platitudes uttered to move things along. And as if it weren’t enough that poorer nations must make do with the leftovers of their richer counterparts – which ordered not as many doses as they needed but as many as would reassure their egos (a form of pseudoscience not new to the western world) – the doses they’re going to give away have been rejected for fear of causing rare but life-threatening blood clots. To end the pandemic, what kills you can be given away?

US experiments find hint of a break in the laws of physics

At 9 pm India time on April 7, physicists at an American research facility delivered a shot in the arm to efforts to find flaws in a powerful theory that explains how the building blocks of the universe work.

Physicists are looking for flaws in it because the theory doesn’t have answers to some questions – like “what is dark matter?”. They hope to find a crack or a hole that might reveal the presence of a deeper, more powerful theory of physics that can lay unsolved problems to rest.

The story begins in 2001, when physicists performing an experiment in Brookhaven National Lab, New York, found that fundamental particles called muons weren’t behaving the way they were supposed to in the presence of a magnetic field. This was called the g-2 anomaly (after a number called the gyromagnetic factor).

An incomplete model

Muons are subatomic and can’t be seen with the naked eye, so it could’ve been that the instruments the physicists were using to study the muons indirectly were glitching. Or it could’ve been that the physicists had made a mistake in their calculations. Or, finally, what the physicists thought they knew about the behaviour of muons in a magnetic field was wrong.

In most stories we hear about scientists, the first two possibilities are true more often: they didn’t do something right, so the results weren’t what they expected. But in this case, the physicists were hoping they were wrong. This unusual wish was the product of working with the Standard Model of particle physics.

According to physicist Paul Kyberd, the fundamental particles in the universe “are classified in the Standard Model of particle physics, which theorises how the basic building blocks of matter interact, governed by fundamental forces.” The Standard Model has successfully predicted the numerous properties and behaviours of these particles. However, it’s also been clearly wrong about some things. For example, Kyberd has written:

When we collide two fundamental particles together, a number of outcomes are possible. Our theory allows us to calculate the probability that any particular outcome can occur, but at energies beyond which we have so far achieved, it predicts that some of these outcomes occur with a probability of greater than 100% – clearly nonsense.

The Standard Model also can’t explain what dark matter is, what dark energy could be or if gravity has a corresponding fundamental particle. It predicted the existence of the Higgs boson but was off about the particle’s mass by a factor of 100 quadrillion.

All these issues together imply that the Standard Model is incomplete, that it could be just one piece of a much larger ‘super-theory’ that works with more particles and forces than we currently know. To look for these theories, physicists have taken two broad approaches: to look for something new, and to find a mistake with something old.

For the former, physicists use particle accelerators, colliders and sophisticated detectors to look for heavier particles thought to exist at higher energies, and whose discovery would prove the existence of a physics beyond the Standard Model. For the latter, physicists take some prediction the Standard Model has made with a great degree of accuracy and test it rigorously to see if it holds up. Studies of muons in a magnetic field are examples of this.

According to the Standard Model, a number associated with the way a muon swivels in a magnetic field is equal to 2 plus 0.00116591804 (with some give or take). This minuscule addition is the handiwork of fleeting quantum effects in the muon’s immediate neighbourhood, which make it wobble. (For a glimpse of how hard these calculations can be, see this description.)

Fermilab result

In the early 2000s, the Brookhaven experiment measured this number to be slightly higher than the model’s prediction. Though the difference was small – about 0.00000000346 – the context made it a big deal. Scientists know that the Standard Model has a habit of being really right, so when it’s wrong, the wrongness becomes very important. And because we already know the model is wrong about other things, there’s a possibility that the two things could be linked. It’s a potential portal into ‘new physics’.

“It’s a very high-precision measurement – the value is unequivocal. But the Standard Model itself is unequivocal,” Thomas Kirk, an associate lab director at Brookhaven, had told Science in 2001. The disagreement between the values implied “that there must be physics beyond the Standard Model.”

This is why the results physicists announced today are important.

The Brookhaven experiment that ascertained the g-2 anomaly wasn’t sensitive enough to say with a meaningful amount of confidence that its measurement was really different from the Standard Model prediction, or whether the two values could overlap within the margins of error.

Science writer Brianna Barbu has likened the mystery to “a single hair found at a crime scene with DNA that didn’t seem to match anyone connected to the case. The question was – and still is – whether the presence of the hair is just a coincidence, or whether it is actually an important clue.”

So to go from ‘maybe’ to ‘definitely’, physicists shipped the 50-foot-wide, 15-tonne magnet that the Brookhaven facility used in its Muon g-2 experiment to Fermilab, the US’s premier high-energy physics research facility in Illinois, and built a more sensitive experiment there.

The new result comes from tests at this facility: the observed value differs from the Standard Model’s prediction by 0.00000000251 (give or take a bit).

The Fermilab results are expected to become a lot better in the coming years, but even now they represent an important contribution. On its own, the statistical significance of the Brookhaven result fell short of the 5-sigma threshold at which physicists claim a discovery; the combined significance of the two results – 4.2 sigma – is well above the bar for ‘evidence’ but still short of a discovery.
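For the curious, here’s a minimal sketch of where that 4.2-sigma figure comes from, using the publicly reported 2021 numbers. The uncertainties below come from coverage of the announcement, not from anything quoted in this post, so treat the snippet as illustrative:

```python
# Illustrative check of the g-2 tension using publicly reported 2021
# values, in units of 1e-11. The uncertainties aren't quoted in this
# post; they're taken from coverage of the announcement.
a_sm, sm_err = 116_591_810, 43    # Standard Model prediction
a_exp, exp_err = 116_592_061, 41  # Brookhaven + Fermilab combined measurement

deviation = a_exp - a_sm                   # 251, i.e. 0.00000000251
sigma = (sm_err**2 + exp_err**2) ** 0.5    # uncertainties added in quadrature
print(f"deviation = {deviation}e-11, significance = {deviation / sigma:.1f} sigma")
# prints: deviation = 251e-11, significance = 4.2 sigma
```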

Potential dampener

So for now, the g-2 anomaly seems to be real. It’s not easy to say if it will continue to be real as physicists further upgrade the Fermilab g-2’s performance.

In fact there appears to be another potential dampener on the horizon. An independent group of physicists has had a paper published today saying that the Fermilab g-2 result is actually in line with the Standard Model’s prediction and that there’s no deviation at all.

This group, called BMW, used a different way to calculate the Standard Model’s value of the number in question than the Fermilab folks did. Aida El-Khadra, a theoretical physicist at the University of Illinois, told Quanta that the Fermilab team had yet to check BMW’s approach, but if it was found to be valid, the team would “integrate it into its next assessment”.

The ‘Fermilab approach’ itself is something physicists have worked with for many decades, so it’s unlikely to be wrong. If the BMW approach also checks out, then, according to Quanta, the very fact that two legitimate approaches lead to different predictions of the number’s value would itself become a new mystery.

But physicists are excited for now. “It’s almost the best possible case scenario for speculators like us,” Gordan Krnjaic, a theoretical physicist at Fermilab who wasn’t involved in the research, told Scientific American. “I’m thinking much more that it’s possibly new physics, and it has implications for future experiments and for possible connections to dark matter.”

The current result is also important because the other way to look for physics beyond the Standard Model – by looking for heavier or rarer particles – can be harder.

This isn’t simply a matter of building a larger particle collider, powering it up, smashing particles and looking for other particles in the debris. For one, there is a very large number of energy levels at which a particle might form. For another, there are thousands of other particle interactions happening at the same time, generating a tremendous amount of noise. So without knowing what to look for and where, a particle hunt can be like looking for a very small needle in a very large haystack.

The ‘what’ and ‘where’ instead come from the different theories physicists have worked out based on what we already know; physicists then design experiments depending on which theory they need to test.

Into the hospital

One popular theory is called supersymmetry: it predicts that every elementary particle in the Standard Model framework has a heavier partner particle, called a supersymmetric partner. It also predicts the energy ranges in which these particles might be found. The Large Hadron Collider (LHC) at CERN, near Geneva, was powerful enough to access some of these energies, so physicists used it and went looking last decade. They didn’t find anything.

A table showing searches for particles associated with different post-standard-model theories (orange labels on the left). The bars show the energy levels up to which the ATLAS detector at the Large Hadron Collider has not found the particles. Table: ATLAS Collaboration/CERN

Other groups of physicists have also tried to look for rarer particles: ones that occur at an accessible energy but only once in a very large number of collisions. The LHC is a machine at the energy frontier: it probes higher and higher energies. To look for extremely rare particles, physicists explore the intensity frontier – using machines specialised in generating very large numbers of collisions.

The third and last is the cosmic frontier, in which scientists look for unusual particles coming from outer space. For example, early last month, researchers reported that they had detected an energetic anti-neutrino (a kind of fundamental particle) coming from outside the Milky Way participating in a rare event that scientists predicted in 1959 would occur if the Standard Model is right. The discovery, in effect, further cemented the validity of the Standard Model and ruled out one potential avenue to find ‘new physics’.

This event also recalls an interesting difference between the 2001 and 2021 announcements. The late British scientist Francis J.M. Farley wrote in 2001, after the Brookhaven result:

… the new muon (g-2) result from Brookhaven cannot at present be explained by the established theory. A more accurate measurement … should be available by the end of the year. Meanwhile theorists are looking for flaws in the argument and more measurements … are underway. If all this fails, supersymmetry can explain the data, but we would need other experiments to show that the postulated particles can exist in the real world, as well as in the evanescent quantum soup around the muon.

Since then, the LHC and other physics experiments have sent supersymmetry ‘to the hospital’ on more than one occasion. If the anomaly continues to hold up, scientists will have to find other explanations. Or, if the anomaly whimpers out, like so many others of our time, we’ll just have to put up with the Standard Model.

Featured image: A storage-ring magnet at Fermilab whose geometry allows for a very uniform magnetic field to be established in the ring. Credit: Glukicov/Wikimedia Commons, CC BY-SA 4.0.

The Wire Science
April 8, 2021


13 years

I realised some time ago that I completed 13 years of blogging around January or March (the month depends on which post I consider to be my first; archives on this blog go back only to March 2012, and the older posts are just awful to read today). Regardless of how bad my writing in this period has been, I consider the unlikely duration of this habit to be one of the few things that I can be, and enjoy being, unabashedly proud of. I’m grateful at this point to two particular groups of people: readers who email notes (of appreciation or criticism) in response to posts, and reviewers who go through many of my posts before they’re published. Let me thank the latter by name: Dhiya, Thomas, Madhusudhan, Jahnavi, Nehmat and Shankar. Thomas in particular has been of tremendous help – an engaged interlocutor of the sort that’s hard to find on any day. Thank you all very much!


On the NASEM report on solar geoengineering

A top scientific body in the US has asked the government to fund solar geoengineering research in a bid to help researchers and policymakers know the fullest extent of their options as the country deals with climate change.

Solar geoengineering is a technique in which sunlight-reflecting aerosols are pumped into the air, to subtract the contribution of solar energy to Earth’s rapidly warming surface.

The technique is controversial because the resulting solar dimming is likely to affect ecosystems in a detrimental way and because, without the right policy safeguards, its use could allow polluting industries to continue polluting.

The US National Academies of Sciences, Engineering and Medicine (NASEM) released its report on March 25. It describes three solar geoengineering strategies: stratospheric aerosol injection (described above), marine cloud brightening and cirrus cloud thinning.

“Although scientific agencies in the US and abroad have funded solar-geoengineering research in the past, governments have shied away from launching formal programmes in the controversial field,” Nature News reported. In addition, “Previous recommendations on the subject by elite scientific panels in the US and abroad have gone largely unheeded” – including NASEM’s own 2015 recommendations.

To offset potential roadblocks, the new report asks the US government to set up a transparent research administration framework, including a code of conduct, an open registry of researchers’ proposals for studies and a fixed process by which the government will grant permits for “outdoor experiments”. And to achieve these goals, it recommends a dedicated allocation of $100-200 million (Rs 728-1,456 crore).

According to experts who spoke to Nature News, Joe Biden being in the Oval Office instead of Donald Trump is crucial: “many scientists say that Biden’s administration has the credibility to advance geoengineering research without rousing fears that doing so will merely displace regulations and other efforts to curb greenhouse gases, and give industry a free pass.”

This is a significant concern for many reasons – including, notably, countries’ differentiated commitments to ensuring outcomes specified in the Paris Agreement and the fact that climate is a global, not local, phenomenon.

Data from 1900 to 2017 indicates that US residents had the world’s ninth highest carbon dioxide emissions per capita; Indians were 116th. This disparity, which holds between the group of large developed countries and of large developing countries in general, has given rise to demands by the latter that the former should do more to tackle climate change.

The global nature of climate is a problem particularly for countries with industries that depend on natural resources like solar energy and seasonal rainfall. One potential outcome of geoengineering is that climatic changes induced in one part of the planet could affect outcomes in a faraway part.

For example, the US government sowed the first major seeds of its climate research programme in the late 1950s after the erstwhile Soviet Union set off three nuclear explosions underground to divert the flow of a river. American officials were alarmed because they were concerned that changes to the quality and temperature of water entering the Arctic Ocean could affect climate patterns.

For another, a study published in 2007 found that when Mt Pinatubo in the Philippines erupted in 1991, it spewed 20 million tonnes of sulphur dioxide that cooled the whole planet by 0.5° C. As a result, the amount of rainfall dropped around the world as well.

In a 2018 article, Rob Bellamy, a Presidential Fellow in Environment at the University of Manchester, had also explained why stratospheric aerosol injection is “a particularly divisive idea”:

For example, as well as threatening to disrupt regional weather patterns, it, and the related idea of brightening clouds at sea, would require regular “top-ups” to maintain cooling effects. Because of this, both methods would suffer from the risk of a “termination effect”: where any cessation of cooling would result in a sudden rise in global temperature in line with the level of greenhouse gases in the atmosphere. If we hadn’t been reducing our greenhouse gas emissions in the background, this could be a very sharp rise indeed.

A study published in 2018 had sought to quantify the extent of this effect – a likely outcome of, say, projects losing political favour or funding. The researchers created a model in which humans pumped five million tonnes of sulphur dioxide a year into the stratosphere for 50 years, and suddenly stopped. One of the paper’s authors told The Wire Science at the time: “This would lead to a rapid increase in temperature, two- to four-times more rapid than climate change without geoengineering. This increase would be dangerous for biodiversity and ecosystems.”

Prakash Kashwan, a political scientist at the University of Connecticut and a senior research fellow of the Earth System Governance Project, has also written for The Wire Science about the oft-ignored political and social dimensions of geoengineering.

He told the New York Times on March 25, “Once these kinds of projects get into the political process, the scientists who are adding all of these qualifiers and all of these cautionary notes” – such as “the steps urged in the report to protect the interests of poorer countries” – “aren’t in control”. In December 2018, Kashwan also advised caution in the face of scientific pronouncements:

The community of climate engineering scientists tends to frame geoengineering in certain ways over other equally valid alternatives. This includes considering the global average surface temperature as the central climate impact indicator and ignoring vested interests linked to capital-intensive geoengineering infrastructure. This could bias future R&D trajectories in this area. And these priorities, together with the assessments produced by eminent scientific bodies, have contributed to the rise of a de facto form of governance. In other words, some ‘high-level’ scientific pronouncements have assumed stewardship of climate geoengineering in the absence of other agents. Such technocratic modes of governance don’t enjoy broad-based social or political legitimacy.

For now, the NASEM report “does not in any way advocate deploying the technology, but says research is needed to understand the options if the climate crisis becomes even more serious,” according to Nature News. The report itself concludes thus:

The recommendations in this report focus on an initial, exploratory phase of a research program. The program might be continued or expand over a longer term, but may also shrink over time, with some or all elements eventually terminated, if early research suggests strong reasons why solar geoengineering should not be pursued. The proposed approaches to transdisciplinary research, research governance, and robust stakeholder engagement are different from typical climate research programs and will be a significant undertaking; but such efforts will enable the research to proceed in an effective, societally responsive manner.

Matthew Watson, a reader in natural hazards at the University of Bristol, had discussed a similar issue in conversation with Bellamy in 2018, including an appeal to our moral responsibilities the same way ‘geoengineers’ must be expected to look out for transnational and subnational effects:

Do you remember the film 127 Hours? It tells the (true) story of a young climber who, pinned under a boulder in the middle of nowhere, eventually ends up amputating his arm, without anaesthetic, with a pen knife. In the end, he had little choice. Circumstances dictate decisions. So if you believe climate change is going to be severe, you have no option but to research the options (I am not advocating deployment) as broadly as possible. Because there may well come a point in the future where it would be immoral not to intervene.

The Wire Science
March 30, 2021

Lord of the Rings Day

Here’s wishing you a Happy Lord of the Rings Day! (Previous editions: 2020, 2019, 2018, 2017, 2016, 2014.) On this day in the book, Frodo, Sam and Smeagol (with help from Gandalf, Aragorn, Gimli, Legolas, Faramir, Eowyn, Theoden, Eomer, Treebeard and the Ents, Meriadoc, Peregrin, Galadriel, Arwen and many, many others) destroyed the One Ring in the fires of Orodruin, throwing down Barad-dûr, bringing about the end of Sauron the Deceiver, forestalling the Age of Orcs and making way for peace on Middle-earth.

Even though my – rather our – awareness of the different ways in which Lord of the Rings, and J.R.R. Tolkien’s literature more broadly, is flawed increases every year, I’ve come back to the trilogy more in the last year than before. I find that it’s entwined in messy ways with various events in my life, having been the sole piece of fantasy I read between 1998 and 2005. More importantly, because Lord of the Rings was more expansive than most similar work of its time, I often can’t help but see that much of what came after is responding to it in some way. (I know I’ve made this point before but, as in journalism, what stories we have available to tell doesn’t change just because we’re repeating ourselves. :D)

This said, I don’t know what Lord of the Rings means today, in 2021, simply because the last 15 months or so have been a lousy time for replenishing my creative energy. I haven’t been very able to think about stories, leave alone write them – but on the flip side, I’ve been very grateful for the work and energy of story writers and tellers, irrespective of how much of it they’ve been able to summon, whether one sentence or one book, or the forms in which they’ve been able to summon it, whether as a Wikipedia page, a blog post, a D&D quest or a user’s manual. I’m thankful for all the stories that keep us going just as I’m mindful that everything, even the alt text of images, is fiction. More power to anyone thinking of something and putting it down in words – and also to your readers.

Defending philosophy of science

From Carl Bergstrom’s Twitter thread about a new book called The Knowledge Machine: How Irrationality Created Modern Science, by Michael Strevens:

The Iron Rule from the book is, in Bergstrom’s retelling, “no use of philosophical reasoning in the mode of Aristotle; no leveraging theological or scriptural understanding in the mode of Descartes. Formal scientific arguments must be sterilised, to use Strevens’s word, of subjectivity and non-empirical content.” I was particularly taken by the use of the term ‘individual’ in the tweet I’ve quoted above. The point about philosophical argumentation being an “individual” technique is important, often understated.

There are some personal techniques we use to discern some truths but which we don’t publicise. But the more we read and converse with others doing the same things, the more we may find that everyone has many of the same stand-ins – tools or methods that we haven’t empirically verified to be true and/or legitimate but which we have discerned, based on our experiences, to be suitably good guiding lights.

I discovered this issue first when I read Paul Feyerabend’s Against Method many years ago, and then in practice when, while reporting some stories, I found that scientists in different situations often developed similar proxies for processes that couldn’t be performed in full due to resource constraints. But they seldom spoke to each other (especially across institutes), thus allowing an idealised view of how to do something to harden in place even as almost everyone actually did that something in their own, often similar, ways.

A very common example of this is scientists evaluating papers based on the ‘prestigiousness’ and/or impact factors of the journals the papers are published in, instead of based on their contents – often simply for lack of time and proper incentives. As a result, ideas like “science is self-correcting” and “science is objective” persist as ideals because they’re products of applying the Iron Rule to the process of disseminating the products of one’s research.

But “by turning a lens on the practice of science itself,” to borrow Bergstrom’s words, philosophies of science allow us to spot deviations from the normal prescribed by “Iron Rule Ecclesiastics” like Richard Dawkins – and, to me particularly, to reveal how we really, actually do science and how we can become better at it. Or as Bergstrom put it: “By understanding how norms and institutions create incentives to which scientists respond …, we can find ways to nudge the current system toward greater efficiency.”

(It is also a bit gratifying to see the book as well as Bergstrom pick on Lawrence Krauss. The book goes straight into my reading list.)


COVID-19, due process and an SNR problem

At a press conference streamed live on March 18, the head of the European Medicines Agency (EMA) announced that the body – which serves as the European Union’s drug and vaccine regulator – had concluded that the AstraZeneca COVID-19 vaccine was not associated with unusual blood clots that some vaccine recipients had reported in multiple countries. The pronouncement marked yet another twist in the roller-coaster ride the embattled shot has experienced over the past few months. But it has also left bioethicists debating how it is that governments should respond to a perceived crisis over vaccines during a pandemic.

Over the last two weeks or so, a fierce debate raged after a relatively small subset of people who had received doses complained of developing blood clots related to potentially life-threatening conditions. AstraZeneca, a British-Swedish company, didn’t respond to the concerns at first even though the EMA and the WHO continued to hold their ground: that the vaccine’s benefits outweighed its risks, so people should continue to take it. However, a string of national governments, including those of Germany, France and Spain, responded by pausing its rollout while scientists assessed the risks of receiving the vaccine.

Aside from allegations that AstraZeneca tried to dress up a significant mistake during its clinical trials of the vaccine as a ‘discovery’ and cherry-picked data from the trials to have the shot approved in different countries, the company has also been grappling with the fact that the shot was less efficacious than is ideal against infections by new, more contagious variants of the novel coronavirus.

But at the same time, the AstraZeneca vaccine is also one of the more affordable ones that scientists around the world have developed to quell the COVID-19 pandemic – more so than the Pfizer and Moderna mRNA vaccines. AstraZeneca’s candidate is also easier to store and transport, and is therefore in high demand in developing and under-developed nations around the world. Its doses are being manufactured by two companies, in India and South Korea, although geographically asymmetric demand has forced an accelerating vaccination drive in one country to come at the cost of deceleration in another.

Shot in the arm

Now that the EMA has reached its verdict, most of the 20 countries who had hit the pause button have announced that they will resume use of the vaccine. However, the incident has spotlighted a not-unlikely problem with the global vaccination campaign, and which could recur if scientists, ethicists, medical workers and government officials don’t get together to decide where they can draw the line between abundant precaution and harm.

In fact, there are two versions of this problem: one in countries that have a functional surveillance system that responds to adverse events following immunisation (AEFIs) and one in countries that don’t. An example of the former is Germany, which, according to the New York Times, decided to pause the rollout based on seven reports of rare blood clots from a pool of 1.6 million recipients – a naïve incidence rate of 0.0004375%. But as rare disorders go, this isn’t a negligible figure.
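For the record, here’s the arithmetic behind that naïve rate, using the figures quoted above:

```python
# Naive incidence rate: reported cases divided by vaccine recipients.
cases, recipients = 7, 1_600_000
rate = cases / recipients
print(f"{rate * 100:.7f}%")   # prints: 0.0004375%
```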

One component of the post-AEFI response protocol is causality assessment, and one part of this is for experts to check if purported side-effects are clustered in time, and then to compare that clustering with the illness’s distribution over a long period before the pandemic. It’s possible that such clustering prompted health officials in Germany and other countries to suspend the rollout.

The Times quoted a German health ministry statement saying, “The state provides the vaccine and therefore has special duties of care”. These care considerations include what the ministry understands to be the purpose of the rollout (to reduce deaths? To keep as many people healthy as possible?) read together with the fact that vaccines are like drugs except in one important way: they’re given to healthy – and not to sick – people. To quote Stephan Lewandowsky, an expert on risk communication at the University of Bristol, from Science:

“You’ve got to keep the public on board. And if the public is risk-averse, as it is in Europe … it may have been the right decision to stop, examine this carefully and then say, ‘The evidence, when considered transnationally, clearly indicates it is safe to go forward.’”

On the other hand is the simpler and opposing calculus of how many people didn’t develop blood clots after taking the vaccine, how many more people the virus is likely to have infected in the time the state withheld the vaccine, how many of them were at greater risk of developing complications due to COVID-19 – topped off by the fact of the vaccines being voluntary. On this side of the argument, the state’s carefulness is smothering, considering it’s using a top-down policy without accounting for local realities or the state’s citizens’ freedom to access or refuse the vaccine during a pandemic.

Ultimately there appears to be no one right answer, at least in a country where there’s a baseline level of trust that the decision-making process included a post-vaccination surveillance system that’s doing its job. Experts have also said governments should consider ‘mixed responses’ – like continuing rollouts while also continuing to examine the vaccines, given the possibility that a short-term review may have missed something a longer term exercise could find. One group of experts in India has even offered a potential explanation.

The background rate

In countries where such a system doesn’t exist, or does but is broken, like India, there is actually one clear answer: to be transparent and accountable instead of opaque and intractable. For example, N.K. Arora, a member of India’s National COVID-19 Task Force, told The Hindu recently that while the body would consider post-vaccination data of AstraZeneca’s vaccine, it also believed the fraction of worrying cases to be “very, very low”. Herein lies the rub: how does it know?

As of early March, according to Arora, the Union health ministry had recorded “50-60” cases of AEFIs that may or may not be related to receiving either of the two vaccines in India’s drive, Covaxin and Covishield. (The latter is the name of AstraZeneca’s shot in India.) Reading this with Arora’s statements and some other facts of the case, four issues become pertinent.

First is the deceptively simple problem of the background rate. Journalist Priyanka Pulla’s tweets prompt multiple immediate concerns on this front. If India had reported 10 cases of disease X in 20 years, but 10 more cases show up within two weeks after receiving one dose of a vaccine, should we assume the vaccine caused them? No – but it’s a signal that we should check for the existence of a causal link.

Experts will need to answer a variety of questions here: How many people have disease X in India? How many people of a certain age-group and gender have disease X? How many people of different religious and/or ethnic groups have disease X? How many cases of disease X are we likely to have missed (considering disease-underreporting is a hallmark of Indian healthcare)? How many cases of disease X should we expect to find in the population being vaccinated in the absence of a vaccine? Do the 10 new cases, or any subset of them, have a common but invisible cause unrelated to the vaccine? Do we have the data for all these considerations?
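To make the ‘background rate’ question concrete, here’s a minimal sketch of the observed-versus-expected check described above. Every number in it is made up for illustration:

```python
# Hypothetical observed-vs-expected check for an AEFI signal.
# All numbers below are made up for illustration only.
from scipy.stats import poisson

background_rate = 7 / 100_000      # assumed annual incidence of disease X
recipients = 1_600_000             # vaccinated cohort under observation
window = 14 / 365                  # two-week post-vaccination window, in years

expected = background_rate * recipients * window   # cases expected by chance
observed = 10                                      # cases actually reported

# Probability of seeing at least `observed` cases if the vaccine played no role
p = poisson.sf(observed - 1, expected)
print(f"expected ≈ {expected:.1f}, observed = {observed}, P ≈ {p:.3f}")
# A small P is a signal worth investigating - not proof of a causal link.
```

As the last comment says, a small probability here only flags a signal worth investigating; establishing causality requires the fuller assessment the protocol describes.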

Cornelia Betsch, a psychologist at the University of Erfurt, told Science that “most of the cases of rare blood disorders were among young women, the group where vaccine hesitancy already runs highest”. Can India confirm or deny that this trend is reflected in its domestic data as well? This seems doubtful. Sarah Iqbal reported for The Wire Science in September 2020 that “unequal access to health”, unequal exposure to potentially disease-causing situations, unequal representation in healthcare data and unequal understanding of diseases in non-cis-male bodies together already render statements like ‘women have better resistance to COVID-19’ ignorant at best. Being able to reliably determine and tackle sex-wise vaccine hesitancy seems like a tall order.

The second issue is easy to capture in one question, which also makes it harder to ignore: why hasn’t the government released reports or data about AEFIs in India’s COVID-19 vaccination drive after February 26, 2021?

On March 16, a group of 29 experts from around the country – including virologist T. Jacob John, who has worked with the Indian Council of Medical Research on seroprevalence surveys and has said skeptics of the Indian drug regulator’s Covaxin approval were “prejudiced against Indian science/product” – wrote to government officials asking for AEFI data. They said in their letter:

We note with concern that critical updates to the fact sheets recommended by the CDSCO’s Subject Expert Committee have not been issued, even though they are meant to provide additional guidance and clarify use of the vaccines in persons such as those with allergies, who are immunocompromised or using immunosuppressants, or using blood thinners/anticoagulants. There are gaps in AEFI investigations at the local level, affecting the quality of evidence submitted to State and National AEFI Committees who depend on these findings for making causality assessments. The National AEFI Committee also has a critical role in assessing cases that present as a cluster and to explore potential common pathways. In our letter dated January 31, 2021, we asked for details of all investigations into deaths and other serious AEFIs, as well as the minutes of AEFI monitoring committees, and details of all AEFI committee members and other experts overseeing the vaccine rollout. We have not received any response.

City of Omelas

The third issue is India’s compliance with AEFI protocols – which, when read together with Pulla’s investigation of Bharat Biotech’s response to a severe adverse event in its phase 3 trials for Covaxin, doesn’t inspire much confidence. For example, media reports suggest that medical workers around the country aren’t treating all post-vaccination complaints of ill-health, but especially deaths, on equal footing. “Currently, we are observing gaps in how serious adverse events are being investigated at the district level,” New Delhi-based health activist Malini Aisola told IndiaSpend on March 9. “In many instances local authorities have been quick to make public statements that there is no link to the vaccine, even before investigations and post mortem have taken place. In some cases there is a post mortem, in some cases there isn’t.”

Some news reports of people having died of heart-related issues at a point of time after taking Covishield also include quotes from doctors saying the victims were known to have heart ailments – as if to say their deaths were not related to the vaccine.

But in the early days of India’s COVID-19 epidemic, experts told The Wire that even when people with comorbidities, like impaired kidney function, died due to renal failure and tested positive for COVID-19 at the time of death, their passing could be excluded from the official deaths tally only if experts had made sure the two conditions were unrelated – and this is difficult. Having a life-threatening illness doesn’t automatically make it the cause of death, especially since COVID-19 is also known to affect or exacerbate some existing ailments, and vice versa.

Similarly, today, is the National AEFI Committee for the COVID-19 vaccination drive writing off deaths as being unrelated to the vaccine or are they being considered to be potential AEFIs? And is the committee deliberating on these possibilities before making a decision? The body needs to be transparent on this front a.s.a.p. – especially since the government has been gifting AstraZeneca’s shots to other countries and there’s a real possibility of it suppressing information about potential problems with the vaccine to secure its “can do no wrong” position.

Finally, there’s the ‘trolley problem’, as the Times also reported – an ethical dilemma that applies in India as well as other countries: if you do nothing, three people will get hit by a train and die; if you pull a lever, the train will switch tracks and kill one person. What do you do?

But in India specifically, this dilemma is modified by the fact that due process is missing; this changes the problem to one that finds better, more evocative expression in Ursula K. Le Guin’s short story The Ones Who Walk Away from Omelas (1973). Omelas is a fictitious place, like paradise on Earth, where everyone is happy and content. But by some magic, this is only possible if the city can keep a child absolutely miserable, wretched, with no hope of a better life whatsoever. The story ends by contemplating the fate of those who discover the city’s gory secret and decide to leave.

The child in distress is someone – even just one person – who has reported an AEFI that could be related to the vaccine they took. When due process plays truant, when a twisted magic that promises bliss in return for ignorance takes shape, would you walk away from Omelas? And can you freely blame those who hesitate about staying back? Because this is how vaccine hesitancy takes root.

The Wire
March 20, 2021

A tale of vortices, skyrmions, paths and shapes

There are many types of superconductors. Some of them can be explained by an early theory of superconductivity called Bardeen-Cooper-Schrieffer (BCS) theory.

In these materials, vibrations in the atomic lattice force the electrons in the material to overcome their mutual repulsion and team up in pairs, if the material’s temperature is below a particular (very low) threshold. These pairs of electrons, called Cooper pairs, have some properties that individual electrons can’t have. One of them is that all Cooper pairs together form an exotic state of matter called a Bose-Einstein condensate, which can flow through the material with much less resistance than individual electrons experience. This is the gist of BCS theory.

When the Cooper pairs are involved in the transmission of an electric current through the material, the material is an electrical superconductor.

Some of the properties of the two electrons in each Cooper pair can influence the overall superconductivity itself. One of them is the pair’s orbital angular momentum. If both electrons have orbital angular momenta of equal magnitude but pointing in opposite directions, the pair’s relative orbital angular momentum is 0. Such materials are called s-wave superconductors.

Sometimes, in s-wave superconductors, some of the electric current – or supercurrent – starts flowing in a vortex within the material. If these vortices can be coupled with a magnetic structure called a skyrmion, physicists believe they can give rise to new behaviours previously not seen in materials, some of them with important applications in quantum computing. Coupling here implies that a change in the properties of the vortex should induce changes in the skyrmion, and vice versa.

However, physicists have had a tough time creating a vortex-skyrmion coupling that they can control. As Gustav Bihlmayer, a staff scientist at the Jülich Research Centre, Germany, wrote for APS Physics, “experimental studies of these systems are still rare. Both parts” of the structures bearing these features “must stay within specific ranges of temperature and magnetic-field strength to realise the desired … phase, and the length scales of skyrmions and vortices must be similar in order to study their coupling.”

In a new paper, a research team from Nanyang Technological University, Singapore, has reported that they have achieved just such a coupling: they created a skyrmion in a chiral magnet and used it to induce the formation of a supercurrent vortex in an s-wave superconductor. In their observations, they found this coupling to be stable and controllable – important attributes to have if the setup is to find practical application.

A chiral magnet is a material whose internal magnetic field “typically” has a spiral or swirling pattern. A supercurrent vortex in an electrical superconductor is analogous to a skyrmion in a chiral magnet; a skyrmion is a “knot of twisting magnetic field lines” (source).

The researchers sandwiched an s-wave superconductor and a chiral magnet together. When the magnetic field of a skyrmion in the chiral magnet interacted with the superconductor at the interface, it induced a spin-polarised supercurrent (i.e. the participating electrons’ spins are aligned along a certain direction). This phenomenon is called the Rashba-Edelstein effect, and it essentially converts electric charge to electron spin and vice versa. To do so, the effect requires the two materials to be in contact and depends among other things on properties of the skyrmion’s magnetic field.

There’s another mechanism of interaction in which the chiral magnet and the superconductor don’t have to be in touch, and which the researchers successfully attempted to recreate. They preferred this mechanism, called stray-field coupling, to demonstrate a skyrmion-vortex system for a variety of practical reasons. For example, the chiral magnet is placed in an external magnetic field during the experiment. Taking the Rashba-Edelstein route means that, to achieve “stable skyrmions at low temperatures in thin films”, the field needs to be stronger than 1 T. (Earth’s magnetic field measures 25-65 µT.) Such a strong field could damage the s-wave superconductor.

For the stray-field coupling mechanism, the researchers inserted an insulator between the chiral magnet and the superconductor. Then, when they applied a small magnetic field, Bihlmayer wrote, the field “nucleated” skyrmions in the structure. “Stray magnetic fields from the skyrmions [then] induced vortices in the [superconducting] film, which were observed with scanning tunnelling spectroscopy.”


Experiments like this one reside at the cutting edge of modern condensed-matter physics. A lot of their complexity lies in scientists being able to closely control the conditions in which different quantum effects play out, using similarly advanced tools and techniques to understand what could be going on inside the materials, and picking the right combination of materials to use.

For example, the heterostructure the physicists used to manifest the stray-field coupling mechanism had the following composition, from top to bottom:

  • Platinum, 2 nm (layer thickness)
  • Niobium, 25 nm
  • Magnesium oxide, 5 nm
  • Platinum, 2 nm

The next four layers are repeated 10 times in this order:

  • Platinum, 1 nm
  • Cobalt, 0.5 nm
  • Iron, 0.5 nm
  • Iridium, 1 nm

Back to the overall stack:

  • Platinum, 10 nm
  • Tantalum, 2 nm
  • Silicon dioxide (substrate)

The first two make up the superconductor, the magnesium oxide is the insulator, and the rest (except the substrate) make up the chiral magnet.
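To see the stack’s arithmetic at a glance, here’s a small sketch that encodes the same sequence as data and expands the tenfold repetition – purely a bookkeeping exercise, following the grouping described above, with thicknesses in nanometres:

```python
# The heterostructure as a list of (material, thickness-in-nm) pairs,
# top to bottom, with the repeated chiral-magnet block expanded.
superconductor = [("Pt", 2.0), ("Nb", 25.0)]
insulator = [("MgO", 5.0)]
chiral_magnet = ([("Pt", 2.0)]
                 + [("Pt", 1.0), ("Co", 0.5), ("Fe", 0.5), ("Ir", 1.0)] * 10
                 + [("Pt", 10.0), ("Ta", 2.0)])

stack = superconductor + insulator + chiral_magnet   # SiO2 substrate excluded
total = sum(thickness for _, thickness in stack)
print(f"{len(stack)} layers, {total:.0f} nm above the substrate")
# prints: 46 layers, 76 nm above the substrate
```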

It’s possible to erect a stack like this through trial and error, with no deeper understanding dictating the choice of materials. But when the universe of possibilities – of elements, compounds and alloys, their shapes and dimensions, and ambient conditions in which they interact – is so vast, the exercise could take many decades. But here we are, at a time when scientists have explored various properties of materials and their interactions, and are able to engineer novel behaviours into existence, blurring the line between discovery and invention. Even in the absence of applications, such observations are nothing short of fascinating.

Applications aren’t wanting, however.


A quasiparticle is a packet of energy that behaves like a particle in a specific context even though it isn’t actually one. For example, the proton is a quasiparticle because it’s really a clump of smaller particles (quarks and gluons) that together behave in a fixed, predictable way. A phonon is a quasiparticle that represents some vibrational (or sound) energy being transmitted through a material. A magnon is a quasiparticle that represents some magnetic energy being transmitted through a material.

On the other hand, an electron is said to be a particle, not a quasiparticle – as are neutrinos, photons, Higgs bosons, etc.

Now and then physicists abstract packets of energy as particles in order to simplify their calculations.

(Aside: I’m aware of the blurred line between particles and quasiparticles. For a technical but – if you’re prepared to Google a few things – fascinating interview with condensed-matter physicist Vijay Shenoy on this topic, see here.)

We understand how these quasiparticles behave in three-dimensional space – the space we ourselves occupy. Their properties are likely to change if we study them in lower or higher dimensions. (Even if directly studying them in such conditions is hard, we know their behaviour will change because the theory describing their behaviour predicts it.) But there is one type of quasiparticle that exists only in two dimensions and is quite different, in a strange way, from the others: the anyon.

Say you have two electrons in an atom orbiting the nucleus. If you exchanged their positions with each other, the measurable properties of the atom will stay the same. If you swapped the electrons once more to bring them back to their original positions, the properties will still remain unchanged. However, if you switched the positions of two anyons in a quantum system, something about the system will change. More broadly, if you started with a bunch of anyons in a system and successively exchanged their positions until they had a specific final arrangement, the system’s properties will have changed differently depending on the sequence of exchanges.

This is called path dependency, and anyons are unique in possessing this property. In technical language, anyons are non-Abelian quasiparticles. They’re interesting for many reasons, but one application stands out. Quantum computers are devices that use the quantum mechanical properties of particles, or quasiparticles, to execute logical decisions (the same way ‘classical’ computers use semiconductors). Anyons’ path dependency is useful here. Arranging anyons in one sequence to achieve a final arrangement can be mapped to one piece of information (e.g. 1), and arranging anyons by a different sequence to achieve the same final arrangement can be mapped to different information (e.g. 0). This way, what information can be encoded depends on the availability of different paths to a common final state.
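A toy way to see what ‘non-Abelian’ means – these are made-up matrices standing in for exchange operations, not the actual anyon algebra – is to check that the order of operations changes the final state:

```python
# Toy illustration of path dependence: model two exchange operations as
# unitary matrices and show that applying them in different orders
# produces different outcomes. (Made-up matrices, not real anyons.)
import numpy as np

A = np.array([[1, 0], [0, 1j]])               # adds a phase to one state
B = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # mixes the two states

order_1 = A @ B   # exchange B first, then A
order_2 = B @ A   # exchange A first, then B
print(np.allclose(order_1, order_2))  # False: the outcome depends on the path
```

For ordinary particles, performing the exchanges in different orders would at most change the state by an overall phase; here the two orders produce genuinely different states, which is what makes different paths distinguishable.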

In addition, an important issue with existing quantum computers is that they are too fragile: even a slight interaction with the environment can cause the devices to malfunction. Using anyons for the qubits could overcome this problem because the information stored doesn’t depend on the qubits’ existing states but the paths that they have taken there. So as long as the paths have been executed properly, environmental interactions that may disturb the anyons’ final states won’t matter.

However, creating such anyons isn’t easy.

Now, recall that s-wave superconductors are characterised by the relative orbital angular momentum of electrons in the Cooper pairs being 0 (i.e. equal but in opposite directions). In some other materials, it’s possible that the relative value is 1. These are the p-wave superconductors. And at the centre of a supercurrent vortex in a p-wave superconductor, physicists expect to find non-Abelian anyons.

So the ability to create and manipulate these vortices in superconductors, as well as, more broadly, explore and understand how magnet-superconductor heterostructures work, is bound to be handy.


The Nanyang team’s paper calls the vortices and skyrmions “topological excitations”. An ‘excitation’ here is an accumulation of energy in a system over and above what the system has in its ground state. Ergo, it’s excited. A topological excitation refers to energy manifested in changes to the system’s topology.

On this subject, one of my favourite bits of science is topological phase transitions.

I usually don’t quote from Wikipedia but communicating condensed-matter physics is exacting. According to Wikipedia, “topology is concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling and bending”. For example, no matter how much you squeeze or stretch a donut (without breaking it), it’s going to be a ring with one hole. Going one step further, your coffee mug and a donut are topologically similar: they’re both objects with one hole.

I also don’t like the Nobel Prizes but some of the research that they spotlight is nonetheless awe-inspiring. In 2016, the prize was awarded to Duncan Haldane, John Kosterlitz and David Thouless for “theoretical discoveries of topological phase transitions and topological phases of matter”.

David Thouless in 1995. Credit: Mary Levin/University of Washington

Quoting myself from 2016:

There are four popularly known phases of matter: plasma, gas, liquid and solid. If you cooled plasma, its phase would transit to that of a gas; if you cooled gases, you’d get a liquid; if you cooled liquids, you’d get a solid. If you kept cooling a solid until you were almost at absolute zero, you’d find substances behaving strangely because, suddenly, quantum mechanical effects show up. These phases of matter are broadly called quantum phases. And their phase transitions are different from when plasma becomes a gas, a gas becomes a liquid, and so on.

A Kosterlitz-Thouless transition describes a type of quantum phase transition. A substance in the quantum phase, like all substances, tries to possess as low energy as possible. When it gains some extra energy, it sheds it. And how it sheds it depends on what the laws of physics allow. Kosterlitz and Thouless found that, at times, the surface of a flat quantum phase – like the surface of liquid helium – develops vortices, akin to a flattened tornado. These vortices always formed in pairs, so the surface always had an even number of vortices. And at very low temperatures, the vortices were always tightly coupled: they remained close to each other even when they moved across the surface.

The bigger discovery came next. When Kosterlitz and Thouless raised the temperature of the surface, the vortices moved apart and moved around freely, as if they no longer belonged to each other. In terms of thermodynamics alone, the vortices being alone or together wouldn’t depend on the temperature, so something else was at play. The duo had found a kind of phase transition – because it did involve a change in temperature – that didn’t change the substance itself but only a topological shift in how it behaved. In other words, the substance was able to shed energy by coupling the vortices.

Reality is so wonderfully weird. It’s also curious that some concepts that seemed significant when I was learning science in school (like invention versus discovery) and in college (like particle versus quasiparticle) – concepts that seemed meaningful and necessary to understand what was really going on – don’t really matter in the larger scheme of things.

Reimagining science, redux

This article on Founding Fuel has some great suggestions, I thought, but it merits sharing with a couple of caveats.

First, in narratives about making science “easier to do”, commentators give science-industry linkages more play than science-society ones. This has been true in the past and continues to be. We remember and periodically celebrate the work of Shanti Swarup Bhatnagar and M. Visveshwaraya, but not with nearly equal fanfare that of, say, Yash Pal or the members of the Hoshangabad Science Teaching Programme.

In public dialogues about making the work of scientists more relevant, writers and TV panellists often touch on spending more money to set up larger, better-supplied labs and on improving ties between labs and industry, where research is translated into products or services. Spending more on science is necessary, as is the need to support collaborations, regularise funding and grant-giving, improve working conditions for teachers, etc.

More broadly, I acknowledge that the problem is that there isn’t enough good science happening in the country, that the author is recommending various ways in which science-industry linkages and tweaks within the science ecosystem can both change this for the better, and that science-society linkages are unlikely to be of help on this front. However, could this be because we’re asking the wrong question?

That is, what science and industry can do for each other becomes relevant if what we’re seeking is the growth of science, as defined by some parameters (number of citations, number of patents, etc.), as an enterprise in and of itself – as if its fortunes and outcomes weren’t already yoked to other societal endeavours. Growth for growth’s sake. Science-society linkages become relevant on the other hand when the parameters are, say, research and academic liberties, extent of public participation, distribution of opportunities, freedom from government interference, etc. – when quantitative growth is both difficult and more aligned with nation-building.

Ultimately, we don’t need a science that becomes easier to do at the expense of not thinking about whether it needs to be done, or done differently. This is not a veiled comment against ‘blue sky’ research, which must continue, but is directed against ‘black sky’ research – which goes on to pollute our air and water, drills forestland for oil, dams rivers and destabilises ecosystems without thought for the consequences.

Nevertheless, in a system designed increasingly to incentivise working with the private sector, to self-finance one’s work through patents and other licenses, and to translate research into arbitrarily defined “useful” things, such thinking can only become more penalised, more unfavourable. And the science that is rolled into technologies will only be industry friendly, which in the current political climate means Ambani- and/or Adani-friendly, to the detriment of everyone else, especially those on the bottom rungs of society.

Second, the article’s author uses Nobel Prize-winning work to describe presumably the extent of what is possible when faculty members at an institute work together or when researchers collaborate with their younger peers. But in the process he frames ‘collaborations that produce Nobel Prizes’ as desirable. This is a problem because doing so overlooks collaborations that didn’t win Nobel Prizes, because laureates are often white men (non-white, non-cis-men may not be able to ‘breach’ such ‘in-groups’ because of structural factors even as solutions to break these barriers are ignored in favour of a flatter ‘prize-winning’ one), and because “Nobel-Prize-winning collaborations” is an oxymoron.

The last is the easiest to see: the prizes are awarded to at most three people at a time, whereas the author himself quotes a study that found the number of authors per scientific paper rose from 3.2 to 4.4 between 1996 and 2015.


As a corrective of sorts – to infuse the deliberations prompted by the Founding Fuel article with what a focus on industry-oriented development leaves out – let me quote at length from an essay Mukund Thattai published with The Wire three years ago, exploring whether there is “an Indian way of doing science” (emphases mine):

There is a strong case to fund science for the same reason we fund the arts or sport. Science is a cultural activity: it reveals unexpected beauty in the everyday; it captures the imagination of children; it attempts to answer some of humanity’s biggest questions about where we came from. Moreover, scientific ideas can be a potent component of the process by which society arrives at collective decisions about the future. Among the strongest reasons a resource-limited country such as India should fund curiosity-driven science is that the nature of future crises cannot be predicted.

It is impossible to micromanage the long-term research agenda, so the only hope is to cast a wide net. A broad and deep scientific community is a valuable resource that can be called upon to give its inputs on a variety of issues. They cannot be expected to always deliver a solution but can be expected to provide the best possible information available at any time. In this consultative process, it is crucially important to not privilege scientific experts over other participants in the discussion.

… Science thrives within a diversity of questions and methods, a diversity of institutional environments, and a diversity of personal experiences of individual scientists. In the modern era, the practice of science has moved to a more democratic mode, away from the idea of lone geniuses and towards a collective effort of creating hypotheses and sharing results. Any tendency toward uniformity and career professionalisation dilutes and ultimately destroys this diversity. As historian of science Dhruv Raina describes it, a science that is vulnerable to the “pressures of government” is “no longer an open frontier of critical activity”. Instead, science must become “social and reflexive”.

Ideas and themes must bubble up from the broadest possible community. In India, access to such a process is limited by the accident of one’s mother tongue and social class, and this must change. Anyone who wants to should have the opportunity to understand what scientists are doing. Ultimately, this must involve not only scientists but also social scientists, historians, philosophers, artists and communicators – and the public at large.

… Is there such a thing as an “Indian way” of doing science? Science in the abstract is said to transcend national boundaries. In practice it is strongly influenced by local experiences and local history. Unfortunately, even as national missions have faded to the background, they have been replaced by an imitation of Western fashions. It has become common to look to high-profile journals and conferences as arbiters of questions worth asking. This must stop. The key to revitalising Indian science is the careful choice of rich questions. These questions could be driven by new national missions that bring the excitement of a collective effort. Or they could be inspired by observing the complex interactions of the world immediately around us.

There is a great deal of scholarship and scientific inquiry that can arise from the study of India’s traditional knowledge systems. The country’s enormous biodiversity and human genetic diversity are an exciting and bottomless source of scientific puzzles and important secrets. Such questions would allow for a deeper two-way engagement with India’s people. This is not to say Indian scientists cannot work on internationally important problems – quite the opposite. The scientific community in India, working within their own unique contexts, could become the source of important problems that anyone in the world would be excited to work on.

… The internationalisation of science is an important goal in and of itself. While it stimulates cross-fertilisation of ideas and pushes up standards within science, it also creates opportunities for broader global discussions and engagements. The unfortunate hurdles which curtail the ability of Indian academics and students to travel abroad, and the enormous difficulty foreign academics face in obtaining necessary permissions to visit their colleagues in India, serve no purpose. In spite of all this, there is a healthy trend towards stronger international links.

Academic scientists have long played dual roles as teachers and researchers. Within India, science has a remarkably broad appeal. Public science talks are standing-room-only affairs, and famous scientists receive the kind of adulation typically reserved for movie stars. Students across the country are excited about science. Many aspire to become scientists themselves.

Historically, engineering and medical colleges have attracted scientifically-minded students, but this is changing. The Indian Institutes of Science Education and Research have now been running undergraduate programs for over a decade in cities across India. These institutions are to science what the IITs are to engineering, attracting some of the brightest students each year. Science programs within public universities have not fared as well, and must seize every opportunity to reinvent themselves. A science curriculum based not on dry facts but on the history and process of discovery can form the base of a broad education, in conjunction with the humanities and the arts.

Physicists produce video of time crystal in action 😱

Have you heard of time crystals?

A crystal is any object whose atoms are arranged in a fixed pattern in space, with the pattern repeating itself. So what we typically know to be crystals are really space crystals. We didn’t have to bother with the prefix because space crystals were the only kind of crystals we knew until time crystals came along.

Time crystals are crystalline objects whose atoms exhibit behaviour that repeats itself in time, as periodic events. The atoms of a time crystal spin in a fixed and coordinated pattern, changing direction at fixed intervals.
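One way to formalise the analogy – the notation here is mine, not anything from the research: a space crystal’s structure repeats under a shift in space, while a time crystal’s observable behaviour, say its magnetisation m, repeats under a shift in time.

\[ \rho(x + a) = \rho(x) \quad \text{(space crystal, lattice spacing } a\text{)} \]
\[ m(t + T) = m(t) \quad \text{(time crystal, period } T\text{)} \]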

Physicists sometimes prefer to quantify these spin patterns as quasiparticles to simplify their calculations. Quasiparticles are not particles per se. To understand what they are, consider a popular one: the phonon. Say you strike a metal spoon on a table, producing a mild ringing sound. This sound is the result of sound waves propagating through the metal’s grid of atoms, carrying vibrational energy. You could also understand each wave to be a particle instead, carrying the same amount of energy that the wave carries. These quasiparticles are called phonons.
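To make “the same amount of energy” precise – this is the standard textbook picture, not something specific to this story – each vibrational mode of angular frequency ω behaves like a quantum harmonic oscillator, so its energy can change only in steps of one phonon:

\[ E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega, \qquad \Delta E = \hbar\omega \ \text{per phonon.} \]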

In the same way, patterns of spinning charged particles also carry some energy. Each electron in an atom, for example, generates a tiny magnetic field around itself as it spins. The directions in which the electrons in a material spin collectively determine many properties of the material’s macroscopic magnetic field. Sometimes, shifts in some electrons’ magnetic fields can set off a disturbance in the macroscopic field – like waves of magnetic energy rippling out. You could quantify these ‘spin waves’ in the form of quasiparticles called magnons. Note that magnons quantify spin waves; the waves themselves can arise from electrons, ions or other charged particles.
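For a flavour of how physicists attach an energy to each magnon – a standard textbook model, not something drawn from the experiment discussed below – consider a one-dimensional chain of spins of magnitude S, spaced a distance a apart and coupled by an exchange energy J. Creating one magnon with wavevector k costs

\[ \varepsilon(k) = 2JS(1 - \cos ka) = 4JS\,\sin^2\!\frac{ka}{2}, \]

so long-wavelength spin waves (small k) are the cheapest excitations, and shorter wavelengths cost progressively more energy.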

As quasiparticles, magnons behave like a class of particles called bosons – which are nature’s force-carriers. Photons are bosons that mediate the electromagnetic force; W and Z bosons mediate the weak nuclear force responsible for radioactivity; gluons mediate the strong nuclear force, which carries the energy you see released by nuclear weapons; scientists have hypothesised the existence of gravitons, for gravity, but haven’t found them yet. Like all bosons, magnons don’t obey Pauli’s exclusion principle and they can be made to form exotic states of matter like superfluids and Bose-Einstein condensates.

Other quasiparticles include excitons and polarons (useful in the study of electronic circuits), plasmons (of plasma oscillations) and polaritons (of light-matter interactions).

Physicist Frank Wilczek proposed the existence of time crystals in 2012. One reason time crystals are interesting to physicists is that they break time-translation symmetry in their ground state.

This statement has two important parts. The first concerns time-translation symmetry-breaking. Scientists assume the laws of physics are the same in all directions and at all times – yet we still have objects like crystals, whose atoms are arranged in specific patterns that repeat themselves. Say the atoms of a crystal are arranged in a hexagonal pattern. If you kept the position of one atom fixed and rotated the atomic lattice around it, or if you moved to the left or right of that atom, in both cases by an arbitrary amount, your view of the lattice would change. This is because crystals break spatial symmetry. Similarly, time-translation symmetry is broken if an event repeats itself in time – like, say, a magnetic field whose structure switches between two shapes over and over.
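Many of the time crystals realised in labs are ‘discrete’, or driven, time crystals: an external force nudges the system with some period T, but the system responds with a period that is a multiple of T – something one of the researchers alludes to in the Q&A further below, noting that a time crystal “will not necessarily have the same periodicity as the alternating potential”. In my notation (not the paper’s), for some observable O, like the local magnetisation:

\[ O(t + nT) = O(t) \ \text{for some integer } n \geq 2, \qquad \text{even though} \qquad O(t + T) \neq O(t). \]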

The second part of the statement concerns the (thermodynamic) ground state – the state of any quantum mechanical system when it has its lowest possible energy. (‘Quantum mechanical system’ is a generic term for any system – like a group of electrons – in which quantum mechanical effects have the dominant influence on the system’s state and behaviour. An example of a non-quantum-mechanical system is the Solar System, where gravity dominates.) Wilczek conceived of time crystals as objects that break time-translation symmetry in their ground states. Put another way, they are quantum mechanical systems whose constituent particles perform a periodic activity without changing the overall energy of the system.
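In symbols (mine): the ground state is the eigenstate \(|\psi_0\rangle\) of the system’s Hamiltonian H – the operator representing its total energy – with the lowest eigenvalue:

\[ H\,|\psi_0\rangle = E_0\,|\psi_0\rangle, \qquad E_0 \leq E_i \ \text{for every other energy eigenvalue } E_i. \]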

The advent of quantum mechanics and relativity theory in the early 20th century alerted physicists to the existence of various symmetries and, through the work of Emmy Noether, their connection to different conservation laws. For example, a system in which the laws of nature are the same at all times – i.e. one that preserves time-translation symmetry – will also conserve energy. Does this mean time crystals violate the law of conservation of energy? No. The particles’ spin is not a form of kinetic energy but an inherent quantum mechanical property, and it can’t be harnessed to perform work the way, say, a motor pumps water. The system’s total energy is still conserved.
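Noether’s connection between time-translation symmetry and energy conservation can be sketched in one line of classical mechanics – a standard derivation, not anything particular to time crystals. If a system’s Lagrangian L has no explicit dependence on time, then along any trajectory the system’s Hamiltonian H, its total energy, obeys

\[ \frac{dH}{dt} = -\frac{\partial L}{\partial t} = 0, \]

i.e. the energy is conserved.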

Now, physicists from Germany have reported observing a time crystal ‘in action’ – a feat notable on three levels. First, it’s impressive that they created a time crystal at all (even if they are not the first to do so). The researchers passed radio-frequency waves through a strip of nickel-iron alloy a few micrometres wide. According to ScienceAlert, this ‘current’ “produced an oscillating magnetic field on the strip, with magnetic waves travelling onto it from both ends”. As a result, they “stimulated the magnons in the strip, and these moving magnons then condensed into a repeating pattern”.

Second, while quasiparticles are not actual particles, they exhibit some particle-like properties. One of them is scattering – the way two billiard balls might bounce off each other and go off in different directions at different speeds. The researchers created more magnons and scattered them off the magnons involved in the repeating pattern. The post-scatter magnons had a shorter wavelength than the originals, in line with expectations, and the researchers found they could control this wavelength by adjusting the frequency of the stimulating radio waves.
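A rough gloss on why scattering shifts the wavelength – my reading of the general physics, not the paper’s own derivation: quasiparticle scattering, like particle scattering, conserves energy and momentum, so for two magnons scattering into two others

\[ \omega_1 + \omega_2 = \omega_3 + \omega_4, \qquad k_1 + k_2 = k_3 + k_4, \]

and because a magnon’s wavelength is \(\lambda = 2\pi/k\), driving the scattered magnons to larger wavenumbers k is the same as shortening their wavelength.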

An ability to control such values often means the process could have an application. The ability to precisely manipulate systems involving the spin of electrons has already evolved into a field called spintronics. Just as electronics makes use of the charge of subatomic particles, spintronics is expected to leverage their spin-related properties to enable ultra-fast hard drives and other technologies.

Third, the researchers were able to produce a video showing the magnons moving around. This is remarkable because what makes a time crystal so unique is the result of quantum mechanical processes, which are microscopic in nature; it’s not often that you can observe their effects on the macroscopic scale. The principal reason the researchers were able to achieve this feat is the method they used to create the time crystal.

Previous efforts to create time crystals have used systems like quantum gases and Bose-Einstein condensates, both of which require sophisticated apparatuses operating in ultra-cold conditions, and whose behaviour researchers can track only by carefully measuring their physical and other properties. The current experiment, on the other hand, works at room temperature and uses a more ‘straightforward’ setup that is also fairly large-scale – large enough to be visible under an X-ray microscope.

Working this microscope is no small feat, however. Charged particles emit radiation when they’re accelerated along a circular path. An accelerator called BESSY II in Berlin uses this principle to produce X-rays. The microscope, called MAXYMUS, then focuses the X-rays onto an extremely small spot – a few nanometres wide – and “scans across the sample”, according to its official webpage. A “variety of X-ray detectors”, including a camera, observe how the X-rays interact with the sample to produce the final images. Here’s the resulting video of the time crystal, captured at 40 billion frames per second:
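For a sense of that frame rate – simple arithmetic on the stated figure: at 40 billion frames per second, successive frames are separated by

\[ \frac{1}{40 \times 10^{9}\ \mathrm{s^{-1}}} = 2.5 \times 10^{-11}\ \mathrm{s} = 25\ \text{picoseconds}, \]

brief enough, in principle, to resolve gigahertz-scale oscillations.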

I asked one of the paper’s coauthors, Joachim Gräfe, a research group leader in the department of modern magnetic systems at the Max Planck Institute for Intelligent Systems, Stuttgart, two follow-up questions. He was kind enough to reply in detail; his answers are reproduced in full below:

  1. A time crystal represents a system that breaks time translation symmetry in its ground state. When you use radio-frequency waves to stimulate the magnons in the nickel-iron alloy, the system is no longer in its ground state – right?

The ground state debate is the interesting part of the discussion for theoreticians. Our paper is more about the experimental observation and an interaction towards a use case. It is argued that a time crystal cannot be a thermodynamic ground state. However, it is in a ground state in a periodically alternating potential, i.e. a dynamic ground state. The intriguing thing about time crystals is that they are in ground states in these periodically alternating potentials, but they do not/will not necessarily have the same periodicity as the alternating potential.

The condensation of the magnonic time crystal is a ground state of the system in the presence of the RF field (the periodically alternating potential), but it will dissipate through damping when the RF field is switched off. However, even in a system without damping, it would not form without the RF field. It really needs the periodically alternating potential. It is really a requirement to have a dynamic system to have a time crystal. I hope I have not confused you more than before my answer. Time crystals are quite mind boggling. 😵🤯

  2. Previous experiments to observe time crystals in action have used sophisticated systems like quantum gases and Bose-Einstein condensates (BECs). Your experiment’s setup is a lot more straightforward, in a manner of speaking. Why do you think previous research teams didn’t just use your setup? Or does your setup have any particular difficulty that you overcame in the course of your study?

Interesting question. With the benefit of hindsight: our time crystal is quite obvious, why didn’t anybody else do it? Magnons only recently have emerged … as a sandbox for bosonic quantum effects (indeed, you can show BEC and superfluidity for magnons as well). So it is quite straightforward to turn towards magnons as bosons for these studies. However, our X-ray microscope (at the synchrotron light source) was probably the only instrument at the time to have the required spatial and temporal resolution with magnetic contrast to shoot a video of the space-time crystal. Most other magnon detection methods (in the lab) are indirect and don’t yield such a nice video.

On the other hand, I believe that the interesting thing about our paper is not that it was incredibly difficult to observe the space time crystal, but that it is rather simple to create one. Apparently, you can easily create a large (magnonic) space time crystal at room temperature and do something with it. Showing that it is easy to create a space time crystal opens this effect up for technological exploitation.

Anti-softening science for the state

The group of ministers (GoM) report on “government communication” has recommended that the government promote “soft topics” in the media, like “yoga” and “tigers”. We can only speculate about what this means – and that shouldn’t be hard. The overall spirit of the document is one of insecurity and paranoia, manifested as fantasies of reining in the country’s independent media and bending it to the government’s bidding. The promotion of “soft” stories is in line with this aspiration: “soft” here can only mean stories that don’t criticise the government, its actions or its policies, and that serve as ‘harmless entertainment’ for a politically inert audience. It’s also no coincidence that the two examples on offer skirt the edges of health and environmental journalism; other examples are sure to include reports of scientific discoveries.

Science is closely related to the Indian state in many ways. The current government in particular, in power since 2014, has been promoting application-oriented R&D (a bias especially visible in budgetary allocations); encouraging ill-prepared research facilities to self-finance; privileging certain private interests (especially the Reliance and Adani groups) vis-à-vis natural resources like coal, coastal zones and spectrum allocations; pillaging India’s ecological commons for industrialisation; promoting pseudoscience (which further disempowers those closer to society’s margins); interfering at universities by appointing vice-chancellors friendly to the ruling party (and, if that doesn’t work, jailing students on ridiculous charges that include dissent); curtailing academic freedom; and hounding scientists and institutions that threaten its preferred narratives.

With this in mind, it’s important for science journalism outlets and science journalists not to become complicit – inadvertently or otherwise – in the state project to “soften” science, and to start reporting, if they aren’t already, on issues with a closer eye on their repercussions for wider society. The idea that science journalism can or should be objective the way science is presumed to be is nonsensical, because the idea that science is an objective enterprise is itself nonsensical. The scientific method is a technique to obtain information about the natural universe while steadily subtracting the influence of human biases and other limitations. However, what scientists choose to study, how they design their studies and what is ultimately construed to be knowledge are all deeply human enterprises.

On top of this, science journalism is driven by journalists’ sense of good and bad: we write favourably about the former and argue against the latter. We write about a telescope unravelling a long-standing cosmogonic problem and also publish an article calling out homeopathy’s bullshit. We write about a scientific paper that uses ingenious methods to prove its point and also call out Indian academia as an unsafe space for queer-trans people.

Some have advanced a defence that simply focusing on “good science” can inculcate in the audience a sense of what is “worthy” and “desirable” while denying “bad science” the platform and publicity it seeks. This is objectionable on two counts.

First, who decides what is “worthy”? For example, some scientists – especially those in the ‘senior’ cadre, and the more influential and/or powerful for it – make this choice by deferring to the wisdom of scientific journals, chosen according to their impact factors, and to what those journals have deemed worthy of publishing. But abiding by this heuristic only means we continue to participate in, and extend the lifetime of, existing modes of knowledge production that privilege white scientists, male scientists and richer scientists – and sensational positive results on topics that the scientists staffing the journals’ editorial boards would like to focus on.

Second, being limited to goodness at a time when badness abounds is bad – or at the very least severely tone-deaf (but I’m disinclined to be so charitable). Very broadly, that science is inherently amoral is a truism by this point. There have been far too many incidents in history for anyone to still be able to overlook, in good faith, the fact that science’s prescriptions, unguided by human morals and values, are quite likely to lead to humanitarian disasters. We may even be living through one such disaster. Scientists’ rapid and successful development of new vaccines against a new pathogen was followed by a global rush to acquire enough doses. But the world’s industrial and economic powers have ensured that the strongest among them have enough to vaccinate their entire populations more than once, have blocked petitions at global fora to loosen patents on these vaccines to expand manufacturing and distribution, have forced desperate countries to purchase doses at prices higher than those charged to developed blocs like the EU, and have allowed corporate behemoths to make monumental profits even as they force third-world nations to pledge sovereign assets to secure supplies. It’s fallacious to claim scientific labour makes the world a better place when the fruits of such labour must still be filtered, like so much else, through the capitalist sieve.

There are many questions for the science journalist to consider here: why have some communities in certain countries been affected more than others? Why is there so little data on the vaccines’ consequences for pregnant women? Do we know enough to discuss the pandemic’s effects on women? Why, at a time when so many scientists and engineers were working to design new ventilators, was there no unified standard to ensure usability? If the world has demonstrated that it’s possible to design, test, manufacture and administer vaccines against a new virus in such a short time, why have we been waiting so long for effective defences against neglected tropical diseases? How do the racial, gender and ethnic identities of clinical trial participants affect trial outcomes? Is it ethical for countries that hosted vaccine clinical trials to get the first doses? Should we compulsorily prohibit patents on drugs, therapies and devices important to ending pandemics? If so, what might the consequences be for drug development? And what good is a vaccine if we can’t also ensure all the world’s 7.x billion people can be vaccinated simultaneously?

The pandemic isn’t a particularly ‘easy’ example either. For example, if the government promises to develop new supercomputers, who can use them and what problems will they be used to solve? How can we improve the quality and quantity of research conducted at institutes funded by state governments? Why do so many scientists at public universities plagiarise scientific papers? On what basis are the winners of the S.S. Bhatnagar Award chosen? Should we formally do away with subscription-funded scientific journals in favour of open-access publishing, overlay journals and post-publication peer-review? Is methane really a “clean fuel” even though its extraction and transportation will impose a considerable dirty cost? Why can’t we have more GM foods in the market even though the science is ‘good’? Is it worthwhile to invest Rs 10,000 crore in a human spaceflight programme that lacks long-term vision? And so forth.

Simply focusing on “good science” at the present time is not enough. I also reject the argument that it’s not for science journalists to protect or defend science – simply because science, however it’s interpreted, is not the preserve of scientists. As an enterprise rooted in its famous method, science is a tool of empowerment: it encourages discovery and deliberation. I’m not sure it’s fair to say it encourages dissent as well, but there is evidence that science can accommodate dissent without resorting to violence and subjugation.

It’s not for nothing that I’m more comfortable holding up an aspirin tablet for someone with a headache than a jar of leaves from the Patanjali Ayurved stable: being able to know how and why something works is power, in the same way that knowing how the pharmaceutical industry manipulates markets, how to file an RTI application, what makes an FIR valid or invalid, what the Election Commission’s model code of conduct stipulates or what kind of land a mall can be built on is power. All of it represents control, especially the ability to say ‘no’ and mean it.

This is ultimately what the GoM report fantasises about – and what the present government desires: the annulment of individual and institutional resistance, one subset of which is the neutralisation of science’s ability to provoke questions – about atoms and black holes as much as about the circumstances in which scientists study them, about the nature, utility and purpose of knowledge, and about the relationships between science, capital and the state.


Addendum

In January 2020, the Office of the Principal Scientific Adviser (PSA) to the Government of India organised a meeting with science journalists and communicators from around the country to discuss what the two parties could do for each other. We journalists and communicators aired a lot of grievances during the meeting, as well as suggestions on fixing long-standing and/or particularly thorny problems (some notes here).

In light of the government’s renewed attention to curbing press freedom, and of the report’s ludicrous suggestions – such as one by S. Gurumurthy that the news should be a “mixture of truth and untruth” – I’m not sure where that leaves the PSA’s plans for future consultation, nor, considering parts of the report seem designed to manufacture consent, whether good-faith consultation will be possible going ahead. I can only hope that members of this community will at least keep the faith.