“Why has no Indian won a science Nobel this year?”

For all their flaws, the science Nobel Prizes – at the time they’re announced, in the first week of October every year – provide a good opportunity to learn about some obscure part of the scientific endeavour with far-reaching consequences for humankind. This year, for example, we learnt about attosecond physics, quantum dots, and in vitro transcribed mRNA. The respective laureates had roots in Austria, France, Hungary, Russia, Tunisia, and the U.S. For many readers who consume articles about these individuals’ work with any zest, the science Nobel Prizes’ announcement is also occasion for a recurring question: how come no scientist from India – such a large country, with so many people of diverse skills and such heavy investments in research – has won a prize? I thought I’d jot down my version of the answer in this post. There are four factors:

1. Missing the forest for the trees – To believe that there’s a legitimate question in “why has no Indian won a science Nobel Prize of late?” is to suggest that we don’t consider what we read in the news every day to be connected to our scientific enterprise. Pseudoscience and misinformation are almost everywhere you look. We’re underfunding education, most schools are short-staffed, and teachers are underpaid. R&D allocations by the national government have stagnated. Academic freedom is often stifled in the name of “national interest”. Students and teachers from the so-called ‘non-upper-castes’ are harassed even in higher education centres. Procedural inefficiencies and red tape constantly delay funding to young scholars. Pettiness and politicking rule many universities’ roosts. There are ill-conceived limits on the use, import, and export of biological specimens (and uncertainty about the state’s attitude to them). Political leaders frequently mock scientific literacy. In this milieu, it’s as much about having the resources to do good science as about being able to prioritise science at all.

2. Historical backlog – This year’s science Nobel Prizes have been awarded for work that was conducted in the 1980s and 1990s. This is partly because the winning work has to have demonstrated that it’s of widespread benefit, which takes time (the medicine prize was a notable exception this year because the pandemic accelerated the work’s adoption), and partly because each prize most often – but not always – recognises one particular topic. Given that there are several thousand instances of excellent scientific work, it’s possible, on paper, for the Nobel Prizes to spend several decades awarding scientific work conducted in the 20th century alone. Recall that this was a boom time for science: the advent of quantum mechanics and the theories of relativity, considerable war-time investment and government support, followed by revolutions in electronics, materials science, spaceflight, genetics, and pharmaceuticals, and then the internet. It was also the time when India was finding its feet, especially until economic liberalisation in the early 1990s.

3. Lack of visibility of research – Visibility is a unifying theme of the Nobel laureates and their work. That is, you need to do good work as well as be seen to be doing that work. If you come up with a great idea but publish it in an obscure journal with no international readership, you will lose out to someone who came up with the same idea later but published it in one of the most-read journals in the world. Scientists don’t willingly opt for obscure journals, of course: publishing in better-read journals isn’t easy because you’re competing with other papers for space, the journals’ editors often have a preference for more sensational work (or sensationalisable work, such as a paper co-authored by an older Nobel laureate; see here), and publishing fees can be prohibitively high. The story of Meghnad Saha, who was nominated for a Nobel Prize but didn’t win, offers an archetypal example. How journals have affected the composition of the scientific literature is a vast and therefore separate topic, but in short, they’ve played a big part in skewing it in favour of some kinds of results over others – even if they’re all equally valuable as scientific contributions – and in favouring authors from some parts of the world over others. Journals’ biases sit on top of those of universities and research groups.

4. Award fixation – The Nobel Prizes aren’t interested in interrogating the histories and social circumstances in which science (that they consider to be prize-worthy) happens; they simply fete what is. It’s we who must grapple with the consequences of our histories of science, particularly science’s relationship with colonialism, and make reparations. Fixating on winning a science Nobel Prize could also lock our research enterprise – and the public perception of that enterprise – into a paradigm that prefers individual winners. The large international collaboration is a good example: When physicists working with the LHC found the Higgs boson in 2012, two physicists who predicted the particle’s existence in 1964 won the corresponding Nobel Prize. Similarly, when scientists at the LIGO detectors in the US announced the first direct observation of gravitational waves in 2016, three physicists who conceived of LIGO in the 1970s won the prize. Yet the LHC, the LIGO detectors, and other similar instruments continue to make important contributions to science – directly, by probing reality, and indirectly, by supporting research that can be adapted for other fields. One 2007 paper also found that Nobel Prizes have been awarded to inventions only 23% of the time. Does that mean we should just focus on discoveries? That’s a silly way of doing science.


The Nobel Prizes began as the testament of a wealthy Swedish man who was worried about his legacy. He started a foundation that put together a committee to select winners of some prizes every year, with some cash from the man’s considerable fortune. Over the years, the committee made a habit of looking for and selecting some of the greatest accomplishments of science (but not all), so much so that the laureates’ standing in the scientific community created an aspiration to win the prize. Many prizes begin like the Nobel Prizes did but become irrelevant because they don’t pay enough attention to the relationship between the laureate-selecting process and the prize’s public reputation (note that the Nobel Prizes acquired their reputation in a different era). The Infosys Prize has elevated itself in this way whereas the Indian Science Congress’s prize has undermined itself. India, or any Indian for that matter, can institute an award that chooses its winners more carefully, and gives them lots of money (which I’m opposed to vis-à-vis senior scientists) to draw popular attention.

There are many reasons an Indian hasn’t won a science Nobel Prize in a while, but it’s also not the only prize worth winning. Let’s aspire to other, even better, ones.

New LHC data puts ‘new physics’ lead to bed

One particle in the big zoo of subatomic particles is the B meson. It has a very short lifetime once it’s created. In rare instances it decays to three lighter particles: a kaon, a lepton and an anti-lepton. There are many types of leptons and anti-leptons; two of them are the electron/anti-electron and the muon/anti-muon. According to the existing theory of particle physics, these two should appear as decay products with equal probability: a B meson should decay to a kaon, an electron and an anti-electron as often as it decays to a kaon, a muon and an anti-muon (after adjusting for mass, since the muon is heavier).
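
To make the comparison concrete, here is a minimal sketch of the kind of ratio physicists compute, with entirely hypothetical decay counts and detection efficiencies – none of these numbers come from the actual experiments:

```python
# Hypothetical illustration of a lepton-flavour universality test. If the
# Standard Model is right, the efficiency-corrected ratio of muon-type to
# electron-type decays should be consistent with 1.
n_mumu, n_ee = 1920, 1860        # hypothetical observed decay counts
eff_mumu, eff_ee = 0.62, 0.58    # hypothetical detection efficiencies

ratio = (n_mumu / eff_mumu) / (n_ee / eff_ee)
# Simple Poisson counting uncertainty, propagated in quadrature
uncertainty = ratio * (1 / n_mumu + 1 / n_ee) ** 0.5
print(f"ratio = {ratio:.3f} +/- {uncertainty:.3f}  (Standard Model expects ~1)")
```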

In the last 13 years, physicists studying B meson decays had found on four occasions that the meson decayed to a kaon, electron and anti-electron more often. They were glad for it, in a way. They had worked out the existing theory, called the Standard Model of particle physics, from the mid-20th century in a series of Nobel Prize-winning papers and experiments. Today, it stands complete, explaining the properties of a variety of subatomic particles. But it still can’t explain what dark matter is, why the Higgs boson is so heavy or why there are three ‘generations’ of quarks, not more, not fewer. If the Standard Model is old physics, particle physicists believe there could be a ‘new physics’ out there – some particle or force they haven’t discovered yet – which could really complete the Standard Model and settle the unresolved mysteries.

Over the years, they have explored various leads for ‘new physics’ in different experiments, but eventually, with more data, the findings have all been found to be in line with the predictions of the Standard Model. Until 2022, the anomalous B meson decays were thought to be a potential source of ‘new physics’ as well. A 2009 study in Japan found that some B meson decays created electron/anti-electron pairs more often than muon/anti-muon pairs – as did a 2012 study in the US and a 2014 study in Europe. The last one involved the Large Hadron Collider (LHC), operated by the European Organisation for Nuclear Research (CERN) in France, and a detector on it called LHCb. Among other things, the LHCb tracks B mesons. In March 2021, the LHCb collaboration released data statistically significant enough to claim ‘evidence’ that some B mesons were decaying to electron/anti-electron pairs more often than to muon/anti-muon pairs.

But the latest data from the LHC, released on December 20, appears to settle the question: it’s still old physics. The formation of different types of lepton/anti-lepton particle pairs with equal probability is called lepton-flavour universality. Since 2009, physicists had been recording data that suggested that some B meson decays were violating lepton-flavour universality, possibly because of a previously unknown particle or force acting on the decay process. In the new data, physicists analysed B meson decays in the pathway already under study as well as in one other, and at two different energy levels – thus, as the official press release put it, “yielding four independent comparisons of the decays”. The more data there is to compare, the more robust the findings will be.

This data was collected over the last five years. Every time the LHC operates, it’s called a ‘run’. Each run generates several terabytes of data that physicists, with the help of computers, comb through in search of evidence for different hypotheses. The data for the new analysis was collected over two runs. And it led physicists to conclude that B mesons’ decay does not violate lepton-flavour universality. The Standard Model still stands and, perhaps equally importantly, a 13-year-old ‘new physics’ lead has been returned to dormancy.

The LHC is currently in its third run; scientists and engineers working with the machine perform maintenance and install upgrades between runs, so each new cycle of operations is expected to produce more as well as more precise data, leading to more high-precision analyses that could, physicists hope, one day reveal ‘new physics’.

Science’s humankind shield

“Science benefits all of humanity,” they say.

We need to reconsider where the notion that “science benefits all humans” comes from and whether the notion itself is really beneficial.

I was prompted to this after coming upon a short article in Sky & Telescope about the Holmdel Horn antenna in New Jersey being threatened by a local redevelopment plan. In the 1960s, Arno Penzias and Robert Wilson used the Holmdel Horn to record the first observational evidence of the cosmic microwave background, which is radiation left over from – and therefore favourable evidence for – the Big Bang event. In a manner of speaking, then, the Holmdel Horn is an important part of the story of humans’ awareness of their place in the universe.

The US government designated the site of the antenna a ‘National Historic Landmark’ in 1989. On November 22, 2022, the Holmdel Township Committee nonetheless petitioned the planning board to consider redeveloping the locality where the antenna is located. According to the Sky & Telescope article, “If the town permits development of the site, most likely to build high-end residences, the Horn could be removed or even destroyed. The fact that it is a National Historic Landmark does not protect it. The horn is on private property and receives no Federal funds for its upkeep.” Some people have responded to the threat by suggesting that the Holmdel Horn be moved to the sprawling Green Bank Telescope premises in Virginia. This would separate it from the piece of land that can then be put to other use.

Overall, based on posts on Twitter, the prevailing sentiment appears to be that the Holmdel Horn antenna is a historic site worthy of preservation. One commenter, an amateur astronomer, wrote under the article:

“The Holmdel Horn Antenna changed humanity’s understanding of our place in the universe. The antenna belongs to all of humanity. The owners of the property, Holmdel Township, and Monmouth County have a historic responsibility to preserve the antenna so future generations can see and appreciate it.”

(I think the commenter meant “humankind” instead of “humanity”.)

The history of astronomy involved, and involves, thousands of antennae and observatories around the world. Even with an arbitrarily high threshold to define the ‘most significant’ discoveries, there are likely to be hundreds (if not more) of facilities that made them and could thus be deemed to be worthy of preservation. But should we really preserve all of them?

Astronomers, perhaps more than any other scientists, are likely to be keenly aware of the importance of land to the scientific enterprise. Land is a finite resource that is crucial to most, if not all, realms of the human enterprise. Astronomers experienced this firsthand when the Indigenous peoples of Hawai’i protested the construction of the Thirty Meter Telescope on Mauna Kea, leading to a long-overdue reckoning with the legacy of telescopes on this and other landmarks that are culturally significant to the locals, whose access to these sites has come to be mediated by the needs of astronomers. In 2020, Nithyanand Rao wrote an informative article about how “astronomy and colonialism have a shared history”, with land and access to clear skies as the resources at its heart.


One argument that astronomers in favour of building or retaining these controversial telescopes have used is that the fruits of science “belong to all of humankind”, including to the locals. This claim is dubious in at least two ways.

First, are the fruits really accessible to everyone? This doesn’t just mean the papers that astronomers publish based on work using these telescopes are openly and freely available. It also requires that the topics astronomers work on be based on the consensus of all stakeholders, not just the astronomers. Also, who does and doesn’t get observation time on the telescope? What does the local government expect the telescope to achieve? What are the sorts of studies the telescope can and can’t support? Are the ground facilities equally accessible to everyone? There are more questions to ask, but I think you get the idea that claiming the fruits of scientific labour – at least astronomical labour – are available to everyone is disingenuous simply because there are many axes of exclusion in the instrument’s construction and operation.

Second, who wants a telescope? More specifically, what are the terms on which it might be fair for a small group of people to decide what “all of humankind” wants? Sure, what I’m proposing sounds comical – a global consensus mechanism just to make a seemingly harmless statement like “science benefits everyone” – but the converse seems equally comical: to presume benefits for everyone when in fact they really accrue to a small group and to rely on self-fulfilling prophecies to stake claims to favourable outcomes.

Given enough time and funds, any reasonably designed international enterprise, like housing development or climate financing, is likely to benefit humankind. Scientists have advanced similar arguments when advocating for building particle supercolliders: that the extant Large Hadron Collider (LHC) in Europe has led to advances in medical diagnostics, distributed computing and materials science, apart from confirming the existence of the Higgs boson. All these advances are secondary goals, at best, and justify neither the LHC nor its significant construction and operational costs. Also, who’s to say we wouldn’t have made these advances by following any other trajectory?

Scientists, or even just the limited group of astronomers, often advance the idea that their work is for everyone’s good – elevating it to a universally desirable thing, propping it up like a shield in the face of questions about whether we really need an expensive new experiment – whereas on the ground its profits are disseminated along crisscrossing gradients, limited by borders.

I’m inclined to harbour a similar sentiment towards the Holmdel Horn antenna in the US: it doesn’t belong to all of humanity, and if you (astronomers in the US, e.g.) wish to preserve it, don’t do it in my name. I’m indifferent to the fate of the Horn because I recognise that what we do and don’t seek to preserve is influenced by its significance as an instrument of science (in this case) as much as by ideas of national prestige and self-perception – and this is a project in which I have never had any part. A plaque installed on the Horn reads: “This site possesses national significance in commemorating the history of the United States of America.”

I also recognise the value of land and, thus, must acknowledge the significance of my ignorance of the history of the territory that the Horn currently occupies as well as the importance of reclaiming it for newer use. (I am, however, opposed in principle to the Horn being threatened by the prospect of “high-end residences” rather than affordable housing for more people.) Obviously others – most others, even – might feel differently, but I’m curious if a) scientists anywhere, other than astronomers, have ever systematically dealt with push-back along this line, and b) the other ways in which they defend their work at large when they can’t or won’t use the “benefits everyone” tack.

What arguments against the ‘next LHC’ say about funding Big Physics

A few days ago, a physicist (and PhD holder) named Thomas Hartsfield published a strange article in Big Think about why building a $100-billion particle physics machine like the Large Hadron Collider (LHC) is a bad idea. The article was so replete with errors that even I – a not-physicist and not-a-PhD-holder – cringed reading them. I also wanted to blog about the piece but theoretical physicist Matthew Strassler beat me to it, with a straightforward post about the many ways in which Hartsfield’s article was just plain wrong, especially coming from a physicist. But I also think there were some things that Strassler either overlooked or left unsaid and which to my mind bear fleshing out – particularly points that have to do with the political economy of building research machines like the LHC. At the end, I also visit the thing that really made me want to write this post: a seemingly throwaway line in Strassler’s post. First, the problems that Hartsfield’s piece throws up and which deserve more attention:

1. One of Hartsfield’s bigger points in his article is that instead of spending $100 billion on one big physics project, we could spend it on 100,000 smaller projects. I agree with this view, sensu lato, that we need to involve more stakeholders than only physicists when contemplating the need for the next big accelerator or collider. However, in making the argument that the money can be redistributed, Hartsfield presumes that a) if a big publicly funded physics project is cancelled, the allocated money that the government doesn’t spend as a result will subsequently be diverted to other physics projects, and b) this is all the money that we have to work with. Strassler provided the most famous example of the fallacy pertinent to (a): the Superconducting Super Collider in the US, whose eventual cancellation ‘freed’ an allocation of $4.4 billion, but the US government didn’t redirect this money back into other physics research grants. (b), on the other hand, is a more pernicious problem: a government allocating $100 billion for one project does not implicitly mean that it can’t spare $10 million for a different project, or projects. Realpolitik is important here. Politicians may contend that after having approved $100 billion for one project, it may not be politically favourable for them to return to Congress or Parliament or wherever with another proposal for $10 million. But on the flip side, both mega-projects and many physics research items are couched in arguments and aspirations to improve bilateral or multilateral ties (without vomiting on other prime ministers), ease geopolitical tensions, score or maintain research leadership, increase research output, generate opportunities for long-term technological spin-offs, spur local industries, etc. Put another way, a Big Science project is not just a science project; depending on the country, it could well be a national undertaking along the lines of the Apollo 11 mission. These arguments matter for political consensus – and axiomatically the research projects that are able to present these incentives are significantly different from those that aren’t, which in turn can help fund both Big Science and ‘Small Science’ projects at the same time. The possibility exists. For example, the Indian government has funded Gaganyaan separately from ISRO’s other activities. $100 billion isn’t all the money that’s available, and we should stop settling for such big numbers when they are presented to us.

2. These days, big machines like the one Hartsfield has erected as a “straw man” – to use Strassler’s words – aren’t built by individual countries. They are the product of an international collaboration, typically with dozens of governments, hundreds of universities and thousands of researchers participating. The funds allocated are also spent over many years, even decades. In this scenario, when a $100-billion particle collider is cancelled, no one entity in the whole world suddenly has that much money to give away at any given moment. Furthermore, in big collaborations, countries don’t just give money; often they add value by manufacturing various components, leasing existing facilities, sharing both human and material resources, providing loans, etc. The value of each of these contracts is added to the total value of the project. For example, India has been helping the LHC by manufacturing and supplying components related to the machine’s magnetic and cryogenic facilities. Let’s say India’s Departments of Science and Technology and of Atomic Energy had inked contracts with CERN, which hosts and maintains the LHC, worth $10 million to make and transport these components, but then the LHC had been called off just before its construction was to begin. Does this mean India would have had $10 million to give away to other science projects? Not at all! In fact, manufacturers within the country would have been bummed about losing the contracts.

3. Hartsfield doesn’t seem to acknowledge incremental results, results that improve the precision of prior measurements and results that narrow the range in which we can find a particle. Instead, he counts only singularly positive, and sensational, results – of which the LHC has had only one: the discovery of the Higgs boson in 2012. Take all of them together and the LHC will suddenly seem more productive. Simply put, precision-improving results are important because even a minute difference between the theoretically predicted value and the observed value could be a significant discovery that opens the door to ‘new physics’. We recently saw this with the mass of a subatomic particle called the W boson. Based on the data collected by a detector mounted on the Tevatron particle accelerator in Illinois, physicists found that the mass of the W boson differed from the predicted value by around 0.1%. This was sufficient to set off a tsunami of excitement and speculation in the particle physics community. (Hartsfield also overlooked an important fact, which Strassler caught: that the LHC collects a lot more data than physicists can process in a single year, which means that when the LHC winds down, physicists will still have many years of work left before they are done with the LHC altogether. This is evidently still happening with the Tevatron, which was shut down in 2011, so Hartsfield missing it is quite weird. Another thing that happened to the Tevatron and is still happening with the LHC is that these machines are upgraded over time to produce better results.) Similarly, results that exclude the energy ranges in which a particle can be found are important because they tell us what kind of instruments we should build in future to detect the same particle. We obviously won’t need instruments that sweep the same energy range (nor will we have a guarantee that the particle will be found outside the excluded energy range – that’s a separate problem). There is another point to be made but which may not apply to CERN as much as to Big Science projects in other countries: one country’s research community building and operating a very large research facility signals to other countries that the researchers know what they’re doing and that they might be more deserving of future investments than other candidates with similar proposals. This is one of the things that India lost with the scuttling of the India-based Neutrino Observatory (the loss itself was deserved, to be sure).

Finally, the statement in Strassler’s post that piqued me the most:

My impression, from his writing and from what I can find online, is that most of what he knows about particle physics comes from reading people like Ethan Siegel and Sabine Hossenfelder. I think Dr. Hartsfield would have done better to leave the argument to them.

Thomas Hartsfield has clearly done a shoddy job in his article in the course of arguing against a Big Physics machine like the LHC in the future, but his screwing up doesn’t mean discussions on the need for the next big collider should be left to physicists. I admit that Strassler’s point here was probably limited to the people whose articles and videos were apparently Hartsfield’s primary sources of information – but it also seemed to imply that instead of helping those who get things wrong do better next time, it’s okay to ask them to not try again and instead leave the communication efforts to their primary sources. That’s Ethan Siegel and Sabine Hossenfelder in this case – both prolific communicators – but in many instances, bad articles are written by writers who bothered to try while their sources weren’t doing more or better to communicate to the people at large. This is also why it bears repeating that when it comes to determining the need for a Big Physics project of the likes of the LHC, physics is decidedly just one part of it – and not the majority part – and that, importantly, science communicators also have an equally vital role to play. Let me quote here from an article by physicist Nirmalya Kajuri, published in The Wire Science in February 2019:

… the few who communicate science can have a lopsided influence on the public perception of an entire field – even if they’re not from that field. The distinction between a particle physicist and, say, a condensed-matter physicist is not as meaningful to most people reading the New York Times or any other mainstream publication as it is to physicists. There’s no reason among readers to exclude [one physicist] as an expert.

However, very few physicists engage in science communication. The extreme ‘publish or perish’ culture that prevails in sciences means that spending time in any activity other than research carries a large risk. In some places, in fact, junior scientists spending time popularising science are frowned upon because they’re seen to be spending time on something unproductive.

All physicists agree that we can’t keep building colliders ad infinitum. They differ on when to quit. Now would be a good time, according to Hossenfelder. Most particle physicists don’t think so. But how will we know when we’ve reached that point? What are the objective parameters here? These are complex questions, and the final call will be made by our ultimate sponsors: the people.

So it’s a good thing that this debate is playing out before the public eye. In the days to come, physicists and non-physicists must continue this dialogue and find mutually agreeable answers. Extensive, honest science communication will be key.

So more physicists should join in the fray, as should science journalists, writers, bloggers and communicators in general. Just that they should also do better than Thomas Hartsfield at getting the details right.

On tabletop accelerators

Tabletop accelerators are an exciting new field of research in which physicists use devices the size of a shoe box, or something just a bit bigger, to accelerate electrons to high energies. The ‘conventional way’ to do this has been to use machines that are at least as big as small buildings, and often much bigger. The world’s biggest machine, the Large Hadron Collider (LHC), uses thousands of magnets, copious amounts of electric current, sophisticated control systems and kilometres of beam pipes to accelerate protons from 0.45 TeV – the energy at which they’re injected into the machine – to 7 TeV. Tabletop accelerators can’t push electrons to such high energies, required to probe exotic quantum phenomena, but they can attain energies that are useful in medical applications (including scanners and radiation therapy).
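
To get a sense of what these energies mean, here’s a quick back-of-the-envelope calculation using the proton’s rest energy of about 0.938 GeV (a standard textbook value):

```python
import math

# Lorentz factor and speed of a 7 TeV proton in the LHC.
rest_energy_gev = 0.938       # proton rest energy, GeV
beam_energy_gev = 7000.0      # 7 TeV per proton

gamma = beam_energy_gev / rest_energy_gev       # Lorentz factor, ~7,460
beta = math.sqrt(1 - 1 / gamma**2)              # speed as a fraction of the speed of light
print(f"gamma ~ {gamma:.0f}, speed ~ {beta:.9f} c")   # ~0.999999991 c
```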

They do this by skipping the methods that ‘conventional’ accelerators use, and instead take advantage of decades of progress in theoretical physics, computer simulations and fabrication. For example, some years ago, there was a group at Stanford University that had developed an accelerator that could sit on your fingertip. It consisted of narrow channels etched on glass, and a tuned infrared laser was shone over these ‘mountains’ and ‘valleys’. When an electron passed over a mountain, it would get pushed more than it would slow down over a valley. This way, the group reported an acceleration gradient – the energy gained per unit distance – of 300 MV/m. This means the electrons will gain 300 MeV of energy for every metre travelled. This was comparable to some of the best, but gigantic, electron accelerators.
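
As a rough illustration of what a gradient like this implies for the size of a device – simple arithmetic only, not the Stanford group’s actual design parameters:

```python
# Distance needed to reach a target energy at a fixed acceleration gradient.
gradient_mev_per_m = 300.0     # 300 MV/m, i.e. ~300 MeV gained per metre

for target_mev in (1.0, 1000.0, 100000.0):     # 1 MeV, 1 GeV, 0.1 TeV
    length_m = target_mev / gradient_mev_per_m
    print(f"{target_mev:>9.1f} MeV needs ~{length_m:8.3f} m at this gradient")
```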

Another type of tabletop accelerator uses a clump of electrons or a laser fired into a plasma, setting off a ripple of energy that the trailing electrons, from the plasma, can ‘ride’ and be accelerated on. (This is a grossly simplified version; a longer explanation is available here.) In 2016, physicists in California proved that it would be possible to join two such accelerators end to end and accelerate the electrons more – although not twice as much, since there is a cost associated with the plasma’s properties.

The biggest hurdle between tabletop accelerators and the market is also something that makes the label of ‘tabletop’ meaningless. Today, just the part of the device where electrons accelerate can fit on a tabletop. The rest of the machine is still too big. For example, the team behind the 2016 study realised that they’d need enough of their shoebox-sized devices to span 100 m to accelerate electrons to 0.1 TeV. In early 2020, the Stanford group improved their fingertip-sized accelerator to make it more robust and scalable – but such that the device’s acceleration gradient dropped 10x and it required pre-accelerated electrons to work. The machines required for the latter are as big as rooms.

More recently, Physics World published an article on July 12 headlined ‘Table-top laser delivers intense extreme-ultraviolet light’. In the fifth paragraph, however, we find that this table needs to be around 2 m long. Is this an acceptable size for a table? I don’t want to discriminate against bigger tables but I thought ‘tabletop accelerator’ meant something like my study table (pictured above). This new device’s performance reportedly “exceeds the performance of existing, far bulkier XUV sources”, “simulations done by the team suggest that further improvements could boost [its output] intensity by a factor of 1000,” and the device shrinks something that used to be 10 m wide to a fifth of its size. These are all good, but if by ‘tabletop’ we’re to include banquet-hall tables as well, the future is already here.

US experiments find hint of a break in the laws of physics

At 9 pm India time on April 7, physicists at an American research facility delivered a shot in the arm to efforts to find flaws in a powerful theory that explains how the building blocks of the universe work.

Physicists are looking for flaws in it because the theory doesn’t have answers to some questions – like “what is dark matter?”. They hope to find a crack or a hole that might reveal the presence of a deeper, more powerful theory of physics that can lay unsolved problems to rest.

The story begins in 2001, when physicists performing an experiment in Brookhaven National Lab, New York, found that fundamental particles called muons weren’t behaving the way they were supposed to in the presence of a magnetic field. This was called the g-2 anomaly (after a number called the gyromagnetic factor).

An incomplete model

Muons are subatomic and can’t be seen with the naked eye, so it could’ve been that the instruments the physicists were using to study the muons indirectly were glitching. Or it could’ve been that the physicists had made a mistake in their calculations. Or, finally, what the physicists thought they knew about the behaviour of muons in a magnetic field was wrong.

In most stories we hear about scientists, the first two possibilities are true more often: they didn’t do something right, so the results weren’t what they expected. But in this case, the physicists were hoping they were wrong. This unusual wish was the product of working with the Standard Model of particle physics.

According to physicist Paul Kyberd, the fundamental particles in the universe “are classified in the Standard Model of particle physics, which theorises how the basic building blocks of matter interact, governed by fundamental forces.” The Standard Model has successfully predicted the numerous properties and behaviours of these particles. However, it’s also been clearly wrong about some things. For example, Kyberd has written:

When we collide two fundamental particles together, a number of outcomes are possible. Our theory allows us to calculate the probability that any particular outcome can occur, but at energies beyond which we have so far achieved, it predicts that some of these outcomes occur with a probability of greater than 100% – clearly nonsense.

The Standard Model also can’t explain what dark matter is, what dark energy could be or if gravity has a corresponding fundamental particle. It predicted the existence of the Higgs boson but was off about the particle’s mass by a factor of 100 quadrillion.

All these issues together imply that the Standard Model is incomplete, that it could be just one piece of a much larger ‘super-theory’ that works with more particles and forces than we currently know. To look for these theories, physicists have taken two broad approaches: to look for something new, and to find a mistake with something old.

For the former, physicists use particle accelerators, colliders and sophisticated detectors to look for heavier particles thought to exist at higher energies, and whose discovery would prove the existence of a physics beyond the Standard Model. For the latter, physicists take some prediction the Standard Model has made with a great degree of accuracy and test it rigorously to see if it holds up. Studies of muons in a magnetic field are examples of this.

According to the Standard Model, a number associated with the way a muon swivels in a magnetic field is equal to 2 plus 0.00116591804 (with some give or take). This minuscule addition is the handiwork of fleeting quantum effects in the muon’s immediate neighbourhood, and which make it wobble. (For a glimpse of how hard these calculations can be, see this description.)

Fermilab result

In the early 2000s, the Brookhaven experiment measured the deviation to be slightly higher than the model’s prediction. Though it was small – off by about 0.00000000346 – the context made it a big deal. Scientists know that the Standard Model has a habit of being really right, so when it’s wrong, the wrongness becomes very important. And because we already know the model is wrong about other things, there’s a possibility that the two things could be linked. It’s a potential portal into ‘new physics’.

“It’s a very high-precision measurement – the value is unequivocal. But the Standard Model itself is unequivocal,” Thomas Kirk, an associate lab director at Brookhaven, had told Science in 2001. The disagreement between the values implied “that there must be physics beyond the Standard Model.”

This is why the results physicists announced today are important.

The Brookhaven experiment that ascertained the g-2 anomaly wasn’t sensitive enough to say with a meaningful amount of confidence that its measurement was really different from the Standard Model prediction, or if there could be a small overlap.

Science writer Brianna Barbu has likened the mystery to “a single hair found at a crime scene with DNA that didn’t seem to match anyone connected to the case. The question was – and still is – whether the presence of the hair is just a coincidence, or whether it is actually an important clue.”

So to go from ‘maybe’ to ‘definitely’, physicists shipped the 50-foot-wide, 15-tonne magnet that the Brookhaven facility used in its Muon g-2 experiment to Fermilab, the US’s premier high-energy physics research facility in Illinois, and built a more sensitive experiment there.

The new result is from tests at this facility: that the observation differs from the Standard Model’s predicted value by 0.00000000251 (give or take a bit).

The Fermilab results are expected to become a lot better in the coming years, but even now they represent an important contribution. The statistical significance of the Brookhaven result alone fell short of the threshold at which scientists could claim a discovery; the combined significance of the two results is considerably higher, although still just short of that threshold.
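
For a flavour of why combining two independent measurements boosts the overall confidence, here is a simplified sketch using made-up numbers – plain inverse-variance weighting, which is far cruder than the actual Muon g-2 analysis:

```python
import math

# Combine two independent measurements of the same deviation (hypothetical
# values in arbitrary units, not the actual Brookhaven/Fermilab numbers) and
# express the result in units of the combined uncertainty ("sigma").
def combine(measurements):
    weights = [1 / unc**2 for _, unc in measurements]
    mean = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
    return mean, math.sqrt(1 / sum(weights))

experiment_1 = (2.5, 1.0)   # hypothetical deviation of 2.5 +/- 1.0
experiment_2 = (3.0, 1.0)   # hypothetical deviation of 3.0 +/- 1.0

mean, unc = combine([experiment_1, experiment_2])
print(f"combined deviation: {mean / unc:.1f} sigma")   # ~3.9 sigma for these inputs
```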

Potential dampener

So for now, the g-2 anomaly seems to be real. It’s not easy to say if it will continue to be real as physicists further upgrade the Fermilab g-2’s performance.

In fact there appears to be another potential dampener on the horizon. An independent group of physicists has had a paper published today saying that the Fermilab g-2 result is actually in line with the Standard Model’s prediction and that there’s no deviation at all.

This group, called BMW, used a different way to calculate the Standard Model’s value of the number in question than the Fermilab folks did. Aida El-Khadra, a theoretical physicist at the University of Illinois, told Quanta that the Fermilab team had yet to check BMW’s approach, but if it was found to be valid, the team would “integrate it into its next assessment”.

The ‘Fermilab approach’ itself is something physicists have worked with for many decades, so it’s unlikely to be wrong. If the BMW approach checks out, then, according to Quanta, the fact that two approaches lead to different predictions of the number’s value is itself likely to become a new mystery.

But physicists are excited for now. “It’s almost the best possible case scenario for speculators like us,” Gordan Krnjaic, a theoretical physicist at Fermilab who wasn’t involved in the research, told Scientific American. “I’m thinking much more that it’s possibly new physics, and it has implications for future experiments and for possible connections to dark matter.”

The current result is also important because the other way to look for physics beyond the Standard Model – by looking for heavier or rarer particles – can be harder.

This isn’t simply a matter of building a larger particle collider, powering it up, smashing particles and looking for other particles in the debris. For one, there is a very large number of energy levels at which a particle might form. For another, there are thousands of other particle interactions happening at the same time, generating a tremendous amount of noise. So without knowing what to look for and where, a particle hunt can be like looking for a very small needle in a very large haystack.

The ‘what’ and ‘where’ instead come from different theories that physicists have worked out based on what we know already; physicists then design experiments depending on which theory they need to test.

Into the hospital

One popular theory is called supersymmetry: it predicts that every elementary particle in the Standard Model framework has a heavier partner particle, called a supersymmetric partner. It also predicts the energy ranges in which these particles might be found. The Large Hadron Collider (LHC) at CERN, near Geneva, was powerful enough to access some of these energies, so physicists used it and went looking last decade. They didn’t find anything.

A table showing searches for particles associated with different post-standard-model theories (orange labels on the left). The bars show the energy levels up to which the ATLAS detector at the Large Hadron Collider has not found the particles. Table: ATLAS Collaboration/CERN

Other groups of physicists have also tried to look for rarer particles: ones that occur at an accessible energy but only once in a very large number of collisions. The LHC is a machine at the energy frontier: it probes higher and higher energies. To look for extremely rare particles, physicists explore the intensity frontier – using machines specialised in generating very large numbers of collisions.

The third and last is the cosmic frontier, in which scientists look for unusual particles coming from outer space. For example, early last month, researchers reported that they had detected an energetic anti-neutrino (a kind of fundamental particle) coming from outside the Milky Way participating in a rare event that scientists predicted in 1959 would occur if the Standard Model is right. The discovery, in effect, further cemented the validity of the Standard Model and ruled out one potential avenue to find ‘new physics’.

This event also recalls an interesting difference between the 2001 and 2021 announcements. The late British scientist Francis J.M. Farley wrote in 2001, after the Brookhaven result:

… the new muon (g-2) result from Brookhaven cannot at present be explained by the established theory. A more accurate measurement … should be available by the end of the year. Meanwhile theorists are looking for flaws in the argument and more measurements … are underway. If all this fails, supersymmetry can explain the data, but we would need other experiments to show that the postulated particles can exist in the real world, as well as in the evanescent quantum soup around the muon.

Since then, the LHC and other physics experiments have sent supersymmetry ‘to the hospital’ on more than one occasion. If the anomaly continues to hold up, scientists will have to find other explanations. Or, if the anomaly whimpers out, like so many others of our time, we’ll just have to put up with the Standard Model.

Featured image: A storage-ring magnet at Fermilab whose geometry allows for a very uniform magnetic field to be established in the ring. Credit: Glukicov/Wikimedia Commons, CC BY-SA 4.0.

The Wire Science
April 8, 2021

The awesome limits of superconductors

On June 24, a press release from CERN said that scientists and engineers working on upgrading the Large Hadron Collider (LHC) had “built and operated … the most powerful electrical transmission line … to date”. The transmission line consisted of four cables – two capable of transporting 20 kA of current and two, 7 kA.

The ‘A’ here stands for ‘ampere’, the SI unit of electric current. Twenty kilo-amperes is an extraordinary amount of current, nearly equal to the amount in a single lightning strike.

In the particulate sense: one ampere is the flow of one coulomb per second. One coulomb is equal to around 6.24 quintillion elementary charges, where each elementary charge is the charge of a single proton or electron (with opposite signs). So a cable capable of carrying a current of 20 kA can essentially transport 124.8 sextillion electrons per second.
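
A quick check of that arithmetic, using the standard value of the elementary charge:

```python
# Electrons transported per second by a 20 kA current.
elementary_charge = 1.602176634e-19     # coulombs per electron
current_amperes = 20e3                  # 20 kA = 20,000 coulombs per second

electrons_per_second = current_amperes / elementary_charge
print(f"{electrons_per_second:.4e} electrons per second")   # ~1.248e+23, i.e. ~124.8 sextillion
```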

According to the CERN press release (emphasis added):

The line is composed of cables made of magnesium diboride (MgB2), which is a superconductor and therefore presents no resistance to the flow of the current and can transmit much higher intensities than traditional non-superconducting cables. On this occasion, the line transmitted an intensity 25 times greater than could have been achieved with copper cables of a similar diameter. Magnesium diboride has the added benefit that it can be used at 25 kelvins (-248 °C), a higher temperature than is needed for conventional superconductors. This superconductor is more stable and requires less cryogenic power. The superconducting cables that make up the innovative line are inserted into a flexible cryostat, in which helium gas circulates.

The part in bold could have been more explicit and noted that superconductors, including magnesium diboride, can’t carry arbitrarily more current than non-superconducting conductors. There is actually a limit, for a reason analogous to the one that limits the current-carrying capacity of a normal conductor.

This explanation wouldn’t change the impressiveness of this feat and could even interfere with readers’ impression of the most important details, so I can see why the person who drafted the statement left it out. Instead, I’ll take this matter up here.

An electric current is generated between two points when electrons move from one point to the other. The direction of current is opposite to the direction of the electrons’ movement. A metal that conducts electricity does so because its constituent atoms have one or more valence electrons that can flow throughout the metal. So if a voltage arises between two ends of the metal, the electrons can respond by flowing around, birthing an electric current.

This flow isn’t perfect, however. Sometimes, a valence electron can bump into atomic nuclei, impurities – atoms of other elements in the metallic lattice – or be thrown off course by vibrations in the lattice of atoms, produced by heat. Such disruptions across the metal collectively give rise to the metal’s resistance. And the more resistance there is, the less current the metal can carry.

These disruptions often heat the metal as well. This happens because electrons don’t just flow between the two points across which a voltage is applied. They’re accelerated. So as they’re speeding along and suddenly bump into an impurity, they’re scattered into random directions. Their kinetic energy then no longer contributes to the electric energy of the metal and instead manifests as thermal energy – or heat.

If the electrons bump into nuclei, they could impart some of their kinetic energy to the nuclei, causing the latter to vibrate more, which in turn means they heat up as well.

Copper and silver have high conductivity because they have more valence electrons available to conduct electricity and these electrons are scattered to a lesser extent than in other metals. Therefore, these two also don’t heat up as quickly as other metals might, allowing them to transport a higher current for longer. Copper in particular has a longer mean free path: the average distance an electron travels before being scattered.
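
A minimal sketch of the textbook (Drude-model) relationship behind these statements, using approximate values for copper – an illustration of the scaling, not a calculation from the post above:

```python
# Drude-model estimate of conductivity: sigma = n * e^2 * tau / m
n = 8.5e28        # free-electron density of copper, per cubic metre (approx.)
e = 1.602e-19     # elementary charge, coulombs
tau = 2.5e-14     # mean time between scattering events, seconds (approx.)
m = 9.109e-31     # electron mass, kilograms

sigma = n * e**2 * tau / m
print(f"conductivity ~ {sigma:.2e} S/m")   # ~6e7 S/m, close to copper's measured value
```

The longer the mean free time (and hence the mean free path), the higher the conductivity – which is the sense in which copper’s longer mean free path helps it carry current.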

In superconductors, the picture is quite different because quantum physics assumes a more prominent role. There are different types of superconductors according to the theories used to understand how they conduct electricity with zero resistance and how they behave in different external conditions. The electrical behaviour of magnesium diboride, the material used to transport the 20 kA current, is described by Bardeen-Cooper-Schrieffer (BCS) theory.

According to this theory, when certain materials are cooled below a certain temperature, the residual vibrations of their atomic lattice encourage their valence electrons to overcome their mutual repulsion and become correlated, especially in terms of their movement. That is, the electrons pair up.

While individual electrons belong to a class of particles called fermions, these electron pairs – a.k.a. Cooper pairs – belong to another class called bosons. One difference between these two classes is that bosons don’t obey Pauli’s exclusion principle: that no two fermions in the same quantum system (like an atom) can have the same set of quantum numbers at the same time.

As a result, all the electron pairs in the material are now free to occupy the same quantum state – which they will when the material is supercooled. When they do, the pairs collectively make up an exotic state of matter called a Bose-Einstein condensate: the electron pairs now flow through the material as if they were one cohesive liquid.

In this state, even if one pair gets scattered by an impurity, the current doesn’t experience resistance because the condensate’s overall flow isn’t affected. In fact, given that breaking up one pair will cause all other pairs to break up as well, the energy required to break up one pair is roughly equal to the energy required to break up all pairs. This feature affords the condensate a measure of robustness.

But while current can keep flowing through a BCS superconductor with zero resistance, the superconducting state itself doesn’t have infinite persistence. It can break if it stops being cooled below a specific temperature, called the critical temperature; if the material is too impure, contributing to a sufficient number of collisions to ‘kick’ all electron pairs out of their condensate reverie; or if the current density crosses a particular threshold.

At the LHC, the magnesium diboride cables will be wrapped around electromagnets. When a large current flows through the cables, the electromagnets will produce a magnetic field. The LHC uses a circular arrangement of such magnetic fields to bend the beam of protons it will accelerate into a circular path. The more powerful the magnetic field, the more energetic the protons it can keep on this path. The current operational field strength is 8.36 tesla, about 128,000 times more powerful than Earth’s magnetic field. The cables will be insulated but they will still be exposed to a large magnetic field.
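
A quick check of that comparison, taking Earth’s surface field to be roughly 65 microtesla (it varies between about 25 and 65 microtesla depending on location):

```python
# How the LHC dipole field compares with Earth's magnetic field at the surface.
lhc_dipole_field_tesla = 8.36
earth_field_tesla = 65e-6     # assumed value; Earth's field varies by location

print(f"~{lhc_dipole_field_tesla / earth_field_tesla:,.0f} times stronger")   # prints ~128,615
```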

Type I superconductors completely expel an external magnetic field when they transition to their superconducting state. That is, the magnetic field can’t penetrate the material’s surface and enter the bulk. Type II superconductors are slightly more complicated. Below one critical temperature and one critical magnetic field strength, they behave like type I superconductors. Below the same temperature but in a stronger magnetic field (up to a second, higher critical field strength), they remain superconducting but allow the field to penetrate their bulk to a certain extent. This is called the mixed state.

A hand-drawn phase diagram showing the conditions in which a mixed-state type II superconductor exists. Credit: Frederic Bouquet/Wikimedia Commons, CC BY-SA 3.0

Say a uniform magnetic field is applied over a mixed-state superconductor. The field will plunge into the material’s bulk in the form of vortices. All these vortices will have the same magnetic flux – a measure of the number of magnetic field lines passing through a given area – and will repel each other, settling down in a triangular pattern, equidistant from each other.
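
Each vortex, in fact, carries exactly one quantum of magnetic flux – a standard result that follows from two fundamental constants (the factor of two reflects the paired electrons):

```python
# Superconducting flux quantum: phi_0 = h / (2e)
h = 6.62607015e-34      # Planck constant, joule-seconds
e = 1.602176634e-19     # elementary charge, coulombs

phi_0 = h / (2 * e)
print(f"flux quantum ~ {phi_0:.3e} Wb")   # ~2.068e-15 weber per vortex
```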

An annotated image of vortices in a type II superconductor. The scale is specified at the bottom right. Source: A set of slides entitled ‘Superconductors and Vortices at Radio Frequency Magnetic Fields’ by Ernst Helmut Brandt, Max Planck Institute for Metals Research, October 2010.

When an electric current passes through this material, the vortices are slightly displaced, and also begin to experience a force proportional to how closely they’re packed together and their pattern of displacement. As a result, to quote from this technical (yet lucid) paper by Praveen Chaddah:

This force on each vortex … will cause the vortices to move. The vortex motion produces an electric field1 parallel to [the direction of the existing current], thus causing a resistance, and this is called the flux-flow resistance. The resistance is much smaller than the normal state resistance, but the material no longer [has] infinite conductivity.

1. According to Maxwell’s equations of electromagnetism, a changing magnetic field produces an electric field.

The vortices’ displacement depends on the current density: the greater the number of electrons being transported, the more flux-flow resistance there is. So the magnesium diboride cables can’t simply carry more and more current. At some point, setting aside other sources of resistance, the flux-flow resistance itself will damage the cable.
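
One standard approximation for how large this resistance can get is the Bardeen-Stephen relation, which the post above doesn’t invoke, so treat the following as an illustrative aside with made-up numbers: the flux-flow resistivity is roughly the normal-state resistivity scaled by the ratio of the applied field to the material’s upper critical field.

```python
# Bardeen-Stephen estimate of flux-flow resistivity (illustrative values only).
rho_normal = 1e-7         # hypothetical normal-state resistivity, ohm-metres
b_applied = 8.0           # applied magnetic field, tesla
b_upper_critical = 30.0   # hypothetical upper critical field of the material, tesla

rho_flux_flow = rho_normal * (b_applied / b_upper_critical)
print(f"flux-flow resistivity ~ {rho_flux_flow:.2e} ohm-m (vs {rho_normal:.0e} ohm-m in the normal state)")
```

Small but non-zero, which is why the paper quoted above says the material “no longer [has] infinite conductivity”.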

There are ways to minimise this resistance. For example, the material can be doped with impurities that will ‘pin’ the vortices to fixed locations and prevent them from moving around. However, optimising these solutions for a given magnetic field and other conditions involves complex calculations that we don’t need to get into.

The point is that superconductors have their limits too. And knowing these limits could improve our appreciation for the feats of physics and engineering that underlie achievements like cables being able to transport 124.8 sextillion electrons per second with zero resistance. In fact, according to the CERN press release,

The [line] that is currently being tested is the forerunner of the final version that will be installed in the accelerator. It is composed of 19 cables that supply the various magnet circuits and could transmit intensities of up to 120 kA!

§

While writing this post, I was frequently tempted to quote from Lisa Randall’s excellent book-length introduction to the LHC, Knocking on Heaven’s Door (2011). Here’s a short excerpt:

One of the most impressive objects I saw when I visited CERN was a prototype of LHC’s gigantic cylindrical dipole magnets. Even with 1,232 such magnets, each of them is an impressive 15 metres long and weighs 30 tonnes. … Each of these magnets cost EUR 700,000, making the net cost of the LHC magnets alone more than a billion dollars.

The narrow pipes that hold the proton beams extend inside the dipoles, which are strung together end to end so that they wind through the extent of the LHC tunnel’s interior. They produce a magnetic field that can be as strong as 8.3 tesla, about a thousand times the field of the average refrigerator magnet. As the energy of the proton beams increases from 450 GeV to 7 TeV, the magnetic field increases from 0.54 to 8.3 teslas, in order to keep guiding the increasingly energetic protons around.

The field these magnets produce is so enormous that it would displace the magnets themselves if no restraints were in place. This force is alleviated through the geometry of the coils, but the magnets are ultimately kept in place through specially constructed collars made of four-centimetre thick steel.

… Each LHC dipole contains coils of niobium-titanium superconducting cables, each of which contains stranded filaments a mere six microns thick – much smaller than a human hair. The LHC contains 1,200 tonnes of these remarkable filaments. If you unwrapped them, they would be long enough to encircle the orbit of Mars.

When operating, the dipoles need to be extremely cold, since they work only when the temperature is sufficiently low. The superconducting wires are maintained at 1.9 degrees above absolute zero … This temperature is even lower than the 2.7-degree cosmic microwave background radiation in outer space. The LHC tunnel houses the coldest extended region in the universe – at least that we know of. The magnets are known as cryodipoles to take into account their special refrigerated nature.

In addition to the impressive filament technology used for the magnets, the refrigeration (cryogenic) system is also an imposing accomplishment meriting its own superlatives. The system is in fact the world’s largest. Flowing helium maintains the extremely low temperature. A casing of approximately 97 metric tonnes of liquid helium surrounds the magnets to cool the cables. It is not ordinary helium gas, but helium with the necessary pressure to keep it in a superfluid phase. Superfluid helium is not subject to the viscosity of ordinary materials, so it can dissipate any heat produced in the dipole system with great efficiency: 10,000 metric tonnes of liquid nitrogen are first cooled, and this in turn cools the 130 metric tonnes of helium that circulate in the dipoles.

Featured image: A view of the experimental MgB2 transmission line at the LHC. Credit: CERN.

My heart of physics

Every July 4, I have occasion to remember two things: the discovery of the Higgs boson, and my first published byline for an article about the discovery of the Higgs boson. I have no trouble believing it’s been eight years since we discovered this particle, using the Large Hadron Collider (LHC) and its ATLAS and CMS detectors, in Geneva. I’ve greatly enjoyed writing about particle physics in this time, principally because closely engaging with new research and the scientists who produced it allowed me to learn more about a subject that high school and college had let me down on: physics.

In 2020, I haven’t been able to focus much on the physical sciences in my writing, thanks to the pandemic, the lockdown, their combined effects and one other reason. This has been made doubly sad by the fact that the particle physics community at large is at an interesting crossroads.

In 2012, the LHC fulfilled the principal task it had been built for: finding the Higgs boson. After that, physicists imagined the collider would discover other unknown particles, allowing theorists to expand their theories and answer hitherto unanswered questions. However, the LHC has since done the opposite: it has narrowed the possibilities of finding new particles that physicists had argued should exist according to their theories (specifically supersymmetric partners), forcing them to look harder for mistakes they might’ve made in their calculations. But thus far, physicists have neither found mistakes nor made new findings, leaving them stuck in an unsettling knowledge space from which it seems there might be no escape (okay, this is sensationalised, but it’s also kinda true).

Right now, the world’s particle physicists are mulling building a collider larger and more powerful than the LHC, at a cost of billions of dollars, in the hopes that it will find the particles they’re looking for. Not all physicists agree, of course. If you’re interested in reading more, I’d recommend articles by Sabine Hossenfelder and Nirmalya Kajuri and spiralling out from there. But notwithstanding the opposition, CERN – which coordinates the LHC’s operations with tens of thousands of personnel from scores of countries – recently updated its strategy vision to recommend the construction of such a machine, with the ability to produce copious amounts of Higgs bosons in collisions between electrons and positrons (a.k.a. ‘Higgs factories’). China has also announced plans of its own to build something similar.

Meanwhile, scientists and engineers are busy upgrading the LHC itself to a ‘high luminosity’ version, where luminosity is a measure of how many collisions the machine can deliver in a given time – and therefore of how many interesting events it can produce for further study. This version will operate until 2038. That isn’t a long way away because it took more than a decade to build the LHC; it will definitely take longer to plan for, convince lawmakers, secure the funds for and build something bigger and more complicated.
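To make the idea of luminosity a little more concrete: the expected number of events of a given kind is the production cross-section multiplied by the integrated luminosity. The numbers in the sketch below are my own ballpark assumptions (roughly 55 picobarns for Higgs production at 13 TeV and roughly 140 inverse femtobarns recorded per big experiment in the LHC’s second run), not official figures:

```python
# Rough sketch: expected event count = production cross-section x integrated luminosity.
# Both numbers below are ballpark assumptions, not official figures.
higgs_cross_section_fb = 55e3        # ~55 pb for Higgs production at 13 TeV, in femtobarns
integrated_luminosity_inv_fb = 140   # ~140 fb^-1 per big experiment in Run 2

expected_higgs = higgs_cross_section_fb * integrated_luminosity_inv_fb
print(f"~{expected_higgs:.1e} Higgs bosons produced")  # ~7.7e+06
```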

There have also been some other developments of late, relevant to the current occasion, pointing to other ways to discover ‘new physics’ – the collective name for phenomena that would violate our existing theories’ predictions and show us where we’ve gone wrong in our calculations.

The most recent one, I think, was the ‘XENON excess’, which refers to a moderately strong signal recorded by the XENON1T detector in Italy that physicists think could be evidence of a class of particles called axions. I say ‘moderately strong’ because the statistical significance of the signal’s strength is just barely above the threshold used to denote evidence and not anywhere near the threshold that denotes a discovery proper.
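For the curious, the thresholds being alluded to are the particle physics conventions of roughly 3 sigma for ‘evidence’ and 5 sigma for a ‘discovery’. A small sketch of the one-sided tail probabilities these correspond to:

```python
# The particle-physics conventions alluded to above: roughly 3 sigma for 'evidence'
# and 5 sigma for a 'discovery', stated here as one-sided Gaussian tail probabilities.
import math

def one_sided_p_value(n_sigma: float) -> float:
    """Probability of a background fluctuation at least this large, assuming a Gaussian."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(f"3 sigma: p ~ {one_sided_p_value(3):.2e}")  # ~1.3e-03
print(f"5 sigma: p ~ {one_sided_p_value(5):.2e}")  # ~2.9e-07
```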

It’s evoked a fair bit of excitement because axions count as new physics – but when I asked two physicists (one after the other) to write an article explaining this development, they refused on similar grounds: that the modest significance makes it likely the signal will eventually be accounted for by some other well-known process. I was disappointed, of course, but I wasn’t surprised either: in the last eight years, I can count at least four instances in which a seemingly inexplicable particle-physics-related development turned out to be a dud.

The most prominent one was the ‘750 GeV excess’ at the LHC in December 2015, which seemed to be a sign of a new particle about six-times heavier than a Higgs boson and 800-times heavier than a proton (at rest). But when physicists analysed more data, the signal vanished – a.k.a. it wasn’t there in the first place and what physicists had seen was likely a statistical fluke of some sort. Another popular anomaly that went the same way was the one at Atomki.

But while all of this is so very interesting, today – July 4 – also seems like a good time to admit I don’t feel as invested in the future of particle physics anymore (the ‘other reason’). Some might say, and have said, that I’m abandoning ship just as the field’s central animus is moving away from the physics and more towards sociology and politics, and some might be right. I get enough of the latter subjects when I work on the non-physics topics that interest me, like research misconduct and science policy. My heart of physics itself is currently tending towards quantum mechanics and thermodynamics (although not quantum thermodynamics).

In between, one peer also recommended that I familiarise myself with quantum computing while another suggested climate-change-related mitigation technologies, which only makes me wonder now if I’m delving into those branches of physics that promise to take me farther away from what I’m supposed to do. And truth be told, I’m perfectly okay with that. 🙂 This does speak to my privileges – modest as they are on this particular count – but when it feels like there’s less stuff to be happy about in the world with every new day, it’s time to adopt a new hedonism and find joy where it lies.

Where is the coolest lab in the universe?

The Large Hadron Collider (LHC) performs an impressive feat every time it accelerates billions of protons to nearly the speed of light – and not in terms of the energy alone. For example, a single clap of your palms releases more energy than the LHC imparts to any one of its protons. The impressiveness arises from the fact that the energy of your clap is distributed among billions of atoms while the proton’s energy resides in a single particle. It’s impressive because of the energy density.
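A minimal sketch of this comparison, taking 6.5 TeV per proton (the energy the LHC ran at in its second run; its design figure is 7 TeV) and assuming a clap releases on the order of 1 joule, which is only a loose order-of-magnitude guess:

```python
# Comparing a single LHC proton's kinetic energy to a hand clap.
# The clap energy is a loose assumption (~1 J); the proton energy is 6.5 TeV (Run 2).
EV_TO_JOULES = 1.602176634e-19

proton_energy_joules = 6.5e12 * EV_TO_JOULES   # ~1.04e-06 J per proton
clap_energy_joules = 1.0                        # assumed, order of magnitude only

print(f"Proton: {proton_energy_joules:.2e} J")
print(f"Clap / proton: {clap_energy_joules / proton_energy_joules:.0e}")  # ~1e+06
```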

A proton like this has a very high kinetic energy. When lots of protons with such amounts of energy come together to form a macroscopic object, the object will have a high temperature. This is the relationship between subatomic particles and the temperature of the object they make up. The outermost layer of a star is so hot because its constituent particles have a very high kinetic energy. Blue hypergiant stars like Eta Carinae, thought to be among the hottest stars in the universe, have a surface temperature of around 36,000 K and a surface area 57,600-times larger than the Sun’s. This is impressive not on the temperature scale alone but also on the energy density scale: Eta Carinae ‘maintains’ a higher temperature over a much larger area.
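A quick sanity check of what that surface-area figure implies, since area scales with the square of the radius:

```python
# A surface area 57,600x the Sun's implies a radius sqrt(57,600) = 240x the Sun's,
# since the surface area of a sphere scales with the square of its radius.
import math

area_ratio = 57_600
radius_ratio = math.sqrt(area_ratio)
print(f"Radius ~{radius_ratio:.0f}x the Sun's")  # ~240x
```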

Now, the following headline and variations thereof have been doing the rounds of late, and they piqued me because I’m quite reluctant to believe they’re true:

This headline, as you may have guessed by the fonts, is from Nature News. To be sure, I’m not doubting the veracity of any of the claims. Instead, my dispute is with the “coolest lab” claim and on entirely qualitative grounds.

The feat mentioned in the headline involves physicists using lasers to cool a tightly controlled group of atoms to near-absolute-zero, causing quantum mechanical effects to become visible on the macroscopic scale – the feature that Bose-Einstein condensates are celebrated for. Most, if not all, atomic cooling techniques endeavour in different ways to extract as much of an atom’s kinetic energy as possible. The more energy they remove, the cooler the indicated temperature.
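One well-known limit in this game is the Doppler limit of laser cooling, below which experiments typically switch to other techniques (such as evaporative cooling) to reach condensation. A small sketch, assuming the rubidium-87 D2 transition that many cold-atom experiments use:

```python
# A sketch of the Doppler limit of laser cooling: T = hbar * Gamma / (2 * kB).
# The linewidth below assumes the rubidium-87 D2 transition (~2*pi x 6.07 MHz).
import math

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K
gamma = 2 * math.pi * 6.07e6  # natural linewidth, rad/s

t_doppler = HBAR * gamma / (2 * KB)
print(f"Doppler limit ~{t_doppler * 1e6:.0f} microkelvin")  # ~146 uK
```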

The reason the headline piqued me was that it trumpets a place in the universe as the “universe’s coolest lab”. Be that as it may (though it may not technically be so; the physicist Wolfgang Ketterle has achieved lower temperatures before), lowering the temperature of an object to a remarkable sliver of a kelvin above absolute zero is one thing, but lowering the temperature over a very large area or volume must be quite another. For example, an extremely cold object inside a tight container the size of a shoebox (I presume) must be missing far less energy than a not-so-extremely cold volume the size of, say, a star.

This is the source of my reluctance to acknowledge that the International Space Station could be the “coolest lab in the universe”.

While we regularly equate heat with temperature without much consequence to our judgment, the latter can be described by a single number pertaining to a single object whereas the former – heat – is energy flowing from a hotter to a colder region of space (or the other way with the help of a heat pump). In essence, the amount of heat is a function of two differing temperatures. In turn, it could matter, when looking for the “coolest” place, that we look not just for low temperatures but for lower temperatures within warmer surroundings. This is because it’s harder to maintain a lower temperature in such settings – for the same reason we use thermos flasks to keep liquids hot: if the liquid is exposed to the ambient atmosphere, heat will flow from the liquid to the air until the two achieve thermal equilibrium.
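To put the ‘warmer surroundings’ point in more concrete terms: even with the simplest linear heat-leak model, the heat flowing into a cold object grows with the temperature difference to its surroundings. The conductance in the sketch below is an arbitrary placeholder, chosen only to compare the two settings:

```python
# A toy linear heat-leak model (Newton's law of cooling): heat flowing into a cold
# object scales with the temperature difference. The conductance is a placeholder.
def heat_leak_watts(t_cold_k: float, t_ambient_k: float, conductance_w_per_k: float = 0.01) -> float:
    """Heat flowing from the warmer surroundings into the colder object."""
    return conductance_w_per_k * (t_ambient_k - t_cold_k)

print(heat_leak_watts(1.0, 2.7))    # 1 K object in 2.7 K surroundings: ~0.017 W
print(heat_leak_watts(1.0, 300.0))  # 1 K object in a room-temperature lab: ~2.99 W
```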

An object is said to be cold if its temperature is lower than that of its surroundings. Vladivostok in Russia is cold relative to most of the world’s other cities, but if Vladivostok were the sole human settlement, beyond which no one had ever ventured, the human idea of cold would have to be recalibrated from, say, 10º C to -20º C. The temperature required to achieve a Bose-Einstein condensate is the temperature at which non-quantum-mechanical effects are so stilled that they stop interfering with the much weaker quantum-mechanical effects – it’s given by a formula but is typically well below 1 K.
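The formula in question, for an ideal Bose gas, is T_c = (2πħ²/mk_B) · (n/ζ(3/2))^(2/3). A small sketch with assumed but typical numbers (rubidium-87 atoms at a density of 10^20 per cubic metre) lands in the hundreds of nanokelvin:

```python
# Ideal-Bose-gas condensation temperature: T_c = (2*pi*hbar^2 / (m*kB)) * (n/zeta(3/2))^(2/3).
# The species (rubidium-87) and density (1e20 atoms/m^3) are assumptions for a typical experiment.
import math

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K
ZETA_3_2 = 2.612         # Riemann zeta(3/2)

m_rb87 = 87 * 1.66053906660e-27   # kg
n = 1e20                          # atoms per cubic metre

t_c = (2 * math.pi * HBAR**2 / (m_rb87 * KB)) * (n / ZETA_3_2) ** (2 / 3)
print(f"T_c ~ {t_c * 1e9:.0f} nanokelvin")  # a few hundred nK
```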

The deep nothingness of space itself has a temperature of 2.7 K (-270.45º C); when all the stars in the universe die and there are no more sources of energy, all hot objects – like neutron stars, colliding gas clouds or molten rain over an exoplanet – will eventually have to cool to 2.7 K to achieve equilibrium (notwithstanding other eschatological events).

This brings us, figuratively, to the Boomerang Nebula – in my opinion the real coolest lab in the universe because it maintains a very low temperature across a very large volume, i.e. its coolness density is significantly higher. This is a protoplanetary nebula, which is a phase in the lives of stars within a certain mass range. In this phase, the star sheds some of its mass that expands outwards in the form of a gas cloud, lit by the star’s light. The gas in the Boomerang Nebula, from a dying red giant star changing to a white dwarf at the centre, is expanding outward at a little over 160 km/s (576,000 km/hr), and has been for the last 1,500 years or so. This rapid expansion leaves the nebula with a temperature of 1 K. Astronomers discovered this cold mass in late 1995.

(“When gas expands, the decrease in pressure causes the molecules to slow down. This makes the gas cold”: source.)
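The textbook version of this effect is adiabatic cooling: for a reversible adiabatic expansion of an ideal monatomic gas, T V^(γ-1) stays constant, so the temperature falls as the volume grows. The numbers below are purely illustrative; a real nebular outflow is far messier than this idealised model:

```python
# Adiabatic cooling for an ideal monatomic gas: T * V^(gamma - 1) is constant,
# so the temperature drops as the gas expands. Numbers are illustrative only.
GAMMA = 5 / 3  # ratio of specific heats for a monatomic ideal gas

def temperature_after_expansion(t_initial_k: float, volume_ratio: float) -> float:
    """Temperature after the volume grows by `volume_ratio` (V_final / V_initial)."""
    return t_initial_k / volume_ratio ** (GAMMA - 1)

print(temperature_after_expansion(100.0, 1000.0))  # ~1 K after a 1,000x expansion
```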

The experiment to create a Bose-Einstein condensate in space – or for that matter anywhere on Earth – transpired in a well-insulated container that, apart from the atoms to be cooled, was a vacuum. To the atoms, the container was their universe, their Vladivostok. They were not at risk of the container’s coldness inviting heat from its surroundings and destroying the condensate. The Boomerang Nebula doesn’t have this luxury: as a nebula, it’s exposed to the vast emptiness, and 2.7 K, of space at all times. So even though the temperature difference between itself and space is only 1.7 K, the nebula also has to constantly contend with the equilibrating ‘pressure’ imposed by space.

Further, according to Raghavendra Sahai (as quoted by NASA), one of the astronomers who discovered the nebula’s extreme cold, it’s “even colder than most other expanding nebulae because it is losing its mass about 100-times faster than other similar dying stars and 100-billion-times faster than Earth’s Sun.” This implies there is a great mass of gas, and so of atoms, whose temperature is around 1 K.

Altogether, the fact that the nebula has maintained a temperature of 1 K for around 1,500 years (plus a 5,000-year offset, to compensate for the distance to the nebula) and over 3.14 trillion km makes it a far cooler “coolest” place, lab, whatever.

Peter Higgs, self-promoter

I was randomly rewatching The Big Bang Theory on Netflix today when I spotted this gem:

Okay, maybe less a gem and more a shiny stone, but still. The screenshot, taken from the third episode of the sixth season, shows Sheldon Cooper mansplaining to Penny the work of Peter Higgs, whose name is most famously associated with the scalar boson whose discovery the ATLAS and CMS collaborations at the Large Hadron Collider announced to great fanfare in 2012.

My fascination pertains to Sheldon’s description of Higgs as an “accomplished self-promoter”. Higgs, in real life, is extremely reclusive and self-effacing, and journalists have found him notoriously hard to catch for an interview, or even a quote. The other theorists behind the Higgs mechanism, including François Englert, the Belgian physicist with whom Higgs shared the Nobel Prize for physics in 2013, have been much less media-shy. Higgs has even been known to suggest that the mechanism in particle physics involving the Higgs boson should really be called the ABEGHHK’tH mechanism, to include the names of everyone who hit upon its theoretical idea in the 1960s (Philip Warren Anderson, Robert Brout, Englert, Gerald Guralnik, C.R. Hagen, Higgs, Tom Kibble and Gerardus ’t Hooft), instead of just the Higgs mechanism.

No doubt Sheldon thinks Higgs did right by choosing not to appear in interviews for the public or write articles in the press himself, considering such extreme self-effacement is also Sheldon’s modus of choice. At the same time, Higgs might have lucked out and been recognised for work he conducted 50 years prior probably because he’s white and from an affluent country, both of which attributes nearly guarantee fewer – if any – systemic barriers to international success. Self-promotion is an important part of the modern scientific endeavour, as it is with most modern endeavours, even if one is an accomplished scientist.

All this said, it is notable that Higgs was also a conscientious person. When he was awarded the Wolf Prize in 2004 – a prestigious award in the field of physics – he refused to receive it in person in Jerusalem because it was a state function and he had protested Israel’s war against Palestine. He was a member of the Campaign for Nuclear Disarmament until the group extended its opposition to nuclear power as well; then he resigned. He also stopped supporting Greenpeace after it became opposed to genetic modification. If it is for these actions that Sheldon deemed Higgs an “accomplished self-promoter”, then I stand corrected.

Featured image: A portrait of Peter Higgs by Lucinda Mackay hanging at the James Clerk Maxwell Foundation, Edinburgh. Caption and credit: FF-UK/Wikimedia Commons, CC BY-SA 4.0.