A quantum theory of consciousness

We seldom have occasion to think about science and religion at the same time, but the most interesting experience I have had doing that came in October 2018, when I attended a conference called ‘Science for Monks’* in Gangtok, Sikkim. More precisely, it was one edition of a series of conferences by that name, organised every year between scientists and science communicators from around the world and Tibetan Buddhist monks in the Indian subcontinent. Let me quote from the article I wrote after the conference to illustrate why such engagement could be useful:

“When most people think about the meditative element of the practice of Buddhism, … they think only about single-point meditation, which is when a practitioner closes their eyes and focuses their mind’s eye on a single object. The less well known second kind is analytical meditation: when two monks engage in debate and question each other about their ideas, confronting them with impossibilities and contradictions in an effort to challenge their beliefs. This is also a louder form of meditation. [One monk] said that sometimes, people walk into his monastery expecting it to be a quiet environment and are surprised when they chance upon an argument. Analytical meditation is considered to be a form of evidence-sharpening and a part of proof-building.”

As interesting as the concept of the conference is, the 2018 edition was particularly so because the field of science on the table that year was quantum physics. That quantum physics is counter-intuitive is a banal statement; it is chock-full of twists in the tale, interpretations, uncertainties and open questions. Even a conference among scientists was bound to be confusing – imagine the scope of opportunities for confusion in one between scientists and monks. As if in response to this risk, the views of the scientists and the monks were very cleanly divided throughout the event, with neither side wanting to tread on the toes of the other, and this in turn dulled the proceedings. And while this was a sensible thing to do, I was disappointed.

This said, there were some interesting conversations outside the event halls, in the corridors, over lunch and dinner, and at the hotel where we were put up (where speakers in the common areas played ‘Om Mani Padme Hum’ 24/7). One of them centred on one of the rare (possibly) legitimate ideas in quantum physics in which Buddhist monks, and monks of every denomination for that matter, have considerable interest: the origin of consciousness. While expositions and conversations involving the science of consciousness have more often than not been replete with bad science, this idea may be an honourable exception.

Four years later, I only remember that there was a vigorous back-and-forth between two monks and a physicist, not the precise contents of the dialogue or who participated. The subject was the Orch OR hypothesis advanced by the physicist Roger Penrose and quantum-consciousness theorist Stuart Hameroff. According to a 2014 paper authored by the pair, “Orch OR links consciousness to processes in fundamental space-time geometry.” It traces the origin of consciousness to cellular structures inside neurons, called microtubules, existing in a superposition of states that then collapses into a single state in a process induced by gravity.

In the famous Schrödinger’s cat thought-experiment, the cat exists in a superposition of ‘alive’ and ‘dead’ states while the box is closed. When an observer opens the box and observes the cat, its state collapses into either a ‘dead’ or an ‘alive’ state. Few scientists subscribe to the Orch OR view of self-awareness; the vast majority believe that consciousness originates not within neurons but in the interactions between neurons, happening at a large scale.

‘Orch OR’ stands for ‘orchestrated objective reduction’, with Penrose being credited with the ‘OR’ part. That is also the part at which mathematicians and physicists have directed much of their criticism.

It begins with Penrose’s idea of spacetime blisters. According to him, at the Planck scale (around 10⁻³⁵ m), the spacetime continuum is discrete rather than continuous, and each quantum superposition occupies a distinct piece of the spacetime fabric. These pieces are called blisters. Penrose postulated that gravity acts on each of these blisters and destabilises them, causing the superposed states to collapse into a single state.

A quantum computer performs calculations using qubits as the fundamental units of information. The qubits interact with each other in quantum-mechanical processes like superposition and entanglement. At some point, the superposition of these qubits is forced to collapse by making an observation, and the state to which it collapses is recorded as the computer’s result. In 1989, Penrose proposed that there could be a quantum-computer-like mechanism operating in the human brain and that the OR mechanism could be the act of observation that forces it to terminate.
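The measurement step described above can be sketched in a few lines. This is a toy single-qubit simulation, nothing specific to Orch OR or to any real quantum computer: a superposition assigns probabilities to outcomes via the Born rule, and ‘observation’ forces a collapse to one definite basis state.

```python
import numpy as np

rng = np.random.default_rng(2012)

# A single qubit in an equal superposition of the basis states |0> and |1>.
state = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: the probability of measuring each outcome is the squared
# amplitude of the corresponding component.
probs = np.abs(state) ** 2

# 'Observation' collapses the superposition into a single basis state;
# which one is random, weighted by the Born-rule probabilities.
outcome = rng.choice([0, 1], p=probs)
collapsed = np.zeros(2)
collapsed[outcome] = 1.0  # the post-measurement state is definite

print("P(0), P(1):", probs)
print("collapsed to:", outcome)
```

In a real quantum computer the state vector spans many entangled qubits, but the collapse-on-measurement step is the same in kind.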

One refinement of the OR hypothesis is the Diósi-Penrose scheme, with contributions from Hungarian physicist Lajos Diósi. In this scheme, spacetime blisters are unstable and the superposition collapses when the mass of the superposed states exceeds a fixed value. In the course of his calculations, Diósi found that at the moment of collapse, the system must emit some electromagnetic radiation (due to the motion of electrons).
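The quantitative heart of the Diósi-Penrose scheme can be stated in one relation (the standard textbook form of the scheme, not a quotation from the paper discussed below): the lifetime of a superposition is inversely proportional to the gravitational self-energy of the difference between its superposed mass distributions.

```latex
% Diósi-Penrose collapse time: larger or more widely separated superposed
% masses give a larger E_G, and hence a faster collapse.
\tau \approx \frac{\hbar}{E_G}
```

Here \(E_G\) is the gravitational self-energy of the difference between the mass distributions of the two superposed states, which is why the scheme predicts collapse once the superposed mass exceeds a threshold.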

Hameroff made his contribution by introducing microtubules as candidate hosts of qubit-like objects that could collectively set up a quantum-computer-like system within the brain.

There have been some experiments in the last two decades that have tested whether Orch OR could manifest in the brain, based on studies of electron activity. But a more recent study suggests that Orch OR may just be infeasible as an explanation for the origin of consciousness.

Here, a team of researchers – including Lajos Diósi – first looked for the electromagnetic radiation at the instant the superposition collapsed. The researchers didn’t find any, but the parameters of their experiment (including the masses involved) allowed them to set lower limits on the scale at which Orch OR might work. That is, they figured out how the distance, time and mass might be related in an Orch OR event.

They set these calculations out in a new paper, published in the journal Physics of Life Reviews on May 17. According to their paper, they fixed the time-scale of the collapse at 0.025 to 0.5 seconds, which is comparable to the amount of time in which our brain recognises conscious experience. They found that at a spatial scale of 10⁻¹⁵ m – which Penrose has expressed a preference for – a superposition that collapses in 0.025 seconds would require 1,000-times more tubulins than there are in the brain (10²⁰), an impossibility. (Tubulins polymerise to form microtubules.) But at a scale of around 1 nm, the researchers worked out that the brain would need only 10¹² tubulins for their superposition to collapse in around 0.025 seconds. This is still a very large number of tubulins and a daunting task even for the human brain, but it isn’t impossible in the way the collapse over 10⁻¹⁵ m is. According to the team’s paper,

The Orch OR based on the DP [Diósi-Penrose] theory is definitively ruled out for the case of [10⁻¹⁵ m] separation, without needing to consider the impact of environmental decoherence; we also showed that the case of partial separation requires the brain to maintain coherent superpositions of tubulin of such mass, duration, and size that vastly exceed any of the coherent superposition states that have been achieved with state-of-the-art optomechanics and macromolecular interference experiments. We conclude that none of the scenarios we discuss … are plausible.
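As a sanity check on the arithmetic, the following uses only the rough figures quoted above; none are derived from first principles here.

```python
# All numbers are the rough figures quoted in the text, not derived here.
tubulins_in_brain = 1e20   # approximate tubulin count in a human brain
needed_femtometre = 1e23   # needed for a 0.025 s collapse at 10^-15 m
needed_nanometre = 1e12    # needed for a 0.025 s collapse at ~1 nm

# Ratio of required to available tubulins at each spatial scale.
print(needed_femtometre / tubulins_in_brain)  # ~1,000x more than exist
print(needed_nanometre / tubulins_in_brain)   # a tiny fraction suffices
```

The femtometre case demands a thousand brains’ worth of tubulin, which is why the paper rules it out outright, while the nanometre case is merely daunting.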

However, the team hasn’t eliminated Orch OR entirely; instead, they wrote that they intend to refine the Diósi-Penrose scheme into a more “sophisticated” version that, for example, may not entail the release of electromagnetic radiation or may provide a more feasible pathway for superposition collapse. So far, in their telling, they have used experimental results to learn where their theory should improve if it is to remain a plausible description of reality.

If and when the ‘Science for Monks’ conferences, or those like it, resume after the pandemic, it seems we may still be able to put Orch OR on the discussion table.

* I remember it was called ‘Science for Monks’ in 2018. Its name appears to have been changed since to ‘Science for Monks and Nuns’.


Unless the West copies us, we’re irrelevant

We have become quite good at dismissing the more asinine utterances of our ministers and other learned people in terms of either a susceptibility to pseudoscience or, less commonly, a wilful deference to what we might call pseudoscientific ideas in order to undermine “Western science” and its influence. But when a matter of this sort hits the national headlines, our response seems for the most part to be limited to explaining the incident: once some utterance has been diagnosed, it apparently stops being of interest.

While this is understandable, an immediate diagnosis can only offer so much insight. An important example is the Vedas. Every time someone claims that the Vedas anticipated, say, the Higgs boson or interplanetary spaceflight, the national news machine – in which reporters, editors, experts, commentators, activists and consumers all participate – publishes the following types of articles, from what I have read: news reports that quote the individual’s statement as is, follow-ups with the individual asking them to explain themselves, opinion articles defending or trashing the individual, an editorial if the statement is particularly pernicious, opinion articles dissecting the statement, and perhaps an interview long after to ask the individual what they were really thinking. (I don’t follow TV news but I assume its content is not very different.)

All of these articles employ a diagnostic attitude towards the news item: they seek to uncover the purpose of the statement because they begin with the (reasonable) premise that the individual was not a fool to issue it and that the statement had a purpose, irrespective of whether it was fulfilled. Only a few among them – if any – stop to consider the double-edged nature of the diagnosis itself. For example, when a researcher in Antarctica got infected by the novel coronavirus, their diagnosis would have said a lot about humankind – our ability to be infected even when an individual is highly isolated for long periods of time – as well as about the virus itself.

Similarly, when a Bharatiya Janata Party bhakt claims that the Vedas anticipated the discovery of the Higgs boson, it says as much about the individual as it does about the individual’s knowledge of the Vedas. Specifically, the biggest losers here, so to speak, are the Vedas, which have been misrepresented to the world’s scientists to sound like an unfalsifiable joke-book. Extrapolate this to all of the idiotic things that our most zealous compatriots have said about airplanes, urban planning, the internet, plastic surgery, nutrition and diets, cows, and mathematics.

This is misrepresentation en masse of India’s cultural heritage (the cows aren’t complaining but I will never be certain until they can talk), and it is also a window into what these individuals believe to be true about the country itself.

For example, consider mathematics. One position paper drafted by the Karnataka task force on the National Education Policy, entitled “Knowledge in India”, called the Pythagorean theorem “fake news” simply because the Indian scholar Baudhayana had propounded very similar rules and observations. In an interview to Hindustan Times yesterday, the head of this task force, Madan Gopal, said the position paper doesn’t recommend that the theorem be removed from the syllabus but that an addition be made: Baudhayana was the originator of the theorem. Baudhayana was not the originator, but equally importantly, Gopal said he had concluded that Baudhayana was being cheated out of credit based on what Gopal had read… on Quora.

As a result, Gopal has overlooked and rendered invisible the Baudhayana Sulbasutra, as well as admitted his indifference towards the programme of its study and preservation.

Consider another example involving the same fellow: Gopal also told Hindustan Times, “Manchester University published a paper saying that the theory of Newton is copied from ancient texts from Kerala.” He is in all likelihood referring to the work of G.G. Joseph, who asserted in 2007 that scholars of the Kerala school of mathematics had discovered some of the constitutive elements of calculus in c. 1350 – a few centuries before Isaac Newton or Gottfried Leibniz. However, Gopal is wrong to claim that Newton “copied” from “ancient texts from Kerala”: in continuation of his work, Joseph discovered that while the work of Madhava and Nilakantha at the Kerala school pre-dated that of Newton and Leibniz, there had been no transfer of knowledge from the Kerala school to Europe in the medieval era. That is, Newton and Leibniz had discovered calculus independently.

Gopal would have been right to state that Madhava and Nilakantha were ahead of the Europeans of the time, but it’s not clear whether Gopal was even aware of these names or the kind of work in which the members of the Kerala school were engaged. He has as a result betrayed his ignorance as well as squandered an important opportunity to address the role of colonialism and imperialism in the history of mathematics. In fact, Gopal seems to say that unless Newton copied from the “ancient texts,” what the texts themselves record is irrelevant. (Also read: ‘We don’t have a problem with the West, we’re just obsessed with it’.)

Now, Madan Gopal’s ignorance may not amount to much – although the Union education ministry will be using the position papers as guidance to draft the next generation of school curricula. So let us consider, in the same spirit and vein, Narendra Modi’s claim shortly after he became India’s prime minister for the first time that ancient Indians had been capable of performing an impossible level of plastic surgery. In that moment, he lied – and he also admitted that he had no idea what the contents of the Sushruta Samhita or the Charaka Samhita were and that he didn’t care. He admitted that he wouldn’t be investing in the study, preservation and transmission of these texts because that would be tantamount to admitting that only a vanishing minority is aware of their contents. Also, why do these things and risk finding out that the texts say something else entirely?

Take all of the party supporters’ pseudoscientific statements together – originating from the Madan Gopals and culminating with Modi – and it becomes quite apparent, beyond the momentary diagnoses of each of these statements, that while we already knew that they have no idea what they are talking about, we must admit that they have no care for what the purported sources of their claims actually say. That is, they don’t give a damn about the actual Vedas, the actual Samhitas or the various actual sutras, and they are unlikely to preserve or study these objects of our heritage in their original forms.

Just as every new Patanjali formulation forgets Ayurveda for the sake of Ayurveda®, every new utterance about Ancient Indian Knowledge forgets the Vedas for the sake of the Vedas®.

Now, given the statements of this nature from ministers, other members and unquestioning supporters of the BJP, we have reason to believe that they engage in kettle logic. This in turn implies that these individuals may not really believe what they are saying to be true and/or valid, and that they employ their arguments anyway only to ensure the outcome, on which they are fixated. That is, the foolish statements may not implicitly mean that their authors are foolish; on the contrary, they may be smart enough to recognise kettle logic as well as its ability to keep naïve fact-checkers occupied in a new form of the bullshit job. Even so, they must be aware at least that they are actively forgetting the Vedas, the Samhitas and the sutras.

One way or another, the BJP seems to say, let’s forget.


JWST and the sorites paradox

The team operating NASA’s James Webb Space Telescope (JWST) released its first full-colour image early on July 12, and has promised some more from the same set in the evening. The image is a near-infrared shot of the SMACS 0723 galaxy cluster some 4.6 billion lightyears away. According to a press release accompanying the image’s release, the field of view – which shows scores of galaxies as well as several signs of gravitational lensing (which is evident only when very large distances are involved) – is equivalent to the area occupied by a grain of sand held at arm’s length from the eyes.

I’m personally looking forward to the telescope’s shot of the Carina Nebula: the Hubble space telescope’s images of this emission nebula were themselves stunning, so the JWST’s shot should be more so!

Gazing at the JWST’s first image brought to my mind the sorites paradox. Its underlying thought-experiment might resonate with you were you to ponder the classical limit of quantum physics or the concept of emergence as Philip Warren Anderson elucidated it. Imagine a small heap of sand before you. You pick up a single grain from the heap and toss it away. Is the sand before you still in a heap? Yes. You put away another grain and check. Still a heap. So you keep going, and a few thousand checks later, you find that you have before you a single grain of sand. Is it still a heap? If your answer is ‘yes’, the follow-up question arises: how can a single grain of sand be a heap? If ‘no’, then when did the heap stop being a heap?

Another way to conjure the same paradox is to start with one grain of sand, which is evidently not a heap. Then you add one more grain, which is also not a heap, then one more and one more and so forth. Using modus ponens supplies the following line of reasoning: “One mote isn’t a heap. And if one mote isn’t a heap, then two motes don’t make a heap either. And three motes don’t make a heap either. And so on until: if 9,999 motes don’t make a heap, then 10,000 motes don’t make a heap either.” But while straightforward logic has led you to this conclusion, your sense-experience is clear: what lies before you is in fact a heap.
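The chain of reasoning can be made concrete in a few lines, with the caveat that the code must do exactly what the vague word ‘heap’ refuses to: commit to a sharp cutoff. The 5,000-grain threshold below is a purely hypothetical choice.

```python
# 'Heap' given an arbitrary sharp cutoff - the very thing the vague
# predicate lacks. 5,000 grains is a purely hypothetical choice.
def is_heap(grains: int, cutoff: int = 5000) -> bool:
    return grains >= cutoff

# The sorites step: "if n grains aren't a heap, n + 1 grains aren't
# either." With a sharp cutoff, the step fails at exactly one n.
failures = [n for n in range(1, 10_000)
            if not is_heap(n) and is_heap(n + 1)]
print(failures)  # the chain breaks only at the cutoff: [4999]
```

The paradox lives in that one failing step: any sharp cutoff makes the induction unsound at exactly one point, yet the word ‘heap’ gives us no principled place to put it.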

The paradox came to mind because it’s hard not to contemplate the fact that both the photograph and the goings-on in India at the moment – from the vitriolic bigotry that’s constantly being mainstreamed to the arrest and harassment of journalists, activists and other civilians, both by the ruling dispensation – are the product of human endeavour. I’m not interested in banal expressions of the form “we’re all in this together” (we’re not) or “human intelligence and ingenuity can always be put to better use” (this is useless knowledge); instead, I wonder what the spectrum of human actions – which personal experience has indicated repeatedly to be continuous and ultimately ergodic – looks like that encompasses, at two extremes, actions of such beauty and of such ugliness. When does beauty turn to ugliness?

Or are these terms discernible only in absolutes – that is, that there is no lesser or higher beauty (or ugliness) but only one ultimate form, and that like the qubits of a quantum computer, between ultimate beauty and ultimate ugliness there are some indeterminate combinations of each attribute for which we have no name or understanding?

I use ‘beauty’ here to mean that which is deemed worthy of preservation and ‘ugliness’, of erasure. The sorites paradox is a paradox because of the vague predicates: ‘heap’, for example, has no quantitative definition. Similarly, I realise I’m setting up vague, as well as subjective, predicates when I set up beauty and preservation in the way that I have, so let me simplify the question: how do I, how do you, how do we reconcile the heap of sand that is the awesome deep-field shot of a distant galaxy cluster with the single grain of sand that is the contemporary political reality of India? Is a reconciliation even possible – that is, is there still a continuous path of thought, aspiration and action that could take a people steeped in hate and violence to a place of peaceability, tolerance and openness? Or have we fundamentally and irredeemably lost a part of ourselves that has turned us non-ergodic, that will keep us now and for ever from experiencing certain forms of beauty?

Language and the words that we use about ourselves will play a very important part here – the adjectives we save for ourselves versus those for the people or ideas that offend us, the terms in which we conceive of and describe our actions, everything from the order of words of our shortest poems to the jargon of our courts’ longest judgments. Our words help us to convince ourselves, and others, that there is beauty in something even if it isn’t readily apparent. A bhakt might find in the annals of OpIndia and The Organiser the same solace and inspiration, and therefore the virtue of preserving what he finds to be beautiful, that a rational progressivist might find in Salvage or Viewpoint. This is among other things because language is how we map meaning to experience – the first point of contact between the material realm and human judgment, an interaction that will forever colour every moral, ethical and judicial conclusion to come after.

This act of meaning-making is also visible in physics, where there are overlapping names for different parts of the electromagnetic spectrum because the names matter more for the frequencies’ effects on the human body. Similarly, in the book trade, genre definitions can be overlapping – The Three-Body Problem by Cixin Liu is both sci-fi and fantasy, for example – because they matter largely for marketing.

One way or another, I’m eager, but not yet desperate, for an answer that will keep the door open for some measure of reversibility – and not for the bhakts but for those engaged in pushing back against their ilk. (The bhakts can go to hell.) The cognitive dissonance otherwise – of a world that creates things and ideas worth preserving and of a world that creates things and ideas worth erasing – might just break my ability to be optimistic about the human condition.

Featured image: The JWST’s image of the SMACS 0723 galaxy cluster. Credit: NASA, ESA, CSA and STScI.


The Higgs boson and I

My first byline as a professional journalist (a.k.a. my first byline ever) was oddly for a tech story – about the advent of IPv6 internet addresses. I started writing it after 7 pm, had to wrap it up by 9 pm and it was published in the paper the next day (I was at The Hindu).

The first byline that I actually wanted to take credit for appeared around a month later, on July 4, 2012 – ten years ago – on the discovery of the Higgs boson at the Large Hadron Collider (LHC) in Europe. I published a live blog as Fabiola Gianotti, Joe Incandela and Rolf-Dieter Heuer, the spokespersons of the ATLAS and CMS detector collaborations and the director-general of CERN, respectively, announced and discussed the results. I also distinctly remember taking a pee break after telling readers “I have to leave my desk for a minute” and receiving mildly annoyed, but also amused, comments complaining of TMI.

After the results had been announced, the science editor, R. Prasad, told me that R. Ramachandran (a.k.a. Bajji) was filing the main copy and that I should work around that. So I wrote a ‘what next’ piece describing the work that remained for physicists to do, including open problems in particle physics that stayed open and the alternative theories, like supersymmetry, required to explain them. (Some jingoism surrounding the lack of acknowledgment for S.N. Bose – wholly justifiable, in my view – also forced me to write this.)

I also remember placing a bet with someone that the Nobel Prize for physics in 2012 wouldn’t be awarded for the discovery (because I knew, but the other person didn’t, that the nominations for that year’s prizes had closed by then).

To write about the feats and mysteries of particle physics is why I became a science journalist, so the Higgs boson’s discovery being announced a month after I started working was special – not least because it considerably eased the amount of effort I had to put into pitches and have them accepted (specifically, I didn’t have to spend too much time or effort spelling out why a story was important). It was also a great opportunity for me to learn how breaking news is reported, and it accelerated my induction into the newsroom and its ways.

But my interest in particle physics has since waned, especially from around 2017, as I began to focus in my role as science editor of The Wire (which I cofounded/joined in May 2015) on other areas of science as well. My heart is still with physics, and I have greatly enjoyed writing the occasional article about topological phases, neutrino astronomy, laser cooling and, recently, the AdS/CFT correspondence.

A couple of years ago, I realised during a spell of daydreaming that even though I have stuck with physics, my act of ‘dropping’ particle physics as a specialty had left me without an edge as a writer. Just physics was and is too broad – even if there are very few others in India writing on it in the press, giving me lots of room to display my skills (such as they are). I briefly considered and rejected quantum computing and BECCS technologies – the former because its stories were often bursting with hype, especially in my neck of the woods, and the latter because, while it seemed important, it didn’t sit well morally. I was indifferent towards them because they were centred on technologies whereas I wanted to write about pure, supposedly boring science.

In all, penning an article commemorating the tenth anniversary of the announcement of the Higgs boson’s discovery brought back pleasant memories of my early days at The Hindu but also reminded me of this choice that I still need to make, for my sake. I don’t know if there is a clear winner yet, although quantum physics more broadly and condensed-matter physics more specifically are appealing. This said, I’m also looking forward to returning to writing more about physics in general, paralleling the evolution of The Wire Science itself (some announcements coming soon).

I should also note that I started blogging in 2008, when I was still an undergraduate student of mechanical engineering, in order to clarify my own knowledge of and thoughts on particle physics.

So in all, today is a special day.


25 years of Maldacena’s bridge

Twenty-five years ago, in 1997, an Argentine physicist named Juan Martin Maldacena published what would become the most highly cited physics paper in history (more than 20,000 citations to date). In the paper, Maldacena described a ‘bridge’ between two theories that describe how our world works, but separately, without meeting each other. These are the field theories that describe the behaviour of energy fields (like the electromagnetic fields) and subatomic particles, and the theory of general relativity, which deals with gravity and the universe at the largest scales.

Field theories have many types and properties. One of them is a conformal field theory: a field theory that doesn’t change when it undergoes a conformal transformation – i.e. one which preserves angles but not lengths pertaining to the field. As such, conformal field theories are said to be “mathematically well-behaved”.
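Concretely, a conformal transformation rescales the metric by a position-dependent factor. This is the standard definition, sketched here rather than quoted from any particular source:

```latex
% A conformal transformation of the metric:
g_{\mu\nu}(x) \;\longrightarrow\; \Omega^{2}(x)\, g_{\mu\nu}(x)
```

Because both legs of any angle are stretched by the same local factor \(\Omega(x)\), angles are preserved while lengths are not – which is exactly the property named above.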

In relativity, space and time are unified into the spacetime continuum. This continuum can broadly exist in one of three possible spaces (roughly, universes of certain ‘shapes’): de Sitter space, Minkowski space and anti-de Sitter space. de Sitter space has positive curvature everywhere – like a sphere (but is empty of any matter). Minkowski space has zero curvature everywhere – i.e. a flat surface. Anti-de Sitter space has negative curvature everywhere – like a hyperbola.

A sphere, a hyperbolic surface and a flat surface. Credit: NASA

Because these shapes are related to the way our universe looks and works, cosmologists have their own way to understand these spaces. If the spacetime continuum exists in de Sitter space, the universe is said to have a positive cosmological constant. Similarly, Minkowski space implies a zero cosmological constant and anti-de Sitter space a negative cosmological constant. Studies by various space telescopes have found that our universe has a positive cosmological constant, meaning ‘our’ spacetime continuum occupies a de Sitter space (sort of, since our universe does have matter).

In 1997, Maldacena found that a description of quantum gravity in anti-de Sitter space in N dimensions is the same as a conformal field theory in N – 1 dimensions. This – called the AdS/CFT correspondence – was an unexpected but monumental discovery that connected two kinds of theories that had thus far refused to cooperate. (The Wire Science had a chance to interview Maldacena about his past and current work in 2018, in which he provided more insights on AdS/CFT as well.)

In his paper, Maldacena demonstrated his finding by using the example of string theory as a theory of quantum gravity in anti-de Sitter space – so the finding was also hailed as a major victory for string theory. String theory is a leading contender for a theory that can unify quantum mechanics and general relativity. However, we have found no experimental evidence of its many claims. This is why the AdS/CFT correspondence is also called the AdS/CFT conjecture.

Nonetheless, thanks to the correspondence, (mathematical) physicists have found that some problems that are hard on the ‘AdS’ side are much easier to crack on the ‘CFT’ side, and vice versa – all they had to do was cross Maldacena’s ‘bridge’! This was another sign that the AdS/CFT correspondence wasn’t just a mathematical trick but could be a legitimate description of reality.

So how could it be real?

The holographic principle

In 1997, Maldacena proved that a string theory in five dimensions was the same as a conformal field theory in four dimensions. However, gravity in our universe exists in four dimensions – not five. So the correspondence came close to providing a unified description of gravity and quantum mechanics, but not close enough. Nonetheless, it gave rise to the possibility that an entity that existed in some number of dimensions could be described by another entity that existed in one fewer dimensions.

In fact, the AdS/CFT correspondence didn’t just give rise to this possibility but proved it, at least mathematically; the awareness of the possibility had existed for many years before then, as the holographic principle. The Dutch physicist Gerardus ‘t Hooft first proposed it and the American physicist Leonard Susskind brought it firmly into the realm of string theory in the 1990s. One way to state the holographic principle, in the words of physicist Matthew Headrick, is thus:

“The universe around us, which we are used to thinking of as being three dimensional, is actually at a more fundamental level two-dimensional and that everything we see that’s going on around us in three dimensions is actually happening in a two-dimensional space.”

This “two-dimensional space” is the ‘surface’ of the universe, located at an infinite distance from us, where information is encoded that describes everything happening within the universe. It’s a mind-boggling idea. ‘Information’ here refers to physical information, such as, to use one of Headrick’s examples, “the positions and velocities of physical objects”. In beholding this information from the infinitely faraway surface, we apparently behold a three-dimensional reality.

It bears repeating that this is a mind-boggling idea. We have no proof so far that the holographic principle is a real description of our universe – we only know that it could describe our reality, thanks to the AdS/CFT correspondence. This said, physicists have used the holographic principle to study and understand black holes as well.

In 1915, Albert Einstein’s general theory of relativity provided a set of complicated equations to understand how mass, the spacetime continuum and the gravitational force are related. Within a few months, physicists Karl Schwarzschild and Johannes Droste, followed in subsequent years by Georges Lemaître, Subrahmanyan Chandrasekhar, Robert Oppenheimer and David Finkelstein, among others, began to realise that one of the equations’ exact (i.e. non-approximate) solutions indicated the existence of a point mass around which space was wrapped completely, preventing even light from escaping from inside this space to outside. This was the black hole.

Because black holes were exact solutions, physicists assumed that they didn’t have any entropy – i.e. that their insides didn’t have any disorder. If there had been such disorder, it should have appeared in Einstein’s equations; it didn’t, so QED. But in the early 1970s, the Israeli-American physicist Jacob Bekenstein noticed a problem: if a system with entropy, like a container of hot gas, is thrown into a black hole, and the black hole doesn’t have entropy, where does the entropy go? It has to go somewhere; otherwise, the black hole would violate the second law of thermodynamics – that the entropy of an isolated system, like our universe, can’t decrease.

Bekenstein postulated that black holes must also have entropy, and that the amount of entropy is proportional to the black hole’s surface area, i.e. the area of the event horizon. Bekenstein also worked out that there is a limit to the amount of entropy a given volume of space can contain, as well as that all black holes could be described by just three observable attributes: their mass, electric charge and angular momentum. So if a black hole’s entropy increases because it has swallowed some hot gas, this change ought to manifest as a change in one, some or all of these three attributes.
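Hawking later fixed the constant of proportionality in Bekenstein’s relation. The resulting Bekenstein–Hawking formula ties a black hole’s entropy to the area A of its event horizon:

```latex
S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4\,\ell_P^2},
\qquad \ell_P = \sqrt{\frac{G\hbar}{c^3}}
```

Notice that the entropy scales with the horizon’s area, not with the volume it encloses – an early hint of the holographic idea that a region’s information content lives on its boundary.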

Taken together: when some hot gas is tossed into a black hole, the gas would fall into the event horizon but the information about its entropy might appear to be encoded on the black hole’s surface, from the point of view of an observer located outside and away from the event horizon. Note here that the black hole, a sphere, is a three-dimensional object whereas its surface is a curved two-dimensional sheet. That is, all the information required to describe a 3D black hole could in fact be encoded on its 2D surface – which evokes the AdS/CFT correspondence!

However, the idea that the event horizon of a black hole preserves information about objects falling into the black hole gives rise to another problem. Quantum mechanics requires all physical information (like “the positions and velocities of physical objects”, in Headrick’s example) to be conserved: such information can’t ever be destroyed. There would be no reason to expect it to be destroyed if black holes lived forever – but they don’t.

Stephen Hawking found in the 1970s that black holes should slowly evaporate by emitting radiation, called Hawking radiation, and there is nothing in the theories of quantum mechanics to suggest that this radiation will be encoded with the information preserved on the event horizon. This, fundamentally, is the black hole information loss problem: either the black hole must shed the information in some way or quantum mechanics must be wrong about the preservation of physical information. Which one is it? This is a major unsolved problem in physics, and it’s just one part of the wider context that the AdS/CFT correspondence inhabits.
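Hawking’s calculation assigns every black hole a temperature that is inversely proportional to its mass:

```latex
T_H = \frac{\hbar c^3}{8 \pi G M k_B}
```

For a black hole of one solar mass this works out to roughly 60 nanokelvin – far colder than the cosmic microwave background – so evaporation is extraordinarily slow, but over vast timescales the black hole does disappear, taking the puzzle of the trapped information with it.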

For more insights into this discovery, do read The Wire Science’s interview of Maldacena.

I’m grateful to Nirmalya Kajuri for his feedback on this article.


Analysis Tech

The problem that ‘crypto’ actually solves

From ‘Cryptocurrency Titan Coinbase Providing “Geo Tracking Data” to ICE’, The Intercept, June 30, 2022:

Coinbase, the largest cryptocurrency exchange in the United States, is selling Immigrations and Customs Enforcement a suite of features used to track and identify cryptocurrency users, according to contract documents shared with The Intercept. … a new contract document obtained by Jack Poulson, director of the watchdog group Tech Inquiry, and shared with The Intercept, shows ICE now has access to a variety of forensic features provided through Coinbase Tracer, the company’s intelligence-gathering tool (formerly known as Coinbase Analytics).

Coinbase Tracer allows clients, in both government and the private sector, to trace transactions through the blockchain, a distributed ledger of transactions integral to cryptocurrency use. While blockchain ledgers are typically public, the enormous volume of data stored therein can make following the money from spender to recipient beyond difficult, if not impossible, without the aid of software tools. Coinbase markets Tracer for use in both corporate compliance and law enforcement investigations, touting its ability to “investigate illicit activities including money laundering and terrorist financing” and “connect [cryptocurrency] addresses to real world entities.”

Every “cryptocurrency is broken” story these days has a predictable theme: the real world caught up because the real world never went away. The fundamental impetus for cryptocurrencies is the belief of a bunch of people that they can’t trust their money with governments and banks – imagined as authoritarian entities that have centralised decision-making power over private property, including money – and who thus invented a technological alternative that would execute the same solutions the governments and banks did, but sans centralisation, sans trust.

Even more fundamentally, cryptocurrencies embody neither the pursuit to ensure the people’s control of money nor to liberate art-trading from the clutch of racism. Instead, they symbolise the abdication of the responsibility to reform banking and finance – a far more arduous process that is also more constitutive and equitable. They symbolise the thin line between democracy and majoritarianism: they claimed to have placed the tools to validate financial transactions in the hands of the ‘people’ but failed to grasp that these tools would still be used in the same world that apparently created the need for cryptocurrencies. In this context, I highly recommend this essay on the history of the socio-financial forces that inevitably led to the popularity of cryptocurrencies.

These (pseudo)currencies have often been rightly described as a solution looking for a problem, because the fact remains that the ‘problem’ they do solve is public non-participation in governance. Their proponents just don’t like to admit it. Who would?

The identity of cryptocurrencies may once have been limited to technological marvels and the play-things of mathematicians and financial analysts, but their foundational premise bears a deeper, more dispiriting implication. As the value of one virtual currency after the next comes crashing down, after cryptocurrency-based trading and financing schemes come a cropper, and after their promises to be untraceable, decentralised and uncontrollable have been successively falsified, the whole idea ought to be revealed for what it is: a cynical social engineering exercise to pump even more money from the ‘bottom’ of the pyramid to the ‘top’. Yet the implication stands: cryptocurrencies will persist because they are vehicles of the libertarian ideologies of their proponents. To attempt to ‘stop’ them is to attempt to stop the ideologues themselves.


What the bitcoin price drop reveals about ‘crypto’

One of the definitive downsides of cryptocurrencies raised its head this week when the nosediving price of bitcoin – brought on by the Luna/Terra crash and subsequent cascading effects – rendered bitcoin mining less profitable. One bitcoin today costs $19,410, down sharply from its peak, and understanding why that drop matters requires understanding the ‘permissionless’ nature of cryptocurrency blockchains.

Verifying bitcoin transactions requires computing power. Computing power (think of processing units on your CPU) costs money. So those bitcoin users who provide this power need to be compensated for this expense or the bitcoin ecosystem will make no financial sense. This is why the bitcoin blockchain generates a token when users provide computing power to verify transactions. This process is called mining: the computing power verifies each transaction by solving a complex math problem whose end result adds the transaction to the blockchain, in return for which the blockchain spits out a token (or a fraction of it, averaged over time).
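The ‘complex math problem’ is essentially a brute-force search: miners repeatedly hash the block’s contents together with a changing nonce until the hash clears a difficulty target. Here is a toy sketch in Python – the real Bitcoin protocol hashes a structured block header twice with SHA-256 against a numeric target, but the principle is the same (the data string and the difficulty value here are illustrative, not Bitcoin’s actual parameters):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce so that SHA-256(block_data + nonce) begins
    with `difficulty` zero hex digits - a stand-in for Bitcoin's
    'hash below target' condition."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice pays bob 1 BTC", difficulty=4)
print(f"nonce={nonce}, hash={digest}")
```

Each extra zero of difficulty multiplies the expected work by 16, which is why mining consumes real electricity – and why miners expect real compensation.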

The idea is that these users should be able to use this token to pay for the computing power they’re providing. Obviously this means these tokens should have real value, like dollar value. And this is why bitcoin’s price dropping below a certain figure is bad news for those providing the computing power – i.e. the miners.

Bitcoin mining today is the preserve of a few mining conglomerates, instead of being distributed across thousands of individual miners, because these conglomerates sought to cash in on bitcoin’s dollar value. So if they quit the game or reduce their commitment to mining, the rate of production of new bitcoins will slow, but that’s a highly secondary outcome; the primary outcome will be less power being available to verify transactions, which will considerably slow the ability to use bitcoins to do cryptocurrency things.

Bitcoin’s dropping value also illustrates why so many cryptocurrency investment schemes – including those based on bitcoin – are practically Ponzi schemes. In the real world (beyond blockchains), the cost of computing power will only increase over time. This is because of inflation, because of the rising cost of the carbon footprint and because the blockchain produces tokens less often over time. So to keep the profits from mining from declining, the price of bitcoin has to increase, which implies the need for speculative valuation, which then paves the way for pump-and-dump and Ponzi schemes.
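That ‘tokens less often over time’ is built into Bitcoin’s issuance schedule: the block subsidy started at 50 BTC and halves every 210,000 blocks (roughly every four years). A minimal sketch:

```python
def block_reward(height: int) -> float:
    """Bitcoin block subsidy: 50 BTC at launch, halved every
    210,000 blocks, so new supply dwindles geometrically."""
    halvings = height // 210_000
    return 50.0 / (2 ** halvings)

# The subsidy at the start of each of the first four halving eras:
for era in range(4):
    height = era * 210_000
    print(height, block_reward(height))  # 50.0, 25.0, 12.5, 6.25
```

With the reward in coins falling on a fixed schedule, the dollar price per coin must rise for mining margins to hold – which is exactly the dependence on perpetual appreciation described above.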

A permissioned blockchain, as I have written before, does not provide rewards for contributing computing power because it doesn’t need to constantly incentivise its users to continue using the blockchain and verifying transactions. Specifically, a permissioned blockchain uses a central authority that verifies all transactions, whereas a permissionless blockchain delegates this responsibility to the users themselves. Think of millions of people exchanging money with each other through a bank – the bank is the authority and the system is a permissioned blockchain; in the case of cryptocurrencies, which are defined by permissionless blockchains, the people exchanging the money also verify each other’s transactions.

This is what leads to the complexity of cryptocurrencies and, inevitably, together with real-world cynicism, an abundance of opportunities to fail. Or, as Robert Reich put it, “all Ponzi schemes topple eventually”.

Note: The single quotation marks around ‘crypto’ in the headline are there because I think the term ‘crypto’ belongs to ‘cryptography’, not ‘cryptocurrency’.


How do you measure peacefulness?

The study was conceived by Australian technology entrepreneur Steve Killelea [in 2007], and is endorsed by individuals such as former UN Secretary-General Kofi Annan, the Dalai Lama, archbishop Desmond Tutu, former President of Finland and 2008 Nobel Peace Prize laureate Martti Ahtisaari, Nobel laureate Muhammad Yunus, economist Jeffrey Sachs, former president of Ireland Mary Robinson, former Deputy Secretary-General of the United Nations Jan Eliasson and former United States president Jimmy Carter. The updated index is released each year at events in London, Washington, DC, and at the United Nations Secretariat in New York.

This is a passage from the Wikipedia article on an entity called the ‘Global Peace Index’, which “measures the relative position of nations’ and regions’ peacefulness”. Indices are flawed but useful. Their most significant flaw – and it’s quite significant – is that they attempt to distill out of the complex interactions of a host of factors a single number that, compared to another of its kind, is supposed to enable value judgments of ‘better’ or ‘worse’.

For example, an academic freedom report published in 2020 gave India a score of 0.352 and Pakistan a score of 0.554. Does this mean all academic centres in India are less academically free than all of those in Pakistan? No. Does this mean Pakistan has 1.5x more academic freedom than India does? Not at all. Indices are useful in a very narrow context, but within that niche, they can be a force for good. There’s a reason the puffy-chested Indian government gets so worked up when the World Press Freedom Index and the Global Hunger Index are published.

In particular, indices are most useful when they’re compared to themselves. If India’s press-freedom index value dropped from X in 2020 to Y in 2022 (because the government is going around demolishing the homes of dissenters), it’s a snapshot of a real deterioration – a problem that needs fixing by reversing the trend (less by massaging the data, as our leaders have become wont to do, and more by improving freedom for journalists). But there’s an index on the block whose usefulness seems dangerous by all counts, even in the self-referential niche. This is the Global Peace Index. The 2022 edition was published earlier this week, and based on it a Business Insider article lamented that violence was costing India just too much money (Rs 50.36 lakh crore) and that this is why the country had to get a grip on it.

A crucial thing about understanding peace (in a given place and time), and which lies squarely in the domain of those things that indices don’t record, is how peace was achieved. For example, India’s freedom struggle might have pulled down the country’s score on the Global Peace Index but at the same time it was justified and led to a better kind of peace for the whole region. Peace is not just the absence of violence but the absence of conditions that give rise to violence, now and forever, in sustainable fashion. This is why it’s possible to justify some forms of violence in the pursuit of some constitutionally determined forms of peace.

Recently, a couple of my friends, who work in the corporate sector and whose shared philosophy is decidedly libertarian, argued with me over the justification of protest actions like rail roko and bandh. They contended that these activities constituted a violence against the many people whose livelihoods required the affected services. However, their philosophy stopped there, refusing to take the next logical step: it’s by disrupting the provision of these services that protestors get and hold the government’s attention. (Plus the Indian government has the Essential Services Maintenance Act 1968 to ensure not all of the affected services become unavailable.) Why, through his Dandi march, M.K. Gandhi sought to encourage people to not pay their taxes to the British government – a form of economic violence.

To be sure, violence isn’t just physical; it’s also economic, social, cultural, linguistic; it’s gendered, caste-based, class-based and faith-based. The peace index report acknowledges this when it highlights its ‘Positive Peace Index’ – a measure of “the attitudes, institutions and structures that create and sustain peaceful societies”; its value “measures the level of societal resilience of a nation or region”. According to the report’s website, the lower the score, the better.

But then, China and Saudi Arabia have lower scores than India. This is bizarre. KSA is a monarchy and China is an autocracy; in both countries, personal liberties are highly restricted and there are stringent, and in many cases Kafkaesque, punishments for falling afoul of state policy. The way of life imposed by these socio-political structures also constitutes violence. Yet the scores of these countries are comparable to those of Cuba, Mexico and Namibia. I would rank India better because I can (still, with some privileges) speak out against my government without fear of repercussions. Israel’s score, in fact, is lower than that of Palestine, while Russia has a marginally lower score than does Ukraine. It’s inexplicable.

The India-specific portions of the peace index’s report also illustrate the report’s problems at the sub-national level. To quote:

Some of the countries to record the biggest deteriorations [in violent demonstrations since 2008] were India, Colombia, Bangladesh and Brazil. … [India] ranks as the 135th most peaceful nation in the 2022 GPI. The country experienced an improvement of 1.4 per cent in overall peacefulness over the past year, driven by an improvement in the Ongoing Conflict domain. However, India experienced an uptick in the violent crime and perceptions of criminality indicators. … In 2020 and 2021, Indian farmers protested against newly introduced laws that removed some guarantees and subsidies on agricultural products.

First, the report has obtained the data for the ‘level of violent crime’ indicator from the Economist Intelligence Unit (EIU). The EIU’s scoring question for this indicator is: “Is violent crime likely to pose a significant problem for government and/or business over the next two years?” It’s hard not to wonder if, from the right-wing’s point of view, “violent crime” includes that perpetrated by “urban naxals” when they protested against the Citizenship (Amendment) Act 2019. Uttar Pradesh Chief Minister Yogi Adityanath thought so before he was forced to refund Rs 22 lakh he had collected from the protestors. The Delhi police thought so when its chargesheet for the 2020 riots was composed of people whose houses had been burnt down, whose bones broken and whose temples desecrated – and people who had called on the police to arrest BJP leader Kapil Mishra for instigating the riot. How do you figure “perception of criminality” here?

Second, the report discusses the protests against the three farm laws in a paragraph about “violent demonstrations”, in the same breath and without the qualification that the protests were peaceful and turned violent only when their participants had to defend themselves – including when the son of a national leader ran some of them over with his vehicle and when their attempt to enter Delhi was met with a water cannon and a lathi charge, among other incidents.

The farmers were demanding higher minimum support prices and lower input costs – hardly the sort of thing that requires violence to fulfil but did because Prime Minister Narendra Modi had no other way to walk away from his promises to Ambani/Adani. Who perpetrated the real violence here – the national leader who doomed India’s farmers so industrialist tycoons would continue to fund his campaigns of communalism or the farmers who blocked roads and highways demanding that he not? Was the ‘Bharat Bandh’ that disrupted activities in several crucial sectors on March 28, 2022, more violent than the “anti-people policies” of the same national leader that they were protesting?

A peace index that can’t settle these questions won’t see the difference between a spineless and a spineful people.


Tech solutions to household labour are also problems

Just to be clear, the term ‘family’ in this post refers to a cis-het nuclear family unit.

Tanvi Deshpande writing for Indiaspend, June 12, 2022:

The Union government’s ambitious Jal Jeevan Mission (JJM) aims to provide tap water to every household in rural India by 2024. Until now, 50% of households have a tap connection, an improvement from August 2019, when the scheme started and 17% of households had a tap connection. The mission’s dashboard shows that in Take Deogao Gram Panchayat that represents Bardechi Wadi, only 32% of the households have tap connections. Of these, not a single one has been provided to Pardhi’s hamlet.

This meant, for around five months every summer, women and children would rappel down a 60-foot well and spend hours waiting for water to seep into the bottom. In India, filling water for use at home is largely a woman’s job. Globally, women and girls spend 200 million hours every day collecting water, and in Asia, one round trip to collect water takes 21 minutes, on average, in rural areas.

The water pipeline has freed up time for Bardechi Wadi’s women and children but patriarchal norms, lack of a high school in the village and of other opportunities for development means that these free hours have just turned into more time for household chores, our reporting found.

Now these women don’t face the risk of death while fetching water but, as Deshpande has written, the time and trouble that the water pipeline has saved them will now be occupied by new chores and other forms of labour. There may have been a time when the latter might have seemed like the lesser of those two evils, but it is long gone. Today, in the climate crisis era – which often manifests as killer heatwaves in arid regions that are already short on water – the problem is access to leisure, to cooling and to financial safeguards. When women are expected to do more chores because they have the time, they lose access to leisure, which is important at least to cool off, but more importantly because it is a right in itself (Universal Declaration of Human Rights, article 24).

This story is reminiscent of the effects of the introduction of home appliances into the commercial market. I read a book about a decade ago that documented, among other things, how the average amount of time women (in the US) spent doing household chores hadn’t changed much between the 1920s and the 2000s, even though this period coincided wholly with the second industrial revolution. This was because – as in the case of the pipeline of Bardechi Wadi – the purchase and use of these devices freed up women’s time for even more chores. We need the appliances as much as we need the pipeline; it’s just that men should also do household chores. However, the appliances also presented, and present, more problems than those that pertain to society’s attitudes towards how women should spend their time.

1. Higher expectations – With the availability of household appliances (like the iron box, refrigerator, washing machine, dish washer, oven, etc.), the standards for various chores shot up as did what we considered to be comfortable living – but what we expected of women didn’t change. So suddenly the women of the house were also responsible for ensuring that the men’s shirts and pants were all the more crinkle-less, that food was served fresh and hot all the time, etc. as well as to enliven family life by inventing/recreating food recipes, serving and cleaning up, etc.

2. Work + chores – The introduction of more, and more diverse, appliances into the market, aspirations and class mobility together paralleled an increase in women’s labour-force participation through the 20th century. But before these women left for their jobs and after they got home, they still had to do household chores as well – including cooking and packing lunch for themselves and for their husbands and/or children, doing the laundry, shopping for groceries, etc.

3. Think about the family – The advent of tech appliances also foisted on women two closely related responsibilities: to ensure the devices worked as intended and to ensure they fit with the family-unit’s ideals and aspirations. As Manisha Aggarwal-Schifellite wrote in 2016: “The automatic processes of programming the coffeemaker, unlocking an iPad with a fingerprint, or even turning on the light when you get home are the result of years of marketing that create a household problem (your home is too dark, your family too far-flung, your food insufficiently inventive), solves it with a new product, and leaves women to clean up the mess when the technology fails to deliver on its promises”.

In effect, through the 20th century, industrialisation happened in two separate ways within the household and without. To use writer Ellen Goodman’s evocative words from a 1983 article: “At the beginning of American history …, most chores of daily life were shared by men and women. To make a meal, men chopped the wood, women cooked the stew. One by one, men’s tasks were industrialized outside the home, while women’s stayed inside. Men stopped chopping wood, but women kept cooking.”

The diversity of responsibilities imposed by household appliances exacts its own cost. A necessary condition of men’s help around the house is that they – we – must also constantly think about which task to perform and when, instead of expecting to be told what to do every time. This is because, by expecting periodic reminders, we are still forcing women to retain the cognitive burden associated with each chore. If you think you’re still helping by sharing everything except the cognitive burden, you’re wrong. Shifting between tasks affects one’s focus, performance and accuracy, and increases forgetfulness. Psychologists call this the switch cost.

It is less clear to me than it may be to others how, and in what different ways, the new water pipeline through Bardechi Wadi will change the lives of the women there. But without the men of the village changing how they think about their women and their ‘responsibilities to the house’, we can’t expect anything meaningful. At the same time, the effects of the climate crisis will keep inflating the price these women pay in terms of their psychological, physical and sexual health and agency.


A giant leap closer to the continuous atom laser

One of the most exotic phases of matter is called the Bose-Einstein condensate. As its name indicates, this type of matter is one whose constituents are bosons – particles whose behaviour is dictated by the rules of Bose-Einstein statistics. (The fundamental bosons are the universe’s force-carrying particles.) The other kind are matter particles, or fermions, whose behaviour is described by the rules of Fermi-Dirac statistics. Force particles and matter particles together make up the universe as we know it.
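The two families of statistics differ by a single sign. The average number of particles occupying a state of energy E at temperature T is

```latex
\langle n(E) \rangle = \frac{1}{e^{(E-\mu)/k_B T} \mp 1}
```

with the minus sign for bosons (Bose-Einstein) and the plus sign for fermions (Fermi-Dirac); μ is the chemical potential. For fermions the occupancy can never exceed 1 – Pauli’s exclusion principle in statistical form – whereas for bosons the occupancy of the lowest-energy state can grow without bound as the temperature drops, which is what makes condensation possible.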

To be a boson, a particle – which can be anything from an elementary particle like the photon to an entire atom – needs to have an integer spin quantum number; fermions carry half-integer spin. (All of a particle’s properties can be described by the values of four quantum numbers.) An important difference between fermions and bosons is that Pauli’s exclusion principle doesn’t apply to bosons. The principle states that in a given quantum system, no two particles can have the same set of four quantum numbers at the same time. When two particles have the same four quantum numbers, they are said to occupy the same state. (‘States’ are not like places in a volume; instead, think of them more like a set of properties.) Pauli’s exclusion principle forbids fermions from doing this – but not bosons. So in a given quantum system, all the bosons can occupy the same quantum state if they are forced to.

For example, this typically happens when the system is cooled to nearly absolute zero – the lowest temperature possible. (The bosons also need to be confined in a ‘trap’ so that they don’t keep moving around or combine with each other to form other particles.) Removing more and more energy from the system is equivalent to removing more and more energy from the system’s constituent particles. So as fermions and bosons possess less and less energy, they occupy lower and lower quantum states. But once all the lowest fermionic states are occupied, fermions start occupying the next lowest states, and so on, because of the exclusion principle. Bosons, on the other hand, are all able to occupy the same lowest quantum state. When this happens, they are said to have formed a Bose-Einstein condensate.

In this phase, all the bosons in the system move around like a fluid – like the molecules of flowing water. A famous example of this is superconductivity (at least of the conventional variety). When certain materials are cooled to near absolute zero, their electrons – which are fermions – overcome their mutual repulsion and pair up with each other to form composite pairs called Cooper pairs. Unlike individual electrons, Cooper pairs are bosons. They go on to form a Bose-Einstein condensate in which the Cooper pairs ‘flow’ through the material. In the material’s non-superconducting state, the electrons would have been scattered by objects in their path – like atomic nuclei or vibrations in the lattice. This scattering would have manifested as electrical resistance. But because Cooper pairs have all occupied the same quantum state, they are much harder to scatter. They flow through the material as if they don’t experience any resistance. This flow is what we know as superconductivity.

Bose-Einstein condensates are a big deal in physics because they are a macroscopic effect of microscopic causes. We can’t usually see or otherwise directly sense the effects of most quantum-physical phenomena because they happen on very small scales, and we need the help of sophisticated instruments like electron microscopes and particle accelerators. But when we cool a superconducting material to below its threshold temperature, we can readily sense the presence of a superconductor by passing an electric current through it (or using the Meissner effect). Macroscopic effects are also easier to manipulate and observe, so physicists have used Bose-Einstein condensates as a tool to probe many other quantum phenomena.

While Albert Einstein predicted the existence of Bose-Einstein condensates – based on work by Satyendra Nath Bose – in 1924, physicists had the requisite technologies and understanding of quantum mechanics to be able to create them in the lab only in the 1990s. These condensates were, and mostly still are, quite fragile and can be created only in carefully controlled conditions. But physicists have also been trying to figure out how to maintain a Bose-Einstein condensate for long periods of time, because durable condensates are expected to provide even more research insights as well as hold potential applications in particle physics, astrophysics, metrology, holography and quantum computing.

An important reason for this is wave-particle duality, which you might recall from high-school physics. Louis de Broglie postulated in 1924 that every quantum entity could be described both as a particle and as a wave. The Davisson-Germer experiment of 1923-1927 subsequently found that electrons – which were until then considered to be particles – behaved like waves in a diffraction experiment. Interference and diffraction are exhibited by waves, so the experiment proved that electrons could be understood as waves as well. Similarly, a Bose-Einstein condensate can be understood both in terms of particle physics and in terms of wave physics. Just like in the Davisson-Germer experiment, when physicists set up an experiment to look for an interference pattern from a Bose-Einstein condensate, they succeeded. They also found that the interference pattern became stronger the more bosons they added to the condensate.
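de Broglie’s relation ties a particle’s wavelength to its momentum, and for a gas of atoms of mass m at temperature T the relevant scale is the thermal de Broglie wavelength:

```latex
\lambda = \frac{h}{p},
\qquad
\lambda_T = \frac{h}{\sqrt{2\pi m k_B T}}
```

As the temperature falls, λ_T grows; when it becomes comparable to the average spacing between the atoms, their matter waves begin to overlap – the regime in which a Bose-Einstein condensate, and its wave-like interference, becomes possible.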

Now, all the bosons in a condensate have a coherent phase. The phase of a wave measures the extent to which the wave has evolved in a fixed amount of time. When two waves have coherent phase, both of them will have progressed by the same amount in the same span of time. Phase coherence is one of the most important wave-like properties of a Bose-Einstein condensate because of the possibility of a device called an atom laser.

‘Laser’ is an acronym for ‘light amplification by stimulated emission of radiation’. In stimulated emission, a photon passing an excited atom prompts the atom to release a second photon with the same frequency, phase and direction; repeated across many atoms in a gain medium, this cascade builds up an intense, coherent beam.

The light emitted by an optical laser is coherent: it has a constant frequency and comes out in a narrow beam if the coherence is spatial or can be produced in extremely short pulses if the coherence is temporal. An atom laser is a laser composed of propagating atoms instead of photons. As Wolfgang Ketterle, who created one of the first Bose-Einstein condensates and later shared a Nobel Prize for it, put it, “The atom laser emits coherent matter waves whereas the optical laser emits coherent electromagnetic waves.” Because the bosons of a Bose-Einstein condensate are already phase-coherent, condensates make excellent sources for an atom laser.

The trick, however, lies in achieving a Bose-Einstein condensate of the desired (bosonic) atoms and then extracting a few atoms into the laser while replenishing the condensate with more atoms – all without letting the condensate break down or the phase coherence be lost. Physicists created the first such atom laser in 1996, but it emitted pulses rather than a continuous beam and was not very bright. Researchers have since built better atom lasers based on Bose-Einstein condensates, although these remain far from usable in their putative applications. An important reason is that physicists are yet to build a condensate-based atom laser that can operate continuously – that is, one in which the condensate is replenished as fast as atoms lase out of it, so that the laser keeps running for a long time.

On June 8, researchers from the University of Amsterdam reported that they had been able to create a long-lived, sort of self-sustaining Bose-Einstein condensate. This brings us a giant step closer to a continuously operating atom laser. Their setup consisted of multiple stages, all inside a vacuum chamber.

In the first stage, strontium atoms (which are bosons) started from an ‘oven’ maintained at 850 K and were progressively laser-cooled while they made their way into a reservoir. (Here is a primer on how laser-cooling works.) The reservoir had a dimple in the middle. In the second stage, the atoms were guided by lasers and gravity to descend into this dimple, where they had a temperature of approximately 1 µK, or one-millionth of a kelvin. As the dimple became more and more crowded, it was important for the atoms here to not heat up, which could have happened if some light had ‘leaked’ into the vacuum chamber.
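To get a sense of how dramatic that cooling is, a rough sketch (constants rounded) compares the root-mean-square thermal speed v = √(3k_BT/m) of a strontium atom at the oven temperature and at the dimple temperature:

```python
import math

kB = 1.381e-23            # Boltzmann constant, J/K
m_sr = 87.62 * 1.661e-27  # mass of a strontium atom, kg

def rms_speed(T):
    """Root-mean-square thermal speed sqrt(3*kB*T/m) of a strontium atom, in m/s."""
    return math.sqrt(3 * kB * T / m_sr)

print(rms_speed(850))   # out of the oven: roughly 500 m/s
print(rms_speed(1e-6))  # in the dimple: under 2 cm/s
```

Laser cooling thus slows the atoms from roughly the speed of a rifle bullet to a crawl slower than an ant – a drop of more than four orders of magnitude in speed, and nearly nine in temperature.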

To prevent this, in the third stage, the physicists used a carefully tuned laser, shone only through the dimple, that had the effect of rendering the strontium atoms mostly ‘transparent’ to light. According to the research team’s paper, without the ‘transparency beam’, the atoms in the dimple had a lifetime of less than 40 ms, whereas with the beam, it was more than 1.5 s – a 37x difference. At some point, when a sufficient number of atoms had accumulated in the dimple, a Bose-Einstein condensate formed. In the fourth stage, an effect called Bose stimulation kicked in. Simply put, as more bosons (strontium atoms, in this case) transitioned into the condensate, the rate at which additional bosons joined also increased. Bose stimulation thus played the role that the gain medium plays in an optical laser. The condensate grew until the rate at which atoms joined it matched the rate at which atoms were lost from the dimple, at which point it reached an equilibrium.
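The balance between Bose-stimulated gain and losses can be caricatured with a toy rate equation – emphatically not the team's actual model, and the rates below are invented for illustration. The gain is proportional to N + 1 (stimulation speeds up as the condensate grows), the loss is proportional to N, and the population settles where the two match:

```python
def simulate_condensate(pump_rate, loss_rate, dt=1e-4, steps=1_000_000):
    """Toy rate equation dN/dt = pump_rate*(N + 1) - loss_rate*N, integrated
    with a simple Euler step. Bose stimulation makes the gain grow with N,
    so the population rises until losses catch up and it plateaus."""
    n = 0.0
    for _ in range(steps):
        gain = pump_rate * (n + 1)  # stimulated transitions: faster as N grows
        loss = loss_rate * n        # atoms leaking out of the dimple
        n += (gain - loss) * dt
    return n

# Made-up rates; the steady state is pump/(loss - pump) = 9.9/0.1 = 99 atoms
print(round(simulate_condensate(pump_rate=9.9, loss_rate=10.0)))  # ~99
```

The analytic steady state of this toy model, N* = R/(Γ − R) for pump rate R and loss rate Γ, exists only when Γ > R – echoing the paper's observation that the condensate stops growing once gain and loss balance.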

And voila! With a steady-state Bose-Einstein condensate, the continuous atom laser was almost ready. The physicists have acknowledged that their setup can be improved in many ways, including by making the laser-cooling effects more uniform, increasing the lifetime of strontium atoms inside the dimple, and reducing losses due to heating and other effects. At the same time, they wrote that “at all times after steady state is reached”, they found a Bose-Einstein condensate existing in their setup.