Disentangling entanglement
There has been considerable speculation about whether the winners of this year’s Nobel Prize for physics, due to be announced at 2.30 pm IST on October 8, will include Alain Aspect and Anton Zeilinger. Both have made significant experimental contributions to quantum information theory and to our understanding of the fundamental nature of quantum mechanics, including entanglement.
Their work, at least the potentially prize-winning part of it, is centred on a class of experiments called Bell tests. If you perform a Bell test, you’re essentially checking the extent to which the rules of quantum mechanics are compatible with the rules of classical physics.
Whether or not Aspect, Zeilinger and/or others win a Nobel Prize this year, what they did achieve is worth putting in words. Of course, many other writers, authors and scientists have already performed this activity; I’d like to redo it, if only because writing helps commit things to memory. Besides, the various performers of Bell tests are likely to win some prominent prize, given how modern technologies like quantum cryptography are inflating the importance of their work, and when they do I’ll have ready reference material.
(There is yet another reason Aspect and Zeilinger could win a Nobel Prize. As with the medicine prizes, many of whose laureates previously won a Lasker Award, many of the physics laureates have previously won the Wolf Prize. And Aspect and Zeilinger jointly won the Wolf Prize for physics in 2010 along with John Clauser.)
The following elucidation is divided into two parts: principles and tests. My principal sources are Wikipedia, some physics magazines, Quantum Physics for Poets by Leon Lederman and Christopher Hill (2011), and a textbook of quantum mechanics by John L. Powell and Bernd Crasemann (1998).
§
Principles
From the late 1920s, Albert Einstein began to publicly express his discomfort with the emerging theory of quantum mechanics. He claimed that a quantum mechanical description of reality allowed “spooky” things that the rules of classical mechanics, including his theories of relativity, forbade. He further contended that classical mechanics and quantum mechanics couldn’t both be true at the same time and that there had to be a deeper theory of reality with its own, thus-far hidden variables.
Remember the Schrödinger’s cat thought experiment: place a cat in a box with a bowl of poison and close the lid; until you open the box to make an observation, the cat may be considered to be both alive and dead. Erwin Schrödinger came up with this example to ridicule the implications of Niels Bohr’s and Werner Heisenberg’s idea that the quantum state of a subatomic particle, like an electron, was described by a mathematical object called the wave function.
The wave function has several distinctive properties. One of these is superposition: the ability of an object to exist in multiple states at once. Another is decoherence (though this is less a property than a phenomenon common to many quantum systems): when you observe the object, it probabilistically collapses into one fixed state.
Imagine having a box full of billiard balls, each of which is both blue and green at the same time. But the moment you open the box to look, each ball decides to become either blue or green. This (metaphor) is on the face of it a kooky description of reality. Einstein definitely wasn’t happy with it; he believed that quantum mechanics was just a theory of what we thought we knew and that there was a deeper theory of reality that didn’t offer such absurd explanations.
In 1935, Einstein, Boris Podolsky and Nathan Rosen advanced a thought experiment based on these ideas that seemed to yield ridiculous results, in a deliberate effort to provoke their ‘opponents’ to reconsider their ideas. Say there’s a heavy particle with zero spin – a property of elementary particles – inside a box in Bangalore. At some point, it decays into two smaller particles. One of these ought to have a spin of 1/2 and the other a spin of -1/2, to abide by the conservation of spin. You send one of these particles to a friend in Chennai and the other to a friend in Mumbai. Until these people observe their respective particles, the latter are to be considered to be in a mixed state – a superposition. In the final step, your friend in Chennai observes her particle and measures a spin of -1/2. This immediately implies that the particle sent to Mumbai should have a spin of 1/2.
If you’d performed this experiment with two billiard balls instead, one blue and one green, the person in Bangalore would’ve known which ball went to which friend. But in the Einstein-Podolsky-Rosen (EPR) thought experiment, the person in Bangalore couldn’t have known which particle was sent to which city, only that each particle existed in a superposition of two states, spin 1/2 and spin -1/2. This situation was unacceptable to Einstein because it was inimical to certain assumptions on which the theories of relativity were founded.
The moment the friend in Chennai observed her particle to have spin -1/2, it would follow – without anyone in Mumbai making a measurement – that the other particle had spin 1/2. If it didn’t, the conservation of spin would be violated. If it did, then the wave function of the Mumbai particle would have collapsed to a spin 1/2 state the moment the wave function of the Chennai particle collapsed to a spin -1/2 state, indicating faster-than-light communication between the particles. Either way, quantum mechanics could not produce a sensible outcome.
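The anticorrelation at the heart of the thought experiment can be sketched in a few lines of Python. Note that this toy model captures only the conservation law – the fact that the two outcomes always sum to zero – and not the stronger quantum correlations that Bell’s theorem would later probe:

```python
import random

# Toy sketch of the EPR setup: the parent particle has zero spin,
# so the two decay products must always have opposite spins.
def decay():
    chennai = random.choice([+0.5, -0.5])  # outcome observed in Chennai
    mumbai = -chennai                      # fixed by conservation of spin
    return chennai, mumbai

pairs = [decay() for _ in range(1000)]
# Whatever Chennai measures, Mumbai's result is the exact opposite.
assert all(c + m == 0.0 for c, m in pairs)
```

The puzzle, of course, is that in the billiard-ball version each ball’s colour was fixed all along, whereas in the quantum version neither spin is fixed until one is measured.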
Two particles whose wave functions are linked the way they were in the EPR paradox are said to be entangled. Einstein memorably described entanglement as “spooky action at a distance”. He used the EPR paradox to suggest quantum mechanics couldn’t possibly be legit, certainly not without messing with the rules that made classical mechanics legit.
So the question of whether quantum mechanics was a fundamental description of reality or whether there were any hidden variables representing a deeper theory stood for nearly thirty years.
Then, in 1964, an Irish physicist at CERN named John Stewart Bell figured out a way to answer this question using what has since been called Bell’s theorem. He defined a set of inequalities – statements of the form “P is greater than Q” – that were definitely true for classical mechanics. If an experiment conducted with electrons, for example, also concluded that “P is greater than Q”, it would support the idea that quantum mechanics (vis-à-vis electrons) has ‘hidden’ parts that would explain things like entanglement more along the lines of classical mechanics.
But if an experiment couldn’t conclude that “P is greater than Q”, it would support the idea that there are no hidden variables, that quantum mechanics is a complete theory and, finally, that it implicitly supports spooky actions at a distance.
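The inequality most commonly put to the test is the form derived by Clauser, Horne, Shimony and Holt (CHSH), which is what “P is greater than Q” stands in for here. A short numerical sketch, assuming the standard quantum prediction that two spin-1/2 particles in a singlet state show a correlation of −cos(a − b) between detectors oriented at angles a and b:

```python
import math

def chsh(E, a, ap, b, bp):
    # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
    # Any local hidden-variable theory obeys |S| <= 2.
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Quantum prediction for a spin singlet measured along directions a and b.
def E_singlet(a, b):
    return -math.cos(a - b)

# Detector angles (in radians) that maximise the quantum violation.
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4

S = chsh(E_singlet, a, ap, b, bp)
print(abs(S))  # ≈ 2.828, i.e. 2·√2 > 2: the classical bound is violated
```

The classical bound of 2 plays the role of “Q”; quantum mechanics predicts a value as large as 2√2 ≈ 2.83, and Bell tests check which prediction nature agrees with.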
The theorem itself took the form of a statement. To quote myself from a 2013 post (emphasis added):
… for quantum mechanics to be a complete theory – applicable everywhere and always – either locality or realism must be untrue. Locality is the idea that instantaneous or [faster-than-light] communication is impossible. Realism is the idea that even if an object cannot be detected at some times, its existence cannot be disputed [like electrons or protons].
Zeilinger and Aspect, among others, are recognised for having performed these experiments, called Bell tests.
Technological advancements through the late 20th and early 21st centuries have produced more and more nuanced editions of different kinds of Bell tests. However, one thing has been clear from the first tests, in the early 1970s, to the latest: they have all consistently violated Bell’s inequalities, indicating that quantum mechanics does not have hidden variables and that our reality does allow bizarre things like superposition and entanglement to happen.
To quote from Quantum Physics for Poets (p. 214-215):
Bell’s theorem addresses the EPR paradox by establishing that measurements on object a actually do have some kind of instant effect on the measurement at b, even though the two are very far apart. It distinguishes this shocking interpretation from a more commonplace one in which only our knowledge of the state of b changes. This has a direct bearing on the meaning of the wave function and, from the consequences of Bell’s theorem, experimentally establishes that the wave function completely defines the system in that a ‘collapse’ is a real physical happening.
Tests
Though Bell defined his inequalities in such a way that they would lend themselves to study in a single test, experimenters often stumbled upon loopholes in the results – consequences of the experiment’s design not being robust enough to evade quantum mechanics’ propensity to confound observers. Think of a loophole as a caveat; an experimenter runs a test and comes to you and says, “P is greater than Q but…”, followed by an excuse that makes the result less reliable. For a long time, physicists couldn’t figure out how to get rid of all these excuses and just be able to say – or not say – “P is greater than Q”.
If millions of photons are entangled in an experiment, the detectors used to observe them may not be good enough to detect all of them, or the photons may not all survive the journey to the detectors. This fair-sampling loophole could give rise to doubts about whether a photon collapsed into a particular state because of entanglement or simply by coincidence.
To prevent this, physicists could bring the detectors closer together but this would create the communication loophole. If two entangled photons are separated by 100 km and the second observation is made more than 0.0003 seconds after the first, it’s still possible that optical information could’ve been exchanged between the two particles. To sidestep this possibility, the two observations have to be separated by a distance greater than what light could travel in the time it takes to make the measurements. (Alain Aspect and his team also pointed their two detectors in random directions in one of their tests.)
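The 0.0003-second figure is just the time light takes to cross 100 km, which a quick calculation confirms:

```python
c = 299_792_458      # speed of light in m/s
distance = 100_000   # 100 km in metres

# Measurements separated in time by more than this window could, in
# principle, have been coordinated by a light-speed signal between
# the two particles.
window = distance / c
print(round(window, 5))  # 0.00033 s, i.e. roughly 0.0003 seconds
```

To close the communication loophole, the two measurements must fall within a window shorter than this, so that no light-speed signal could connect them.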
Third, physicists can tell if two photons received in separate locations were in fact entangled with each other, and not other photons, based on the precise time at which they’re detected. So unless physicists precisely calibrate the detection window for each pair, hidden variables could have time to interfere and induce effects the test isn’t designed to check for, creating a coincidence loophole.
If physicists perform a test such that detectors repeatedly measure the particles in, say, two labs in Chennai and Mumbai, it’s not impossible for statistical dependencies to arise between successive measurements. To work around this memory loophole, the experiment simply has to use different measurement settings for each pair.
Apart from these, experimenters also have to minimise any potential error within the instruments involved in the test. If they can’t eliminate the errors entirely, they will then have to modify the experimental design to compensate for any confounding influence due to the errors.
So the ideal Bell test – the one with no caveats – would be one where the experimenters are able to close all loopholes at the same time. In fact, physicists soon realised that the fair-sampling and communication loopholes were the more important ones.
In 1972, John Clauser and Stuart Freedman performed the first Bell test by entangling photons and measuring their polarisation at two separate detectors. Aspect led the first group that closed the communication loophole, in 1982; he subsequently conducted more tests that improved his first results. Anton Zeilinger and his team made advancements on the fair-sampling loophole.
One particularly important experimental result showed up in August 2015: Ronald Hanson and his team at the Delft University of Technology, in the Netherlands, had found a way to close the fair-sampling and communication loopholes at the same time. To quote Zeeya Merali’s report in Nature News at the time (lightly edited for brevity):
The researchers started with two unentangled electrons sitting in diamond crystals held in different labs on the Delft campus, 1.3 km apart. Each electron was individually entangled with a photon, and both of those photons were then zipped to a third location. There, the two photons were entangled with each other – and this caused both their partner electrons to become entangled, too. … the team managed to generate 245 entangled pairs of electrons over … nine days. The team’s measurements exceeded Bell’s bound, once again supporting the standard quantum view. Moreover, the experiment closed both loopholes at once: because the electrons were easy to monitor, the detection loophole was not an issue, and they were separated far enough apart to close the communication loophole, too.
By December 2015, Anton Zeilinger and co. were able to close the communication and fair-sampling loopholes in a single test with a 1-in-2-octillion chance of error, using a different experimental setup from Hanson’s. In fact, Zeilinger’s team actually closed three loopholes including the freedom-of-choice loophole. According to Merali, this is “the possibility that hidden variables could somehow manipulate the experimenters’ choices of what properties to measure, tricking them into thinking quantum theory is correct”.
But at the time Hanson et al. announced their result, Matthew Leifer, a physicist at the Perimeter Institute in Canada, told Nature News (in the same report) that because “we can never prove that [the converse of freedom of choice] is not the case, … it’s fair to say that most physicists don’t worry too much about this.”
We haven’t gone into much detail about Bell’s inequalities themselves but if our goal is to understand why Aspect and Zeilinger, and Clauser too, deserve to win a Nobel Prize, it’s because of the ingenious tests they devised to test Bell’s, and Einstein’s, ideas and the implications of what they’ve found in the process.
For example, Bell crafted his test of the EPR paradox in the form of a ‘no-go theorem’: if it satisfied certain conditions, a theory was designated non-local, like quantum mechanics; if it didn’t satisfy all those conditions, the theory would be classified as local, like Einstein’s special relativity. So Bell tests are effectively gatekeepers that can attest to whether or not a theory – or a system – is behaving in a quantum way, and each loophole is like an attempt to hack the attestation process.
In 1991, Artur Ekert, who would later be acknowledged as one of the inventors of quantum cryptography, realised this perspective could have applications in securing communications. Engineers could encode information in entangled particles, send them to remote locations, and allow detectors there to communicate with each other securely by observing these particles and decoding the information. The engineers can then perform Bell tests to determine if anyone might be eavesdropping on these communications using one or some of the loopholes.
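Ekert’s insight can be caricatured in code. The sketch below is an illustrative toy, not the actual E91 protocol: it assumes undisturbed pairs show the full quantum (CHSH) correlation, while an eavesdropper who intercepts each particle, measures it along a random direction and resends it dilutes the correlation by half – the standard result for that simple attack:

```python
import math

def chsh(E, a, ap, b, bp):
    # CHSH combination; correlations explainable by local (classical)
    # means, including an eavesdropper's interference, obey |S| <= 2.
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Undisturbed singlet pairs: full quantum correlation.
def E_entangled(a, b):
    return -math.cos(a - b)

# Intercept-and-resend eavesdropping along random directions destroys
# the entanglement and halves the correlation.
def E_tapped(a, b):
    return -0.5 * math.cos(a - b)

def channel_secure(E):
    a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return abs(chsh(E, a, ap, b, bp)) > 2  # violation => genuine entanglement

print(channel_secure(E_entangled))  # True:  |S| = 2·√2 ≈ 2.83
print(channel_secure(E_tapped))     # False: |S| = √2 ≈ 1.41
```

In other words, a violated Bell inequality certifies that nobody has measured the particles in transit – the same “gatekeeper” logic that settles the hidden-variables question also guards the key.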