
  • Is the Higgs boson doing its job?

    At the heart of particle physics lies the Standard Model, a theory that has stood for nearly half a century as the best description of the subatomic realm. It tells us what particles exist, how they interact, and why the universe is stable at the smallest scales. The Standard Model has correctly predicted the outcomes of several experiments testing the limits of particle physics. Even so, physicists know that it’s incomplete: it can’t explain dark matter, why matter dominates over antimatter, or why the force of gravity is so weak compared to the other forces. To settle these mysteries, physicists have been conducting very detailed tests of the Model, each of which has either tightened their confidence in a hypothetical explanation or revealed a new piece of the puzzle.

    A central character in this story is a subatomic particle called the W boson — the carrier of the weak nuclear force. Without it, the Sun wouldn’t shine: particle interactions involving the weak force are necessary for nuclear fusion to proceed. W bosons are also unusual among force carriers: unlike photons (the particles of light), they’re massive, about 80 times heavier than a proton. This difference — between a massless photon and a massive W boson — arises from a process called the Higgs mechanism. Physicists first proposed this mechanism in 1964 and confirmed it was real when they found the Higgs boson particle at the Large Hadron Collider (LHC) in 2012.

    But finding the Higgs particle was only the beginning. To prove that the Higgs mechanism really works the way the theory says, physicists need to check its predictions in detail. One of the sharpest tests involves how W bosons scatter off each other at high energies. The key to this test is the W boson’s polarisation states. Both photons and W bosons carry a property called quantum spin — its value is 1 for both — but because the photon is massless it has only two polarisation states, whereas the massive W boson has three. Think of the polarisation as an arrow the particle carries. If the arrow points sideways relative to the particle’s direction of travel, the W boson is said to be transversely polarised; if it points along the direction of travel, the W boson is longitudinally polarised. The longitudinal state exists only because the W boson has mass, and its behaviour is directly tied to the Higgs mechanism.

    Specifically, if the Higgs mechanism and the Higgs boson don’t exist, calculations of longitudinal W bosons scattering off each other quickly give rise to nonsensical mathematical results. The Higgs boson acts like a regulator, preventing the mathematics from ‘blowing up’. In fact, in the 1970s, the theoretical physicists Benjamin Lee, Chris Quigg, and Hugh Thacker showed that without the Higgs boson, the weak force would become uncontrollably powerful at high energies, leading to the breakdown of the theory. Their work was an important theoretical pillar that justified building the colossal LHC machine to search for the Higgs boson.
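
    The gist of that argument can be caricatured numerically. The sketch below, in Python, is deliberately schematic — it keeps only the leading high-energy behaviour and is nowhere near the full Standard Model calculation — but it shows how the scattering strength of longitudinal W bosons grows without bound as the collision energy rises unless the Higgs boson cancels that growth.

        # Schematic only: leading high-energy behaviour of longitudinal W-W scattering.
        # v is the electroweak scale (the Higgs field's vacuum value), m_H the Higgs mass.
        v = 246.0      # GeV
        m_H = 125.0    # GeV

        def strength_without_higgs(E):
            return E**2 / v**2      # grows without limit as the collision energy E rises

        def strength_with_higgs(E):
            return m_H**2 / v**2    # the Higgs cancels the growth, leaving a small constant

        for E in (200, 500, 1000, 3000):    # energies in GeV
            print(E, round(strength_without_higgs(E), 2), round(strength_with_higgs(E), 2))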

    The terms Higgs boson, Higgs field, and Higgs mechanism describe related but distinct ideas. The Higgs field is a kind of invisible medium thought to fill all of space. Particles like W bosons and Z bosons interact with this field as they move and through that interaction they acquire mass. This is the Higgs mechanism: the process by which particles that would otherwise be massless become heavy.

    The Higgs boson is different: it’s a particle that represents a vibration or a ripple in the Higgs field, just as a photon is a ripple in the electromagnetic field. Its discovery in 2012 confirmed that the field is real and not just something that appears in the mathematics of the theory. But discovery alone doesn’t prove the mechanism is doing everything the theory demands. To test that, physicists need to look at situations where the Higgs boson’s balancing role is crucial.

    The scattering of longitudinally polarised W bosons is a good example. Without the Higgs boson, the probabilities of these scatterings grow uncontrollably at higher energies, but with the Higgs boson in the picture, they stay within sensible bounds. Observing longitudinally polarised W bosons behaving as predicted is thus evidence for the particle as well as a check on the field and the mechanism behind it.

    Imagine a roller-coaster without brakes. As it goes faster and faster, there’s nothing to stop it from flying off the tracks. The Higgs mechanism is like the braking system that keeps the ride safe. Observing longitudinally polarised W bosons in the right proportions is equivalent to checking that the brakes actually work when the roller-coaster speeds up.

    Another path that physicists once considered and that didn’t involve a Higgs boson at all was called technicolor theory. Instead of a single kind of Higgs boson giving the W bosons their mass, technicolor proposed a brand-new force. Just as the strong nuclear force binds quarks into protons and neutrons, the hypothetical technicolor force would bind new “technifermion” particles into composite states. These bound states would mimic the Higgs boson’s job of giving particles mass, while producing their own new signals in high-energy collisions.

    The crucial test to check whether some given signals are due to the Higgs boson or due to technicolor lies in the behaviour of longitudinally polarised W bosons. In the Standard Model, their scattering is kept under control by the Higgs boson’s balancing act. In technicolor, by contrast, there is no Higgs boson to cancel the runaway growth. The probability of the scattering of longitudinally polarised W bosons would therefore rise sharply with more energy, often leaving clearly excessive signals in the data.

    Thus, observing longitudinally polarised W bosons at rates consistent with the predictions of the Standard Model, and not finding any additional signals, would also strengthen the case for the Higgs mechanism and weaken that for technicolor and other “Higgs-less” theories.

    At the Large Hadron Collider, the cleanest way to look for such W bosons is in a phenomenon called vector boson scattering (VBS). In VBS, two protons collide and the quarks inside them emit W bosons. These W bosons then scatter off each other before decaying into lighter particles. The leftover quarks form narrow sprays of particles, or ‘jets’, that fly far forward.

    If the two W bosons happen to have the same electric charge — i.e. both positive or both negative — the process is even more distinctive. This same-sign WW scattering is quite rare, and that’s an advantage: few other processes mimic it, so it’s easier to spot in the debris of particle collisions.

    Both ATLAS and CMS, the two giant detectors at the LHC, had previously observed same-sign WW scattering without breaking the measurement down by polarisation. In 2021, the CMS collaboration reported the first hint of longitudinal polarisation, but at a statistical significance of only 2.3 sigma, which isn’t good enough (particle physicists prefer at least 3 sigma to claim evidence). So after the LHC completed its second run in 2018, collecting data from around 10 quadrillion collisions between protons, the ATLAS collaboration set out to analyse it and deliver the evidence. The group’s study was published in Physical Review Letters on September 10.

    The challenge of finding longitudinally polarised W bosons is like finding a particular needle in a very large haystack where most of the needles look nearly identical. So ATLAS designed a special strategy.

    When one W boson decays, the result is one electron or muon and one neutrino. If the W boson is positively charged, for example, the decay could be to one anti-electron and one electron-neutrino or to one anti-muon and one muon-neutrino. (Anti-electrons and anti-muons are positively charged.) If the W boson is negatively charged, the products could be one electron and one electron-antineutrino or one muon and one muon-antineutrino. So first, ATLAS zeroed in on the fact that it was looking for two electrons, two muons, or one of each, both carrying the same electric charge. Neutrinos, however, are really hard to catch and study, so the ATLAS group looked for their absence rather than their presence. In all these particle interactions, the law of conservation of momentum holds — which means a neutrino’s presence can be inferred when the momenta of the electrons or muons add up to slightly less than that of the parent W boson; the missing amount would have been carried away by the neutrino, like money unaccounted for in a ledger.

    This analysis also required an event of interest to have at least two jets (reconstructed from streams of particles) with a combined energy above 500 GeV and separated widely in rapidity (which is a measure of their angle relative to the beam). This particular VBS pattern — two electrons/muons, two jets, and missing momentum — is the hallmark of same-sign WW scattering.
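
    Put together, the selection reads like a checklist. A toy sketch of it in Python follows — the event structure, helper names, and all thresholds except the 500 GeV figure quoted above are assumptions for illustration, not ATLAS’s actual analysis code.

        # Toy sketch, not ATLAS code: does an event look like same-sign WW scattering?
        def passes_same_sign_ww_selection(event):
            leptons = event["leptons"]         # reconstructed electrons/muons
            jets = event["jets"]               # reconstructed jets, most energetic first
            met = event["missing_momentum"]    # momentum imbalance attributed to neutrinos

            # exactly two electrons/muons carrying the same electric charge
            if len(leptons) != 2 or leptons[0]["charge"] != leptons[1]["charge"]:
                return False

            # sizeable missing momentum: the neutrinos' calling card
            if met < 30.0:                     # GeV; illustrative threshold
                return False

            # at least two jets, energetic and widely separated along the beam direction
            if len(jets) < 2:
                return False
            j1, j2 = jets[0], jets[1]
            if j1["energy"] + j2["energy"] < 500.0:            # GeV, as quoted above
                return False
            if abs(j1["rapidity"] - j2["rapidity"]) < 2.0:     # illustrative separation cut
                return False

            return True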

    Second, even with these strict requirements, impostors creep in. The biggest source of confusion is WZ production, a process in which another subatomic particle called the Z boson decays invisibly or one of its decay products goes unnoticed, making the event resemble WW scattering. Other sources include electrons having their charges mismeasured, jets masquerading as electrons/muons, and some quarks producing electrons/muons that slip into the sample. To control for all this noise, the ATLAS group focused on control regions: subsets of events dominated by a distinct kind of noise that the group could cleanly ‘subtract’ from the data to reveal same-sign WW scattering, thus also reducing the uncertainty in the final results.

    Third, and this is where things get nuanced: the differences between transversely and longitudinally polarised W bosons show up in distributions — how far apart the electrons/muons are in angle, how the jets are oriented, the energy of the system, and so on. Since no single variable could tell the whole story, the ATLAS group combined them using deep neural networks. These machine-learning models were fed up to 20 kinematic variables — including jet separations, particle angles, and missing-momentum patterns — and trained to distinguish between three groups (a schematic sketch of such a network follows the list below):

    (i) Two transversely polarised W bosons;

    (ii) One transversely polarised W boson and one longitudinally polarised W boson; and

    (iii) Two longitudinally polarised W bosons.
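
    ATLAS’s paper doesn’t spell out the network code, but the idea can be sketched with a generic deep-learning library. In the sketch below the architecture, layer sizes, and training data are placeholders, not the collaboration’s actual choices; the point is simply that around 20 kinematic variables go in and three class probabilities come out.

        # Illustrative only: a small feed-forward classifier for the three polarisation classes
        import numpy as np
        from tensorflow import keras

        n_vars = 20       # jet separations, particle angles, missing-momentum patterns, ...
        n_classes = 3     # (i) both transverse, (ii) mixed, (iii) both longitudinal

        model = keras.Sequential([
            keras.Input(shape=(n_vars,)),
            keras.layers.Dense(64, activation="relu"),
            keras.layers.Dense(64, activation="relu"),
            keras.layers.Dense(n_classes, activation="softmax"),   # class probabilities
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

        # Placeholder training data standing in for simulated collision events
        x = np.random.rand(1000, n_vars)                   # kinematic variables per event
        y = np.random.randint(0, n_classes, size=1000)     # true polarisation class per event
        model.fit(x, y, epochs=5, batch_size=64, verbose=0)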

    Fourth, the group combined the outputs of these neural networks and fit them with a maximum likelihood method. When physicists make measurements, they often don’t directly see what they’re measuring. Instead, they see data points that could have come from different possible scenarios. A likelihood is a number that tells them how probable the data is in a given scenario. If a model says events “should look like this”, they can ask: “Given my actual data, how likely is that?” And the maximum likelihood method helps them decide which parameters make the given data most likely to occur.

    For example, say you toss a coin 100 times and get 62 heads. You wonder: is the coin fair or biased? If it’s fair, the chance of exactly 62 heads is small. If the coin is slightly biased (heads with probability 0.62), the chance of 62 heads is higher. The maximum likelihood estimate is to pick the bias, or probability of heads, that makes your actual result most probable. So here the method would say, “The coin’s bias is 0.62” — because this choice maximises the likelihood of seeing 62 heads out of 100.
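
    That coin example can be checked in a few lines of Python: scan every candidate bias and keep the one that makes 62 heads in 100 tosses most probable.

        # Maximum likelihood estimate of a coin's bias from 62 heads in 100 tosses
        from math import comb

        def likelihood(p, heads=62, tosses=100):
            return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

        candidates = [i / 100 for i in range(1, 100)]
        best = max(candidates, key=likelihood)
        print(best)                 # 0.62 — the maximum likelihood estimate
        print(likelihood(0.50))     # ~0.0045: a fair coin makes the observed data unlikely
        print(likelihood(0.62))     # ~0.08: the best-fit bias makes it most likely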

    In their analysis, the ATLAS group used the maximum likelihood method to check whether the LHC data ‘preferred’ a contribution from longitudinal scattering, after subtracting what background noise and transverse-only scattering could explain.

    The results are a milestone in experimental particle physics. In the September 10 paper, ATLAS reported evidence for longitudinally polarised W bosons in same-sign WW scattering with a significance of 3.3 sigma — close to the 4 sigma expected based on the predictions of the Standard Model. This means the data behaved as theory predicted, with no unexpected excess or deficit.

    It’s also bad news for technicolor theory. By observing longitudinal W bosons at exactly the rates predicted by the Standard Model, and not finding any additional signals, the ATLAS data strengthens the case for the Higgs mechanism providing the check on the W bosons’ scattering probability, rather than the technicolor force.

    The measured cross-section for events with at least one longitudinally polarised W boson was 0.88 femtobarns, with an uncertainty of 0.3 femtobarns. These figures essentially mean that there were only a few hundred same-sign WW scattering events in the full dataset of around 10 quadrillion proton-proton collisions. The fact that ATLAS could pull this signal out of such a background-heavy environment is a testament to the power of modern machine learning working with advanced statistical methods.
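
    A quick back-of-the-envelope check of that statement: multiplying the cross-section by the amount of data collected gives the expected number of events. The figure of roughly 140 inverse femtobarns for the second run’s dataset is an assumption here, not a number from the article.

        # Rough expected event count: cross-section x integrated luminosity
        cross_section_fb = 0.88       # femtobarns, at least one longitudinal W boson
        luminosity_fb_inv = 140.0     # assumed integrated luminosity for the second run
        print(cross_section_fb * luminosity_fb_inv)   # ~123 events with a longitudinal W,
                                                      # out of ~10 quadrillion collisions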

    The group was also able to quantify the composition of the selected events. Among other contributions:

    1. About 58% of events were genuine WW scattering
    2. Roughly 16% were from WZ production
    3. Around 18% arose from irrelevant electrons/muons, charge misidentification or the decay of energetic photons

    One way to appreciate the importance of these findings is by analogy: imagine trying to hear a faint melody being played by a single violin in the middle of a roaring orchestra. The violin is the longitudinal signal; the orchestra is the flood of background noise. The neural networks are like sophisticated microphones and filters, tuned to pick out the violin’s specific tone. The fact that ATLAS could not only hear it but also measure its volume to match the score written by the Standard Model is remarkable.

    Perhaps in the same vein, these results are more than just another tick mark for the Standard Model. They’re a direct test of the Higgs mechanism in action. The discovery of the Higgs boson particle in 2012 was groundbreaking, but proving that the Higgs mechanism performs its theoretical role requires demonstrating that it regulates the scattering of W bosons. By finding evidence for longitudinally polarised W bosons at the expected rate, ATLAS has done just that.

    The results also set the stage for the future. The LHC is currently being upgraded to a form called the High-Luminosity LHC; it will begin operating later this decade and collect datasets about 10 times larger than the LHC did in its second run. With that much more data, physicists will be able to study differential distributions, i.e. how the rate of longitudinal scattering varies with energy, angle, or jet separation. These patterns are sensitive to hitherto unknown particles and forces, such as additional Higgs-like particles or modifications to the Higgs mechanism itself. That is, even small deviations from the Standard Model’s predictions could hint at new frontiers in particle physics.

    Indeed, history has often reminded physicists that such precision studies uncover surprises. Physicists didn’t discover neutrino oscillations by finding a new particle but by noticing that the number of neutrinos arriving from the Sun at detectors on Earth didn’t match expectations. Similarly, minuscule mismatches between theory and observation in the scattering of W bosons could someday reveal new physics — and if they do, the seeds will have been planted by studies like that of the ATLAS group.

    On the methodological front, the analysis also showcases how particle physics is evolving. ‘Classical’ analyses once banked on tracking single variables; now, deep learning plays a starring role by combining many variables into a single discriminant, allowing ATLAS to pull the faint signal of longitudinally polarised W bosons from the noise. This approach will only become more important as both datasets and physicists’ ambitions expand.

    Perhaps the broadest lesson in all this is that science often advances by the unglamorous task of verifying the details. The discovery of the Higgs boson answered one question but opened many others; among them, measuring how it affects the scattering of W bosons is one of the more direct ways to probe whether the Standard Model is complete or just the first chapter of a longer story. Either way, the pursuit exemplifies the spirit of checking, rechecking, testing, and probing until scientists truly understand how nature works at extreme precision.

    Featured image: The massive mural of the ATLAS detector at CERN painted by artist Josef Kristofoletti. The mural is located at the ATLAS Experiment site and shows on two perpendicular walls the detector with a collision event superimposed. The event on the large wall shows a simulation of an event that would be recorded in ATLAS if a Higgs boson was produced. The cavern of the ATLAS Experiment with the detector is 100 m directly below the mural. The height of the mural is about 12 m. The actual ATLAS detector is more than twice as big. Credit: Claudia Marcelloni, Michael Barnett/CERN.

  • A transistor for heat

    Quantum technologies and the prospect of advanced, next-generation electronic devices have been maturing at an increasingly rapid pace. Research groups and governments around the world alike are paying more attention to this domain.

    India for example mooted its National Quantum Mission in 2023 with a decade-long outlay of Rs 6,000 crore. One of the Mission’s goals, in the words of IISER Pune physics professor Umakant Rapol, is “to engineer and utilise the delicate quantum features of photons and subatomic particles to build advanced sensors” for applications in “healthcare, security, and environmental monitoring”.

    On the science front, as these technologies become better understood, scientists have been paying increasing attention to managing and controlling heat in them. These technologies often rely on quantum physical phenomena that appear only at extremely low temperatures and are so fragile that even a small amount of stray heat can destabilise them. In these settings, scientists have found that traditional methods of handling heat — mainly by controlling the vibrations of atoms in the devices’ materials — become ineffective.

    Instead, scientists have identified a promising alternative: energy transfer through photons, the particles of light. And in this paradigm, instead of simply moving heat from one place to another, scientists have been trying to control and amplify it, much like how transistors and amplifiers handle electrical signals in everyday electronics.

    Playing with fire

    Central to this effort is the concept of a thermal transistor. This device resembles an electrical transistor but works with heat instead of electrical current. Electrical transistors amplify or switch currents, allowing the complex logic and computation required to power modern computers. Creating similar thermal devices would represent a major advance, especially for technologies that require very precise temperature control. This is particularly true in the sub-kelvin temperature range where many quantum processors and sensors operate.

    Energy transport at such cryogenic temperatures differs significantly from normal conditions. Below roughly 1 kelvin, atomic vibrations no longer carry most of the heat. Instead, electromagnetic fluctuations — ripples of energy carried by photons — dominate the conduction of heat. Scientists channel these photons through specially designed, lossless wires made of superconducting materials. They keep these wires below their superconducting critical temperatures, allowing only photons to transfer energy between the reservoirs. This arrangement enables careful and precise control of heat flow.

    One crucial phenomenon that allows scientists to manipulate heat in this way is negative differential thermal conductance (NDTC). NDTC defies common intuition. Normally, decreasing the temperature difference between two bodies reduces the amount of heat they exchange. This is why a glass of water at 50º C in a room at 25º C will cool faster than a glass of water at 30º C. In NDTC, however, reducing the temperature difference between two connected reservoirs can actually increase the heat flow between them.

    NDTC arises from a detailed relationship between temperature and the properties of the material that makes up the reservoirs. When physicists harness NDTC, they can amplify heat signals in a manner similar to how negative electrical resistance powers electrical amplifiers.

    A ‘circuit’ for heat

    In a new study, researchers from Italy have designed and theoretically modelled a new kind of ‘thermal transistor’ that they have said can actively control and amplify how heat flows at extremely low temperatures for quantum technology applications. Their findings were published recently in the journal Physical Review Applied.

    To explore NDTC, the researchers studied reservoirs made of a disordered semiconductor material that exhibits a transport mechanism called variable range hopping (VRH). An example is neutron-transmutation-doped germanium. In VRH materials, the electrical resistance at low temperatures depends very strongly, sometimes exponentially, on temperature.

    This attribute makes it possible to tune their impedance — a property that controls the material’s resistance to energy flow — simply by adjusting their temperature. That is, how well two reservoirs made of VRH materials exchange heat can be controlled by tuning the impedance of the materials, which in turn can be controlled by tuning their temperature.

    In the new study, the researchers reported that impedance matching played a key role. When the reservoirs’ impedances matched perfectly (when their temperatures became equal), the efficiency with which they transferred photonic heat reached a peak. As the materials’ temperatures diverged, heat flow dropped. In fact, the researchers wrote that there was a temperature range, especially as the colder reservoir’s temperature rose to approach that of the warmer one, within which the heat flow increased even as the temperature difference shrank. This effect forms the core of NDTC.
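
    A minimal numerical sketch of this effect, assuming a simple variable-range-hopping resistance law with an illustrative characteristic temperature and the textbook single-channel formula for photonic heat flow between two resistors (the paper’s actual model is more detailed):

        # Schematic NDTC, not the paper's model: photonic heat flow between two VRH reservoirs
        import math

        k_B = 1.380649e-23        # J/K
        hbar = 1.054571817e-34    # J*s
        T0 = 10.0                 # K, characteristic VRH temperature (illustrative)

        def resistance(T):
            return math.exp(math.sqrt(T0 / T))        # VRH-like law, arbitrary units

        def photonic_heat_flow(T_hot, T_cold):
            r1, r2 = resistance(T_hot), resistance(T_cold)
            matching = 4 * r1 * r2 / (r1 + r2) ** 2   # equals 1 when the impedances match
            return matching * math.pi * k_B**2 / (12 * hbar) * (T_hot**2 - T_cold**2)

        T_hot = 0.200                                  # kelvin
        for T_cold in (0.050, 0.100, 0.150, 0.190):
            print(T_cold, photonic_heat_flow(T_hot, T_cold))
        # The flow peaks at an intermediate T_cold: warming the cold side from 0.05 K towards
        # ~0.15 K increases the heat flow even though the temperature difference is shrinking.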

    The research team, associated with the NEST initiative at the Istituto Nanoscienze-CNR and Scuola Normale Superiore, both in Pisa in Italy, have proposed a device they call the photonic heat amplifier. They built it using two VRH reservoirs connected by superconducting, lossless wires. One reservoir was kept at a higher temperature and served as the source of heat energy. The other reservoir, called the central island, received heat by exchanging photons with the warmer reservoir.

    The central island was also connected to two additional metallic reservoirs named the “gate” and the “drain”. These points operated with the same purpose as the control and output terminals in an electrical transistor. The drain stayed cold, allowing the amplified heat signal to exit the system from this point. By adjusting the gate temperature, the team could modulate and even amplify the flow of heat between the source and the drain (see image below).

    To understand and predict the amplifier’s behaviour, the researchers developed mathematical models for all forms of heat transfer within the device. These included photonic currents between VRH reservoirs, electron tunnelling through the gate and drain contacts, and energy lost as vibrations through the device’s substrate.

    (Tunnelling is a quantum mechanical phenomenon in which an electron has a small chance of passing through a thin barrier instead of going around it.)

    Raring to go

    By carefully selecting the device parameters — including the characteristic temperature of the VRH material, the source temperature, resistances at the gate and drain contacts, the volume of the central island, and geometric factors — the researchers said they could tailor the device for different amplification purposes.

    They reported two main operating modes. The first was called ‘current modulation amplifier’. In this configuration, the device amplified small variations in thermal input at the gate: small oscillations in the gate heat current produced much larger oscillations — up to 15 times greater — in the photon current between the source and the central island and in the drain current, according to the paper. This amplification was efficient down to 20 millikelvin, matching the ultracold conditions required in quantum technologies. The output range of heat current was similarly broad, showing the device’s suitability to amplify heat signals.

    The second mode was called ‘temperature modulation amplifier’. Here, slight changes of only a few millikelvin in the gate temperature, the team wrote, caused the output temperature of the central island to swing by as much as 3.3 times the change in the input. The device could also handle input temperature ranges of over 100 millikelvin. This performance reportedly matched or surpassed other temperature amplifiers already reported in the scientific literature. The researchers also noted that this mode could be used to pre-amplify signals in bolometric detectors used in astronomy telescopes.

    An important attribute relevant for practical use is the relaxation time, i.e. how soon after one operation the device returns to its original state, ready for the next. In both configurations the amplifier showed relaxation times between microseconds and milliseconds. According to the researchers, this speed resulted from the device’s low thermal mass and efficient heat channels. Such a fast response could make it suitable for detecting and amplifying thermal signals in real time.

    The researchers wrote that the amplifier also maintained good linearity and low distortion across various inputs. In other words, the output heat signal changed proportionally to the input heat signal and the device didn’t add unwanted changes, noise or artifacts to the input signal. Its noise-equivalent power values were also found to rival the best available solid-state thermometers, indicating low noise levels.

    Approaching the limits

    For all these promising results, realising this device involves some significant practical challenges. For instance, NDTC depends heavily on precise impedance matching. Real materials inevitably have imperfections, including those due to imperfect fabrication and environmental fluctuations. Such deviations could lower the device’s heat-transfer efficiency and reduce the operational range of NDTC.

    The system also banked on lossless superconducting wires being kept well below their critical temperatures. Achieving and maintaining these ultralow temperatures requires sophisticated and expensive refrigeration infrastructure, which adds to the experimental complexity.

    Fabrication also demands very precise doping and finely tuned resistances for the gate and drain terminals. Scaling production to create many devices or arrays poses major technical difficulties. Integrating numerous photonic heat amplifiers into larger thermal circuits risks unwanted thermal crosstalk and signal degradation, a risk compounded by the extremely small heat currents involved.

    Furthermore, the fully photonic design offers benefits such as electrical isolation and long-distance thermal connections — but it also approaches fundamental physical limits. The quantum of thermal conductance caps the maximum possible heat flow through each photonic channel. This limitation could restrict how much power the device is able to handle in some applications.

    Then again, many of these challenges are typical of cutting-edge research in quantum devices, and highlight the need for detailed experimental work to realise and integrate photonic heat amplifiers into operational quantum systems.

    If they are successfully realised for practical applications, photonic heat amplifiers could transform how scientists manage heat in quantum computing and nanotechnologies that operate near absolute zero. They could pave the way for on-chip heat control, for computers that autonomously stabilise their own temperature, and for thermal logic operations. Redirecting or harvesting waste heat could also improve efficiency and significantly reduce noise — a critical barrier in ultra-sensitive quantum devices like quantum computers.

    Featured image credit: Lucas K./Unsplash.

  • New LHC data puts ‘new physics’ lead to bed

    One particle in the big zoo of subatomic particles is the B meson. It has a very short lifetime once it’s created. In rare instances it decays to three lighter particles: a kaon, a lepton and an anti-lepton. There are several types of leptons and anti-leptons; two are electrons/anti-electrons and muons/anti-muons. According to the existing theory of particle physics, these two should appear as decay products with equal probability: a B meson should decay to a kaon, an electron and an anti-electron as often as it decays to a kaon, a muon and an anti-muon (after adjusting for mass, since the muon is heavier).

    In the last 13 years, physicists studying B meson decays had found on four occasions that it decayed to a kaon, electron and anti-electron more often. They were glad for it, in a way. They had worked out the existing theory, called the Standard Model of particle physics, from the mid-20th century in a series of Nobel Prize-winning papers and experiments. Today, it stands complete, explaining the properties of a variety of subatomic particles. But it still can’t explain what dark matter is, why the Higgs boson’s mass is what it is, or why there are three ‘generations’ of quarks, not more or fewer. If the Standard Model is old physics, particle physicists believe there could be a ‘new physics’ out there – some particle or force they haven’t discovered yet – which could really complete the Standard Model and settle the unresolved mysteries.

    Over the years, they have explored various leads for ‘new physics’ in different experiments, but eventually, with more data, the findings have all turned out to be in line with the predictions of the Standard Model. Until 2022, the anomalous B meson decays were thought to be a potential source of ‘new physics’ as well. A 2009 study in Japan found that some B meson decays created electron/anti-electron pairs more often than muon/anti-muon pairs – as did a 2012 study in the US and a 2014 study in Europe. The last one involved the Large Hadron Collider (LHC), operated by the European Organisation for Nuclear Research (CERN) on the France-Switzerland border, and a detector on it called LHCb. Among other things, the LHCb tracks B mesons. In March 2021, the LHCb collaboration released data statistically significant enough to claim ‘evidence’ that some B mesons were decaying to electron/anti-electron pairs more often than to muon/anti-muon pairs.

    But the latest data from the LHC, released on December 20, appears to settle the question: it’s still old physics. The formation of different types of lepton/anti-lepton pairs with equal probability is called lepton-flavour universality. Since 2009, physicists had been recording data that suggested some B meson decays were violating lepton-flavour universality, possibly because a previously unknown particle or force was acting on the decay process. In the new analysis, physicists studied B meson decays via the pathway described above as well as a second, related pathway, and at two different energy levels – thus, as the official press release put it, “yielding four independent comparisons of the decays”. The more data there is to compare, the more robust the findings will be.

    This data was collected over the last five years. Every time the LHC operates, it’s called a ‘run’. Each run generates several terabytes of data that physicists, with the help of computers, comb through in search of evidence for different hypotheses. The data for the new analysis was collected over two runs. And it led physicists to conclude that these B meson decays do not violate lepton-flavour universality. The Standard Model still stands and, perhaps equally importantly, a 13-year-old ‘new physics’ lead has been returned to dormancy.

    The LHC is currently in its third run; scientists and engineers working with the machine perform maintenance and install upgrades between runs, so each new cycle of operations is expected to produce more as well as more precise data, leading to more high-precision analyses that could, physicists hope, one day reveal ‘new physics’.

  • The awesome limits of superconductors

    On June 24, a press release from CERN said that scientists and engineers working on upgrading the Large Hadron Collider (LHC) had “built and operated … the most powerful electrical transmission line … to date”. The transmission line consisted of four cables – two capable of transporting 20 kA of current and two, 7 kA.

    The ‘A’ here stands for ‘ampere’, the SI unit of electric current. Twenty kilo-amperes is an extraordinary amount of current, nearly equal to the amount in a single lightning strike.

    In the particulate sense: one ampere is the flow of one coulomb per second. One coulomb is equal to around 6.24 quintillion elementary charges, where each elementary charge is the charge of a single proton or electron (with opposite signs). So a cable capable of carrying a current of 20 kA can essentially transport 124.8 sextillion electrons per second.
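
    The arithmetic behind that last figure is simple enough to check directly:

        # From 20 kA to electrons per second
        elementary_charge = 1.602176634e-19     # coulombs per electron
        current = 20_000                         # amperes, i.e. coulombs per second
        print(current / elementary_charge)       # ~1.25e23, i.e. about 124.8 sextillion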

    According to the CERN press release (emphasis added):

    The line is composed of cables made of magnesium diboride (MgB2), which is a superconductor and therefore presents no resistance to the flow of the current and can transmit much higher intensities than traditional non-superconducting cables. On this occasion, the line transmitted an intensity 25 times greater than could have been achieved with copper cables of a similar diameter. Magnesium diboride has the added benefit that it can be used at 25 kelvins (-248 °C), a higher temperature than is needed for conventional superconductors. This superconductor is more stable and requires less cryogenic power. The superconducting cables that make up the innovative line are inserted into a flexible cryostat, in which helium gas circulates.

    The part in bold could have been more explicit and noted that superconductors, including magnesium diboride, can’t carry an arbitrarily higher amount of current than non-superconducting conductors. There is actually a limit for the same reason why there is a limit to the current-carrying capacity of a normal conductor.

    This explanation wouldn’t change the impressiveness of this feat and could even interfere with readers’ impression of the most important details, so I can see why the person who drafted the statement left it out. Instead, I’ll take this matter up here.

    An electric current is generated between two points when electrons move from one point to the other. The direction of current is opposite to the direction of the electrons’ movement. A metal that conducts electricity does so because its constituent atoms have one or more valence electrons that can flow throughout the metal. So if a voltage arises between two ends of the metal, the electrons can respond by flowing around, birthing an electric current.

    This flow isn’t perfect, however. Sometimes, a valence electron can bump into atomic nuclei, impurities – atoms of other elements in the metallic lattice – or be thrown off course by vibrations in the lattice of atoms, produced by heat. Such disruptions across the metal collectively give rise to the metal’s resistance. And the more resistance there is, the less current the metal can carry.

    These disruptions often heat the metal as well. This happens because electrons don’t just flow between the two points across which a voltage is applied. They’re accelerated. So as they’re speeding along and suddenly bump into an impurity, they’re scattered into random directions. Their kinetic energy then no longer contributes to the electric energy of the metal and instead manifests as thermal energy – or heat.

    If the electrons bump into nuclei, they could impart some of their kinetic energy to the nuclei, causing the latter to vibrate more, which in turn means they heat up as well.

    Copper and silver have high conductivity because they have more valence electrons available to conduct electricity and these electrons are scattered to a lesser extent than in other metals. These two metals therefore also don’t heat up as quickly as others might, allowing them to transport a higher current for longer. Copper in particular has a long mean free path: the average distance an electron travels before being scattered.

    In superconductors, the picture is quite different because quantum physics assumes a more prominent role. There are different types of superconductors according to the theories used to understand how they conduct electricity with zero resistance and how they behave in different external conditions. The electrical behaviour of magnesium diboride, the material used to transport the 20 kA current, is described by Bardeen-Cooper-Schrieffer (BCS) theory.

    According to this theory, when certain materials are cooled below a certain temperature, the residual vibrations of their atomic lattice encourage their valence electrons to overcome their mutual repulsion and become correlated, especially in terms of their movement. That is, the electrons pair up.

    While individual electrons belong to a class of particles called fermions, these electron pairs – a.k.a. Cooper pairs – belong to another class called bosons. One difference between these two classes is that bosons don’t obey Pauli’s exclusion principle: that no two fermions in the same quantum system (like an atom) can have the same set of quantum numbers at the same time.

    As a result, all the electron pairs in the material are now free to occupy the same quantum state – which they will when the material is supercooled. When they do, the pairs collectively make up an exotic state of matter called a Bose-Einstein condensate: the electron pairs now flow through the material as if they were one cohesive liquid.

    In this state, even if one pair gets scattered by an impurity, the current doesn’t experience resistance because the condensate’s overall flow isn’t affected. In fact, given that breaking up one pair will cause all other pairs to break up as well, the energy required to break up one pair is roughly equal to the energy required to break up all pairs. This feature affords the condensate a measure of robustness.

    But while current can keep flowing through a BCS superconductor with zero resistance, the superconducting state itself doesn’t have infinite persistence. It can break if it stops being cooled below a specific temperature, called the critical temperature; if the material is too impure, contributing to a sufficient number of collisions to ‘kick’ all electrons pairs out of their condensate reverie; or if the current density crosses a particular threshold.

    At the LHC, the magnesium diboride cables will be wrapped around electromagnets. When a large current flows through the cables, the electromagnets will produce a magnetic field. The LHC uses a circular arrangement of such magnetic fields to bend the beam of protons it accelerates into a circular path. The more powerful the magnetic field, the more energetic the protons it can keep on that path. The current operational field strength is 8.36 tesla, about 128,000 times more powerful than Earth’s magnetic field. The cables will be insulated but they will still be exposed to a large magnetic field.

    Type I superconductors completely expel an external magnetic field when they transition to their superconducting state. That is, the magnetic field can’t penetrate the material’s surface and enter the bulk. Type II superconductors are slightly more complicated. Below a critical temperature and a lower critical magnetic field strength, they behave like type I superconductors. At the same temperature but between that lower critical field and a higher, upper critical field, they remain superconducting while allowing the field to penetrate their bulk to a certain extent. This is called the mixed state.

    Say a uniform magnetic field is applied over a mixed-state superconductor. The field will plunge into the material’s bulk in the form of vortices. All these vortices will have the same magnetic flux – the number of magnetic field lines per unit area – and will repel each other, settling down in a triangular pattern equidistant from each other.

    When an electric current passes through this material, the vortices are slightly displaced, and also begin to experience a force proportional to how closely they’re packed together and their pattern of displacement. As a result, to quote from this technical (yet lucid) paper by Praveen Chaddah:

    This force on each vortex … will cause the vortices to move. The vortex motion produces an electric field[1] parallel to [the direction of the existing current], thus causing a resistance, and this is called the flux-flow resistance. The resistance is much smaller than the normal state resistance, but the material no longer [has] infinite conductivity.

    1. According to Maxwell’s equations of electromagnetism, a changing magnetic field produces an electric field.

    The vortices’ displacement depends on the current density: the greater the number of electrons being transported, the more flux-flow resistance there is. So the magnesium diboride cables can’t simply carry more and more current. At some point, setting aside other sources of resistance, the flux-flow resistance itself will damage the cable.

    There are ways to minimise this resistance. For example, the material can be doped with impurities that will ‘pin’ the vortices to fixed locations and prevent them from moving around. However, optimising these solutions for a given magnetic field and other conditions involves complex calculations that we don’t need to get into.

    The point is that superconductors have their limits too. And knowing these limits could improve our appreciation for the feats of physics and engineering that underlie achievements like cables being able to transport 124.8 sextillion electrons per second with zero resistance. In fact, according to the CERN press release,

    The [line] that is currently being tested is the forerunner of the final version that will be installed in the accelerator. It is composed of 19 cables that supply the various magnet circuits and could transmit intensities of up to 120 kA!

    §

    While writing this post, I was frequently tempted to quote from Lisa Randall’s excellent book-length introduction to the LHC, Knocking on Heaven’s Door (2011). Here’s a short excerpt:

    One of the most impressive objects I saw when I visited CERN was a prototype of LHC’s gigantic cylindrical dipole magnets. Even with 1,232 such magnets, each of them is an impressive 15 metres long and weighs 30 tonnes. … Each of these magnets cost EUR 700,000, making the net cost of the LHC magnets alone more than a billion dollars.

    The narrow pipes that hold the proton beams extend inside the dipoles, which are strung together end to end so that they wind through the extent of the LHC tunnel’s interior. They produce a magnetic field that can be as strong as 8.3 tesla, about a thousand times the field of the average refrigerator magnet. As the energy of the proton beams increases from 450 GeV to 7 TeV, the magnetic field increases from 0.54 to 8.3 teslas, in order to keep guiding the increasingly energetic protons around.

    The field these magnets produce is so enormous that it would displace the magnets themselves if no restraints were in place. This force is alleviated through the geometry of the coils, but the magnets are ultimately kept in place through specially constructed collars made of four-centimetre thick steel.

    … Each LHC dipole contains coils of niobium-titanium superconducting cables, each of which contains stranded filaments a mere six microns thick – much smaller than a human hair. The LHC contains 1,200 tonnes of these remarkable filaments. If you unwrapped them, they would be long enough to encircle the orbit of Mars.

    When operating, the dipoles need to be extremely cold, since they work only when the temperature is sufficiently low. The superconducting wires are maintained at 1.9 degrees above absolute zero … This temperature is even lower than the 2.7-degree cosmic microwave background radiation in outer space. The LHC tunnel houses the coldest extended region in the universe – at least that we know of. The magnets are known as cryodipoles to take into account their special refrigerated nature.

    In addition to the impressive filament technology used for the magnets, the refrigeration (cryogenic) system is also an imposing accomplishment meriting its own superlatives. The system is in fact the world’s largest. Flowing helium maintains the extremely low temperature. A casing of approximately 97 metric tonnes of liquid helium surrounds the magnets to cool the cables. It is not ordinary helium gas, but helium with the necessary pressure to keep it in a superfluid phase. Superfluid helium is not subject to the viscosity of ordinary materials, so it can dissipate any heat produced in the dipole system with great efficiency: 10,000 metric tonnes of liquid nitrogen are first cooled, and this in turn cools the 130 metric tonnes of helium that circulate in the dipoles.

    Featured image: A view of the experimental MgB2 transmission line at the LHC. Credit: CERN.

  • Atoms within atoms

    It’s a matter of some irony that forces that act across larger distances also give rise to lots of empty space – although the more you think about it, the more it makes sense. The force of gravity, for example, can act across millions of kilometres, but this only means two massive objects can influence each other across this distance instead of having to get closer to do so. Thus, you have galaxies with a lot more space between stars than stars themselves.

    The electromagnetic force, like the force of gravity, also follows an inverse-square law: its strength falls off as the square of the distance – but never fully reaches zero. So you can have an atom with a nucleus of protons and neutrons held tightly together but electrons located so far away that each atom is more than 90% empty space.

    In fact, you can use the rules of subatomic physics to make atoms even more vacuous. Electrons orbit the nucleus in an atom at fixed distances, and when an electron gains some energy, it jumps into a higher orbit. Physicists have been able to excite electrons to such high energies that the atom itself becomes thousands of times larger than an atom of hydrogen.
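
    For a rough sense of scale: the radius of a hydrogen-like atom grows as the square of the excited electron’s principal quantum number n. The choice of n = 70 below is purely illustrative.

        # Rydberg-atom size estimate: radius ~ n^2 times the Bohr radius
        bohr_radius_nm = 0.0529
        n = 70                                    # illustrative principal quantum number
        radius_nm = n**2 * bohr_radius_nm
        print(radius_nm)                          # ~260 nm
        print(radius_nm / bohr_radius_nm)         # ~4,900 times a ground-state hydrogen atom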

    This is the deceptively simple setting for the Rydberg polaron: the atom inside another atom, with some features added.

    In January 2018, physicists from Austria, Brazil, Switzerland and the US reported creating the first Rydberg polaron in the lab, based on theoretical predictions that another group of researchers had advanced in October 2015. The concept, as usual, is far simpler than the execution, so exploring the latter should provide a good sense of the former.

    The January 2018 group first created a Bose-Einstein condensate, a state of matter in which a dilute gas of particles called bosons is maintained in an ultra-cold container. Bosons are particles whose quantum spin takes integer values. (Other particles called fermions have half-integer spin). As the container is cooled to near absolute zero, the bosons begin to collectively display quantum mechanical phenomena at the macroscopic scale, essentially becoming a new form of matter and displaying certain properties that no other form of matter has been known to exhibit.

    Atoms of strontium-84, -86 and -88 have zero spin, so the physicists used them to create the condensate. Next, they used lasers to bombard some strontium atoms with photons to impart energy to electrons in the outermost orbits (a.k.a. valence electrons), forcing them to jump to an even higher orbit. Effectively, the atom expands, becoming a so-called Rydberg atom[1]. In this state, if the distance between the nucleus and an excited electron is greater than the average distance between the other strontium atoms in the condensate, then some of the other atoms could technically fit into the Rydberg atom, forming the atom-within-an-atom.

    [1] Rydberg atoms are called so because many of their properties depend on the value of the principal quantum number, which the Swedish physicist Johannes Robert Rydberg first (inadvertently) described in a formula in 1888.

    Rydberg atoms are gigantic relative to other atoms; some are even bigger than a virus, and their interactions with their surroundings can be observed under a simple light microscope. They are relatively long-lived, in that the excited electron decays to its ground state slowly. Astronomers have found them in outer space. However, Rydberg atoms are also fragile: because the electron is already so far from the nucleus, other particles in the vicinity, a weak electromagnetic field or even a slightly warmer temperature could easily knock the excited electron out of the Rydberg atom and end the Rydberg state.

    Some clever physicists took advantage of this property and used Rydberg atoms as sensitive detectors of single photons of light. They won the Nobel Prize for physics for such work in 2012.

    However, simply sticking one atom inside a Rydberg atom doth not a Rydberg polaron make. A polaron is a quasiparticle, which means it isn’t an actual particle by itself, as the –on suffix might suggest, but an entity that scientists study as if it were a particle. Quasiparticles are thus useful because they simplify the study of more complicated entities by allowing scientists to apply the rules of particle physics to arrive at equally correct solutions.

    This said, a polaron is a quasiparticle built around an actual particle. Specifically, physicists describe electrons moving through a solid as polarons because, as the electrons interact with the atomic lattice, they behave in a way that free electrons usually don’t. So polarons combine the study of electrons and of electrons-interacting-with-atoms into a single subject.

    Similarly, a Rydberg polaron is formed when the electron inside the Rydberg atom interacts with the trapped strontium atom. While an atom within an atom is cool enough, the January 2018 group wanted to create a Rydberg polaron because it’s considered to be a new state of matter – and they succeeded. The physicists found that the excited electron did develop a loose interaction with the strontium atoms lying between itself and the Rydberg atom’s nucleus – so loose that even as they interacted, the electron could still remain part of the Rydberg atom without getting kicked out.

    In effect, since the Rydberg atom and the strontium atoms inside it influence each other’s behaviour, they altogether made up one larger complicated assemblage of protons, neutrons and electrons – a.k.a. a Rydberg polaron.

  • Good writing is an atom

    https://twitter.com/HochTwit/status/1174875013708746752

    The act of writing well is like an atom, or the universe. There is matter but it is thinly distributed, with lots of empty space in between. Removing this seeming nothingness won’t help, however. Its presence is necessary for things to remain the way they are and work just as well. Similarly, writing is not simply the deployment of words. There is often the need to stop mid-word and take stock of what you have composed thus far and what the best way to proceed could be, even as you remain mindful of the elegance of the sentence you are currently constructing and its appropriate situation in the overarching narrative. In the end, there will be lots of words to show for your effort but you will have spent even more time thinking about what you were doing and how you were doing it. Good writing, like the internal configuration of a set of protons, neutrons and electrons, is – physically speaking – very little about the labels attached to describe them. And good writing, like the vacuum energy of empty space, acquires its breadth and timelessness because it encompasses a lot of things that one cannot directly see.

  • ‘Weak charge’ measurement holds up SM prediction

    Various dark matter detectors around the world, massive particle accelerators and colliders, powerful telescopes on the ground and in space all have their distinct agendas but ultimately what unites them is humankind’s quest to understand what the hell this universe is on about. There are unanswered questions in every branch of scientific endeavour that will keep us busy for millennia to come.

    Among them, physics seems to be suffering uniquely, as it stumbles even as we speak through a ‘nightmare scenario’: the most sensitive measurements we have made of the physical reality around us, at the largest and smallest scales, don’t agree with what physicists have been able to work out on paper. Something’s gotta give – but scientists don’t know where or how they will find their answers.

    The Qweak experiment at the Jefferson Lab, Virginia, is one of scores of experiments around the world trying to find a way out of the nightmare scenario. And Qweak is doing that by studying how the rate at which electrons scatter off a proton is affected by the electrons’ polarisation (a.k.a. spin polarisation: whether the spin of each electron is “left” or “right”).

    Unlike instruments like the Large Hadron Collider, which are very big, operate at much higher energies, are expensive and are used to look for new particles hiding in spacetime, Qweak and others like it make ultra-precise measurements of known values, in effect studying the effects of particles both known and unknown on natural phenomena.

    And if these experiments are able to find that these values deviate at some level from that predicted by the theory, physicists will have the break they’re looking for. For example, if Qweak is the one to break new ground, then physicists will have reason to suspect that the two nuclear forces of nature, simply called strong and weak, hold some secrets.

    However, Qweak’s latest – and possibly its last – results don’t break new ground. In fact, they assert that the current theory of particle physics is correct, the same theory that physicists are trying to break free of.

    Most of us are familiar with protons and electrons: they’re subatomic particles, carry positive and negative charges respectively, and are the stuff of one chapter of high-school physics. What students of science find out quite a bit later is that electrons are fundamental particles – they’re not made up of smaller particles – but protons are not. Protons are made up of quarks and gluons.

    Interactions between electrons and quarks/gluons are mediated by two fundamental forces: the electromagnetic and the weak nuclear. The electromagnetic force is much stronger than the aptly named weak nuclear force. On the other hand, it is agnostic to the electron’s polarisation while the weak nuclear force is sensitive to it. In fact, the weak nuclear force is known to respond differently to left- and right-handed particles.

    When electrons are fired at protons, the electrons are scattered off. Scientists measure how often this happens and at what angle, together with the electrons’ polarisation – and try to find correlations between the two sets of data.

    An illustration showing the expected outcomes when left- and right-handed electrons, visualised as mirror-images of each other, scatter off of a proton. Credit: doi:10.1038/s41586-018-0096-0

    At Qweak, the electrons were accelerated to 1.16 GeV and fired at a tank of liquid hydrogen. A detector positioned near the tank picked up on electrons scattered at angles between 5.8º and 11.6º. By finely tuning different aspects of this setup, the scientists were able to up the measurement precision to 10 parts per billion.

    For example, they were able to achieve a detection rate of 7 billion events per second, a target luminosity of 1.7 × 10³⁹ cm⁻² s⁻¹ and provide a polarised beam of electrons at 180 µA – all considered high for an experiment of this kind.

    The scientists were looking for patterns in the detector data that would tell them something about the proton’s weak charge: the strength with which it interacts with electrons via the weak nuclear force. (Its notation is Qweak, hence the experiment’s name.)

    At Qweak, they’re doing this by studying how the electrons are scattered versus their polarisation. The Standard Model (SM) of particle physics, the theory that physicists work with to understand the behaviour of elementary particles, predicts that the number of left- and right-handed electrons scattered should differ by one for every 10 million interactions. If this number is found to be bigger or smaller than usual when measured in the wild, then the Standard Model will be in trouble – much to physicists’ delight.

    SM’s corresponding value for the proton’s weak charge is 0.0708. At Qweak, the value was measured to be 0.0719 ± 0.0045, i.e. between 0.0674 and 0.0764, completely agreeing with the SM prediction. Something’s gotta give – but it’s not going to be the proton’s weak charge for now.

    Paper: Precision measurement of the weak charge of the proton

    Featured image credit: Pexels/Unsplash.

  • Weyl semimetals make way for super optics

    In 2015, materials scientists made an unexpected discovery: in a compound of the metals tantalum and arsenic, they found a quasiparticle called a Weyl fermion. A quasiparticle is a packet of energy trapped in a system, like a giant cage of metal atoms, that in some ways moves around and interacts like a particle would. A fermion is a type of elementary particle that makes up matter; electrons are fermions. A Weyl fermion, however, is a collection of electrons that behaves as if it were one big fermion – and as if it had no mass.

    In June 2017, physicists reported that they had discovered another kind of Weyl fermion, dubbed a type-II Weyl fermion, in a compound of aluminium, germanium and lanthanum. It differed from other Weyl fermions in that it violated Lorentz symmetry. According to Wikipedia, Lorentz symmetry is the fact that “the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame”.

    Both ‘regular’ and type-II Weyl fermions can do strange things. By extension, the solid substance engineered to be hospitable to Weyl fermions can be a strange thing itself. For example, when an electrical conductor is placed within a magnetic field, the current flowing through it faces more resistance. However, in a conductor conducting electricity using the flow of Weyl fermions, the resistance drops when a magnetic field is applied. When there are type-II Weyl fermions, resistance drops if the magnetic field is applied one way and increases if the field is applied the other way.

    In the case of a Weyl semimetal, things get weirder.

    Crystals are substances whose atoms are arranged in a regular, repeating pattern throughout. They’re almost always solids (which makes LCD displays cooler). Sometimes, this arrangement of atoms carries a tension, as if the atoms themselves were beads on a taut guitar string. If the string is plucked, it begins to vibrate at a particular note. Similarly, a crystal lattice vibrates at a particular note in some conditions, as if thrumming with energy. As the thrum passes through the crystal carrying this energy, it is as if a quasiparticle is making its way. Such quasiparticles are called phonons.

    A Weyl semimetal is a crystal that hosts quasiparticles of its own: not phonons carrying vibrational energy, but Weyl fermions – collective excitations of its electrons that carry electrical energy. Mindful of this uncommon ability, a group of physicists reported a unique application of Weyl semimetals on June 5, with a paper in the journal Physical Review B.

    It’s called a superlens. A more historically aware name is the Veselago’s lens, for the Russian physicist Viktor Veselago, who didn’t create the lens itself but laid the theoretical foundations for its abilities in a 1967 paper. The underlying physics is in fact high-school stuff.

    When light passes through a rarer medium into a denser medium, its path becomes bent towards the normal (see image below).

    Credit: Wikimedia Commons

    How much the path changes depends on the refractive indices of the two mediums. In nature, the indices are always positive, and so is the angle of deflection. The light ray coming in through the second quadrant (in the image) passes out through the fourth quadrant, as depicted (a part of it is also reflected back into the third quadrant).

    But if the denser medium has a negative refractive index, then the ray entering from the second quadrant will exit through the first quadrant, like so:

    The left panel depicts refraction when the refractive indices are positive. In the right panel, the ‘green’ medium has a negative refractive index, causing the light to bend inward. Credit: APS/Alan Stonebraker
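    Snell’s law captures all of this in one line: n1 sin θ1 = n2 sin θ2. If n2 is negative, the refraction angle comes out negative too – the ray emerges on the same side of the normal it came in on. A minimal sketch, with made-up index values chosen only for illustration:

    ```python
    import math

    def refraction_angle(n1, n2, incidence_deg):
        """Snell's law: n1*sin(theta1) = n2*sin(theta2).
        A negative n2 flips the sign of the refraction angle,
        i.e. the ray emerges on the same side of the normal."""
        s = n1 * math.sin(math.radians(incidence_deg)) / n2
        return math.degrees(math.asin(s))

    print(refraction_angle(1.0,  1.5, 30))   # ordinary glass:      ~ +19.5 degrees
    print(refraction_angle(1.0, -1.5, 30))   # negative-index slab: ~ -19.5 degrees
    ```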

    Using computer simulations built on Veselago’s insights, the British physicist J.B. Pendry showed in 2000 that such a medium could be used to refocus light diverging from a point. (I highly recommend giving his paper a read if you’ve studied physics at the undergraduate level.)

    Credit: APS

    This is a deceptively simple application. It stands for much more in the context of how microscopes work.

    A light microscope, of the sort used in biology labs, has a maximum magnification of about 1,500x. This is because the microscope is limited by the size of the thing it is using to study its sample: light itself. Specifically, visible light as a wave has a wavelength of about 400 nanometres (corresponding to bluer colours) to 700 nanometres (to redder colours). The microscope will be blind to anything smaller than these wavelengths, imposing a limit on the features it can resolve. So physicists use an electron microscope. As waves, electrons have a wavelength 100,000-times shorter than that of visible-light photons. This allows electron microscopes to magnify objects 10,000,000 times and probe samples a few dozen picometres wide. But as it happens, scientists are still disappointed: they want to probe even smaller samples now.
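    The electron’s advantage comes from the de Broglie relation, λ = h/p: the higher an electron’s momentum, the shorter its wavelength, and the finer the detail it can resolve. A minimal sketch – the 10 keV beam energy is an illustrative value, not one quoted in the studies discussed here:

    ```python
    import math

    # de Broglie wavelength of an electron: lambda = h / p, with p = sqrt(2*m*E)
    h  = 6.626e-34        # Planck's constant, J*s
    me = 9.109e-31        # electron mass, kg
    eV = 1.602e-19        # joules per electronvolt

    def electron_wavelength_m(energy_eV):
        p = math.sqrt(2 * me * energy_eV * eV)   # non-relativistic momentum, kg*m/s
        return h / p

    lam = electron_wavelength_m(10_000)          # a 10 keV beam (illustrative)
    print(f"{lam * 1e12:.1f} pm")                # ~12 pm, vs ~400-700 nm for visible light
    ```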

    To overcome this, Pendry proposed in his 2000 study that a material with a negative refractive index could be used to focus light – rather, electromagnetic radiation – in a way that was independent of its wavelength. In 2007, British and American physicists found a way to achieve this in graphene, a two-dimensional, single-atom-thick layer of carbon atoms – but using electrons instead of photons. Scientists had previously noted that some electrons in graphene can flow through the material as if they had no mass. In the 2007 study, when these electrons were passed through a p-n junction, the kind of interface ubiquitous in semiconductor electronics, the particles’ path bent inward on the other side as if the refractive index were negative.

    In the June 5 paper in Physical Review B, physicists demonstrated the same phenomenon – using electrons – in a three-dimensional material: a Weyl semimetal. According to them, a stack of two Weyl semimetals can be engineered such that the Weyl fermions from one semimetal compound can enter the other as if the latter had a negative refractive index. With this in mind, Adolfo Grushin and Jens Bardarson write in Physics:

    Current [scanning tunnelling electron microscopes (STMs)] use a sharp metallic tip to focus an electron beam onto a sample. Since STM’s imaging resolution is limited by the tip’s geometry and imperfections, it ultimately depends on the tip manufacturing process, which today remains a specialised art, unsuitable for mass production. According to [the paper’s authors], replacing the STM tip with their multilayer Weyl structure would result in a STM whose spatial resolution is limited only by how accurately the electron beam can be focused through Veselago lensing. A STM designed in this way could focus electron beams onto sub-angstrom regions, which would boost STM’s precision to levels at which the technique could routinely see individual atomic orbitals and chemical bonds.

    This is the last instalment in a loose trilogy of pieces documenting the shape of the latest research on topological materials. You can read the other two here and here.

  • Amorphous topological insulators

    A topological insulator is a material that conducts electricity only on its surface. Everything below, through the bulk of the material, is an insulator. An overly simplified way to understand this is in terms of the energies and momenta of the electrons in the material.

    The electrons that an atom can spare to share with other atoms – and so form chemical bonds – are called valence electrons. In a metal, these electrons can have various momenta, but unless they have a sufficient amount of energy, they’re going to stay near their host atoms – i.e. within the valence band. If they do have energies over a certain threshold, then they can graduate from the valence band to the conduction band, flowing through the metal and conducting electricity.

    In a topological insulator, the energy gap between the valence band and the conduction band is occupied by certain ‘states’ that represent the material’s surface. The electrons in these states aren’t part of the valence band but they’re not part of the conduction band either, and can’t flow through the entire bulk.

    The electrons within these states, i.e. on the surface, display a unique property. Their spins (on their own axis) are coupled strongly with their motion around their host atoms. As a result, their spins become aligned perpendicularly to their momentum, the direction in which they can carry electric charge. Such coupling staves off an energy-dissipation process called Umklapp scattering, allowing them to conduct electricity. Detailed observations have shown that the spin-momentum coupling necessary to achieve this is present only in a few-nanometre-thick layer on the surface.

    If you’re talking about this with a physicist, she will likely tell you at this point about time-reversal symmetry. It is a symmetry of nature that is said to (usually) ‘protect’ a topological insulator’s unique surface states.

    There are many fundamental symmetries in nature. In particle physics, if a force acts similarly on left- and right-handed particles, it is said to preserve parity (P) symmetry. If the dynamics of the force are similar when it is acting on positively and negatively charged particles, then charge conjugation (C) symmetry is said to be preserved. Now, if you videotaped the force acting on a particle, played the recording backwards, and the force appeared to act just as it does when the video runs forward, then the force preserves time-reversal (T) symmetry.

    Physicists know of some phenomena that break C and P symmetries simultaneously. T symmetry is broken continuously by the second law of thermodynamics: if you filmed the entropy of a universe and played the recording backwards, the entropy would appear to be decreasing. However, C, P and T symmetries taken all together cannot be broken (we think).

    Anyway, the surface states of a topological insulator are protected by T symmetry. This is because the electrons’ wave-functions, the mathematical equations that describe some of the particles’ properties, do not ‘flip’ going backwards in time. As a result, a topological insulator cannot lose its surface states unless it undergoes some sort of transformation that breaks time-reversal symmetry. (One example of such a transformation is a phase transition.)

    This laboured foreword is necessary – at least IMO – to understand what it is that scientists look for when they’re looking for topological insulators among all the materials that we have been able, and will be able, to synthesise. It seems they’re looking for materials that have surface states, with spin-momentum coupling, that are protected by T symmetry.


    Physicists from the Indian Institute of Science, Bengaluru, have found that topological insulators needn’t always be crystals – as has been thought. Instead, using a computer simulation, Adhip Agarwala and Vijay Shenoy, of the institute’s physics department, have shown that a kind of glass also behaves as a topological insulator.

    The band theory outlined earlier is usually formulated with crystals in mind, wherein the material’s atoms are arranged in a well-defined pattern. This allows physicists to determine, with some amount of certainty, how the atoms’ electrons interact and give rise to the material’s topological states. In an amorphous material like glass, on the other hand, the constituent atoms are arranged randomly. How then can something as well-organised as a surface with spin-momentum coupling be possible on it?

    As Michael Schirber wrote in Physics magazine,

    In their study, [Agarwala and Shenoy] assume a box with a large number of lattice sites arranged randomly. Each site can host electrons in one of several energy levels, and electrons can hop between neighboring sites. The authors tuned parameters, such as the lattice density and the spacing of energy levels, and found that the modeled materials could exhibit symmetry-protected surface currents in certain cases. The results suggest that topological insulators could be made by creating glasses with strong spin-orbit coupling or by randomly placing atoms of other elements inside a normal insulator.
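    To get a feel for what “lattice sites arranged randomly, with electrons hopping between neighbouring sites” means in practice, here is a schematic Python sketch of a random-site tight-binding model. To be clear, this is an illustration of the general idea only – it is not Agarwala and Shenoy’s actual Hamiltonian, which additionally builds in the spin-orbit coupling and symmetries needed for a topological phase:

    ```python
    import numpy as np

    # Schematic random-site tight-binding model: sites scattered at random in a box,
    # with hopping amplitudes that fall off with distance. Illustration only.

    rng = np.random.default_rng(0)

    n_sites = 200
    box = 10.0
    onsite = 1.0      # on-site energy (illustrative)
    decay = 1.0       # hopping falls off exponentially with distance
    cutoff = 2.0      # ignore hops between sites farther apart than this

    positions = rng.uniform(0, box, size=(n_sites, 2))   # random sites in a 2D box

    H = np.diag(np.full(n_sites, onsite))
    for i in range(n_sites):
        for j in range(i + 1, n_sites):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                t = np.exp(-r / decay)        # distance-dependent hopping amplitude
                H[i, j] = H[j, i] = -t

    energies = np.linalg.eigvalsh(H)          # the model's energy spectrum
    print(energies[:5])
    ```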

    The duo’s paper was published in the journal Physical Review Letters on June 8. The arXiv preprint is available to read here. The latter concludes,

    The possibility of topological phases in a completely random system opens up several avenues both from experimental and theoretical perspectives. Our results suggest some new routes to the laboratory realization of topological phases. First, two dimensional systems can be made by choosing an insulating surface on which suitable [atoms or molecules] with appropriate orbitals are deposited at random (note that this process will require far less control than conventional layered materials). The electronic states of these motifs will then [interact in a certain way] to produce the required topological phase. Second is the possibility of creating three dimensional systems starting from a suitable large band gap trivial insulator. The idea then is to place “impurity atoms”, again with suitable orbitals and “friendly” chemistry with the host… The [interaction] of the impurity orbitals would again produce a topological insulating state in the impurity bands under favourable conditions.

    Agarwala/Shenoy also suggest that “In realistic systems the temperature scales over which one will see the topological physics … may be low”, although this is not unusual. However, they don’t suggest which amorphous materials could be suitable topological insulators.

    Thanks to penflip.com and its nonexistent autosave function, I had to write the first half of this article twice. Not the sort of thing I can forgive easily, less so since I’m loving everything else about it.

  • Physicists could have to wait 66,000 yottayears to see an electron decay

    The longest coherently described span of time I’ve encountered is from Hindu cosmology. It concerns the age of Brahma, one of Hinduism’s principal deities, who is described as being 51 years old (with 49 more to go). But these are no simple years. Each day in Brahma’s life lasts for a period called the kalpa: 4.32 billion Earth-years. In 51 years, he will actually have lived for almost 80 trillion Earth-years. In 100 years, he will have lived 157 trillion Earth-years.
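    That arithmetic is easy to check – assuming 365 kalpa-long days to each of Brahma’s years, which is the counting the figures above imply:

    ```python
    kalpa = 4.32e9                 # one day of Brahma, in Earth-years
    days_per_year = 365            # assumed; it reproduces the figures above

    year_of_brahma = days_per_year * kalpa
    print(f"51 years  : {51 * year_of_brahma:.3e} Earth-years")    # ~8.0e13, 'almost 80 trillion'
    print(f"100 years : {100 * year_of_brahma:.3e} Earth-years")   # ~1.58e14, ~157 trillion
    ```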

    157,000,000,000,000. That’s stupidly huge. Forget astronomy – I doubt even economic crises have use for such numbers.

    On December 3, scientists announced that something we’ve all known about will live for even longer: the electron.

    Yup, the same tiny lepton that zips around inside atoms with gay abandon and swims through the power lines in your home has been found to be stable for at least 66,000 yottayears – yotta- being the largest prefix available in the metric system.

    In stupidly huge terms, that’s 66,000,000,000,000,000,000,000,000,000 (66,000 trillion trillion) years. Brahma just slipped to second place among the mortals.

    But why were scientists making this measurement in the first place?

    Because they’re desperately trying to disprove a prevailing theory in physics. Called the Standard Model, it describes how fundamental particles interact with each other. Though it was meticulously studied and built over a period of more than 30 years to explain a variety of phenomena, the Standard Model hasn’t been able to answer few of the more important questions. For example, why is gravity among the four fundamental forces so much weaker than the rest? Or why is there more matter than antimatter in the universe? Or why does the Higgs boson not weigh more than it does? Or what is dark matter?

    Silence.

    The electron belongs to a class of particles called leptons, which in turn is well described by the Standard Model. So if physicists are able to find that an electron is less stable than the model predicts, it’d be a breakthrough. But despite multiple attempts to find such a freak event, physicists haven’t succeeded – not even with the LHC (though hopeful rumours are doing the rounds that that could change soon).

    The measurement of 66,000 yottayears was published in the journal Physical Review Letters on December 3 (a preprint copy is available on the arXiv server dated November 11). It was made at the Borexino neutrino experiment buried under the Gran Sasso mountain in Italy. The value itself hinges on a simple idea: the conservation of charge.

    If an electron becomes unstable and has to break down, it’ll break down into a photon and a neutrino. There are almost no other options because the electron is the lightest charged particle and whatever it breaks down into has to be even lighter. However, neither the photon nor the neutrino has an electric charge so the breaking-down would violate a fundamental law of nature – and definitely overturn the Standard Model.

    The Borexino experiment is actually a solar neutrino detector, using 300 tonnes of a petroleum-based liquid to detect and study neutrinos streaming in from the Sun. When a neutrino strikes the liquid, it knocks out an electron in a tiny flash of energy; some 2,210 photomultiplier tubes surrounding the tank amplify this flash for examination. A decaying electron would leave a telltale flash of its own: a photon of about 256 keV, roughly half the electron’s rest-mass energy (or, by mass-energy equivalence, about a 4,000th the mass of a proton).

    However, the innards of the mountain where the detector is located also produce photons thanks to the radioactive decay of bismuth and polonium in it. So the team making the measurement used a simulator to calculate how often photons of 256 keV are logged by the detector against the ‘background’ of all the photons striking the detector. Kinda like a filter. They used data logged over 408 days (January 2012 to May 2013).

    The answer: once every 66,000 yotta-years (that’s about 420 trillion of Brahma’s lifetimes).
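    The logic that turns “no convincing 256 keV excess” into a lifetime is straightforward: the more electrons you watch for longer without seeing one decay, the longer the lifetime must be – roughly, lifetime ≳ (number of electrons × observation time) / (number of decays that could be hiding in the background). A minimal sketch, with made-up placeholder numbers rather than Borexino’s actual inputs:

    ```python
    # Minimal sketch of how a lifetime *lower limit* follows from seeing no decays.
    # The numbers below are illustrative placeholders, not Borexino's actual values.

    n_electrons = 1e32         # electrons being monitored in the target (assumed)
    years_watched = 408 / 365  # the data-taking period quoted in the text
    max_missed_decays = 100    # upper limit on decays hiding in the background (assumed)

    lifetime_limit = n_electrons * years_watched / max_missed_decays
    print(f"lifetime > {lifetime_limit:.1e} years")   # ~1e30 years with these inputs
    ```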

    Physics World reports that if photons from the ‘background’ radiation could be eliminated further, the electron’s lifetime could probably be increased by a thousand times. But there’s historical precedent that to some extent encourages stronger probes of the humble electron’s properties.

    In 2006, another experiment situated under the Gran Sasso mountain tried to measure the rate at which electrons violated a defining rule in particle physics called Pauli’s exclusion principle. All electrons can be described by four distinct attributes called their quantum numbers, and the principle holds that no two electrons can have the same four numbers at any given time.

    The experiment was called DEAR (DAΦNE Exotic Atom Research). It energised electrons and then measured how much energy was released when the particles returned to a lower-energy state. After three years of data-taking, its team announced in 2009 that the principle was being violated once every 570 trillion trillion measurements (another stupidly large number).

    That’s a violation 0.0000000000000000000000001% of the time – but it’s still something. And it could amount to more when compared to the Borexino measurement of an electron’s stability. In March 2013, the team that worked DEAR submitted a proposal for building an instrument that improve the measurement by a 100-times, and in May 2015, reported that such an instrument was under construction.

    Here’s hoping they don’t find what they were looking for?