Peter Higgs, self-promoter

I was randomly rewatching The Big Bang Theory on Netflix today when I spotted this gem:

Okay, maybe less a gem and more a shiny stone, but still. The screenshot, taken from the third episode of the sixth season, shows Sheldon Cooper mansplaining to Penny the work of Peter Higgs, whose name is most famously associated with the scalar boson whose discovery the Large Hadron Collider collaborations announced to great fanfare in 2012.

My fascination pertains to Sheldon’s description of Higgs as an “accomplished self-promoter”. Higgs, in real life, is extremely reclusive and self-effacing, and journalists have found him notoriously hard to catch for an interview, or even a quote. His fellow discoverers of the Higgs boson, including François Englert, the Belgian physicist with whom Higgs won the Nobel Prize for physics in 2013, have been much less media-shy. Higgs has even been known to suggest that the mechanism in particle physics involving the Higgs boson should really be called the ABEGHHK’tH mechanism, to include the names of everyone who hit upon its theoretical idea in the 1960s (Philip Warren Anderson, Robert Brout, Englert, Gerald Guralnik, C.R. Hagen, Higgs, Tom Kibble and Gerardus ‘t Hooft), instead of just the Higgs mechanism.

No doubt Sheldon would think Higgs did right by choosing not to give interviews to the public or write articles in the press himself, considering such extreme self-effacement is also Sheldon’s modus operandi of choice. At the same time, Higgs might have lucked out: he could be recognised for work he had conducted 50 years earlier probably because he is white and from an affluent country, two attributes that nearly guarantee fewer – if any – systemic barriers to international success. Self-promotion is an important part of the modern scientific endeavour, as it is of most modern endeavours, even if one is an accomplished scientist.

All this said, it is notable that Higgs was also a conscientious person. When he was awarded the Wolf Prize in 2004 – a prestigious award in the field of physics – he refused to receive it in person in Jerusalem because it was a state function and he had protested Israel’s war against Palestine. He was a member of the Campaign for Nuclear Disarmament until the group extended its opposition to nuclear power as well; then he resigned. He also stopped supporting Greenpeace after it became opposed to genetic modification. If it is for these actions that Sheldon deemed Higgs an “accomplished self-promoter”, then I stand corrected.

Featured image: A portrait of Peter Higgs by Lucinda Mackay hanging at the James Clerk Maxwell Foundation, Edinburgh. Caption and credit: FF-UK/Wikimedia Commons, CC BY-SA 4.0.

Science v. tech, à la Cixin Liu

A fascinating observation by Cixin Liu in an interview with John Plotz for Public Books, translated by Pu Wang (numbers added):

… technology precedes science. (1) Way before the rise of modern science, there were so many technologies, so many technological innovations. But today technology is deeply embedded in the development of science. Basically, in our contemporary world, science sets a glass ceiling for technology. The degree of technological development is predetermined by the advances of science. (2) … What is remarkably interesting is how technology becomes so interconnected with science. In the ancient Greek world, science develops out of logic and reason. There is no reliance on technology. The big game changer is Galileo’s method of doing experiments in order to prove a theory and then putting theory back into experimentation. After Galileo, science had to rely on technology. … Today, the frontiers of physics are totally conditioned on the developments of technology. This is unprecedented. (3)

Perhaps an archaeology or palaeontology enthusiast might have regular chances to see the word ‘technology’ used to refer to Stone Age tools, Bronze Age pots and pans, etc. but I have almost always encountered these objects only as ‘relics’ or such in the popular literature. It’s easy to forget (1) because we have become so accustomed to thinking of technology as pieces of machines with complex electrical, electronic, hydraulic, motive, etc. components. I’m unsure of the extent to which this is an expression of my own ignorance but I’m convinced that our contemporary view of and use of technology, together with the fetishisation of science and engineering education over the humanities and social sciences, also plays a hand in maintaining this ignorance.

The expression of (2) is also quite uncommon, especially in India, where the government’s overbearing preference for applied research has undermined blue-sky studies in favour of already-translated technologies with obvious commercial and developmental advantages. So when I think of ‘science and technology’ as a body of knowledge about various features of the natural universe, I immediately think of science as the long-ranging, exploratory exercise that lays the railway tracks into the future that the train of technology can later ride. Ergo, less glass ceiling and predetermination, and more springboard and liberation. Cixin’s next words offer the requisite elucidatory context: advances in particle physics are currently limited by the size of the particle collider we can build.

(3) However, he may not be able to justify this view beyond specific examples, simply because – to draw from the words of a theoretical physicist from many years ago, that theorists “require only a pen and paper to work” – it is possible to predict the world at a much lower cost than one would incur to actually build and study the future.

Plotz subsequently, but thankfully briefly, loses the plot when he asks Cixin whether he thinks mathematics belongs in science, and to which Cixin provides a circuitous non-answer that somehow misses the obvious: science’s historical preeminence began when natural philosophers began to encode their observations in a build-as-you-go, yet largely self-consistent, mathematical language (my favourite instance is the invention of non-Euclidean geometry that enabled the theories of relativity). So instead of belonging within one of the two, mathematics is – among other things – better viewed as a bridge.

A journey through Twitter and time, with the laws of physics

Say you’re in a dark room and there’s a flash. The light travels outward in all directions from the source, and the illumination seems to expand in a sphere. This is a visualisation of how the information contained in light becomes distributed through space.

But even though this is probably what you’d see if you observed the flash with a very high speed camera, it’s not the full picture. The geometry of the sphere captures only the spatial component of the light’s journey. It doesn’t say anything about the time. We can infer that from how fast the sphere expands but that’s not an intrinsic property of the sphere itself.

To solve this problem, let’s assume that we live in a world with two spatial dimensions instead of three (i.e. length and breadth only, no depth). When the flash goes off in this world, the light travels outward in an expanding circle, which is the two-dimensional counterpart of a sphere. Because light moves at a constant speed, the circle grows steadily: if it is 2 cm wide 1 second after the flash, it will be 4 cm wide after 2 seconds, 6 cm wide after 3 seconds, 8 cm wide after 4 seconds. And so forth.

If you photographed the circles at each of these moments and put the pictures together, you’d see something like this (not to scale):

And if you looked at this stack of circles from under/behind, you’d see what physicists call the light cone.

Credit: Stib/Wikimedia Commons, CC BY-SA 3.0

The cone is nothing but a stack of circles of increasing diameter. The circumference of each circle represents the extent to which the light has spread out in space at that time. So the farther into the future of an event – such as the flash – you go, the wider the light cone will be.

(The reason we assumed we live in a world of two dimensions instead of three should be clearer now. In our three-dimensional reality, the light cone would assume a four-dimensional shape that can be quite difficult to visualise.)

According to the special theory of relativity, all future light cones must be associated with corresponding past light cones, and light always flows from the past to the future.

To understand what this means, it’s important to understand the cones as exclusionary zones. The diameter of the cone at a specific time is the distance across which light has moved in that time. So anything that moves slower – such as a message written on a piece of paper tied to a rock thrown from A to B – will be associated with a narrower cone between A and B. If A and B are so far apart that even light couldn’t have spanned them in the given time, then B is going to be outside the cone emerging from A, in a region officially called elsewhere.
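To make the ‘exclusionary zone’ idea concrete, here is a minimal sketch (mine, not from any textbook or from the original post) that classifies an event B relative to the future light cone of an event A in the toy two-dimensional world above, using nothing more than the spatial distance between the events and the time elapsed:

    # A minimal sketch: is event B inside, on, or outside ("elsewhere")
    # the future light cone of event A? Events are (t, x, y) in seconds
    # and metres, in a world with two spatial dimensions.

    C = 299_792_458.0  # speed of light in m/s

    def classify(event_a, event_b):
        ta, xa, ya = event_a
        tb, xb, yb = event_b
        dt = tb - ta
        spatial = ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5
        if dt <= 0:
            return "not in A's future"
        if spatial < C * dt:
            return "inside the light cone: A could have caused B"
        if spatial == C * dt:
            return "on the light cone: only light from A could reach B"
        return "elsewhere: no signal from A could have reached B"

    # The flash happens at the origin at t = 0; a rock lands 100 m away 1 s later.
    print(classify((0.0, 0.0, 0.0), (1.0, 100.0, 0.0)))    # inside the light cone
    print(classify((0.0, 0.0, 0.0), (1e-9, 100.0, 0.0)))   # elsewhere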

Now, light is just one way to encode information. But since nothing can move faster than the speed of light, the cones in the diagram above work for all kinds of information, i.e. any other medium will simply be associated with narrower cones, but the general principles depicted in the diagram will hold.

For example, here’s something that happened on Twitter earlier today. I spotted the following tweet at 9.15 am:

When scrolling through the replies, I noticed that one of Air Vistara’s senior employees had responded to the complaint with an apology and an assurance that it would be fixed.

Taking this to be an admission of guilt – and, by proxy, an admission that there had actually been a mistake – I retweeted the tweet at 9.16 am. However, only a minute later, another account discovered that the label of ‘professor’ didn’t work with the ‘male’ option either, ergo the glitch didn’t have so much to do with the user’s gender as with the algorithm simply being broken. A different account brought this to my attention at 9.30 am.

So here we have two cones of information that can be recast as the cones of causality, intersecting at @rath_shyama’s tweet. The first cone of causality is the set of all events in the tweet’s past whose information contributed to it. The second cone of causality represents all events in whose past the tweet lies, such as @himdaughter’s, the other accounts’ and my tweets.

As it happens, Twitter interferes with this image of causality in a peculiar way (Facebook does, too, but not as conspicuously). @rath_shyama published her tweet at 8.02 am, @himdaughter quote-tweeted her at 8.16 am and I retweeted @himdaughter at 9.16 am. But by 9.30 am, the information cone had expanded enough for me to know that my retweet was possibly mistaken. Let’s designate this last bit of information M.

So if I had un-retweeted @himdaughter’s tweet at, say, 9.31 am, I would effectively have removed an event from the timeline that actually occurred before I could have had the information to act on it (i.e., M). The issue is that Twitter doesn’t record (at least not publicly anyway) the time at which people un-retweet tweets. If it had, then there would have been proof that I acted in the future of M; but since it doesn’t, it will look like I acted in the past of M. Since this is causally impossible, the presumption arises that I had the information about M before others did, which is false.
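If Twitter did record un-retweets, the fix would be conceptually simple. Here is a hypothetical sketch – not Twitter’s actual data model or API, just an illustration with made-up identifiers and times – of an append-only log that timestamps every action, including un-retweets, so the causal order can always be reconstructed:

    # A hypothetical sketch: an append-only log of timestamped actions.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        at: str        # timestamp, e.g. "09:16"
        user: str      # who acted
        kind: str      # e.g. "retweet", "un_retweet", "reply"
        target: str    # which tweet was acted on

    log: list[Action] = []

    def record(at: str, user: str, kind: str, target: str) -> None:
        log.append(Action(at, user, kind, target))

    record("09:16", "me", "retweet", "himdaughter/quote-tweet")
    record("09:30", "other_account", "reply", "my/retweet")         # M arrives
    record("09:31", "me", "un_retweet", "himdaughter/quote-tweet")  # after M

    # Because the un-retweet is timestamped, replaying the log shows it happened
    # in the future of M; without that record, the un-retweet simply vanishes
    # and the order of events can no longer be reconstructed.
    for action in sorted(log, key=lambda a: a.at):
        print(action)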

This serves as an interesting commentary on the nature of history. It is not possible for Twitter’s users to remember historical events on its platform in the right order simply because Twitter is memoryless when it comes to one of the actions it allows. As a journalist, therefore, there is a bit of comfort in thinking about the pre-Twitter era, when all newsworthy events were properly timestamped and archived by the newspapers of record.

However, I can’t let my mind wander too far back, lest I stagger into the birth of the universe, when all that existed was a bunch of particles.

We commonly perceive that time has moved forward because we also observe useful energy becoming useless energy. If nothing aged, if nothing grew weaker or deteriorated in material quality – if there was no wear-and-tear – we should be able to throw away our calendars and pretend all seven days of the week are the same day, repeated over and over.+

Scientists capture this relationship between time and disorderliness in the second law of thermodynamics. This law states that the entropy – the amount of energy that can’t be used to perform work – of a closed system can never decrease. It can either remain constant or increase. So time does not exist as an entity in and of itself but only seems to, as a measure of the increase in entropy (at a given temperature). We say a system has moved away from a point in its past and towards a point in its future if its entropy has gone up.
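A toy model makes this concrete. The sketch below is my own illustration, with made-up numbers: 1,000 particles start in the left half of a box and hop randomly between halves, and the entropy – counted here as the logarithm of the number of ways of splitting the particles between the halves – climbs towards its maximum and then stays there.

    # A toy illustration of the second law: particles diffusing between the
    # two halves of a box. S/k = ln C(N, n_left), the log of the number of
    # microstates consistent with n_left particles on the left.
    import math
    import random

    N = 1000
    n_left = N  # start with every particle on the left: a highly ordered state

    def entropy(n_left: int, n_total: int) -> float:
        # ln of the binomial coefficient C(n_total, n_left), in units of k_B
        return (math.lgamma(n_total + 1)
                - math.lgamma(n_left + 1)
                - math.lgamma(n_total - n_left + 1))

    for step in range(5001):
        if random.random() < n_left / N:
            n_left -= 1   # a randomly chosen particle hops left -> right
        else:
            n_left += 1   # a randomly chosen particle hops right -> left
        if step % 1000 == 0:
            print(f"step {step:5d}  left={n_left:4d}  S/k = {entropy(n_left, N):8.2f}")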

However, while this works just fine with macroscopic stuff like matter, things are a bit different with matter’s smallest constituents: the particles. There are no processes in this realm of the quantum whose passage will tell you which way time has passed – at least, there aren’t supposed to be.

There’s a type of particle called the B0 meson. In an experiment whose results were announced in 2012, physicists found unequivocal proof that this particle transformed into another one faster than the inverse process. This discrepancy provides an observer with a way to tell which way time is moving.

The experiment also remains the only occasion till date on which scientists have been able to show that the laws of physics don’t apply the same forward and backward in time. If they did, the forward and backward transformations would have happened at the same rate, and an observer wouldn’t have been able to tell if she was watching the system move into the future or into the past.

But with Twitter, it would seem we’re all clearly aware that we’re moving – inexorably, inevitably – into the future… or is that the past? I don’t know.

+ And if capitalism didn’t exist: in capitalist economies, inequality always seems to increase with time.

Chromodynamics: Gluons are just gonzo

One of the more fascinating bits of high-energy physics is the branch of physics called quantum chromodynamics (QCD). Don’t let the big name throw you off: it deals with a bunch of elementary particles that have a property called colour charge. And one of these particles creates a mess of this branch of physics because of its colour charge – so much so that it participates in the story that it is trying to shape. What could be more gonzo than this? Hunter S. Thompson would have been proud.

Just as electrons have an electric charge, the particles studied by QCD have a colour charge. It doesn’t correspond to a colour of any kind; it’s just a funky name.

(Richard Feynman wrote about this naming convention in his book, QED: The Strange Theory of Light and Matter (pp. 163, 1985): “The idiot physicists, unable to come up with any wonderful Greek words anymore, call this type of polarization by the unfortunate name of ‘color,’ which has nothing to do with color in the normal sense.”)

The fascinating thing about these QCD particles is that they exhibit a property called colour confinement. It means that particles with colour charge can’t ever be isolated. They’re always to be found only in pairs or bigger clumps. They can be isolated in theory if the clumps are heated to the Hagedorn temperature: 1,000 billion billion billion K. But the bigness of this number has ensured that this temperature has remained theoretical. They can also be isolated in a quark-gluon plasma, a superhot, superdense state of matter that has been created fleetingly in particle physics experiments like the Large Hadron Collider. The particles in this plasma quickly collapse to form bigger particles, restoring colour confinement.

There are two kinds of particles that are colour-confined: quarks and gluons. Quarks come together to form bigger particles called mesons and baryons. The aptly named gluons are the particles that ‘glue’ the quarks together.

The force that acts between quarks and gluons is called the strong nuclear force. But this is misleading. The gluons actually mediate the strong nuclear force. A physicist would say that when two quarks exchange gluons, the quarks are being acted on by the strong nuclear force.

Because protons and neutrons are also made up of quarks and gluons, the strong nuclear force holds the nucleus together in all the atoms in the universe. Breaking this force releases enormous amounts of energy – like in the nuclear fission that powers atomic bombs and the nuclear fusion that powers the Sun. In fact, 99% of a proton’s mass comes from the energy of the strong nuclear force. The quarks contribute the remaining 1%; gluons are massless.

When you pull two quarks apart, you’d think the force between them would weaken. It doesn’t; it actually increases. This is very counterintuitive. For example, the gravitational force exerted by Earth drops off the farther you get from it. The electromagnetic force between an electron and a proton decreases the more they move apart. It’s only the strong nuclear force that grows stronger as the two particles it acts on move apart. Frank Wilczek called this a “self-reinforcing, runaway process”. This behaviour of the force is what makes colour confinement possible.

However, in 1973, Wilczek, David Gross and David Politzer found that the strong nuclear force doesn’t behave this way at every separation. The runaway growth matters up to a distance of around 1 fermi (0.000000000000001 metres, slightly larger than the diameter of a proton); at separations much smaller than this, the force becomes progressively feebler, dwindling asymptotically towards zero as the quarks get closer together. This is called asymptotic freedom: at vanishingly small distances, quarks behave almost as if they were free particles. Gross, Politzer and Wilczek won the Nobel Prize for physics in 2004 for their work.
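The standard way physicists express this is through the ‘running’ of the strong coupling with energy: probing shorter distances means probing higher energies, where the coupling is smaller. The sketch below is a rough illustration using the textbook one-loop formula, with the measured coupling at the Z boson mass as input; it is not a precision calculation.

    # A rough sketch of the one-loop running of the strong coupling alpha_s:
    # it shrinks as the probing energy Q grows (shorter distances) and grows
    # as Q falls towards ~1 GeV. Asymptotic freedom, in numerical form.
    import math

    ALPHA_S_MZ = 0.118   # measured strong coupling at the Z boson mass
    M_Z = 91.19          # GeV
    N_F = 5              # number of quark flavours light enough to matter here

    def alpha_s(q_gev: float) -> float:
        b0 = 11 - 2 * N_F / 3
        return ALPHA_S_MZ / (1 + ALPHA_S_MZ * b0 / (2 * math.pi) * math.log(q_gev / M_Z))

    for q in (2, 10, 91.19, 1000, 6500):
        print(f"Q = {q:7.2f} GeV  ->  alpha_s ≈ {alpha_s(q):.3f}")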

In the parlance of particle physics, what makes asymptotic freedom possible is the fact that gluons emit other gluons. How else would you explain the strong nuclear force becoming stronger as the quarks move apart – if not for the gluons that the quarks are exchanging becoming more numerous as the distance increases?

This is the crazy phenomenon that you’re fighting against when you’re trying to set off a nuclear bomb. This is also the crazy phenomenon that will one day lead to the Sun’s death.

The first question anyone would ask now is – doesn’t asymptotic freedom violate the law of conservation of energy?

The answer lies in the nothingness all around us.

The vacuum of deep space in the universe is not really a vacuum. It’s got some energy of itself, which astrophysicists call ‘dark energy’. This energy manifests itself in the form of virtual particles: particles that pop in and out of existence, living for far shorter than a second before dissipating into energy. When a charged particle pops into being, its charge attracts other particles of opposite charge towards itself and repels particles of the same charge away. This is high-school physics.

But when a charged gluon pops into being, something strange happens. An electron has one kind of charge, the positive/negative electric charge. But a gluon contains a ‘colour’ charge and an ‘anti-colour’ charge, each of which can take one of three values. So the virtual gluon will attract other virtual gluons depending on their colour charges and intensify the colour charge field around it, and also change its colour according to whichever particles are present. If this had been an electron, its electric charge and the opposite charge of the particle it attracted would cancel the field out.

This multiplication is what leads to the build up of energy when we’re talking about asymptotic freedom.

Physicists refer to the three values of the colour charge as blue, green and red. (This is more idiocy – you might as well call them ‘baboon’, ‘lion’ and ‘giraffe’.) If a blue quark, a green quark and a red quark come together to form a hadron (a class of particles that includes protons and neutrons), then the hadron will have a colour charge of ‘white’, becoming colour-neutral. Anti-quarks have anti-colour charges: antiblue, antigreen, antired. When a red quark and an antired anti-quark meet, they will annihilate each other – but not so when a red quark and an antiblue anti-quark meet.

Gluons complicate this picture further because, in experiments, physicists have found that gluons behave as if they have both colour and anti-colour. In physical terms, this doesn’t make much sense, but it does in mathematical terms (which we won’t get into). Let’s say a proton is made of one red quark, one blue quark and one green quark. The quarks are held together by gluons, which also have a colour charge. So when two quarks exchange a gluon, the colours of the quarks change. If a blue quark emits a blue-antigreen gluon, it turns green, whereas the green quark that receives the gluon turns blue. Ultimately, if the proton is ‘white’ overall, then the three quarks inside are responsible for maintaining that whiteness. This is the law of conservation of colour charge.
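Here is a toy bit of bookkeeping – my own simplification, ignoring the quantum superpositions actually involved – that tracks the colour swap described above and checks that the proton stays ‘white’ after a gluon is emitted and absorbed:

    # Toy colour bookkeeping: a blue quark emits a blue-antigreen gluon and
    # turns green; a green quark absorbs it and turns blue. The proton stays
    # 'white' (one red, one green, one blue) throughout.
    from collections import Counter

    def emit_gluon(quarks: list[str], emitter: int, gluon: tuple[str, str]) -> None:
        colour, anticolour = gluon
        assert quarks[emitter] == colour, "a quark can only emit its own colour"
        quarks[emitter] = anticolour.removeprefix("anti")

    def absorb_gluon(quarks: list[str], absorber: int, gluon: tuple[str, str]) -> None:
        colour, anticolour = gluon
        assert quarks[absorber] == anticolour.removeprefix("anti")
        quarks[absorber] = colour

    def is_white(quarks: list[str]) -> bool:
        return Counter(quarks) == Counter(["red", "green", "blue"])

    proton = ["red", "green", "blue"]
    gluon = ("blue", "antigreen")
    emit_gluon(proton, emitter=2, gluon=gluon)     # the blue quark turns green
    absorb_gluon(proton, absorber=1, gluon=gluon)  # the green quark turns blue
    print(proton, is_white(proton))                # ['red', 'blue', 'green'] True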

Gluons emit gluons because of their colour charges. When quarks exchange gluons, the quarks’ colour charges also change. In effect, the gluons are responsible for quarks getting their colours. And because the gluons participate in the evolution of the force that they also mediate, they’re just gonzo: they can interact with themselves to give rise to new particles.

A gluon can split up into two gluons or into a quark-antiquark pair. Say a quark and an antiquark are joined together. If you try to pull them apart by supplying some energy, the gluon between them will ‘swallow’ that energy and split up into one antiquark and one quark, giving rise to two quark-antiquark pairs (and also preserving colour-confinement). If you supply even more energy, more quark-antiquark pairs will be generated.

For these reasons, the strong nuclear force is called a ‘colour force’: it manifests in the movement of colour charge between quarks.

In an atomic nucleus, say there is one proton and one neutron. Each particle is made up of three quarks. The quarks in the proton and the quarks in the neutron interact with each other because they are close enough to be colour-confined: the proton-quarks’ gluons and the neutron-quarks’ gluons interact with each other. So the nucleus is effectively one ball of quarks and gluons. However, one nucleus doesn’t interact with that of a nearby atom in the same way because they’re too far apart for gluons to be exchanged.

Clearly, this is quite complicated – not just for you and me but also for scientists, and for the supercomputers that perform these calculations for large experiments in which billions of protons are smashed into each other to see how the particles interact. Imagine: there are six types, or ‘flavours’, of quarks, each carrying one of three colour charges. Then there are the gluons, which between them carry eight independent combinations of colour and anti-colour charges.

The Wire
September 20, 2017

Featured image credit: Alexas_Fotos/pixabay.

A gear-train for particle physics

Clockwork theory has been revived and reformulated by scientists from CERN to solve a difficult problem at the heart of particle physics.

It has come under scrutiny at various times from multiple prominent physicists and thinkers, but it’s not hard to see why, when the idea of ‘grand unification’ was first set out, it seemed plausible to so many. The first time it was seriously considered was about four decades ago, shortly after physicists had realised that two of the four fundamental forces of nature were in fact a single unified force if you ramped up the energy at which it acted (electromagnetic + weak = electroweak). The thought that followed was simply logical: what if, at some extremely high energy (like in the Big Bang), all four forces unified into one? This was 1974.

There has been no direct evidence of such grand unification yet. Physicists don’t know how the electroweak force will unify with the strong nuclear force – let alone gravity, a problem that actually birthed one of the most powerful mathematical tools in an attempt to solve it. Nonetheless, they think they know the energy at which such grand unification should occur if it does: the Planck scale, around 10^19 GeV. This is about as much energy as is contained in a full tank of petrol, but it’s stupefyingly large when you have to accommodate all of it in a particle that’s 10^-15 metres wide.
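A quick back-of-the-envelope check of that comparison, using round numbers (this sketch and its constants are mine, not from the original article):

    # Planck energy, converted to joules and to litres of petrol.
    PLANCK_ENERGY_GEV = 1e19
    GEV_TO_JOULE = 1.602e-10
    PETROL_MJ_PER_LITRE = 34.2   # approximate energy density of petrol

    planck_joules = PLANCK_ENERGY_GEV * GEV_TO_JOULE
    litres = planck_joules / (PETROL_MJ_PER_LITRE * 1e6)
    print(f"Planck energy ≈ {planck_joules:.2e} J ≈ {litres:.0f} litres of petrol")
    # ≈ 1.6e9 J, or roughly 47 litres: about a tank of petrol, as stated above.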

This is where particle accelerators come in. The most powerful of them, the Large Hadron Collider (LHC), uses powerful magnetic fields to accelerate protons to close to light-speed, when their energy approaches about 7,000 GeV. But the Planck energy is still about a million billion times – 15 orders of magnitude – higher, which means it’s not something we might ever be able to attain on Earth. Nonetheless, physicists’ theories show that that’s where all of our physical laws should be created, where the commandments by which all that exists does should be written.

… Or is it?

There are many outstanding problems in particle physics, and physicists are desperate for a solution. They have to find something wrong with what they’ve already done, something new or a way to reinterpret what they already know. The clockwork theory is of the third kind – and its reinterpretation begins by asking physicists to dump the idea that new physics is born only at the Planck scale. So, for example, it suggests that the effects of quantum gravity (a quantum-mechanical description of gravity) needn’t necessarily become apparent only at the Planck scale but at a lower energy itself. But even if it then goes on to solve some problems, the theory threatens to present a new one. Consider: If it’s true that new physics isn’t born at the highest energy possible, then wouldn’t the choice of any energy lower than that just be arbitrary? And if nothing else, nature is not arbitrary.

To its credit, clockwork sidesteps this issue by simply not trying to find ‘special’ energies at which ‘important’ things happen. Its basic premise is that the forces of nature are like a set of interlocking gears moving against each other, transmitting energy – rather potential – from one wheel to the next, magnifying or diminishing the way fundamental particles behave in different contexts. Its supporters at CERN and elsewhere think it can be used to explain some annoying gaps between theory and experiment in particle physics, particularly the naturalness problem.

Before the Higgs boson was discovered, physicists had predicted, based on the properties of other particles and forces, that its mass would be very high. But when the boson’s discovery was confirmed at CERN in January 2013, its measured mass implied that the universe would have to be “the size of a football” – which is clearly not the case. So why is the Higgs boson’s mass so low, so unnaturally low? Scientists have advanced many new theories that try to solve this problem but their solutions often require the existence of other, hitherto undiscovered particles.

Clockwork’s solution is a way in which the Higgs boson’s interaction with gravity – rather, gravity’s associated energy – is mediated by a string of effects described in quantum field theory that tamp down the boson’s mass. In technical parlance, the boson’s mass becomes ‘screened’. An explanation for this that’s both physical and accurate is hard to draw up because of various abstractions. So, as Université Libre de Bruxelles physicist Daniele Teresi suggests, imagine this series: Χ = 0.5 × 0.5 × 0.5 × 0.5 × … × 0.5. Even if each step reduces Χ’s value by only a half, it is already an eighth after three steps; after four, a sixteenth. So the effect can get drastic quickly because it’s exponential.
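Here is the same point as a few lines of code – the halving factor and the number of ‘gears’ are arbitrary choices of mine for illustration, not values from the clockwork papers:

    # A modest per-gear suppression compounds exponentially along the chain.
    factor = 0.5
    x = 1.0
    for n in range(1, 11):
        x *= factor
        print(f"after {n:2d} gears: {x:.6f}")
    # After 3 gears: 0.125 (an eighth); after 4: 0.0625 (a sixteenth); after 10,
    # the original value has been screened down to less than a thousandth.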

And the theory provides a mathematical toolbox that allows for all this to be achieved without the addition of new particles. This is advantageous because it makes clockwork relatively more elegant than another theory that seeks to solve the naturalness problem, called supersymmetry, or SUSY for short. Physicists also like SUSY because it allows for a large energy hierarchy: a distribution of particles and processes at energies between electroweak unification and grand unification, instead of leaving the region bizarrely devoid of action like the Standard Model does. But then SUSY predicts the existence of 17 new particles, none of which have been detected yet.

Even more, as Matthew McCullough, one of clockwork’s developers, showed at an ongoing conference in Italy, its solutions for a stationary particle in four dimensions exhibit conceptual similarities to Maxwell’s equations for an electromagnetic wave in a conductor. The existence of such analogues is reassuring because it recalls nature’s tendency to be guided by common principles in diverse contexts.

This isn’t to say clockwork theory is it. As physicist Ben Allanach has written, it is a “new toy” and physicists are still playing with it to solve different problems. Just that in the event that it has an answer to the naturalness problem – as well as to the question why dark matter doesn’t decay, e.g. – it is notable. But is this enough: to say that clockwork theory mops up the math cleanly in a bunch of problems? How do we make sure that this is how nature works?

McCullough thinks there’s one way, using the LHC. Very simplistically: clockwork theory induces fluctuations in the probabilities with which pairs of high-energy photons are created at some energies at the LHC. These should be visible as wavy squiggles in a plot with energy on the x-axis and events on the y-axis. If these plots can be obtained and analysed, and the results agree with clockwork’s predictions, then we will have confirmed what McCullough calls an “irreducible prediction of clockwork gravity”, the case of using the theory to solve the naturalness problem.
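For a sense of what that would look like, here is a cartoon – emphatically not the actual clockwork prediction, just made-up numbers of mine – of a smoothly falling diphoton spectrum with a small oscillation layered on top:

    # A cartoon of a 'wavy squiggle' on a smoothly falling event spectrum.
    import math

    def smooth_background(e_gev: float) -> float:
        return 1e6 * (e_gev / 200.0) ** -4              # steeply falling event count

    def with_squiggle(e_gev: float) -> float:
        modulation = 1 + 0.05 * math.sin(e_gev / 40.0)  # 5% wiggle, made up
        return smooth_background(e_gev) * modulation

    for e in range(200, 1001, 100):
        print(f"{e:5d} GeV  background {smooth_background(e):10.1f}  "
              f"with squiggle {with_squiggle(e):10.1f}")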

To recap: No free parameters (i.e. no new particles), conceptual elegance and familiarity, and finally a concrete and unique prediction. No wonder Allanach thinks clockwork theory inhabits fertile ground. On the other hand, SUSY’s prospects have been bleak since at least 2013 (if not earlier) – and it is one of the more favoured theories among physicists to explain physics beyond the Standard Model, physics we haven’t observed yet but generally believe exists. At the same time, and it bears reiterating, clockwork theory will also have to face down a host of challenges before it can be declared a definitive success. Tik tok tik tok tik tok

The INO story

The INO’s is a great story but stands unfortunately to become a depressing parable at the moment – the biggest bug yet in a spider’s web spun of bureaucracy and misinformation.

A longer story about the India-based Neutrino Observatory that I’d been wanting to do since 2012 was finally published today (to be clear, I hit the ‘Publish’ button today) on The Wire. Apart from myself, four people worked on it: two amazing reporters, one crazy copy-editor and one illustrator. I don’t mean to diminish the role of the illustrator, especially in setting the piece’s mood quite well, but only that the reporters and the copy-editor did a stupendous job of getting the story from 0 to 1. After all, all I’d had was an idea.

The INO’s is a great story but stands unfortunately to become a depressing parable at the moment – the biggest bug yet in a spider’s web spun of bureaucracy and misinformation. As told on The Wire, the INO is India’s most badass science experiment yet but its inherent sophistication has become both its strength and its weakness: a strength for being able to yield cutting-edge science, a weakness for being the ideal target of stubborn activism, unreason and, consequently and understandably, fatigue on the part of the physicists.

Here on out, it doesn’t look like the INO will get built by 2020, and it doesn’t look like it will be the same thing it started out as when it does get built. Am I disappointed by that? Of course – and bad question. I’m rooting for the experiment, yes? I’m not sure – and much better question. In the last few years, in which the project’s plans gained momentum, some unreasonable activists were able to cash in on the Department of Atomic Energy’s generally cold-blooded way of dealing with disagreement (the DAE is funding the INO). At the same time, the INO collaboration wasn’t as diligent as it ought to have been with the environmental impact assessment report (getting it compiled by a non-accredited agency). Finally, the DAE itself just stood back and watched as the scientists and activists battled it out.

Who lost? Take a guess. I hope the next Big Science experiment fares better (I’m probably not referring to LIGO because it has a far stronger global/American impetus while the INO is completely indigenously motivated).

Discussing some motivations behind a particle physics FAQ

Particle physics is in the middle of a quandary. Let’s use it to catch up on all that we’ve missed.

First, there is information. From information, people distill knowledge, and from knowledge, wisdom. Information is available on a lot of topics and in varying levels of detail. Knowledge on those topics is harder to find – and wisdom is harder still. This is because knowledge and wisdom require work (to fact-check and interpret) on information and knowledge, respectively. And people can be selective about what they choose to work on. One popular consequence of such choices is that most people are more aware of business information, business knowledge and business wisdom than they are of scientific information, scientific knowledge and scientific wisdom. This graduated topical awareness reflects in how we produce and consume the news.


News articles written on business issues rarely see fit to delve into historical motivations or explainer-style elucidations because the audience is understood to be better aware of what business is about. Business information and knowledge are widespread and so is, to some extent, business wisdom, and articles can take advantage of conclusions made in each sphere, jumping between them to tease out more information, knowledge and wisdom. On the other hand, articles written on some topics of science – such as particle physics – have to start from the informational level before wisdom can be presented. This places strong limits on how the article can be structured or even styled.

There are numerous reasons for why this is so, especially for topics like particle physics, which I regularly (try to) write on. I’m drawn toward three of them in particular: legacy, complexity and pacing. Legacy is the size of the body of work that is directly related to the latest developments in that work. So, the legacy of the LHC stretches back to include the invention of the cyclotron in 1932 – and the legacy of the Higgs boson stretches back to 1961. Complexity is just that but becomes more meaningful in the context of pacing.

A consequence of business developments being reported on fervently is that there is at least some (understandable) information in the public domain about all stages of the epistemological evolution. In other words, the news reports keep pace with new information, new knowledge, new wisdom. With particle physics, they don’t – they can’t. The reports are separated by some time, according to when the bigger developments occurred, and in the intervening span of time, new information/knowledge/wisdom would’ve arisen that the reports will have to accommodate. And how much has to be accommodated can be exacerbated by the complexity of what has come before.


But there is a catch here – at least as far as particle physics is concerned because it is in a quandary these days. The field is wide open because physicists have realised two things: first, that their theoretical understanding of physics is far, far ahead of what their experiments are capable of (since the 1970s and 1980s); second, that there are inconsistencies within the theories themselves (since the late 1990s). Resolving these issues is going to take a bit of time – a decade or so at least (although we’re likely in the middle of such a decade) – and presents a fortunate upside to communicators: it’s a break. Let’s use it to catch up on all that we’ve missed.

The break (or a rupture?) can also be utilised for what it signifies: a gap in information/knowledge. All the information/knowledge/wisdom that has come before is abruptly discontinued at this point, allowing communicators to collect them in one place, compose them and disseminate them in preparation for whatever particle physics will unearth next. And this is exactly what motivated me to write a ‘particle physics FAQ’, published on The Wire, as something anyone who’s graduated from high-school can understand. I can’t say if it will equip them to read scientific papers – but it will definitely (and hopefully) set them on the road to asking more questions on the topic.

Hopes for a new particle at the LHC offset by call for more data

The data pointing at signs of a new particle isn’t good enough to use yet.

At a seminar at CERN on Tuesday, scientists working with the Large Hadron Collider provided the latest results from the particle-smasher at the end of its operations for 2015. The results make up the most detailed measurements of the properties of some fundamental particles made to date at the highest energy at which humankind has been able to study them.

The data discussed during the seminar originated from observations at two experiments: ATLAS and CMS. And while the numbers were consistent between them, neither experimental collaboration could confirm any of the hopeful rumours doing the rounds – that a new particle might have been found. However, they were able to keep the excitement going by not denying some of the rumours either. All they said was they needed to gather more data.

One rumour that was neither confirmed nor denied was the existence of a particle at an energy of about 750 GeV (that’s about 750x the mass of a proton). That’s a lot of mass for a single particle – the heaviest known elementary particle is the top quark, weighing 175 GeV. As a result, it’d be extremely short-lived (if it existed) and rapidly decay into a combination of lighter particles, which are then logged by the detectors.

When physicists find such groups of particles, they use statistical methods and simulations to reconstruct the properties of the particle that could’ve produced them in the first place. The reconstruction shows up as a bump in the data where otherwise there’d have been a smooth curve.
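The basic arithmetic of that reconstruction is simple enough to sketch. Assuming the decay products are two photons, their ‘invariant mass’ – computed from their energies and directions – clusters at the mass of the parent particle, while random pairings produce a smooth background. A minimal illustration, not the collaborations’ actual analysis code:

    # Invariant mass of a photon pair from energies (GeV) and flight directions.
    import math

    def invariant_mass(e1, theta1, phi1, e2, theta2, phi2):
        """Photons are massless, so the magnitude of their momentum equals E."""
        def momentum(e, theta, phi):
            return (e * math.sin(theta) * math.cos(phi),
                    e * math.sin(theta) * math.sin(phi),
                    e * math.cos(theta))
        p1, p2 = momentum(e1, theta1, phi1), momentum(e2, theta2, phi2)
        e_tot = e1 + e2
        p_tot_sq = sum((a + b) ** 2 for a, b in zip(p1, p2))
        return math.sqrt(max(e_tot ** 2 - p_tot_sq, 0.0))

    # Two 375 GeV photons emitted back to back reconstruct to 750 GeV:
    print(invariant_mass(375, math.pi / 2, 0.0, 375, math.pi / 2, math.pi))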

This is the ATLAS plot displaying said bump (look in the area over 750 GeV on the x-axis):

ATLAS result showing a small bump in the diphoton channel at 750 GeV in the run-2 data. Credit: CERN

It was found in the diphoton channel – i.e. the heavier particle decayed into two energetic photons which then impinged on the ATLAS detector. So why aren’t physicists celebrating if they can see the bump?

Because it’s not a significant bump. Its local significance is 3.6σ (that’s 3.6 times more than the average size of a fluctuation) – which is pretty significant by itself. But the more important number is the global significance that accounts for the look-elsewhere effect. As experimental physicist Tommaso Dorigo explains neatly here,

… you looked in many places [in the data] for a possible [bump], and found a significant effect somewhere; the likelihood of finding something in the entire region you probed is greater than it would be if you had stated beforehand where the signal would be, because of the “probability boost” of looking in many places.

The global significance is calculated by subtracting the effect of this boost. In the case of the 750-GeV particle, the bump stood at a dismal 1.9σ. A minimum of 3 is required to claim evidence and 5 for a discovery.
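A toy simulation shows why the correction matters: scan enough bins of pure noise and a ‘3σ’ bump will show up somewhere roughly half the time. The number of bins and pseudo-experiments below are made up for illustration.

    # Toy Monte Carlo of the look-elsewhere effect: how often does background-
    # only noise produce a >= 3 sigma fluctuation *somewhere* among many bins?
    import random

    N_BINS = 500        # how many places we 'look' (made-up number)
    N_EXPERIMENTS = 2000
    THRESHOLD = 3.0     # local significance, in sigma

    hits = 0
    for _ in range(N_EXPERIMENTS):
        # each bin's deviation is a standard normal variate, i.e. its 'local
        # significance' in sigma under the background-only hypothesis
        if any(random.gauss(0, 1) >= THRESHOLD for _ in range(N_BINS)):
            hits += 1

    print(f"Chance of a >= {THRESHOLD} sigma bump somewhere: {hits / N_EXPERIMENTS:.2%}")
    # With 500 bins this comes out to roughly 50%: a '3 sigma' bump somewhere
    # is unremarkable, which is why the global significance is the number to watch.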

A computer’s reconstruction of the diphoton event observed by the ATLAS detector. Credit: ATLAS/CERN

Marumi Kado, the physicist who presented the ATLAS results, added that when the bump was studied across a 45 GeV swath (on the x-axis), its significance went up to 3.9σ local and 2.3σ global. Kado is affiliated with the Laboratoire de l’Accelerateur Lineaire, Orsay.

A similar result was reported by James Olsen, of Princeton University, speaking for the CMS team with a telltale bump at 745 GeV. However, the significance was only 2.6σ local and 1.2σ global. Olsen also said the CMS detector had only one-fourth the data that ATLAS had in the same channel.

Where all this leaves us is that the Standard Model, which is the prevailing theory + equations used to describe how fundamental particles behave, isn’t threatened yet. Physicists would much like it to be: though it’s been able to correctly predict the existence of many particles and fundamental forces, it’s been equally unable to explain some findings (like dark matter). And finding a particle weighing ~750 GeV, which the model hasn’t predicted so far, could show physicists what could be broken about the model and pave the way for a ‘new physics’.

However, on the downside, some other new-physics hypotheses didn’t find validation. One of the more prominent among them is called supersymmetry, SUSY for short, and it requires the existence of some heavier fundamental particles. Kado and Olsen both reported that no signs of such particles have been observed, nor of heavier versions of the Higgs boson, whose discovery was announced mid-2012 at the LHC. Thankfully they also appended that the teams weren’t done with their searches and analyses yet.

So, more data FTW – as well as looking forward to the Rencontres de Moriond (conference) in March 2016.

New LHC data has more of the same but could something be in the offing?

Run 2 results from the LHC show that QCD is scale-invariant – in keeping with the Standard Model prediction.

Dijet mass (TeV) v. no. of events. Source: ATLAS/CERN

Looks intimidating, doesn’t it? It’s also very interesting because it contains an important result acquired at the Large Hadron Collider (LHC) this year, a result that could disappoint many physicists.

The LHC reopened earlier this year after receiving multiple performance-boosting upgrades over the 18 months before. In its new avatar, the particle-smasher explores nature’s fundamental constituents at the highest energies yet, almost twice as high as in its first run. By Albert Einstein’s mass-energy equivalence (E = mc^2), the proton’s mass corresponds to an energy of almost 1 GeV (giga-electron-volt). The LHC’s beam energy, for comparison, was 3,500 GeV in the first run and is now 6,500 GeV.

At the start of December, it concluded data-taking for 2015. That data is being steadily processed, interpreted and published by the multiple topical collaborations working on the LHC. Two collaborations in particular, ATLAS and CMS, were responsible for plots like the one shown above.

This is CMS’s plot showing the same result:

Source: CMS/CERN

When protons are smashed together at the LHC, a host of particles erupt and fly off in different directions, showing up as streaks in the detectors. These streaks are called jets. The plots above look particularly at pairs of jets produced by quarks, anti-quarks or gluons in the proton-proton collisions (these are in fact the smaller particles that make up protons).

The sequence of black dots in the ATLAS plot shows the number of jets (i.e. pairs of particles) observed at different energies. The red line shows the predicted number of events. They both match, which is good… to some extent.

One of the biggest, and certainly among the most annoying, problems in particle physics right now is that the prevailing theory that explains it all is unsatisfactory – mostly because it has some really clunky explanations for some things. The theory is called the Standard Model and physicists would like to see it disproved, broken in some way.

In fact, those physicists will have gone to work today to be proved wrong – and be sad at the end of the day if they weren’t.

Maintenance work underway at the CMS detector, the largest of the five that straddle the LHC. Credit: CERN

The annoying problem at its heart

The LHC chips in by providing two kinds of opportunities: extremely sensitive particle-detectors that can provide precise measurements of fleeting readings, and extremely high collision energies so physicists can explore how some particles behave in thousands of scenarios in search of a surprising result.

So, the plots above show three things. First, the predicted event-count and the observed event-count are a match, which is disappointing. Second, the biggest deviation from the predicted count is highlighted in the ATLAS plot (look at the red columns at the bottom between the two blue lines). It’s small, corresponding to two standard deviations (symbol: σ) from the normal. Physicists need at least three standard deviations (3σ) from the normal for license to be excited.

But this is the most important result (an extension to the first): The predicted event-count and the observed event-count are a match across 6,000 GeV. In other words: physicists are seeing no cause for joy, and all cause for revalidating a section of the Standard Model, in a wide swath of scenarios.

The section in particular is called quantum chromodynamics (QCD), which deals with how quarks, antiquarks and gluons interact with each other. As theoretical physicist Matt Strassler explains on his blog,

… from the point of view of the highest energies available [at the LHC], all particles in the Standard Model have almost negligible rest masses. QCD itself is associated with the rest mass scale of the proton, with mass-energy of about 1 GeV, again essentially zero from the TeV point of view. And the structure of the proton is simple and smooth. So QCD’s prediction is this: the physics we are currently probing is essentially scale-invariant.

Scale-invariance is the idea that two particles will interact the same way no matter how energetic they are. To be sure, the ATLAS/CMS results suggest QCD is scale-invariant in the 0-6,000 GeV range. There’s a long way to go – in terms of energy levels and future opportunities.

Something in the valley

The folks analysing the data are helped along by previous results at the LHC as well. For example, with the collision energy having been ramped up, one would expect to see particles of higher energies manifesting in the data. However, the heavier the particle, the wider the bump in the plot and the more focusing that’ll be necessary to really tease out the peak. This is one of the plots that led to the discovery of the Higgs boson:


Source: ATLAS/CERN

That bump between 125 and 130 GeV is what was found to be the Higgs, and you can see it’s more of a smear than a spike. For heavier particles, that smear’s going to be wider with longer tails on the sides. So any particle that weighs a lot – a few thousand GeV – and is expected to be found at the LHC would have a tail showing in the lower energy LHC data. But no such tails have been found, ruling out heavier stuff.

And because many replacement theories for the Standard Model involve the discovery of new particles, analysts will tend to focus on particles that could weigh less than about 2,000 GeV.

In fact that’s what’s riveted the particle physics community at the moment: rumours of a possible new particle in the range 1,900-2,000 GeV. A paper uploaded to the arXiv preprint server on December 10 shows a combination of ATLAS and CMS data logged in 2012, and highlights a deviation from the normal that physicists haven’t been able to explain using information they already have. This is the relevant plot:

Source: arXiv:1512.03371v1


The ones in the middle and on the right are particularly relevant. They each show the probability of an event (observed as a bump in the data, not shown here) in which some heavier particle decays into one of two different final states: a W and a Z boson (WZ), or two Z bosons (ZZ). Bosons are a type of fundamental particle and carry forces.

The middle chart implies that the mysterious event is at least 1,000-times less likely to occur than normal and the one on the right implies the event is at least 10,000-times less likely to occur than normal. And both readings are at more than 3σ significance, so people are excited.

The authors of the paper write: “Out of all benchmark models considered, the combination favours the hypothesis of a [particle or its excitations] with mass 1.9-2.0 [thousands of GeV] … as long as the resonance does not decay exclusively to WW final states.”

But as physicist Tommaso Dorigo points out, these blips could also be a fluctuation in the data, which does happen.

Although the fact that the two experiments see the same effect … is suggestive, that’s no cigar yet. For CMS and ATLAS have studied dozens of different mass distributions, and a bump could have appeared in a thousand places. I believe the bump is just a fluctuation – the best fluctuation we have in CERN data so far, but still a fluke.

There’s a seminar due to happen today at the LHC Physics Centre at CERN where data from the upgraded run is due to be presented. If something really did happen in those ‘valleys’, which were filtered out of a collision energy of 8,000 GeV (basically twice the beam energy, where each beam is a train of protons), then those events would’ve happened in larger quantities during the upgraded run and so been more visible. The results will be presented at 1930 IST. Watch this space.

Featured image: Inside one of the control centres of the collaborations working on the LHC at CERN. Each collaboration handles an experiment, or detector, stationed around the LHC tunnel. Credit: CERN.

A new particle to break the Standard Model?

The Wire
July 2, 2015

Scientists at the Large Hadron Collider particle-smasher have unearthed data from an experiment conducted in 2012 that shows signs of a new particle. If confirmed, its discovery could herald a new period of particle physics research.

On June 2, members of the ATLAS detector collaboration uploaded a paper to the arXiv pre-print server discussing the possible sighting of a new particle, which hasn’t been named yet. If the data is to be believed, it weighs as much as about 2,000 protons, making it 12-times heavier than the heaviest known fundamental particle, the top quark. It was spotted in the first place when scientists found an anomalous number of ‘events’ recorded by ATLAS at a particular energy scale, more than predicted by the Standard Model set of theories.

Actually, the Standard Model is more like a collection of principles and rules that dictate the behaviour of fundamental particles. Since the 1960s, it has dominated particle physics research but of late has revealed some weaknesses by not being able to explain the causes behind some of its own predictions. For example, two physicists – Peter Higgs and François Englert – predicted the existence of a Higgs boson in 1964, using ideas that would become part of the Standard Model. The particle was found at the LHC in 2012. However, the model has no explanation for why the particle is much lighter than it was thought to be.

If its existence is confirmed, the new probable-particle sighted by ATLAS could force the Standard Model to pave way for a more advanced, and comprehensive, theory of physics and ultimately of nature. However, proving that it exists could take at least a year.

The scientists found the probable-particle in data that was recorded by a detector trained to look for the decays of W and Z bosons. These are two fundamental particles that mediate the weak nuclear force that’s responsible for radioactivity. A particle’s mass is equivalent to its energy, which every particle wants to shed if it has too much of it. So heavier particles often break down into smaller clumps of energy, which manifest as smaller particles. Similarly, at the 2 TeV energy scale, scientists spotted a more-than-predicted clumping of energy in the W/Z channel – often the sign of a new particle.

The chance of the telltale spike in the data belonging to a fluke or impostor event, on the other hand, was 0.00135 (with 0 being ‘no chance’ and 1, certainty) – enough to claim evidence but insufficient to claim a discovery. For the latter, the chances will have to be reduced to at least 0.000000287. In the future, this is what scientists intent on zeroing in on the particle will be gunning for.
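Those two thresholds are just the one-sided tail probabilities of a normal distribution at 3σ (‘evidence’) and 5σ (‘discovery’). A quick check:

    # Converting significance (in sigma) to the tail probability quoted above.
    from scipy.stats import norm

    for sigma in (3, 5):
        p = norm.sf(sigma)   # survival function: probability of a fluctuation >= sigma
        print(f"{sigma} sigma  ->  p = {p:.3g}")
    # 3 sigma -> p ≈ 0.00135;  5 sigma -> p ≈ 2.87e-07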

The LHC shut in early 2013 for upgrades, waking up in May 2015 to smash protons together at almost twice the energy and detect them with twice the sensitivity as before. The ATLAS data about the new particle was gathered in 2012, when the LHC was still smashing protons at a collision energy of 8 TeV (more than 8,000 proton-masses). In its new avatar, it will be smashing them at 13 TeV and with increased intensity as well. As a result, rarer events like this probable-particle’s formation could happen more often, making it easier for scientists to spot and validate them.

If unfortunately the probable-particle is found to have been something else, particle physicists will be disappointed. Since the LHC kicked off in 2009, physicists have been eager to find some data that will “break” the Standard Model, expose cracks in its foundations that could be taken advantage of to build a theory that can explain the Higgs boson’s mass or why gravity is so much weaker than the other three fundamental forces.

The ATLAS team acknowledges a paper from members of the CMS collaboration, also at the LHC, from last year that found similar but weaker signs of the same particle.

All goes well on LHC 2.0's first day back in action

It finally happened! The particle-smasher known as the Large Hadron Collider is back online after more than two years, during which its various components were upgraded to make it even meaner. A team of scientists and engineers gathered at the collider’s control room at CERN over the weekend – giving up Easter celebrations at home – to revive the giant machine so it could resume feeding its four detectors with high-energy collisions of protons.

Before the particles enter the LHC itself, they are pre-accelerated to 450 GeV by the Super Proton Synchrotron. At 11.53 am (CET), the first beam of pre-accelerated protons was injected into the LHC at Point 2 (see image), starting a clockwise journey. By 11.59 am, it’d been reported crossing Point 3, and at 12.01 pm, it was past Point 5. The anxiety in the control room was palpable when an update was posted in the live-blog: “The LHC operators watching the screen now in anticipation for Beam 1 through sector 5-6”.

Beam 1 going from Point 2 to Point 3 during the second run of the Large Hadron Collider’s first day in action. Credit: CERN

Finally, at 12.12 pm, the beam had crossed Point 6. By 12.27 pm, it had gone full-circle around the LHC’s particle pipeline, signalling that the pathways were defect-free and ready for use. Already, as and when the beam snaked through a detector without glitches, some protons were smashed into static targets, producing a so-called splash of particles, like sparks, and groups of scientists erupted in cheers.

Both Rolf-Dieter Heuer, the CERN Director-General, and Frederick Bordry, Director for Accelerators and Technology, were present in the control room. Earlier in the day, Heuer had announced that another beam of protons – going anti-clockwise – had passed through the LHC pipe without any problems, providing the preliminary announcement that all was well with the experiment. In fact, CERN’s scientists were originally supposed to have run these beam-checks a week ago, when an electrical glitch spotted at the last minute thwarted them.

In its new avatar, the LHC sports almost double the energy it ran at, before it shut down for upgrades in early-2013, as well as more sensitive collision detectors and fresh safety systems. For the details of the upgrades, read this. For an ‘abridged’ version of the upgrades together with what new physics experiments the new LHC will focus on, read this. Finally, here’s to another great year for high-energy physics!
