Root Privileges

Disagree as a matter of principle

The story of dust

What is dust?

It feels ridiculous just asking that question sitting in India. Dust is everywhere. On the roads, in your nose, in your lungs. You lock up your house, go on a month-long holiday and come back, and there’s a fine patina on the table. It’s inside your laptop, driving the cooling fan nuts.

It is also in the atmosphere, in orbit around Earth, in outer space even. It makes up nightmarish storms on Mars. Philip Pullman and Steven Erikson have written books fantasising about it. Dust is omnipresent. (The only dustless places I’ve seen are in stock photos strewn across the internet.)

But what exactly is it, and where did it all come from?



Dust is fine particulate matter. It originates from a tremendous variety of sources. The atmospheric – or aeolian – dust we are so familiar with is composed of small particles sheared off of solid objects. For example, fast-blowing winds carry particles away from loose, dry soil into the air, giving rise to what is called fugitive dust. Another source is the smoke from exhaust pipes.

Yet another is mites of the family Pyroglyphidae. They eat flakes of skin, including those shed by humans, and digest them with enzymes that stay on in their poop. In your house, exposure to their poop (considered a form of dust) can trigger asthma attacks.

Winds lift particulate matter off Earth’s surface and transport it into the troposphere. Once dust gets up there, it acts like an aerosol, trapping heat below it and causing Earth’s surface to warm. Once it collects in sufficient quantities, it begins to affect the weather of the regions below, including rainfall patterns.

Dust particles smaller than 10 microns get into your lungs and affect your respiratory health. They conspire with other pollutants and, taking advantage of slow-moving winds, stagnate over India’s National Capital Region during winter. Particles smaller than 2.5 microns “increase age-specific mortality risk” (source) and send hospital admissions soaring.

There is also dust that travels thousands of kilometres to affect far-flung parts of the world. The “Sahara is the world’s largest source of desert dust”, according to one study. In June this year, the Atlantic Ocean’s tropical area experienced its dustiest period in 15 years when a huge billow blew over from northeast Chad towards the mid-Americas. According to NASA’s Earth Observatory, Saharan dust “helps build beaches in the Caribbean and fertilises soils in the Amazon.”

But speaking of dust that migrates large distances, the transatlantic plume seems much less of a journey than the dust brought to Earth by meteorites that have travelled hundreds of thousands of kilometres through space. As these rocks streak towards the ground, the atmosphere burns dust-like matter off their surfaces, leaving it suspended in the upper atmosphere.

Atoms released by these particles into the mesosphere drift into the planet’s circulation system, moving from pole to pole over many months. They interact with other particles to leave behind a trail of charged particles. Scientists then use radar to track these particles to learn more about the circulation itself. Some dust particles of extraterrestrial origin also reach Earth’s surface in time. They could carry imprints of physical and chemical reactions they might have experienced in outer space, even from billions of years ago.



In the mid-20th century, researchers used optical data and mathematical arguments to estimate that about four million tonnes of meteoric dust slammed into our planet’s atmosphere every year. This was cause for alarm: the figure suggested that the number of meteorites in space was much higher than thought, and in turn that the threat to our satellites could have been underestimated. More careful assessments later brought the figure down. A 2013 review states that 10-40 tonnes of meteoric dust slams into Earth’s atmosphere every day.

Still, this figure isn’t low – and its effects are exacerbated by the debris humans themselves are putting in orbit around Earth. The Wikipedia article on ‘space debris’ carefully notes, “As of … July 2016, the United States Strategic Command tracked a total of 17,852 artificial objects in orbit above the Earth, including 1,419 operational satellites.” But only one line later, the number of objects smaller than 1 cm explodes to 170 million.

If a mote of dust weighing 0.00001 kg carried by a 1.4 m/s breeze strikes your face, you are not going to feel anything. This is because its momentum – the product of its mass and velocity – is very low. But when a particle weighing one-hundredth of a gram strikes a satellite at a relative velocity of 1.5 km/s, its momentum jumps a thousandfold. Suddenly, it is able to damage critical components and sensitively engineered surfaces, ending million-dollar, multi-year missions in seconds. One study suggests such particles, if travelling fast enough, can also generate tiny shockwaves.
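Since the mote and the particle have the same mass (0.00001 kg is one-hundredth of a gram), the entire jump comes from the velocity. A few lines of arithmetic, using the figures from the paragraph above, bear this out:

```python
# Momentum = mass x velocity. The particle mass is the same in both cases,
# so the momentum ratio is just the ratio of the two speeds.
mote_mass = 0.00001    # kg: one-hundredth of a gram
breeze_speed = 1.4     # m/s: a gentle breeze
impact_speed = 1500.0  # m/s: 1.5 km/s relative velocity in orbit

p_breeze = mote_mass * breeze_speed  # kg.m/s against your face: imperceptible
p_impact = mote_mass * impact_speed  # kg.m/s against a satellite

print(round(p_impact / p_breeze))  # 1071: roughly a thousandfold jump
```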

Before our next stop on the Dust Voyage, let’s take a small break in sci-fi. The mid-century overestimation of meteoric dust flux may have prompted Arthur C. Clarke to write his 1961 novel, A Fall of Moondust. In the story, a cruise-liner called the Selene takes tourists over a basin of superfine dust apparently of meteoric origin. But one day, a natural disaster causes the Selene to sink into the dust, trapping its passengers in life-threatening conditions. After much despair, a rescue mission is mounted when an astronomer spots a heat-trail pointing to the Selene’s location from space, from onboard a spacecraft called Lagrange II.

This name is a reference to the famous Lagrange points. As Earth orbits the Sun, and the Moon orbits Earth, their combined gravitational fields give rise to five points in space where the force acting on an object is just right for it to maintain its position relative to Earth and the Sun. These are called L1, L2, L3, L4 and L5.

A contour plot of the effective potential of the Earth-Sun system, showing the five Lagrange points. Credit: NASA and Xander89, CC BY 3.0

The Indian Space Research Organisation (ISRO) plans to launch its Sun-studying Aditya satellite to L1. This is useful because at L1, Aditya’s view of the Sun won’t be blocked by Earth. However, objects at L1, L2 and L3 are in unstable equilibrium. Without occasional station-keeping manoeuvres, they tend to drift out of position.
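For a sense of where L1 and L2 actually sit, there is a standard first-order approximation, r ≈ R(m/3M)^(1/3), where R is the Sun-Earth distance, m is Earth’s mass and M is the Sun’s mass. (The formula isn’t from the article, and the exact positions require solving a quintic equation; this is only the leading-order estimate.) A quick sketch:

```python
# Approximate distance of the Sun-Earth L1/L2 points from Earth, using the
# first-order formula r = R * (m / (3 * M))**(1/3).
R = 1.496e11        # Sun-Earth distance, metres
M_sun = 1.989e30    # mass of the Sun, kg
M_earth = 5.972e24  # mass of Earth, kg

r = R * (M_earth / (3 * M_sun)) ** (1 / 3)
print(f"{r / 1e9:.2f} million km")  # about 1.5 million km from Earth
```

This is why missions at L1, such as SOHO, are often described as sitting roughly 1.5 million km sunward of Earth.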

But this isn’t so with L4 and L5, where objects remain in a more stable equilibrium. And like anything that’s been lying around for a while, these points collect dust.

In the 1950s, the Polish astronomer Kazimierz Kordylewski claimed to have spotted two clouds of dust at L4 and L5. These nebulous collections of particulate matter have since been called Kordylewski clouds. Other astronomers have contested their existence, however. For example, the Japanese Hiten spacecraft could not find any notable dust concentrations in the L4 and L5 regions when it passed through them in the early 1990s. Some argued that Hiten could have missed them because the dust clouds are too spread out.



Only two weeks ago, Hungarian astronomers claimed to have confirmed the presence of dust clouds in these regions (their papers here and here). Because the L4 and L5 regions are of interest for future space missions, astronomers will now have to validate this finding and – if they do – assess the density of dust and the attendant probabilities of threat.

Unlike Kordylewski, who took photographs from a mountaintop, the Hungarian group banked on dust’s ability to polarise light. Light is electromagnetic radiation. Each wave of light consists of an electric and a magnetic field oscillating perpendicular to each other. Imagine various waves of light approaching dust, their electric fields pointed in arbitrary directions. After they strike the dust, however, the particles polarise the waves, causing all of the electric fields to line up with one particular orientation.

When astronomers detect such light, they know that it has encountered dust in its path. Using different instruments and analytical techniques, they can then map the distribution of dust in space through which the light has passed.

This is how, for example, the European Space Agency’s Planck telescope was able to draw up a view of dust around the Milky Way.

A map of dust in and around the Milky Way galaxy, as observed by the ESA Planck telescope. Credit: NASA

That’s billions upon billions of tonnes. Don’t your complaints about dust around the house pale in comparison?

And even at this scale, it has been a nuisance. We don’t know if the galaxy is complaining but Brian Keating certainly did.

In March 2014, Keating and his team at the Harvard-Smithsonian Center for Astrophysics announced that they had found signs that the universe’s volume had increased by a factor of 10⁸⁰ in just 10⁻³³ seconds a moment after its birth in the Big Bang. About 380,000 years later, radiation left over from the Big Bang – called the cosmic microwave background (CMB) – came into being. Keating and co. were using the BICEP2 detector at the South Pole to find imprints of cosmic inflation on the CMB. The smoking gun: light of a certain wavelength polarised by gravitational waves from the early universe.

While the announcement was made with great fanfare – as the “discovery of the decade” and whatnot – their claim quickly became suspect. Data from the Planck telescope and other observatories soon showed that what Keating’s team had found was in fact light polarised by galactic dust. Just like that, their ambition of winning a Nobel Prize came crashing down. Ash to ash, dust to dust.

You probably ask, “Hasn’t it done enough? Can we stop now?” No. We must persevere, for dust has done even more, and we have come so close. For example, look at the Milky Way dust-map. Where could all that dust have come from?

This is where the story of dust takes a more favourable turn. We have all heard it said that we are made of stardust. While it would be futile to try and track where the dust of ourselves came from, understanding dust itself requires us to look to the stars.

The storms on Earth or Mars that stir dust up into the air are feeble breaths against the colossal turbulence of stellar ruination. Stars can die in one of many ways depending on their size. Supernovae are the most spectacular. In a standard type Ia supernova, an entire white dwarf star undergoes runaway nuclear fusion, completely disintegrating and throwing matter out at over 5,000 km/s. More massive stars undergo core collapse, expelling their outermost layers into space in a death-sneeze before what is left implodes into a neutron star or a black hole.

Any which way, the material released into space forms giant clouds that disperse slowly over millions of years. If they are in the presence of a black hole, they can be trapped in an accretion disk around it, accelerated, heated and energised by radiation and magnetic fields. The luckier motes may float away to encounter other stars, planets or other objects, or even collide with other dust and gas clouds. Such interactions are very difficult to model – but there is no doubt that they are all essentially driven by the four fundamental forces of nature.

One of them is the force of gravity. When a gas/dust cloud becomes so large that its collective gravitational pull keeps it from dispersing, it could collapse to form another star, and live to see another epoch.



The Cat’s Paw Nebula, imaged here by NASA’s Spitzer Space Telescope, lies between 4,200 lightyears and 5,500 lightyears from Earth. The green areas show regions where radiation from hot stars collided with large molecules and small dust grains called polycyclic aromatic hydrocarbons, causing them to fluoresce. Caption and credit: NASA/JPL-Caltech, Wikimedia Commons

In this way, stars are cosmic engines. They keep matter – including dust – in motion. They may not be the only things to do so but, given the presence of stars throughout the (observable) universe, they certainly play a major part. When they are not coming to life or going out of it, their gravitational pull influences the trajectories of other, smaller bodies around them, including comets, asteroids and other spacefaring rocks.

The Solar System itself is thought to have condensed out of a large disk of dirt and dust made of various elements surrounding the young Sun – a disk of leftovers from the star’s birth. Different planets formed based on the availability of different materials in different volumes at different times. Jupiter is believed to have come first, and the inner planets, including Earth, last.

But no matter; life here had whatever it needed to take root. Scientists are still figuring out what those ingredients could have been and their provenance. One theory is that they included compounds of carbon and hydrogen called polycyclic aromatic hydrocarbons, and that these first formed – you guessed it – among the dust meandering through space.

They could then have been ferried to Earth by meteors and comets, perhaps swung towards Earth’s orbit by the Sun’s gravity. When a comet gets closer to a star, for instance, material on its surface begins to evaporate, forming a streaky tail of gas and dust. When Earth passes through a region where the tail’s remnants and other small, rocky debris have lingered, they enter the atmosphere as a meteor shower.

Dust really is everywhere, and it seldom gets the credit it is due. It has been and continues to be a pesky part of daily life. However, unlike our search thus far for extraterrestrial companionship, we are not alone in feeling beset by dust.

The Wire
November 10, 2018

NatGeo clickbait

A National Geographic article published on November 6 carried a surprising headline:

Earth has two extra, hidden ‘moons’

The lede followed through:

Earth’s moon may not be alone. After more than half a century of speculation and controversy, Hungarian astronomers and physicists say they have finally confirmed the existence of two Earth-orbiting “moons” entirely made of dust.

This sounds strange because there has been little else in the news about dust-moons in the last few years. Major discoveries are seldom made in an instant; they can often be anticipated years in advance through discussions among scientists. However, the rest of the article put paid to the doubt.

The ‘dusty moons’ National Geographic alludes to are in fact the Kordylewski dust clouds. Late last month, a group of Hungarian astronomers confirmed the presence of these clouds, located in two different directions at about the same distance from Earth as the Moon.

Astronomers have been debating the existence of these clouds since the 1950s. In that decade, an astronomer named Kazimierz Kordylewski climbed a mountain and photographed parts of the night sky where these clouds had been predicted by other astronomers before him to exist. The dust clouds have since been called Kordylewski clouds in his honour.

However, confirming their presence has taken this long, even though they’re so close to Earth, because of their brightness – or rather the lack of it. They are too faint to spot because the stars in their background far outshine them, even at this distance. But they aren’t completely obscured either: they reflect sunlight in feeble amounts, giving themselves away to the persistent observer.

Although Kazimierz Kordylewski found the dust clouds this way, the Hungarian group was more sophisticated. According to their two published papers (here and here), they took advantage of dust’s ability to polarise light. Waves of light are in fact waves of electric and magnetic fields undulating through space at right angles to each other.

The electric fields of different waves point in different directions. But when they hit a dust particle, they get polarised: the electric fields all line up. This is how sunglasses work: the lenses are filters that don’t let light of certain polarisations pass through, cutting glare.
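The filtering that makes sunglasses work follows what physicists call Malus’s law, which the article doesn’t name: the fraction of polarised light a filter transmits falls off as the square of the cosine of the angle between the light’s polarisation and the filter’s axis. A minimal sketch:

```python
import math

def transmitted_fraction(theta_degrees):
    """Malus's law: fraction of polarised light a polarising filter passes,
    for an angle theta between the light's polarisation and the filter axis."""
    return math.cos(math.radians(theta_degrees)) ** 2

print(round(transmitted_fraction(0), 2))   # 1.0  - aligned: everything passes
print(round(transmitted_fraction(60), 2))  # 0.25 - a quarter gets through
print(round(transmitted_fraction(90), 2))  # 0.0  - crossed: glare blocked
```

Glare reflected off water or glass is strongly polarised in one plane, which is why a lens oriented across that plane cuts it so effectively.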

Like all astronomical discoveries, their finding will have to be validated by independent observers before the community reaches a consensus. But in the meantime, the claimed discovery is a matter of concern because of where the Kordylewski clouds are located: at two Lagrange points.

The Lagrange – or libration – points are places in space where the gravitational fields of the Sun, Moon and Earth tug at each other such that an object at that point will be in an Earth-synchronous orbit around the Sun.

Scientists like stationing satellites at these points because they can stay in orbit with much less fuel than if they were stationed elsewhere. However, now we (may) know the Kordylewski clouds are located at the points labelled L4 and L5. This means satellites stationed there will have to carry protective shielding. Otherwise, dust particles could damage sensitive instruments and end the mission before its time.

However, the Kordylewski clouds can’t be classified as moons, although they can be as natural satellites. Judit Slíz-Balogh, a coauthor of the current study and an astronomer at the Eötvös Loránd University, calls them “pseudo-satellites”. The distinction is important because, even when bracketed between single- or double-quotes, the label of moon can’t be applied to a dust cloud.

The International Astronomical Union (IAU), which decides the meaning of astronomical terms like planet, star, etc., defines a moon only as a planet’s natural satellite. However, that isn’t license to call every natural satellite a moon. (In fact, one of the definitions of a planet would make our Moon a planet, too.)

But a size-based organisational paradigm would imply that an object much smaller than the moon would have to be called a moonlet. For example: Saturn’s moon Pan, which is 35 km at its widest. Something even smaller will have to make do with the catch-all label ‘particles’. Then again, the paradigm falters with the overall form of the satellite. For another example: the dust, ice and rocks that make up Saturn’s rings are called ‘ring particles’ even though some of them weigh a few quintals.

Carolyn Collins Petersen, a former member of the Hubble Space Telescope instrument team, wrote for ThoughtCo. earlier this year, “There is no official definition of ‘moonlet’ and ‘ring particle’ by the … IAU. Planetary scientists have to use common sense to distinguish between these objects.”

Importantly, it would be counterproductive to argue that anything goes because there is no technical definition. To the contrary, especially with science communication, it is important to use words whose meanings are generally agreed upon. ‘Natural satellites of dust’ would have helped that cause better than ‘”Moons” made of dust’.

The Wire
November 9, 2018

Let the arrogators write

Bora Zivkovic, the former ‘blogfather’ of the Scientific American blogs network, said it best: journalists are temporary experts. Reporters have typically got a few days to write something up on which scientists have been working for years, if not decades. They flit from paper to paper, lab to lab; without the luxury of a beat, they often cover condensed matter physics one day, spaceflight the next, ribosomes the day after, and exomoons after that. Over time, they’re the somewhat-jacks of many trades, but there’s only one that they’re really trying to master: story-telling.

The editors they work with to have these stories published are also somewhat-jacks in their own right. Many of them will have been reporters, probably still are from time to time, and further along the road (by necessity) to understanding what will get stories read.

However, I’ve often observed a tendency among many of the scientists I work with to trivialise these proficiencies, as if they’re products of a lesser skill, a lesser perseverance even. One or two have been so steeped in the notion that science reporters and editors wouldn’t have jobs if scientists hadn’t undertaken their pursuit of truth that they treat editors with naked disdain. Some others are less contemptuous but still aver that journalists are at best adjacent to reality, and lower on some imagined hierarchy as a result.
If these claims don’t immediately seem ludicrous to you, then you’re likely choosing to not see why.

First: If a person in any profession believes that it’s easy to reach the masses, and cites Facebook and Twitter as proof, then it’s not just that they don’t know how journalism works. It’s that they don’t know what journalism is, and won’t admit that their personal definition of it is wrong. The fourth estate is responsible for keeping democracy functional. It’s not as simple as putting all available information in the public domain or breaking complex ideas down to digestible tidbits. It’s about figuring out how “write a story people will like reading” is tied to “speak truth to power”.

Second: I’m not going to say reporting and editing engage the mind as much as science does, because I wouldn’t know how to go about proving such a thing. By the same token, those who believe reporting and editing are somehow ‘softer’, and therefore ‘lesser’, pursuits (machismo?), or that they’re less engaging or worthwhile, are making the same mistake. There’s no way to tell. There’s also no admission of the alternative: that editors and reporters – by devoting themselves to deceptively simple tasks like stating facts and piecing narratives together – may find greater meaning, agency and purpose in them than the scientist is able to comprehend.

Third: This tendency to debase communication and its attendant skills is bizarre considering the scientist himself intends to communicate (and it’s usually a ‘him’ that’s doing the debasing). If I had to take a guess, I’d say these beliefs exist because they’re proxies for a subconscious reluctance to share the power that is their knowledge, and the expression of such beliefs a desperate attempt to exert control over what they may believe is rightfully theirs. There’s some confidence in such speculation as well because I actually know one scientist who believes scientists attempting to communicate their work are betraying their profession. But that story’s for another day.

All these reasons together are why I’d ask the arrogators to write more for news outlets instead of asking them to stop. It’s not that we get to cut off their ability to reach the masses – that could worsen their sense of entitlement – but that we have an opportunity to chamfer their privilege upon the whetstone of public engagement. This, after all, is one of the purposes of journalism. It works even when we’re letting the powerful write instead of the powerless, because its strength lies as much in the honest conduct of it as in its structure. The plain-jane conveyance of information is a very small part of it all.

INO can keep env. ministry clearance

The India-based Neutrino Observatory (INO), a mega science project stranded in the regulatory boondocks since the Centre okayed it in 2012, received a small shot in the arm earlier this week.

On November 2, the National Green Tribunal (NGT) dismissed an appeal by activists against the environment ministry’s clearance for the project.

The activists had alleged that the environment ministry lacked the “competence” to assess the project and that the environmental clearance awarded by the ministry was thus invalid. But the principal bench of the NGT ruled that “it was correct on the part of the EAC and the [ministry] to appraise the project at their level”.

The INO is a Rs-1,500-crore project that aims to build and install a 50,000-tonne detector inside a mountain near Theni, Tamil Nadu, to study natural elementary particles called neutrinos.

The environment ministry issued a clearance in June 2011. But the NGT held it in abeyance in March 2017 and asked the INO project members to apply for a fresh clearance. G. Sundarrajan, the head of an NGO called Poovulagin Nanbargal that has been opposing the INO, also contended that the project was within 5 km of the Mathikettan Shola National Park. So the NGT also directed the INO to get an okay from the National Board for Wildlife.

Poovulagin Nanbargal (Tamil for ‘Friends of Flora’) and other activists have raised doubts about the integrity of the rock surrounding the project site, damage to water channels in the area and even whether nuclear waste will be stored onsite. However, all these concerns have been allayed or debunked by the collaboration and the media. (At one point, former president A.P.J. Abdul Kalam wrote in support of the project.)

Sundarrajan has also been supported by Vaiko, leader of the Marumalarchi Dravida Munnetra Kazhagam party.

In June 2017, INO members approached the Tamil Nadu State Environmental Impact Assessment Authority. After several meetings, it stated that the environment ministry would have to assess the project in the applicable category.

The ministry provided the consequent clearance in March 2018. Activists then alleged that this process was improper and that the ministry’s clearance would have to be rescinded. The NGT struck this down.
As a result, the INO now has all but one clearance – that of the National Board for Wildlife – that it needs before the final step: approaching the Tamil Nadu Pollution Control Board for the final okay. Once that is received, construction of the project can get underway.

Once operational, the INO is expected to tackle multiple science problems. Chief among them is the neutrino mass hierarchy: the relative masses of the three types of neutrinos, an important yet missing detail that holds clues about the formation and distribution of galaxies in the universe.

The Wire
November 4, 2018

Doubts cast on LIGO results… again

A group of Danish physicists that doubted last year whether two American experiments to detect gravitational waves had actually confused noise for signal has reared its head once more. New Scientist reported earlier this week that the group, from the Niels Bohr Institute in Copenhagen, independently analysed the experimental data and found the results to be an “illusion” instead of the actual thing.

The twin Laser Interferometer Gravitational-wave Observatories (LIGO), located in the American states of Washington and Louisiana, made the world’s first direct detection of gravitational waves in September 2015. The labs behind the observatories announced the results in February 2016 after multiple rounds of checking and rechecking. The announcement bagged three people instrumental in setting up LIGO the Nobel Prize for physics in 2017.

However, in June that year, Andrew Jackson, the spokesperson for the Copenhagen group, first raised doubts about LIGO’s detection. He claimed that because of the extreme sensitivity of LIGO to noise, and insufficient efforts on scientists’ part to eliminate such noise from their analysis, what the ‘cleaned-up’ data shows as signs of gravitational waves is actually an artefact of the analysis itself.

As David Reitze, LIGO executive director, told Ars Technica, “The analysis done by Jackson et al. looks for residuals after subtracting the best fit waveform from the data. Because the subtracted theoretical waveforms are not a perfect reconstruction of the true signal, … [they] find residuals at a very low level and claim that we have instrumental artefacts that we don’t understand. So therefore he believes that we haven’t detected a gravitational wave.”

Scientists working with LIGO rebutted Jackson’s claims back then. The fulcrum of their argument rested on the fact that LIGO data is very difficult to analyse and that Jackson and co. had made some mistakes in their independent analysis. They also visited the Niels Bohr Institute to work with Jackson and his team, and held extended discussions with him in teleconferences, according to Ars Technica. But Jackson hasn’t backed down.

LIGO detects gravitational waves using a school-level physics concept called interference. When two light waves encounter each other, two things happen. In places where a crest of one wave meets a crest of the other, they combine to form a bigger crest; similarly for troughs. Where a crest of one wave meets the trough of another, they cancel each other. As a result, when the recombined wave hits a surface, the viewer sees a fringe pattern: alternating bands of light and shadow. The light areas denote where one crest met another and the shadow, where one crest met a trough.

Each LIGO detector consists of two four-kilometre-long corridors connected in an ‘L’ shape. A machine at the vertex fires a laser beam down each corridor. The beams bounce off mirrors at the far ends and come back towards the vertex to interfere with each other. The lasers are tuned such that, in the absence of a gravitational wave, they recombine in destructive interference: full shadow.

When a gravitational wave passes through LIGO, distorting space as it does, one arm of LIGO becomes shorter than the other for a fleeting moment. This causes the laser pulse in that corridor to return sooner than the other, producing a fringed interference pattern. This alerts scientists to the presence of a gravitational wave. The instrument is so sensitive that it can detect distortions in space as small as one-hundredth the diameter of a proton.
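The arm-length logic can be captured in a toy model. The sketch below assumes LIGO’s 1,064-nanometre infrared laser and ignores everything that makes the real instrument work at its actual sensitivity (Fabry-Perot cavities, power recycling, noise subtraction):

```python
import math

WAVELENGTH = 1.064e-6  # metres: LIGO's 1064-nm infrared laser

def output_intensity(path_difference):
    """Relative brightness at the detector for a given extra path length in
    one arm, with the arms tuned to full destructive interference at zero."""
    # The extra half-turn (pi) encodes the tuning to 'full shadow' at rest.
    phase = 2 * math.pi * path_difference / WAVELENGTH + math.pi
    return math.cos(phase / 2) ** 2

print(round(output_intensity(0.0), 6))             # 0.0: arms equal, full shadow
print(round(output_intensity(WAVELENGTH / 2), 6))  # 1.0: half-wave shift, bright
```

Even a path difference far smaller than a wavelength nudges the output away from zero, which is why the interferometer can register sub-proton-scale changes in arm length.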

At the same time, because it’s so sensitive, LIGO also picks up all kinds of noise in its vicinity, including trucks passing by a few kilometres away and little birds perching on the detector housing. So analysts regularly have dry-runs with the instrument to understand what noise in the signal looks like. When they do detect a gravitational wave, they subtract the noise from the data to see what the signal looks like.

But this is a horribly oversimplified version. Data analysts – and their supercomputers – take months to clean up, study and characterise the data. The LIGO collaboration also subjects the final results to multiple rechecks to prevent premature or (inadvertent) false announcements. The analysts’ work banks on its own field of study called numerical relativity.

Since the September 2015 detection, LIGO has made five more gravitational wave detections. Some of these have been together with other observatories in the world. Such successful combined efforts lend further credence to LIGO’s claims. The prime example of this was the August 2017 discovery of gravitational waves from a merger of neutron stars in a galaxy 130-140 million lightyears away. Over 70 other observatories and telescopes around the world joined in the effort to study and characterise the merger.

This is why LIGO scientists have asserted that when Jackson claims they’ve made a mistake, their first response is to ask his team to recheck its calculations. That response hasn’t changed now that Jackson and co. have hit back a second time, but a better understanding of the problem has emerged: is LIGO doing enough to help others make sense of its data?

For one, the tone of some of these responses hasn’t gone down well. Peter Coles, a theoretical cosmologist at the Cardiff and Maynooth Universities, wrote on his blog:

I think certain members – though by no means all – of the LIGO team have been uncivil in their reaction to the Danish team, implying that they consider it somehow unreasonable that the LIGO results should be subject to independent scrutiny. I am not convinced that the unexplained features in the data released by LIGO really do cast doubt on the detection, but unexplained features there undoubtedly are. Surely it is the job of science to explain the unexplained?

From LIGO’s perspective, the fundamental issue is that their data – a part of which is in the public domain – isn’t easily understood or processed. And Jackson believes LIGO could be hiding some mistakes behind this curtain of complexity.

His and his group’s opinion, however, remains in the minority. According to the New Scientist report itself, many scientists sympathetic to Jackson’s concerns don’t think LIGO has messed up – only that it needs to do more to help independent experts understand its data better. Sabine Hossenfelder, a theoretical physicist at the Frankfurt Institute for Advanced Studies, wrote on her blog on November 1:

… the issue for me was that the collaboration didn’t make an effort helping others to reproduce their analysis. They also did not put out an official response, indeed have not done so until today. I thought then – and still think – this is entirely inappropriate of a scientific collaboration. It has not improved my opinion that whenever I raised the issue LIGO folks would tell me they have better things to do.

The LIGO collaboration finally issued a statement on November 1. Excerpt:

The features presented in Creswell et al. arose from misunderstandings of public data products and the ways that the LIGO data need to be treated. The LIGO Scientific Collaboration and Virgo Collaboration (LVC) have full confidence in our published results. We are preparing a paper that will provide more details about LIGO detector noise properties and the data analysis techniques used by the LVC to detect gravitational-wave signals and infer their source properties.

A third LIGO instrument is set to come up by 2022, this one in India. The two American detectors and other gravitational-wave observatories are all located on almost the same plane in the northern hemisphere. This limits the network’s ability to pinpoint the location of sources of gravitational waves in the universe. A detector in India would solve this problem because it would be outside the plane.

Indian scientists have also been a significant part of LIGO’s effort to study gravitational waves. Thirty-seven of them were part of a larger group of physicists awarded the Special Breakthrough Prize for fundamental physics in 2016.

The Wire
November 4, 2018

Expertise’s place

Over 1,600 scientists have signed a letter of protest addressed to the White House against its proposed definition of ‘gender’ that purportedly disidentifies transgender and intersex people. According to a press statement issued alongside the letter,

The letter was a grassroots effort. Immediately following the publication of the New York Times article about the administration’s proposal, with its “grounded in science” claim, scientists began voicing their objections on social media. Twenty-two biologists and other scientists in related fields planned and wrote the letter collaboratively.

The letter asks for the administration to withdraw the draft policy and for the petitioners’ “elected representatives to oppose its implementation”. It has been signed by over 1,600 people working as “biologists, geneticists, psychologists, anthropologists, physicians, neuroscientists, social scientists, biochemists, mental health service providers,” and in other fields.

However, subject expertise has little role to play in the context of the letter – and the letter certainly shouldn’t let the Trump administration off the hook simply because the administration believes only ‘scientific things’ are entitled to legal protection.

If technical expertise were really necessary to disabuse the Trump administration of its misbelief that gender is a biological construct, the experts at the forefront should have included those qualified to comment meaningfully on how people build and negotiate gender. But even this wouldn’t save the letter from its principal problem: it seems to be offended almost exclusively by the Trump administration’s use of the phrase “grounded in science”, and devotes three paragraphs to underlining the lack of empirical knowledge on this count. This is problematic.

In transgender individuals, the existence and validity of a distinct gender identity is supported by a number of neuroanatomical studies. Though scientists are just beginning to understand the biological basis of gender identity, it is clear that many factors, known and unknown, mediate the complex links between identity, genes, and anatomy.

In intersex people, their genitalia, as well as their various secondary sexual characteristics, can differ from what clinicians would predict from their sex chromosomes. In fact, some people will live their entire lives without ever knowing that they are intersex. The proposed policy will force many intersex people to be legally classified in ways that erase their intersex status and identity, as well as lead to more medically unnecessary and risky surgeries at birth. Such non-consensual gender assignment and surgeries result in increased health risks in adulthood and violate intersex people’s right to self-determination.

Millions of Americans identify as transgender or gender non-conforming, or have intersex bodies, and are at increased risk of physical and mental health disorders resulting from discrimination, fear for personal safety, and family and societal rejection. Multiple standards of health care for transgender and intersex people emphasise that recognising an individual’s self-identified gender, not their external genitalia or chromosomes, is the best practice for providing evidence-based, effective, and lifesaving care. Our best available evidence shows that affirmation of gender identity is paramount to the survival, health, and livelihood of transgender and intersex people.

A socio-cultural description of some of the ways in which Americans interpret gender, the challenges they may face and what they believe could be the appropriate ways to address them – all are conspicuous by their absence. People are not rallying to this cause because science doesn’t yet know; that would be disingenuous. Instead, they are speaking up because the cultural experience of gender is missing from the White House’s articulation.

Finally, more than following Trump’s draft policy into its hole of cultural elision, the letter itself seems to fail to distinguish between sex and gender. It says:

The relationship between sex chromosomes, genitalia, and gender identity is complex, and not fully understood. There are no genetic tests that can unambiguously determine gender, or even sex.

The relationship between sex chromosomes and genitalia is much better understood than the relationship between the two and gender identity. Further, sex can indeed be determined to a large extent by genetic tests. It is gender that is harder to associate with one’s genes because it is a social/cultural/political construct and genes aren’t its sole determinants. Sex is entirely biological and doctors around the world routinely determine the sex of newborns by studying their chromosomes.

The following para also notes:

In transgender individuals, the existence and validity of a distinct gender identity is supported by a number of neuroanatomical studies.

It is doubtful whether these studies demonstrate causation as well as correlation.

Notwithstanding the legal protections afforded to people of non-binary gender and the terms of their provision, the letter would have benefited from calling the policy out for framing the issue as an insular problem of science, from not putting up an equally insular counter-argument, and from being warier of the language it employs to defend its stance. As it stands, it proves to be controversial in its own right.

New anomaly at the LHC

Has new ghost particle manifested at the Large Hadron Collider?, The Guardian, October 31:

Scientists at the Cern nuclear physics lab near Geneva are investigating whether a bizarre and unexpected new particle popped into existence during experiments at the Large Hadron Collider. Researchers on the machine’s multipurpose Compact Muon Solenoid (CMS) detector have spotted curious bumps in their data that may be the calling card of an unknown particle that has more than twice the mass of a carbon atom.

The prospect of such a mysterious particle has baffled physicists as much as it has excited them. At the moment, none of their favoured theories of reality include the particle, though many theorists are now hard at work on models that do. “I’d say theorists are excited and experimentalists are very sceptical,” said Alexandre Nikitenko, a theorist on the CMS team who worked on the data. “As a physicist I must be very critical, but as the author of this analysis I must have some optimism too.”

Senior scientists at the lab have scheduled a talk this Thursday at which Nikitenko and his colleague Yotam Soreq will discuss the work. They will describe how they spotted the bumps in CMS data while searching for evidence of a lighter cousin of the Higgs boson, the elusive particle that was discovered at the LHC in 2012.

This announcement – of a possible new particle weighing about 28 GeV – is reminiscent of the 750 GeV affair. In late 2015, physicists spotted an anomalous bump in data collected by the LHC that suggested the existence of a previously unknown particle weighing about 67 times as much as a carbon atom. The data wasn’t good enough for physicists to claim they had evidence of a new particle, so they decided to get more.

This was in December 2015. By August the next year, before the new data was out, theoretical physicists had written and published over 500 papers on the arXiv preprint server about what the new particle could be and how theoretical models might have to change to make room for it. But at the 38th International Conference on High Energy Physics, LHC scientists unveiled the new data and said that the anomalous bump had vanished: what physicists had seen earlier was likely a random fluctuation in lower-quality observations.

The new announcement of a 28 GeV particle seems set for a similar course. I’m not pronouncing that no new particle will be found – that’s for physicists to determine – but only writing in defence of those who would cover this event even though it seems relatively minor and as if history is repeating itself. Anomalies like these are worth writing about because of the Standard Model of particle physics, which has historically been so good at predicting particles’ properties that even small deviations from it are big news.

At the same time, it’s big news in a specific context with a specific caveat: that we might be chasing an ambulance here. For example, The Guardian only says that the anomalous signal will have to be verified by other experiments, leaving out the part where the signal LHC scientists already have is pretty weak: 4.2σ and 2.9σ (both local, as opposed to global) in two tests of the 8 TeV data, and a deficit of 2.0σ and 1.4σ in the 13 TeV data. It also doesn’t mention the 750 GeV affair even though the two narratives already appear to be congruent.
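For readers unfamiliar with σ notation: a local significance expresses how likely it is that random noise alone would throw up a fluctuation at least that large at that point in the data. A minimal sketch of the conversion, assuming the one-sided Gaussian convention particle physicists use (the function name is my own):

```python
import math

def local_p_value(sigma):
    """One-sided tail probability of a standard normal distribution
    beyond `sigma` standard deviations – the chance that noise alone
    produces a fluctuation at least that large at a given point."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# A 4.2-sigma bump corresponds to p of roughly 1.3e-5, while the
# 5-sigma discovery threshold corresponds to p of roughly 2.9e-7 –
# odds of about 1 in 3.5 million that it's just noise.
p_bump = local_p_value(4.2)
p_discovery = local_p_value(5.0)
```

This is why a 2.9σ signal, let alone a 1.4σ deficit, is nowhere near a claim of discovery.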

If journalists leave such details out, I’ve a feeling they’re going to give their readers the impression that this announcement is more significant than it actually is. (Call me a nitpicker but I’m sure being accurate will allow engaged readers to set reasonable expectations about what to expect in the story’s next chapter as well as keep them from becoming desensitised to journalistic hype.)

Those who’ve been following physics news will be aware of the ‘nightmare scenario’ assailing particle physics, and in this context there’s value in writing about what’s keeping particle physicists occupied – especially in their largest, most promising lab.

But thanks to the 750 GeV affair, most recently, we also know that what any scientist or journalist says or does right now is moot until LHC scientists present sounder data + confirmation of a positive/negative result. And journalists writing up these episodes without a caveat that properly contextualises where a new anomaly rests on the arc of a particle’s discovery will be disingenuous if they’re going to justify their coverage based on the argument that the outcome “could be” positive.

The outcome could be negative and we need to ensure the reader remembers that. Including the caveat is also a way to do that without completely obviating the space for a story itself.

57 years after the mad bomb

Fifty-seven years ago, on October 30, 1961, the most powerful nuclear weapon in the history of nukes was detonated by the Soviets. The device was called the RDS-220 by the Soviet Union and nicknamed Tsar Bomba – ‘King of Bombs’ – by the US. It had a blast yield of 50 megatonnes (MT) of TNT, making it about 1,500 times more powerful than the Hiroshima and Nagasaki bombs together.

The detonation was conducted over the island of Novaya Zemlya, four km above ground. The Soviets had built the bomb to one-up the US, following through on a challenge Nikita Khrushchev had issued on the floor of the UN General Assembly a year earlier, promising to teach the US a lesson (the B41 nuke fielded by the US in the early 1960s had half the yield).

But despite its intimidating features and political context, the RDS-220 produced one of the cleanest nuclear explosions ever and was never tested again. The Soviets had originally intended for the RDS-220 to have a yield equivalent to 100 MT of TNT but decided against it for two reasons.

First: it was a three-stage nuke that weighed 27 tonnes and was only a little smaller than an American school bus. As a result, it couldn’t be delivered by an intercontinental ballistic missile. Maj. Andrei Durnovtsev, a decorated soldier in the Soviet Air Force, modified a Tu-95V bomber to carry the bomb and also flew it on the day of the test. The bomb had been fitted with a parachute (whose manufacture disrupted the domestic nylon hosiery industry) so that, between the bomb’s release and its detonation, the Tu-95V would have enough time to fly 45 km away from the test site. But even then, at the full 100 MT yield, Durnovtsev and his crew would almost certainly have been killed.

To improve the crew’s odds of survival to 50%, engineers halved the yield from 100 MT to 50 MT, which they did by replacing a uranium-238 tamper in the bomb with a lead one. In a thermonuclear weapon – which the RDS-220 was – a nuclear fusion reaction is set off inside a container that is explosively compressed by a nuclear fission reaction going off on the outside.

However, the Soviets took it a step further with the Tsar Bomba: the first-stage nuclear fission reaction set off a second-stage nuclear fusion reaction, which then set off a bigger fusion reaction in the third stage. The original design also included a uranium-238 tamper on the second and third stages, such that fast neutrons emitted by the fusion reactions would have kicked off a series of fission reactions accompanying the two stages. Utter madness. The engineers swapped the uranium-238 tamper for one of lead-208, which can’t be fissioned in a chain reaction and as such is a remarkably inefficient nuclear fuel.

The second reason the RDS-220’s yield was reduced pre-test was the radioactive fallout. Nuclear fusion is a much cleaner process than nuclear fission (although there are important caveats for fusion-based power generation). If the RDS-220 had gone ahead with the uranium-238 tamper on the second and third stages, its radioactive fallout alone would have accounted for fully one quarter of all the radioactive fallout from all nuclear tests in history, gently raining down over Soviet territory. The modification resulted in 97% of the bomb’s yield coming from the fusion reactions alone.

One of the more important people who worked on the bomb was Andrei Sakharov, a noted nuclear physicist and later dissident from the Soviet Union. Sakharov is given credit for developing a practicable design for the thermonuclear weapon, an explosive that could leverage the fusion of hydrogen atoms. In 1955, the Soviets, thanks to Sakharov’s work, won the race to detonate a hydrogen bomb that’d been dropped from an airplane, whereas until then the Americans had detonated hydrogen charges placed on the ground.

It was after the RDS-220 test in 1961 that Sakharov began speaking out against nuclear weapons and the nuclear arms race. He would go on to win the Nobel Peace Prize in 1975. One of his important contributions to the peaceful use of nuclear power was the tokamak, a reactor design he developed with Igor Tamm to undertake controlled nuclear fusion and so generate power. The ITER experiment uses this design.

Does the neutrino sector violate CP symmetry?

The universe is supposed to contain equal quantities of matter and antimatter. But this isn’t the case: there is way more matter than antimatter around us today. Where did all the antimatter go? Physicists trying to find the answer to this question believe that the universe was born with equal amounts of both. However, the laws of nature that subsequently came into effect were – and are – biased against antimatter for some reason.

In the language of physics, this bias is called CP symmetry violation. CP stands for charge-parity. If a positively charged particle is substituted with its negatively charged antiparticle (charge conjugation) and its spatial coordinates are flipped as in a mirror image (parity inversion), then – all other properties being equal – any experiment performed with either setup should yield the same results. This is what’s called CP symmetry. CPT – charge, parity and time – symmetry is one of the foundational principles of quantum field theory.
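Schematically – in standard textbook notation, not anything specific to the experiments discussed here – the two operations and the symmetry they compose can be written as:

```latex
% C swaps every particle for its antiparticle (flipping charge q);
% P inverts the spatial coordinates, like a mirror reflection.
C : |q\rangle \to |{-q}\rangle, \qquad
P : \psi(\vec{x},\, t) \to \psi(-\vec{x},\, t)

% CP symmetry holds if a process and its charge-conjugated,
% mirror-reflected counterpart proceed at the same rate:
\Gamma(A \to B) \;=\; \Gamma\!\left(\bar{A} \to \bar{B}\right)_{\text{mirrored}}
```

CP violation means the two rates differ – which is exactly the kind of asymmetry that could favour matter over antimatter.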

Physicists try to explain the antimatter shortage by studying CP symmetry violation because one of the first signs that the universe has a preference for one kind of matter over the other emerged in experiments testing CP symmetry in the mid-20th century. The result of this extensive experimentation is the Standard Model of particle physics, which makes predictions about what kinds of processes will or won’t exhibit CP symmetry violation. Physicists have checked these predictions in experiments and verified them.

However, there are a few processes they’ve been confused by. In one of them, the SM predicts that CP symmetry violation will be observed among particles called neutral B mesons – but it’s off about the extent of violation.

This is odd and vexing because as a theory, the SM is one of the best out there, able to predict hundreds of properties and interactions between the elementary particles accurately. Not getting just one detail right is akin to erecting the perfect building only to find the uniformity of its design undone by a misalignment of a few centimetres. It may be fine for practical purposes but it’s not okay when what you’re doing is building a theory, where the idea is to either get everything right or to find out where you’re going wrong.

But even after years of study, physicists aren’t sure where the SM is proving insufficient. The world’s largest particle physics experiment hasn’t been able to help either.

Mesons and kaons

A pair of neutral B mesons can decay into two positively charged muons or two negatively charged muons. According to the SM, the former is supposed to be produced in lower amounts than the latter. In 2010 and 2011, the Dø experiment at Fermilab, Illinois, found that there were indeed fewer positive dimuons being produced – but there was sufficient evidence that the number was off by 1%. Physicists believe that this inexplicable deviation could be the result of hitherto undiscovered physical phenomena interfering with the neutral B meson decay process.

This discovery isn’t the only one of its kind. CP violation was first discovered in processes involving particles called kaons in 1964, and has since been found affecting different types of B mesons as well. And just the way some processes violate CP symmetry more than the theory says they should, physicists also know of other processes that don’t violate CP symmetry even though the theory allows them to do so. These are associated with the strong nuclear force and this difficulty is called the strong CP problem – one of the major unsolved problems of physics.

It is important to understand which sectors – i.e. groups of particles and their attendant processes – violate CP symmetry and which don’t, because physicists need to put together all the facts they can get to find patterns in them: seeds of theories that could explain how the creation of antimatter at par with matter was aborted at the cosmic dawn. This in turn means we keep investigating all the known sectors in greater detail until we have something that will allow us to look past the SM toward a more comprehensive theory of physics.

It is in this context that in the last few years, another sector has joined this parade: the neutrinos. Neutrinos are extremely hard to trap because they interact with other particles only via the weak nuclear force, which is much weaker than even its name suggests. Though trillions of neutrinos pass through your body every second, maybe three will interact with its atoms in your lifetime. To surmount this limitation, physicists and engineers have built very large detectors to study them as they zoom in from all directions: outer space, from inside Earth, from the Sun, etc.

Neutrinos exhibit another property called oscillations. There are three types or flavours of neutrinos – called electron, muon and tau (note: an electron neutrino is different from an electron). Neutrinos of one flavour can transform into neutrinos of another flavour at a rate predicted by the SM. The T2K experiment in Japan has been putting this to the test. On October 24, it reported via a paper in the journal Physical Review Letters that it had found signs of CP symmetry violation in neutrinos as well.
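In the simplest two-flavour vacuum approximation, the oscillation probability has a closed form. A minimal sketch with illustrative round-number inputs loosely modelled on T2K’s setup – these are assumptions for illustration, not T2K’s fitted values:

```python
import math

def appearance_probability(sin2_2theta, dm2_ev2, baseline_km, energy_gev):
    """Two-flavour vacuum oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
    phase = 1.27 * dm2_ev2 * baseline_km / energy_gev
    return sin2_2theta * math.sin(phase) ** 2

# Illustrative inputs: mixing sin^2(2*theta) ~ 0.085, mass splitting
# dm^2 ~ 2.5e-3 eV^2, a 295 km baseline and a ~0.6 GeV beam –
# a combination that sits close to the oscillation maximum.
p = appearance_probability(0.085, 2.5e-3, 295.0, 0.6)
```

The real three-flavour analysis adds matter effects and the CP-violating phase, which shifts this probability in opposite directions for neutrinos and antineutrinos – the asymmetry T2K is hunting for.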

A new sector

If neutrinos obeyed CP symmetry, then muon neutrinos should be transforming into electron neutrinos – and muon antineutrinos into electron antineutrinos – at the rates predicted by the SM. But the transformation rate seems to be off. Physicists from T2K reported last year that they had weak evidence of this happening. According to the October 24 paper, the evidence this year is almost twice as strong – but still not strong enough to shake up the research community.

While the trend suggests that T2K will indeed find that the neutrino sector violates CP symmetry as it takes more data, enough experiments in the past have forced physicists to revisit their models after more data punctured this or that anomaly present in a smaller dataset. We should just wait and watch.

But what if neutrinos do violate CP symmetry? There are major implications, and one of them is historical.

When the C, P and T symmetries were formulated, physicists thought they were each absolute: that physical processes couldn’t violate any of them. But in 1956, it was found that the weak nuclear force does not obey C or P symmetries. Physicists were shaken up but not for long; they quickly rallied and fronted an idea in 1957 that C or P symmetries could be broken individually but that both together constituted a new and absolute symmetry: CP symmetry. Imagine their heartbreak when James Cronin and Val Fitch found evidence of CP symmetry violation only seven years later.

As mentioned earlier, neutrinos interact with other particles only via the weak nuclear force – which means they don’t abide by C or P symmetries. If within the next decade we find sufficient evidence to claim that the neutrino sector doesn’t abide by CP symmetry either, the world of physics will be shaken up once more, although it’s hard to tell if any more hearts will be broken.

In fact, physicists might just express a newfound interest in mingling with neutrinos because of the essential difference between these particles on the one hand and kaons and B mesons on the other. Neutrinos are fundamental and indivisible whereas both kaons and B mesons are made up of smaller particles called quarks. This is why physicists have been able to explain CP symmetry violations in kaons and B mesons using what is called the quark-mixing model. If processes involving neutrinos are found to violate CP symmetry as well, then physicists will have twice as many sectors as before in which to explore the matter-antimatter problem.

The Wire
October 29, 2018

Train ride

What makes a train ride a train ride? I regularly travel between Bangalore and Chennai, taking the morning Shatabdi every time. These train rides are not easy to love, even though the Shatabdi’s coaches have giant glass-panelled windows that offer beautiful views at sunrise and sunset.

But the train’s features significantly dent the experience. The seats make the passenger feel like she’s on a flight: they’re arranged two and four to each side with armrests separating each one of them. The tube-lights glow flaccid white and the clientele contains a lot of the corporate class, often combining to give the impression that you’re travelling in a mobile co-working space (the only exception to this rule, from what I’ve seen, is the overnight mail).

Although quite a few families also use the train, its single-day-journey offering often means that you’ve got people travelling light, likely taking the Shatabdi back the next day or the day after, people who – in the Shatabdi’s absence – would likely have flown instead of taking a different train. The tickets aren’t cheap: between 800 and 1,100 rupees for the common class (including catering), so you’re rubbing shoulders with the relatively better off.

I don’t mean here to romanticise the often-poorer conditions in which many of India’s middle- and lower-classes travel as much as to suggest that the Shatabdi, through the Indian Railways’ efforts to offer a sanitised and expedited experience, simply ends up being clinical in its rendition. Even the Double-decker Express between Bangalore and Chennai, with a travel time only a couple hours more than that of the Shatabdi, is more germane. You’ve got tiffin-, snack- and beverage-vendors passing through the aisles every 10 minutes, the train stopping and starting every hour or so, and simply so many more people per coach that it’s compelling to pass the time in conversation with the person sitting next to you. On the Shatabdi, all you want to do is look out the window.

I really miss the trains where you sit by the window on the lower berth, looking out through the powder-blue grills at a blue sky; share food with the people around you (if you’re also carrying Imodium, that is); go to bed grumbling about the berths not being long or wide enough; stay careful that your belongings aren’t nicked while you’re dozing; wake up at an ungodly hour to pee, finding your way through the aisle under a dark blue nightlight; and get off at whatever station amid a sea of people instead of at one relatively unpopulated end leading straight to the gate.

Travelling – even in the form of a short journey between two nearby cities – can, and ought to, be a journey of integration, whether with yourself or the world around. The Shatabdi, though a very useful service, can be potentially isolating to the frequent user. Its utilitarian nature is hard to separate from its form itself, and as a vehicle it is the perfect metaphor for all the things we find undesirable – yet necessary – in our own lives.