The press office

A press-officer friend recently asked me for pointers on how he could help journalists better cover the research institute he now works at. My response follows:

  1. Avoid the traditional press release format and use something like Axios’s: answer the key questions, nothing more. No self-respecting organisation is going to want to republish press releases. This way also saves you time.
  2. Make scientists from within the institute, especially women, members of minority groups and postdocs, available for comment – whether on their own research or on work by others. This means keeping them available (at certain times if need be) and displaying their contact information.
  3. If you’re going to publish blogs, it would be great if they’re on a CC BY or BY-SA (or even something a little more restrictive like CC BY-NC-ND) licence so that interested news organisations can republish them. If you’re using the ND licence, please ensure the copy is clean, since others won’t be able to edit it.
  4. Pictures are often an issue. If you could take some nice pics on your phone and post them on, say, the CC library on Flickr, that would be great. These can be pics of the institute, instruments, labs, important people, events, etc.

If you have inputs/comments for my friend and subscribe to this blog, simply reply to the email in your inbox containing this post and you’ll reach me.

Indian scicomm’s upside-down world

A woman holding her right index finger over her lips, indicating silence.

Imagine a big, poisonous tree composed of all the things you need to screw up to render a field, discipline or endeavour an elite club of just one demographic group. When it comes to making it more inclusive, whether by gender, race, ethnicity, etc., the lowest of low-hanging fruit on this tree is quantitative correction: increase the number of those people there aren’t enough of. Such a solution should emerge from any straightforward acknowledgment that a problem exists together with a need to be seen to be acting quickly.

Now, the lower the part of the tree, the easier it should be to address. There’s a corresponding suckiness figure here, denoted by the inverse of the relative height of the thing from the ground: not plucking low-hanging fruits and throwing them away is the suckiest thing because doing so would be the easiest thing. For example, the National Centre for Science Communicators (NCSC) recently organised an event composed entirely of men – i.e. a manel – and it was the suckiest thing because manels are the most easily fixed manifestations of gender disparity in science and science communication, requiring no cultural remediation to correct.

The lidless eye of @IndScicomm picked up on this travesty and called the NCSC out on Twitter, inadvertently setting off an avalanche of responses, each one more surprised than the last over the various things the NCSC has let slip in this one image. Apart from the sausage fest, for example, all eight men are older (no need to guess numbers, they all look like boomers).

It’s possible:

  1. Each one of these men, apart from the one from the organising body, wasn’t aware he was going to be on a manel,
  2. They don’t recognise that there’s a problem,
  3. They recognise the problem but simply don’t care that there aren’t any women among them – a concern that by itself represents only the smallest modicum of change, and that in its entirety should extend to people of various genders and castes, or
  4. They believe the principles of science communication are agnostic of – rather, transcend – the medium used, and the medium is what has changed the most from the boomers to the millennials.

I find the last two options most plausible (the first two are forms of moral abdication), and only the last one worth discussing, because it seems to be applicable to a variety of science communication endeavours being undertaken around India with a distinct whiff of bureaucracy.

In December 2018, one of the few great souls that quietly flag interesting things brought to my attention an event called the ‘Indian Science Communication Congress’ (ISCC), ep. 18, organised by CSIR NISCAIR and commemorating the 200th year of ‘science journalism in India’. What happened in 1818? According to Manoj Patairiya, the current director of NISCAIR, “Science journalism started in India in 1818 with the publication of monthly Digdarshan published in Hindi, Bengali and English, carrying a few articles on science and technology.” This is a fairly troublesome description because of its partly outdated definition of science journalism, at least if NISCAIR considers what Digdarshan did to be science journalism, and because the statement implies a continuous presence of communication efforts in the country from the early 19th century – which I doubt has been the case.

I didn’t attend the event – not because I wasn’t invited or because I didn’t know such an event existed, but because I wouldn’t have been the ideal participant given the format:

It seems (including based on one attendee’s notes) the science communication congress was a science-of-science-communication + historical-review congress – the former a particularly dubious object of study for its scientistic attitude, which the ISCC’s format upholds with barely contained irony. Perhaps there’s one more explanation: an ancient filtration system (such as from 1951, when NISCAIR was set up) broke but no one bothered to fix it – i.e. the government body responsible for having scientists speak up about their work is today doing the bare minimum it needs to do to meet whatever its targets have been, which includes gathering scholars of science communication in a room and having them present papers about how they think it can be improved, instead of setting new targets for a new era. This is the principal symptom of directive-based change-making.

Then again, I might be misguided on the congress’s purpose. On two fairly recent occasions – in August 2018 and September 2019 – heart-in-the-right-place scientists have suggested they could launch a journal, of all things, to help popularise science. Is it because scientists in general have trouble seeing beyond journals vis-à-vis the ideal/easiest way to present knowledge (if such a thing even exists); because they believe other scientists will take them more seriously if they’re reaching out via a journal; or because writing for a journal allows them to justify how they’re spending their time with their superiors?

The constructive dilemma inherent in the possible inability to imagine a collection of articles beyond journals also hints at a possible inability to see beyond the written article. But as the medium has changed, so have the messages, together with the ways in which people seek new information. Moreover, by fixating on science communication as a self-contained endeavour that doesn’t manifest outside of channels earmarked for it, we risk ignoring science communication when it happens in new, even radical, environments.

For example, we’re all learning about the role archaeological findings play in the construction of historical narratives by questioning the Supreme Court’s controversial verdict on the Ayodhya title case. For another, I once learnt about why computational fluid dynamics struggles to simulate flowing water (because of how messed up the Navier-Stokes equations are) during a Twitch livestream.

But if manel-ridden conferences and poster presentations are what qualify as science communication, and not just support for it, the hyperobject of our consternation as represented in the replies to @IndScicomm’s tweet is as distinct a world as Earth is relative to Jupiter, and we might all just be banging our heads over the failures of a different species of poisonous tree. Maybe NCSC and NISCAIR, the latter more so, mean something else when they say ‘science communication’.

Maybe the ‘science communication’ that The Wire, The Print, etc. practise is a tradition imported from a different part of the world, with its own legacy, semantics and purpose – one addressed to English-speaking, upper-class urbanites. At a talk in Chennai last year, for example, a prominent science communicator mentioned that there were only a handful of science journalists in India, which could’ve been true if he was reading only English-language newspapers. Maybe these labels are in passive conflict with the state-sponsored variety of ‘science journalism’ that the government nurtured shortly after Independence to cater to lower-class, Indian-languages-speaking citizens of rural India – a variety that didn’t become profitable until the advent of economic liberalisation and the internet, but which today, perhaps as seen from the PoV of a different audience, seems bureaucratic and insipid.

Then again, the rise of the ‘people’s science movement’ in the 1970s, led by organisations like Eklavya, Kalpavriksh, Vidushak Karkhana, Vigyan Shiksha Kendra and Medico Friend Circle, would suggest that ‘science communication’ of the latter variety wasn’t entirely successful. Considering also the work of Gauhar Raza, the scientist and social activist who spent years studying the impact of government-backed science communication initiatives and came away unable to tell if they had succeeded at all, and given what we’re seeing of the NCSC’s, NISCAIR’s and the science congress’s activities, it may not be unreasonable to ask if the two ‘science communications’ are simply two different worlds – a new one still finding its footing and an older one whose use-case is rapidly diminishing.

Ultimately, let’s please stop inviting discussion on science communication through abstracts and research papers, organising “scientific sessions” for a science communication congress (which seems to be in the offing at a ‘science communicator’s meet’ at the 2020 Indian Science Congress as well) and having old men deliberate on “recent trends in science communication” – and turn an ear to practising communicators and journalists instead.

A new map of Titan

Cassini's last shot of Titan, taken by the probe's narrow-angle camera on September 13, 2017. Credit: NASA

It’s been a long time since I’ve obsessed over Titan, primarily because after the Cassini mission ended, the pace of updates about Titan died down, and because other moons of the Solar System (Europa, Io, Enceladus, Ganymede and our own) became more important. There have been three or four notable updates since my last post about Titan, but this post has been warranted by the fact that scientists recently released the first global map of the Saturnian moon.

(This Nature article offers a better view but it’s copyrighted. The image above is a preview offered by Nature Astronomy; the paper itself is behind a paywall, and I couldn’t find a corresponding copy on Sci-Hub or arXiv, nor have I written to the corresponding author – yet.)

It’s fitting that Titan be accorded this privilege – of a map of all locations on the planetary body – because it is by far the most interesting of the Solar System’s natural satellites (although Europa and Triton come very close) and, were it not orbiting the ringed giant, it could well be a planet in its own right. I can think of a lot of people who’d agree with this assessment but most of them tend to focus on Titan’s potential for harbouring life, especially since NASA’s going to launch the Dragonfly mission to the moon in 2026. I think they’ve got it backwards: there are a lot of factors that need to come together just right for any astronomical body to host life, and fixating on habitability combines these factors and flattens them to a single consideration. But Titan is amazing because it’s got all these things going on, together with many other features that habitability may not be directly concerned with.

While this is the first such map of Titan, and has received substantial coverage in the popular press, it isn’t the first global assessment of its kind. Most recently, in December 2017, scientists (including many authors of the new paper) published two papers on the moon’s topography (this and this), based on which they were able to note – among other things – that Titan’s three seas have a common sea level; many lakes have surfaces hundreds of meters above this level (suggesting they’re elevated and land-locked); many lakes are connected under the surface and drain into each other; polar lakes (the majority) are bordered by “sharp-edged depressions”; and Titan’s crust has uneven thickness as evidenced by its oblateness.

According to the paper’s abstract, the new map brings two new kinds of information to the table. First, the December 2017 papers were based on hi- and low-res images of about 40% of Titan’s surface whereas, for the new map, the authors write: “Correlations between datasets enabled us to produce a global map even where datasets were incomplete.” More specifically, areas for which the authors didn’t have data from Cassini’s Synthetic Aperture Radar instrument were mapped at 1:2,000,000 scale whereas areas with data enabled a map at 1:800,000 scale. Second are the following inferences about the moon’s geomorphology (from the abstract the authors presented to a meeting of the American Astronomical Society in October 2018):

We have used all available datasets to extend the mapping initially done by Lopes et al. We now have a global map of Titan at 1:800,000 scale in all areas covered by Synthetic Aperture Radar (SAR). We have defined six broad classes of terrains following Malaska et al., largely based on prior mapping. These broad classes are: craters, hummocky/mountainous, labyrinth, plains, lakes, and dunes [see image below]. We have found that the hummocky/mountainous terrains are the oldest units on the surface and appear radiometrically cold, indicating icy materials. Dunes are the youngest units and appear radiometrically warm, indicating organic sediments.

SAR images of the six morphological classes (in the order specified in the abstract)

More notes once I’ve gone through the paper more thoroughly. And if you’d like to read more about Titan, here’s a good place to begin.

The trouble with laser-cooling anions

For scientists to use lasers to cool an atom, the atom needs to have two energy states. When laser light is shined on an atom moving towards the source of light, one of its electrons absorbs a photon, climbs to a higher energy state and the atom as a whole loses some momentum. A short span of time later, the electron emits a photon in a random direction and drops back to its lower energy state, and the atom’s momentum changes only marginally on average.

By repeating this series of steps over and over, scientists can use lasers to considerably slow atoms and decrease their temperature as well. For a more detailed description + historical notes (including a short profile of a relatively forgotten Indian scientist who contributed to the development of laser-cooling technologies), read this post.
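The size of each momentum kick is easy to estimate (a back-of-the-envelope sketch using textbook constants; sodium at its 589 nm cooling transition is a standard example, not one taken from this post):

```python
# Recoil per absorbed photon: delta_v = p_photon / m_atom, with p_photon = h / wavelength.
h = 6.626e-34          # Planck constant, J*s
wavelength = 589e-9    # sodium D-line, m
m_na = 23 * 1.661e-27  # mass of a sodium-23 atom, kg

p_photon = h / wavelength  # photon momentum, kg*m/s
delta_v = p_photon / m_na  # velocity change per absorption, m/s

print(f"{delta_v * 100:.1f} cm/s per photon")  # ~3 cm/s
```

At room temperature, sodium atoms move at several hundred metres per second, so tens of thousands of absorption–emission cycles are needed to bring one to a near-standstill – which is why the repetition matters.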

However, it’s hard to use this technique with most anions – negatively charged ions – because they don’t have a suitable bound higher energy state. Instead, when laser light is shined on the anion, the electron responsible for the excess negative charge absorbs the photon and the anion simply ejects the energised electron.

If the technique is to work, scientists need to find an anion that binds its one excess electron – the electron that keeps it from being electrically neutral – strongly enough that, as the electron acquires more energy, the anion ascends to a higher energy state with it instead of just losing it. Scientists discovered the first such anion – osmium – in the previous decade, and have since added only three more candidates to the list: lanthanum, cerium and diatomic carbon (C2). Lanthanum remains the most effective anion coolable with lasers. However, if the results of a study published on November 12 are to be believed, the thorium anion could be the new champion.

Laser-cooling is relatively simpler than most atomic cooling techniques, such as laser-assisted evaporative cooling, and is known to be very effective. Applying it to anions would expand its gamut of applications. There are also techniques like sympathetic cooling, in which one type of laser-cooled anions can cool other types of anions trapped in the same container. This way, for example, physicists think they can produce ultra-cold anti-hydrogen atoms required to study the similarities between matter and antimatter.

The problem with finding a suitable anion is centred on the atom’s electron affinity. It’s the amount of energy an electrically neutral atom gains or loses when it takes on one more electron and becomes an anion. If the atom’s electron affinity is too low, the energy imparted or taken away by the photons could free the electron.

Until recently, theoretical calculations suggested the thorium anion had an electron affinity of around 0.3 eV – too low. However, based on experiments and calculations, the new study found that the actual figure could be about twice as high, around 0.6 eV, advancing the thorium anion as a new candidate for laser-cooling.

The study’s authors also report other properties that make thorium even more suitable than lanthanum. For example, the atomic nucleus of the sole stable lanthanum isotope has a spin, so as it interacts with the magnetic field produced by the electrons around it, it subtly interferes with the electrons’ energy levels and makes laser-cooling more complicated than it needs to be. Thorium’s only stable isotope has zero nuclear spin, so these complications don’t arise.

There doesn’t seem to be a working proof of the study’s results but it’s only a matter of time before other scientists devise a test because the study itself makes a few concrete predictions. The researchers expect that thorium anions can be cooled with laser light of wavelength 2.6 micrometers to a frosty 0.04 microkelvin. They suggest doing this in two steps: first cooling the anions to around 10 kelvin and then cooling a collection of them further by enabling the absorption and emission of about 27,000 photons, tuned to the specified wavelength, in a little under three seconds.
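A quick sanity check of the reported numbers (my own arithmetic, not the authors’): a photon at the 2.6-micrometer cooling wavelength carries less energy than the revised 0.6 eV electron affinity, so absorbing one shouldn’t detach the excess electron – whereas it would have comfortably exceeded the older 0.3 eV estimate.

```python
# Photon energy E = h*c/wavelength, converted to electronvolts.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
ev = 1.602e-19       # joules per electronvolt
wavelength = 2.6e-6  # cooling laser wavelength from the study, m

photon_ev = h * c / (wavelength * ev)
print(f"photon energy: {photon_ev:.2f} eV")  # ~0.48 eV, safely below 0.6 eV
```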

Disastrous hype

A cloud of grey-black smoke erupts over a brown field, likely the result of an explosion of some sort.

This is one of the worst press releases accompanying a study I’ve seen:

The headline and the body appear to have nothing to do with the study itself, which explores the creative properties of an explosion with certain attributes. However, the press office of the University of Central Florida has drafted a popular version that claims researchers – who are engineers more than physicists – have “detailed the mechanisms that could cause the [Big Bang] explosion, which is key for the models that scientists use to understand the origin of the universe.” I checked with a physicist, who agreed: “I don’t see how this is relevant to the Big Bang at all. Considering the paper is coming out of the department of mechanical and aerospace engineering, I highly doubt the authors intended for it to be reported on this way.”

Press releases that hype results are often the product of an overzealous university press office working without inputs from the researchers that obtained those results, and this is probably the case here as well. The paper’s abstract and some quotes by one of the researchers, Kareem Ahmed from the University of Central Florida, indicate the study isn’t about the Big Bang but about similarities between “massive thermonuclear explosions in space and small chemical explosions on Earth”. However, the press release’s author slipped in a reference to the Big Bang because, hey, it was an explosion too.

The Big Bang was like no stellar explosion: its material constituents were vastly different from anything that goes boom today – whether on Earth or in space – and physicists have various ideas about what could have motivated the bang to happen in the first place. The first supernovas, for their part, are thought to have occurred only a few hundred million years after the Big Bang. This said, Ahmed was quoted saying something that could have used more clarification in the press release:

We explore these supersonic reactions for propulsion, and as a result of that, we came across this mechanism that looked very interesting. When we started to dig deeper, we realized that this is relatable to something as profound as the origin of the universe.


The climate and the A.I.

A few days ago, the New York Times and other major international publications sounded the alarm over a new study that claimed various coastal cities around the world would be underwater to different degrees by 2050. However, something seemed off; it couldn’t have been straightforward for the authors of the study to plot how much the sea-level rise would affect India’s coastal settlements. Specifically, the numbers required to calculate how many people in a city would be underwater aren’t readily available in India, if they exist at all. Without this bit of information, it’s easy to disproportionately over- or underestimate certain outcomes for India on the basis of simulations and models. And earlier this evening, as if on cue, this thread appeared:

This post isn’t meant as a declaration of smugness (although that is tempting) but to turn your attention to one of Palanichamy’s tweets in the thread:

One of the biggest differences between the developed and the developing worlds is clean, reliable, accessible data. There’s a reason exists whereas in India, data discovery is as painstaking a part of the journalistic process as is reporting on it and getting the report published. Government records are fairly recent. They’re not always available at the same location on the web ( has been remedying this to some extent). They’re often incomplete or not machine-readable. Every so often, the government doesn’t even publish the data – or changes how it’s obtained, rendering the latest dataset incompatible with previous versions.

This is why attempts to model Indian situations and similar situations in significantly different parts of the world (i.e. developed and developing, not India and, say, Mexico) in the same study are likely to deviate from reality: the authors might have extrapolated the data for the Indian situation using methods derived from non-native datasets. According to Palanichamy, the sea-level rise study took AI’s help for this – and herein lies the rub. With this study itself as an example, there are only going to be more – and potentially more sensational – efforts to determine the effects of continued global heating on coastal assets, whether cities or factories, paralleling greater investments to deal with the consequences.

In this scenario, AI, and algorithms in general, will only play a more prominent part in determining how, when and where our attention and money should be spent, and controlling the extent to which people think scientists’ predictions and reality are in agreement. Obviously the deeper problem here lies with the entities responsible for collecting and publishing the data – but that aren’t doing so – yet given how the climate crisis is forcing the world’s governments to rapidly globalise their action plans, the developing world needs to inculcate the courage and clarity to slow down, and scrutinise the AI and other tools scientists use to offer their recommendations.

It’s not a straightforward road from having the data to knowing what it implies for a city in India, a city in Australia and a city in Canada.

India’s Delhi-only air pollution problem

I woke up this morning to a PTI report telling me Delhi’s air quality had fallen to ‘very poor’ on Deepavali, the ostensible Hindu festival of lights, with many people defying the Supreme Court’s direction to burst firecrackers only between 8 pm and 10 pm. This defiance is unsurprising: the Supreme Court’s direction didn’t work because, and not even though, the response to the pollution was just Delhi-centric.

In fact, it’s probably only a problem because Delhi is having trouble breathing, despite the fact that the national capital is the eleventh-most polluted city in the world, behind eight other Indian ones.

The report also noted, “On Saturday, the Delhi government launched a four-day laser show to discourage residents from bursting firecrackers and celebrating Diwali with lights and music. During the show, laser lights were beamed in sync with patriotic songs and Ramayana narration.”

So the air pollution problem rang alarm bells and the government solved just that problem. Nothing else was a problem so it solved nothing else. The beams of light the Delhi government shot up into the sky would have caused light pollution, disturbing insects, birds and nocturnal creatures. The sound would no doubt have been loud, disturbing animals and people in the area. It’s a mystery why we don’t have familial, intimate celebrations.

There is a concept in environmental philosophy called the hyperobject: a dynamic super-entity that lots of people can measure and feel at the same time but not see or touch. Global warming is a famous hyperobject, described by certain attributes, including its prevalence and its shifting patterns. Delhi’s pollution has two hyperobjects. One is what the urban poor experience – a beast that gets in the way of daily life, that you can’t wish away (let alone fight), and which is invisible to everyone else. The other is the one in the news: stunted, inchoate and classist, it includes only air pollution because its effects have become unignorable, and sound and light don’t feature in it – nor does anything even a degree removed from the singular sources of smoke and fumes.

For example, someone (considered smart) recently said to me, “The city should collect trash better to avoid roadside garbage fires in winter.” Then what about the people who set those fires for warmth because they don’t have warm shelter for the night? “They will find another way.”

The Delhi-centrism is also visible with the ‘green firecrackers’ business. According to the CSIR National Environmental Engineering Research Institute (NEERI), which developed the crackers, its scientists “developed new formulations for reduced emission light and sound emitting crackers”. But it turns out the reduction doesn’t apply to sound.

The ‘green’ crackers’ novel features include “matching performance in sound (100-120 dBA) with commercial crackers”. Sound at 100-120 dBA is debilitating. The non-crazy crackers clock about 60-80 dBA. (dB stands for decibels, a logarithmic measure of sound pressure; the ‘A’ denotes A-weighting, a scale that adjusts measurements according to how loud different frequencies sound to the human ear.)
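Because the decibel scale is logarithmic, the gap between these ranges is far larger than the raw numbers suggest. A minimal sketch of the standard conversion (the 70 and 110 dBA figures below are just the midpoints of the two ranges above):

```python
# Sound intensity ratio between two levels on a decibel scale:
# ratio = 10 ** ((L2 - L1) / 10), since every 10 dB step is a tenfold jump.
def intensity_ratio(l1_db, l2_db):
    return 10 ** ((l2_db - l1_db) / 10)

# A 110 dBA 'green' cracker vs a 70 dBA non-crazy one:
print(intensity_ratio(70, 110))  # 10000.0
```

In other words, a ‘green’ cracker at 110 dBA delivers about ten thousand times the sound intensity of a 70 dBA one.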

In 2014, during my neighbours’ spate of cracker-bursting, I “used an app to make 300 measurements over 5 minutes” from a distance of about 80 metres, and obtained the following readings:

Min: 41.51 dB(A)
Max: 83.88 dB(A)
Avg.: 66.41 dB(A)

The Noise Pollution (Regulation and Control) Rules 2000 limit noise in the daytime (6 am to 10 pm) to 55 dB(A), and the fine for breaking the rules was just Rs 100, or $1.5, before the Supreme Court stepped in, taking cognisance of the air pollution during Deepavali. This penalty is all the more laughable considering Delhi was ranked the world’s second-noisiest city in 2017. There’s only so much the Delhi police, including the traffic police, can do with the 15 noise meters they’ve been provided.

In February 2019, Romulus Whitaker, India’s ‘snake man’, expressed his anguish over a hotel next door to the Madras Crocodile Bank Trust blasting loud music that was “triggering aberrant behaviour” among the animals (to paraphrase the author). If animals don’t concern you: the 2014 Heinz Nixdorf Recall study found noise is a risk factor for atherosclerosis. Delhi’s residents also have the “maximum amount of hearing loss proportionate to their age”.

As Dr Deepak Natarajan, a Delhi-based cardiologist, wrote in 2015, “It is ironic that the people setting out to teach the world the salutatory effects of … quietness celebrate Yoga Day without a thought for the noise that we generate every day.”

Someone else tweeted yesterday, after purchasing some ‘green’ firecrackers, that science “as always” (or something similar) provided the solution. But science has no agency: like a car, people drive it. It doesn’t ask questions about where the driver wants to go or complain when he drives too rashly. And in the story of fixing Delhi’s air pollution, the government has driven the car like Salman Khan.

New Scientist violates the laws of physics (updated)

A new article in the New Scientist begins with a statement of Newton’s third law that is blissfully ignorant of the irony. The article’s headline is:

The magazine is notorious for its use of sensationalist headlines and seems to have done it again. Jon Cartwright, the author of the article, has done a decent job of explaining the ‘helical drive’ proposed by a manager at NASA named David Burns, and hasn’t himself suggested that the drive violates any laws of physics. It seems more like someone else was responsible for the headline and decided to give it the signature New Scientist twist.

The featured image is a disaster, showing concept art of Roger Shawyer’s infamous em-drive. Shawyer had claimed the device could in fact violate the laws of physics by converting the momentum of microwaves confined in a chamber into thrust. Various experts have debunked the em-drive as fantasy, but their caution against suggesting the laws of physics could be broken so easily appears to have been lost on the New Scientist.

Update, 7.06 am, October 16, 2019: In a new article, Chris Lee at Ars Technica has explained why the helical drive won’t work, and comes down harshly on Burns for publicising his idea before getting it checked with his peers at NASA, which would’ve spared him the embarrassment that Lee dished out. That said, Lee is also a professional physicist, and perhaps Cartwright isn’t entirely in the clear if the answer to why the helical drive won’t work is as straightforward as Lee makes it out to be.

With the helical drive, Burns proposes to use an object that moves back and forth inside a box, bouncing off either end. Each bounce imparts momentum to the box but the net momentum after two bounces is zero because they’re in equal and opposite directions. But if the object could become heavier just before it strikes one end and lighter before it strikes the other, the box will receive a ‘kick’ at one end and start moving in that direction.

Burns then says if we could replace the object with a particle and the box with a particle accelerator, it should be possible to accelerate the particle in one direction, let it bounce off, then decelerate it in the other direction and recover most of the energy imparted to it, and repeat. This way, the whole setup can be made to constantly accelerate in one direction.

The flip side is that the mass-energy equivalence is central to Burns’s idea, but according to the theory of special relativity that it’s embedded in, it’s actually the mass-energy-momentum equivalence. As Lee put it, special relativity conserves energy and momentum together, which means a heavier particle bouncing off one end of the setup won’t keep accelerating the setup in its direction. Instead, when the particle becomes heavier and acquires more momentum, it does so by absorbing virtual photons from an omnipresent energy field. When the particle slows down, it emits these photons into the field around it.

According to special relativity and Newton’s third law, the release process will accelerate the setup, and the absorption process will decelerate the setup. The particle knocking on either ends is just incidental.
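Lee’s bookkeeping can be illustrated with a minimal numerical sketch (arbitrary values; this is my own illustration of energy-momentum conservation, not a calculation from either article): when a particle at rest absorbs a photon, it does get heavier, but the momentum it gains is exactly the momentum the field loses.

```python
# A particle of rest mass m, initially at rest, absorbs a photon of energy E.
c = 3.0e8    # speed of light, m/s
m = 1.0e-27  # particle rest mass, kg (arbitrary)
E = 1.0e-13  # photon energy, J (arbitrary)

p_photon = E / c         # momentum the photon carried
E_total = m * c**2 + E   # particle's total energy after absorption
p_particle = p_photon    # conservation: particle's momentum equals the field's loss

# Invariant mass after absorption, from E^2 = (pc)^2 + (mc^2)^2:
m_after = (E_total**2 - (p_particle * c)**2) ** 0.5 / c**2

# The particle is indeed 'heavier' now -- but its extra momentum was paid
# for by the field, so the box gets no free kick.
assert m_after > m
```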

A revolutionary exoplanet

An artist's impression of 51 Pegasi b, with its star in the background. Credit: NASA

In 1992, Aleksander Wolszczan and Dale Frail became the first astronomers to publicly announce that they had discovered the first planets outside the Solar System, orbiting the dense core of a dead star about 2,300 lightyears away. This event is considered to be the first definitive detection of exoplanets, short for extrasolar planets. However, Michel Mayor and Didier Queloz were recognised today with one half of the 2019 Nobel Prize for physics for discovering an exoplanet three years after Wolszczan and Frail did. This might be confusing – but it becomes clear once you stop to consider the planet itself.

51 Pegasi b orbits a star named 51 Pegasi about 50 lightyears away from Earth. In 1995, Queloz and Mayor were studying the light and other radiation coming from the star when they noticed that it was wobbling ever so slightly. By tracking the star’s radial velocity using an analytical technique called Doppler spectroscopy, Queloz and Mayor realised there was a planet orbiting it. Further observations indicated that the planet was a ‘hot Jupiter’: a giant planet with a surface temperature of ~1,000° C orbiting really close to its star.
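For a sense of scale, here’s a back-of-the-envelope estimate of that wobble – a sketch assuming a circular, edge-on orbit and rounded published values for 51 Pegasi b (the specific numbers are my approximations):

```python
import math

# Radial-velocity semi-amplitude K of a star tugged by a planet in a
# circular orbit, with the planet's mass neglected next to the star's:
#   K = (2*pi*G/P)**(1/3) * m_planet / M_star**(2/3)
G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
P = 4.23 * 86400             # orbital period: ~4.23 days, in seconds
m_planet = 0.46 * 1.898e27   # ~0.46 Jupiter masses, in kg
M_star = 1.05 * 1.989e30     # ~1.05 solar masses, in kg

K = (2 * math.pi * G / P) ** (1 / 3) * m_planet / M_star ** (2 / 3)
print(round(K), 'm/s')       # roughly 56 m/s
```

A wobble of tens of metres per second is comfortably within the reach of Doppler spectroscopy, which is why so close-in and heavy a planet was detectable at all.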

In 2017, Dutch and American astronomers studied the planet in even greater detail. They found its atmosphere was 0.01% water (a significant amount), it weighed about half as much as Jupiter and orbited 51 Pegasi once every four days.

This was surprising. 51 Pegasi is a Sun-like star, meaning its brightness and colour are similar to the Sun’s. However, this ‘foreign’ system looked nothing like our own Solar System. It contained a giant planet much like Jupiter but which was a lot closer to its star than Mercury is to the Sun.

Astronomers were startled because their ideas of what a planetary system should look like were based on what the Solar System looked like: the Sun at the centre, four rocky planets in the inner system, followed by gas- and ice-giants and then a large, ringed debris field in the form of an outer asteroid belt. Many researchers even thought hot Jupiters couldn’t exist. But the 51 Pegasi system changed all that.

It was so different that Queloz and Mayor were at first met with some skepticism, including questions about whether they’d misread the data and whether the wobble they’d seen was some quirk of the star itself. However, as time passed, astronomers only became more convinced that they indeed had an oddball system on their hands. David Gray had penned a paper in 1997 arguing that 51 Pegasi’s wobble could be understood without requiring a planet to orbit it. He published another paper in 1998 correcting himself and lending credence to Queloz and Mayor’s claim. Bigger support came as their find inspired other astronomers to take another look at their own data and check if they’d missed any telltale signs of a planet. In time, they would discover more hot Jupiters, also called pegasean planets, orbiting conventional stars.

Through the next decade, it would become increasingly clear that the oddball system was in fact the Solar System. To date, astronomers have confirmed the existence of over 4,100 exoplanets. None of them belong to planetary systems that look anything like our own. More specifically, the Solar System appears to be unique because it doesn’t have any planets really close to the Sun; it doesn’t have any planets heavier than Earth but lighter than Neptune – an unusually large mass gap; and most of its planets revolve in nearly circular orbits.

Obviously the discovery forced astronomers to rethink how the Solar System could have formed versus how typical exoplanetary systems form. For example, scientists were able to develop two competing models for how hot Jupiters could have come to be: either by forming farther away from the host star and then migrating inwards or by forming much closer to the star and just staying there. But as astronomers undertook more observations of stars in the universe, they realised the region closest to the star often doesn’t have enough material to clump together to form such large planets.

Simulations also suggest that when a Jupiter-sized planet migrates from 5 AU to 0.1 AU, its passage could make way for Earth-mass planets to later form in the star’s habitable zone. The implication is that planetary systems that have hot Jupiters could also harbour potentially life-bearing worlds.

But there might not be many such systems. It’s notable that fewer than 10% of exoplanets are known to be hot Jupiters (only seven of them have an orbital period of less than one Earth-day). They’re just more prominent in the news as well as in the scientific literature because astronomers think they’re more interesting objects of study, further attesting to the significance of 51 Pegasi b. But even in their low numbers, hot Jupiters have been raising questions.

For example, according to data obtained by the NASA Kepler space telescope, which looked for the fleeting shadows that planets passing in front of their stars cast on the starlight, only 0.3-0.5% of the stars it observed had hot Jupiters. But observations using the radial velocity method, which Queloz and Mayor had also used in 1995, indicated a prevalence of 1.2%. Jason Wright, an astronomer at the Pennsylvania State University, wrote in 2012 that this discrepancy signalled a potentially deeper mystery: “It seems that the radial velocity surveys, which probe nearby stars, are finding a ‘hot-Jupiter rich’ environment, while Kepler, probing much more distant stars, sees lots of planets but hardly any hot Jupiters. What is different about those more distant stars? … Just another exoplanet mystery to be solved…”.

All of this is the legacy of the discovery of 51 Pegasi b. And given the specific context in which it was discovered and how the knowledge of its existence transformed how we think about our planetary neighbourhoods and neighbourhoods in other parts of the universe, it might be fair to say the Nobel Prize for Queloz and Mayor is in recognition of their willingness to stand by their data, seeing a planet where others didn’t.

The Wire
October 8, 2019

Disentangling entanglement

There has been considerable speculation about whether the winners of this year’s Nobel Prize for physics, due to be announced at 2.30 pm IST on October 8, will include Alain Aspect and Anton Zeilinger. They’ve both made significant experimental contributions related to quantum information theory and the fundamental nature of quantum mechanics, including entanglement.

Their work, at least the potentially prize-winning part of it, is centred on a class of experiments called Bell tests. If you perform a Bell test, you’re essentially checking the extent to which the rules of quantum mechanics are compatible with the rules of classical physics.

Whether or not Aspect, Zeilinger and/or others win a Nobel Prize this year, what they did achieve is worth putting in words. Of course, many other writers, authors, scientists, etc. have already done so; I’d like to redo it, if only because writing helps commit things to memory, and because the various performers of Bell tests are likely to win some prominent prize eventually – modern technologies like quantum cryptography are inflating the importance of their work – at which point I’ll have ready reference material.

(There is yet another reason Aspect and Zeilinger could win a Nobel Prize. As with the medicine prizes, many of whose laureates previously won a Lasker Award, many of the physics laureates have previously won the Wolf Prize. And Aspect and Zeilinger jointly won the Wolf Prize for physics in 2010 along with John Clauser.)

The following elucidation is divided into two parts: principles and tests. My principal sources are Wikipedia, some physics magazines, Quantum Physics for Poets by Leon Lederman and Christopher Hill (2011), and a textbook of quantum mechanics by John L. Powell and Bernd Crasemann (1998).



From the late 1920s, Albert Einstein began to publicly express his discomfort with the emerging theory of quantum mechanics. He claimed that a quantum mechanical description of reality allowed “spooky” things that the rules of classical mechanics, including his theories of relativity, forbid. He further contended that both classical mechanics and quantum mechanics couldn’t be true at the same time and that there had to be a deeper theory of reality with its own, thus-far hidden variables.

Remember the Schrödinger’s cat thought experiment: place a cat in a box with a vial of poison rigged to break when a radioactive atom decays, and close the lid; until you open the box to make an observation, the cat may be considered to be both alive and dead. Erwin Schrödinger came up with this example to ridicule the implications of Niels Bohr’s and Werner Heisenberg’s idea that the quantum state of a subatomic particle, like an electron, was described by a mathematical object called the wave function.

The wave function has many unique properties. One of these is superposition: the ability of an object to exist in multiple states at once. Another is collapse (although this isn’t a property as much as a phenomenon common to many quantum systems): when you observe the object, it probabilistically collapses into one fixed state.

Imagine having a box full of billiard balls, each of which is both blue and green at the same time. But the moment you open the box to look, each ball decides to become either blue or green. This (metaphor) is on the face of it a kooky description of reality. Einstein definitely wasn’t happy with it; he believed that quantum mechanics was just a theory of what we thought we knew and that there was a deeper theory of reality that didn’t offer such absurd explanations.

In 1935, Einstein, Boris Podolsky and Nathan Rosen advanced a thought experiment based on these ideas that seemed to yield ridiculous results, in a deliberate effort to provoke their ‘opponents’ to reconsider their ideas. Say there’s a heavy particle with zero spin – a property of elementary particles – inside a box in Bangalore. At some point, it decays into two smaller particles. One of these ought to have a spin of 1/2 and the other a spin of -1/2 to abide by the conservation of spin. You send one of these particles to a friend in Chennai and the other to a friend in Mumbai. Until these people observe their respective particles, each particle is to be considered to be in a superposition of the two spin states. In the final step, your friend in Chennai observes her particle and measures a spin of -1/2. This immediately implies that the particle sent to Mumbai should have a spin of 1/2.

If you’d performed this experiment with two billiard balls instead, one blue and one green, the person in Bangalore would’ve known which ball went to which friend. But in the Einstein-Podolsky-Rosen (EPR) thought experiment, the person in Bangalore couldn’t have known which particle was sent to which city, only that each particle existed in a superposition of two states, spin 1/2 and spin -1/2. This situation was unacceptable to Einstein because it was inimical to certain assumptions on which the theories of relativity were founded.

The moment the friend in Chennai observed her particle to have spin -1/2, the one in Mumbai would have known without measuring her particle that it had a spin of 1/2. If it didn’t, the conservation of spin would be violated. If it did, then the wave function of the Mumbai particle would have collapsed to a spin 1/2 state the moment the wave function of the Chennai particle had collapsed to a spin -1/2 state, indicating faster-than-light communication between the particles. Either way, quantum mechanics could not produce a sensible outcome.
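The bookkeeping of outcomes in this thought experiment can be sketched in a few lines of entirely classical code – an illustration of the correlations alone, not of the quantum mechanics (the function name is mine):

```python
import random

# The decay conserves spin, so the two daughter particles always carry
# opposite spins: whichever spin Chennai measures, Mumbai's is fixed.
def epr_pair():
    chennai = random.choice([0.5, -0.5])  # unknowable until observed
    mumbai = -chennai                     # fixed by spin conservation
    return chennai, mumbai

for _ in range(1000):
    chennai, mumbai = epr_pair()
    assert chennai + mumbai == 0.0  # total spin is zero on every trial
```

What the code cannot capture is the crux of the paradox: in the classical picture the outcomes were fixed at the moment of decay, whereas quantum mechanically neither spin has a definite value until one of them is measured.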

Two particles whose wave functions are linked the way they were in the EPR paradox are said to be entangled. Einstein memorably described entanglement as “spooky action at a distance”. He used the EPR paradox to suggest quantum mechanics couldn’t possibly be legit, certainly not without messing with the rules that made classical mechanics legit.

So the question of whether quantum mechanics was a fundamental description of reality or whether there were any hidden variables representing a deeper theory stood for nearly thirty years.

Then, in 1964, an Irish physicist at CERN named John Stewart Bell figured out a way to answer this question using what has since been called Bell’s theorem. He defined a set of inequalities – statements of the form “P is greater than Q” – that were definitely true for classical mechanics. If an experiment conducted with electrons, for example, also concluded that “P is greater than Q“, it would support the idea that quantum mechanics (vis-à-vis electrons) has ‘hidden’ parts that would explain things like entanglement more along the lines of classical mechanics.

But if an experiment couldn’t conclude that “P is greater than Q“, it would support the idea that there are no hidden variables, that quantum mechanics is a complete theory and, finally, that it implicitly supports spooky actions at a distance.
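A quick sketch makes the clash concrete. Bell’s inequality is usually tested in its CHSH form (my choice of illustration; the angles below are the standard optimal ones): for two entangled spin-1/2 particles, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between measurements along directions a and b, while any local hidden-variable theory must keep the combination S within ±2.

```python
import math

# Quantum prediction for the correlation between measurements along
# angles a and b on two spin-1/2 particles in a singlet state.
def E(a, b):
    return -math.cos(a - b)

# S combines correlations at four pairs of angles; local hidden-variable
# theories - the classical "P is greater than Q" world - obey |S| <= 2.
def chsh(a1, a2, b1, b2):
    return E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

S = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))  # 2*sqrt(2) = 2.828..., beyond the classical bound of 2
```

Measuring a value of |S| greater than 2 in the lab is what “violating Bell’s inequality” means in the experiments discussed below.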

Bell’s theorem itself boils down to a statement. To quote myself from a 2013 post (emphasis added):

for quantum mechanics to be a complete theory – applicable everywhere and always – either locality or realism must be untrue. Locality is the idea that instantaneous or [faster-than-light] communication is impossible. Realism is the idea that even if an object cannot be detected at some times, its existence cannot be disputed [like electrons or protons].

Zeilinger and Aspect, among others, are recognised for having performed these experiments, called Bell tests.

Technological advancements through the late 20th and early 21st centuries have produced more and more nuanced editions of different kinds of Bell tests. However, one thing has been clear from the earliest tests to the latest: they have all consistently violated Bell’s inequalities, indicating that quantum mechanics does not have hidden variables and that our reality does allow bizarre things like superposition and entanglement to happen.

To quote from Quantum Physics for Poets (p. 214-215):

Bell’s theorem addresses the EPR paradox by establishing that measurements on object a actually do have some kind of instant effect on the measurement at b, even though the two are very far apart. It distinguishes this shocking interpretation from a more commonplace one in which only our knowledge of the state of b changes. This has a direct bearing on the meaning of the wave function and, from the consequences of Bell’s theorem, experimentally establishes that the wave function completely defines the system in that a ‘collapse’ is a real physical happening.


Though Bell defined his inequalities in such a way that they would lend themselves to study in a single test, experimenters often stumbled upon loopholes in the result as a consequence of the experiment’s design not being robust enough to evade quantum mechanics’s propensity to confound observers. Think of a loophole as a caveat; an experimenter runs a test and comes to you and says, “P is greater than Q but…”, followed by an excuse that makes the result less reliable. For a long time, physicists couldn’t figure out how to get rid of all these excuses and just be able to say – or not say – “P is greater than Q“.

If millions of photons are entangled in an experiment, the detectors used to detect, and observe, the photons may not be good enough to detect all of them or the photons may not survive their journey to the detectors properly. This fair-sampling loophole could give rise to doubts about whether a photon collapsed into a particular state because of entanglement or if it was simply coincidence.

To prevent this, physicists could bring the detectors closer together but this would create the communication loophole. If two entangled photons are separated by 100 km and the second observation is made more than 0.0003 seconds after the first, it’s still possible that optical information could’ve been exchanged between the two particles. To sidestep this possibility, the two observations have to be separated by a distance greater than what light could travel in the time it takes to make the measurements. (Alain Aspect and his team also pointed their two detectors in random directions in one of their tests.)
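The 0.0003-second figure is just the light-travel time across 100 km, which is quick to check:

```python
# If the two detections are separated in time by more than d/c, a
# light-speed signal could in principle have connected them - that is
# the threshold the experiment must beat.
c = 299_792_458   # speed of light, m/s
d = 100_000       # detector separation: 100 km, in metres
print(d / c)      # about 0.000334 s
```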

Third, physicists can tell if two photons received in separate locations were in fact entangled with each other, and not other photons, based on the precise time at which they’re detected. So unless physicists precisely calibrate the detection window for each pair, hidden variables could have time to interfere and induce effects the test isn’t designed to check for, creating a coincidence loophole.

If physicists perform a test such that detectors repeatedly measure the particles involved in, say, two labs in Chennai and Mumbai, it’s not impossible for statistical dependencies to arise between measurements. To work around this memory loophole, the experiment simply has to use different measurement settings for each pair.

Apart from these, experimenters also have to minimise any potential error within the instruments involved in the test. If they can’t eliminate the errors entirely, they will then have to modify the experimental design to compensate for any confounding influence due to the errors.

So the ideal Bell test – the one with no caveats – would be one where the experimenters are able to close all loopholes at the same time. In fact, physicists soon realised that the fair-sampling and communication loopholes were the more important ones.

In 1972, John Clauser and Stuart Freedman performed the first Bell test by entangling photons and measuring their polarisation at two separate detectors. Aspect led the first group that closed the communication loophole, in 1982; he subsequently conducted more tests that improved his first results. Anton Zeilinger and his team made advancements on the fair-sampling loophole.

One particularly important experimental result showed up in August 2015: Ronald Hanson and his team at the Delft University of Technology, in the Netherlands, had found a way to close the fair-sampling and communication loopholes at the same time. To quote Zeeya Merali’s report in Nature News at the time (lightly edited for brevity):

The researchers started with two unentangled electrons sitting in diamond crystals held in different labs on the Delft campus, 1.3 km apart. Each electron was individually entangled with a photon, and both of those photons were then zipped to a third location. There, the two photons were entangled with each other – and this caused both their partner electrons to become entangled, too. … the team managed to generate 245 entangled pairs of electrons over … nine days. The team’s measurements exceeded Bell’s bound, once again supporting the standard quantum view. Moreover, the experiment closed both loopholes at once: because the electrons were easy to monitor, the detection loophole was not an issue, and they were separated far enough apart to close the communication loophole, too.

By December 2015, Anton Zeilinger and co. were able to close the communication and fair-sampling loopholes in a single test with a 1-in-2-octillion chance of error, using a different experimental setup from Hanson’s. In fact, Zeilinger’s team actually closed three loopholes including the freedom-of-choice loophole. According to Merali, this is “the possibility that hidden variables could somehow manipulate the experimenters’ choices of what properties to measure, tricking them into thinking quantum theory is correct”.

But at the time Hanson et al announced their result, Matthew Leifer, a physicist at the Perimeter Institute in Canada, told Nature News (in the same report) that because “we can never prove that [the converse of freedom of choice] is not the case, … it’s fair to say that most physicists don’t worry too much about this.”

We haven’t gone into much detail about Bell’s inequalities themselves but if our goal is to understand why Aspect and Zeilinger, and Clauser too, deserve to win a Nobel Prize, it’s because of the ingenious tests they devised to test Bell’s, and Einstein’s, ideas and the implications of what they’ve found in the process.

For example, Bell crafted his test of the EPR paradox in the form of a ‘no-go theorem’: if it satisfied certain conditions, a theory was designated non-local, like quantum mechanics; if it didn’t satisfy all those conditions, the theory would be classified as local, like Einstein’s special relativity. So Bell tests are effectively gatekeepers that can attest whether or not a theory – or a system – is behaving in a quantum way, and each loophole is like an attempt to hack the attestation process.

In 1991, Artur Ekert, who would later be acknowledged as one of the inventors of quantum cryptography, realised this perspective could have applications in securing communications. Engineers could encode information in entangled particles, send them to remote locations, and allow detectors there to communicate with each other securely by observing these particles and decoding the information. The engineers can then perform Bell tests to determine if anyone might be eavesdropping on these communications using one or some of the loopholes.

The virtues and vices of reestablishing contact with Vikram

An artist's impression of the Vikram lander after completing its lunar touchdown, extending a ramp to let the Pragyan rover out. Credit: ISRO

There was a PTI report yesterday that the Indian Space Research Organisation (ISRO) is still trying to reestablish contact with the Vikram lander of the Chandrayaan 2 mission. The lander had crashed onto the lunar surface on September 7 instead of touching down softly. The incident severed its communications link with ISRO ground control, leaving the organisation unsure about the lander’s fate, although all signs pointed to it being kaput.

Subsequent attempts to photograph the designated landing site using the Chandrayaan 2 orbiter as well as the NASA Lunar Reconnaissance Orbiter didn’t provide any meaningful clues about what could’ve happened except that the crash-landing could’ve smashed Vikram to pieces too small to be observable from orbit.

When reporting on ISRO or following the news about developments related to it, the outside-in view is everything. It’s sort of like a mapping between two sets. If the first set represents the relative significance of various projects within ISRO and the second the significance as perceived by the public according to what shows up in the news, then Chandrayaan 2, human spaceflight and maybe the impending launch of the Small Satellite Launch Vehicle are going to look like moderately sized objects in set 1 but really big in set 2.

The popular impression of what ISRO is working on is skewed towards projects that have received greater media coverage. This is a pithy truism but it’s important to acknowledge because ISRO’s own public outreach is practically nonexistent, so there are no ‘normalising’ forces working to correct the skew.

This is why it seems like a problem when ISRO – after spending over a week refusing to admit that the Chandrayaan 2 mission’s surface component had failed and its chairman K. Sivan echoing an internal review’s claim that the mission had in fact succeeded to the extent of 98% – says it’s still trying to reestablish contact without properly describing what that means.

Reestablishing contact with Vikram is all you hear about vis-à-vis the Indian space programme in the news these days – if not astronaut training or the ‘mini-PSLV’ having a customer even before it has had a test flight – and this skewed coverage potentially contributes to the unfortunate impression that these are ISRO’s priorities at the moment, when in fact the relative significance of these missions – i.e. their size within set 1 – is arranged differently.

For example, the idea of trying to reestablish contact with the Vikram lander has been featured in at least three news reports in the last week, subsequently amplified through republishing and syndication, whereas the act of reestablishing contact could be as simple as one person pointing an antenna in the general direction of the Vikram lander, blasting a loud ‘what’s up’ message in the radio frequency and listening intently for a ‘not much’ reply. On the other hand, there’s a bunch of R&D, manufacturing practices and space-science discussions ISRO’s currently working on but which receive little to no coverage in the mainstream press.

So when Sivan repeatedly states across many days that they’re still trying to reestablish contact with Vikram, or when he’s repeatedly asked the same question by journalists with no imagination about ISRO’s breadth and scope, it may not necessarily signal a reluctance to admit failure in the face of overwhelming evidence that the mission has in fact failed (e.g., apart from not being able to visually spot the lander, the lander’s batteries aren’t designed to survive the long and freezing lunar night, so it’s extremely unlikely that it has power to respond to the ‘what’s up’). It could just be that either Sivan, the journalists or both – but it’s unlikely to be the journalists unless they’re aware of the resources it takes to attempt to reestablish contact – are happy to keep reminding the people that ISRO’s going to try very, very hard before it can abandon the lander.

Such metronomic messaging is politically favourable as well, maintaining the Chandrayaan 2 mission’s place in the nationalist techno-pantheon. But it should also be abundantly clear at this point that Sivan’s decision to position himself as the organisation’s sole point of contact for media professionals at the first hint of trouble, his organisation’s increasing opacity to public view, if not scrutiny, and many journalists’ inexplicable lack of curiosity about things to ask the chairman all feed one another, ultimately sidelining other branches of ISRO and the public interest itself.

Authority, authoritarianism and a scicomm paradox

The case of Ustad – the tiger shifted from its original habitat in the Ranthambore sanctuary to Sajjangarh Zoo in 2015 after it killed three people – handed me a sharp reminder to better distinguish between activists and experts, irrespective of how right the activists appear to be. Local officials were in favour of the relocation to make life easier for villagers whose livelihoods depended on the forest, whereas activists wanted Ustad brought back to Ranthambore, citing procedural irregularities and poor living conditions, and presuming to know what was best for the animal.

One vocal activist at the agitation’s forefront and to whose suggestions I had deferred when covering this story turned out to be a dentist in Mumbai, far removed from the rural reality that Ustad and the villagers co-habited as well as the opinions and priorities of conservationists about how Ustad should be handled. As I would later find out, almost all experts (excluding the two or three I’d spoken to) agreed Ustad had to be relocated and that doing so wasn’t as big a deal as the activists made it out to be, notwithstanding the irregularities.

I have never treated activists as experts since but many other publications continue to make the same mistake. There are many problems with this false equivalence, including the equation of expertise with amplitude, insofar as it pertains to scientific activity, for example conservation, climate change, etc. Another issue is that activists – especially those who live and work in a different area and who haven’t accrued the day-to-day experiences of those whose rights they’re shouting for – tend to make decisions on principle and disfavour choices motivated by pragmatic thinking.

Further, when some experts join forces with activists to render themselves or their possibly controversial opinions more visible, the journalist’s – and by extension the people’s – road to the truth becomes even more convoluted than it should be. Finally, of course, using activists in place of experts in a story isn’t fair to activists themselves: activism has its place in society, and it would be a disservice to depict activism as something it isn’t.

This alerts us to the challenge of maintaining a balancing act.

One of the trends of the 21st century has been the democratisation of information – liberating it from technological and economic prisons and making it available and accessible to people who would otherwise have gone without. This in turn has made many people self-proclaimed experts of this or that, from animal welfare to particle physics. And this in turn is mostly good because, in spite of faux expertise and the proliferation of fake news, democratising the availability of information (but not its production; that’s a different story) empowers people to question authority.

Indeed, it’s possible fake news is as big a problem as it is today because many governments and other organisations have deployed it as a weapon against the availability of information and distributed mechanisms to verify it. Information is wealth after all and it doesn’t bode well for authoritarian systems predicated on the centralisation of power to have the answers to most questions available one Google, Sci-Hub or Twitter search away.

The balancing act comes alive in the tension between preserving authority without imposing an authoritarian structure. That is, where do you draw the line?

For example, Eric Balfour isn’t the man you should be listening to to understand how killer whales interpret and exercise freedom (see tweet below); you should be speaking to an animal welfare expert instead. However, the question arises whether the expert is a hegemon here, furthering an agenda on behalf of the research community to which she belongs by delegitimising knowledge obtained from sources other than her textbooks. (Cf. scientism.)

This impression is solidified when scientists don’t speak up, choosing to remain within their ivory towers, and weakened when they do speak up. This isn’t to say all scientists should also be science communicators – that’s a strawman – but that all scientists should be okay with sharing their comments with the press with reasonable preconditions.

In India, for example, very, very few scientists engage freely with the press and the people, and even fewer speak up against the government when the latter misfires (which is often). Without dismissing the valid restrictions and reservations that some of them have – including not being able to trust many journalists to know how science works – it’s readily apparent that the number of scientists who do speak up is minuscule relative to the number of scientists who can.

An (English-speaking) animal welfare expert is probably just as easy to find in India as they might be in the US but consider palaeontologists or museologists, who are harder to find in India (sometimes you don’t realise that until you’re looking for a quote). When they don’t speak up – to journalists, even if not of their own volition – during a controversy, even as they also assert that only they can originate true expertise, the people are left trapped in a paradox, sometimes even branded fools to fall for fake news. But you can’t have it both ways, right?

These issues stem from two roots: derision and ignorance, both of science communication.

Of the scientists endowed with sufficient resources (including personal privilege and wealth): some don’t want to undertake scicomm, some don’t know enough to make a decision about whether to undertake scicomm, and some wish to undertake scicomm. Of these, scientists of the first type – who actively resist communicating research, whether their own or others’, believing it to be a lesser or even undesirable enterprise – wish to perpetuate their presumed authority and their authoritarian ‘reign’ by hoarding their knowledge. They are responsible for the derision.

These people are responsible at least in part for the emergence of Balfouresque activists: celebrity-voices that amplify issues but wrongly, with or without the support of larger organisations, often claiming to question the agenda of an unholy union of scientists and businesses, alluding to conspiracies designed to keep the general populace from asking too many questions, and ultimately secured by the belief that they’re fighting authoritarian systems and not authority itself.

Scientists of the second type, who are unaware of why science communication exists and its role in society, are obviously the ignorant.

For example, when scientists from the UK had a paper published in 2017 about the Sutlej river’s connection to the Indus Valley civilisation, I reached out to two geoscientists for comment, after having ascertained that they weren’t particularly busy or anything. Neither had replied after 48 hours, not even with a ‘no’. So I googled “fluvio-deltaic morphology”, picked the first result that was a university webpage and emailed the senior-most scientist there. This man, Maarten Kleinhans at the University of Utrecht, wrote back almost immediately and in detail. One of the two geoscientists wrote me a month later: “Please check carefully, I am not an author of the paper.”

More recently, the 2018 Young Investigators’ Meet in Guwahati included a panel discussion on science communication (of which I was part). After fielding questions from the audience – mostly from senior scientists already convinced of the need for good science communication, such as B.K. Thelma and Roop Malik – and breaking for tea, another panelist and I were mobbed by young biologists completely baffled as to why journalists wanted to interrogate scientific papers when that’s exactly why peer-review exists.

All of this is less about fighting quacks bearing little to no burden of proof and more about responding to the widespread and cheap availability of information. Like it or not, science communication is here to stay because it’s one of the more credible ways to suppress the undesirable side-effects of implementing and accessing a ‘right to information’ policy paradigm. Similarly, you can’t have a right to information together with a right to withhold information; the latter has to be defined in the form of exceptions to the former. Otherwise, prepare for activism to replace expertise.