Categories: Analysis, Scicomm

The problem with rooting for science

The idea that trusting in science involves a lot of faith, instead of reason, is lost on most people. More often than not, as a science journalist, I encounter faith through extreme examples – such as the Bloch sphere (used to represent the state of a qubit) or wave functions (‘mathematical objects’ used to understand the evolution of certain simple quantum systems). These and other similar concepts require years of training in physics and mathematics to understand. At the same time, science writers are often confronted with the challenge of making these concepts sensible to an audience that seldom has this training.

More importantly, how are science writers to understand them? They don’t. Instead, they implicitly trust the scientists they’re talking to to make sense of them. If I know that a black hole curves spacetime to such an extent that pairs of virtual particles created near its surface are torn apart – one particle entering the black hole never to exit and the other sent off into space – it’s not because I’m familiar with the work of Stephen Hawking. It’s because I read his books, read some blogs and scientific papers, spoke to physicists, and decided to trust them all. Every science journalist, in fact, has a set of sources they’re likely to trust over others. I even place my faith in some people over others, based on factors like personal character, past record, transparency, reflexivity, etc., so that I take what they produce with only the smallest pinch of salt, and build on their findings to develop stories of my own. In this way, I’m already creating an interface between science and society – by matching scientific knowledge with the socially developed markers of reliability.

I choose to trust those people, processes and institutions that display these markers. I call this an act of faith for two reasons: 1) it’s an empirical method, so to speak – there is no proof in theory that such ‘matching’ will always work; and 2) I believe it’s instructive to think of this relationship as being mediated by faith, if only to highlight its opposition to reason. Most of us understand science through faith, not reason. Even scientists who are experts on one thing take the word of scientists on completely different things, instead of trying to study those things themselves (see ad verecundiam fallacy).

Sometimes, such faith is (mostly) harmless, such as in the ‘extreme’ cases of the Bloch sphere and the wave function. It is both inexact and incomplete to think that quantum superposition means an object is in two states at once. The human brain hasn’t evolved to grasp superposition exactly; this is why physicists use the language of mathematics to make sense of this strange existential phenomenon. The problem – i.e. the inexactitude and the incompleteness – arises when a communicator translates the mathematics to a metaphor. Equally importantly, physicists are describing whereas the rest of us are thinking. There is a crucial difference between these activities that illustrates, among other things, the fundamental incompatibility between scientific research and science communication that communicators must first surmount.

As physicists over the past three or four centuries have relied increasingly on mathematics rather than the word to describe the world, physics, like mathematics itself, has made a “retreat from the word,” as literary scholar George Steiner put it. In a 1961 Kenyon Review article, Steiner wrote, “It is, on the whole, true to say that until the seventeenth century the predominant bias and content of the natural sciences were descriptive.” Mathematics used to be “anchored to the material conditions of experience,” and so was largely susceptible to being expressed in ordinary language. But this changed with the advances of modern mathematicians such as Descartes, Newton, and Leibniz, whose work in geometry, algebra, and calculus helped to distance mathematical notation from ordinary language, such that the history of how mathematics is expressed has become “one of progressive untranslatability.” It is easier to translate between Chinese and English — both express human experience, the vast majority of which is shared — than it is to translate advanced mathematics into a spoken language, because the world that mathematics expresses is theoretical and for the most part not available to our lived experience.

Samuel Matlack, ‘Quantum Poetics’, The New Atlantis, 2017

However, the faith becomes more harmful the further we move away from the ‘extreme’ examples – of things we’re unlikely to stumble on in our daily lives – and towards more commonplace ideas, such as ‘how vaccines work’ or ‘why GM foods are not inherently bad’. The harm emerges when we think we know something while in fact being in denial about how it is that we know it. Many of us think it’s reason; most of the time it’s faith. Remember when, in Friends, Monica Geller and Chandler Bing ask David the Scientist Guy how airplanes fly, and David says it has to do with Bernoulli’s principle and Newton’s third law? Monica then turns to Chandler with a knowing look and says, “See?!” To which Chandler says, “Yeah, that’s the same as ‘it has something to do with wind’!”

The harm lies in rooting for science – in endorsing the scientific enterprise and vesting our faith in its fruits – without really understanding how these fruits are produced. Such understanding is important for two reasons.

First, we must acknowledge that we trust scientists, instead of presuming to know, or actually knowing, enough to vouch for their work ourselves. It would be vacuous to claim science is superior in any way to another enterprise that demands our faith when science itself also receives our faith. Perhaps more fundamentally, we like to believe that science is trustworthy because it is evidence-based and it is tested – but the COVID-19 pandemic should have clarified, if it hasn’t already, the continuous (as opposed to discrete) nature of scientific evidence, especially if we also acknowledge that scientific progress is almost always incremental. Evidence can be singular and thus clear – like a new avian species, graphene layers superconducting electrons or tuned lasers cooling down atoms – or it can be necessary but insufficient, and therefore on a slippery slope – such as repeated genetic components in viral RNA, a cigar-shaped asteroid or water shortage in the time of climate change.

Physicists working with giant machines to spot new particles and reactions – all of which are detected indirectly, through their imprints on other well-understood phenomena – have two important thresholds for the reliability of their findings: if the chance of X (say, “spotting a particle of energy 100 GeV”) being false is 0.27%, it’s good enough to be evidence; if the chance of X being false is 0.00006%, then it’s a discovery (i.e., “we have found the particle”). But at what point can we be sure that we’ve indeed found the particle we were looking for if the chance of being false will never reach 0%? One way, for physicists specifically, is to combine the experiment’s results with what they expect to happen according to theory; if the two match, it’s okay to think that even a less reliable result will likely be borne out. Another possibility (in the line of Karl Popper’s philosophy) is that a result expected to be true, and is subsequently found to be true, is true until we have evidence to the contrary. But as suitable as this answer may be, it still doesn’t neatly fit the binary ‘yes’/’no’ we’re used to, and which we often expect from scientific endeavours as well (see experience v. reality).
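The two thresholds mentioned above correspond to the particle physics conventions of 3σ (‘evidence’) and 5σ (‘discovery’). A short sketch in Python, using only the standard library, shows how those percentages fall out of the Gaussian distribution – treating the threshold as a two-sided tail probability, which is one common convention:

```python
from math import erfc, sqrt

def false_alarm_probability(n_sigma):
    """Two-sided Gaussian tail probability of an n-sigma fluctuation."""
    return erfc(n_sigma / sqrt(2))

print(f"3-sigma ('evidence'):  {false_alarm_probability(3):.2%}")   # ~0.27%
print(f"5-sigma ('discovery'): {false_alarm_probability(5):.7%}")   # ~0.0000573%
```

The 5σ figure is why physicists say a discovery has roughly a one-in-1.7-million chance of being a statistical fluke – small, but never zero, which is the point the paragraph above makes.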

(Minor detour: While rational solutions are ideally refutable, faith-based solutions are not. Instead, the simplest way to reject their validity is to use extra-scientific methods, and more broadly deny them power. For example, if two people were offering me drugs to suppress the pain of a headache, I would trust the one who has a state-sanctioned license to practice medicine and is likely to lose that license, even temporarily, if his prescription is found to have been mistaken – that is, by asserting the doctor as the subject of democratic power. Axiomatically, if I know that Crocin helps manage headaches, it’s because, first, I trusted the doctor who prescribed it and, second, Crocin has helped me multiple times before, so empirical experience is on my side.)

Second, if we don’t know how science works, we become vulnerable to believing pseudoscience to be science as long as the two share some superficial characteristics, like, say, the presence and frequency of jargon or a claim’s originator being affiliated with a ‘top’ institute. The authors of a scientific paper to be published in a forthcoming edition of the Journal of Experimental Social Psychology write:

We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.

(Caveats: 1. We could apply the point of this post to this study itself; 2. I haven’t checked the study’s methods and results with an independent expert, and I’m also mindful that this is psychology research and that its conclusions should be taken with salt until independent scientists have successfully replicated them.)

Later from the same paper:

Our four experiments and meta-analysis demonstrated that people, and in particular people with higher trust in science (Experiments 1-3), are vulnerable to misinformation that contains pseudoscientific content. Among participants who reported high trust in science, the mere presence of scientific labels in the article facilitated belief in the misinformation and increased the probability of dissemination. Thus, this research highlights that trust in science ironically increases vulnerability to pseudoscience, a finding that conflicts with campaigns that promote broad trust in science as an antidote to misinformation but does not conflict with efforts to install trust in conclusions about the specific science about COVID-19 or climate change.

In terms of the process, the findings of Experiments 1-3 may reflect a form of heuristic processing. Complex topics such as the origins of a virus or potential harms of GMOs to human health include information that is difficult for a lay audience to comprehend, and requires acquiring background knowledge when reading news. For most participants, seeing scientists as the source of the information may act as an expertise cue in some conditions, although source cues are well known to also be processed systematically. However, when participants have higher levels of methodological literacy, they may be more able to bring relevant knowledge to bear and scrutinise the misinformation. The consistent negative association between methodological literacy and both belief and dissemination across Experiments 1-3 suggests that one antidote to the influence of pseudoscience is methodological literacy. The meta-analysis supports this.

So rooting for science per se is not just not enough, it could be harmful vis-à-vis the public support for science itself. For example (and without taking names), in response to right-wing propaganda related to India’s COVID-19 epidemic, quite a few videos produced by YouTube ‘stars’ have advanced dubious claims. They’re not dubious at first glance, if only because they purport to counter pseudoscientific claims with scientific knowledge, but they are – either for insisting on a measure of certainty in the results that neither exists nor is achievable, or for making pseudoscientific claims of their own, just wrapped up in technical lingo so they’re more palatable to those supporting science over critical thinking. Some of these YouTubers, and in fact writers, podcasters, etc., are even blissfully unaware of how wrong they often are. (At least one of them was also reluctant to edit a ‘finished’ video to make it less sensational despite repeated requests.)

Now, where do these ideas leave (other) science communicators? In attempting to bridge a nearly unbridgeable gap, are we doomed to swing only between most and least unsuccessful? I personally think that this problem, such as it is, is comparable to Zeno’s arrow paradox. To use Wikipedia’s words:

He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.

To ‘break’ the paradox, we need to identify and discard one or more primitive assumptions. In the arrow paradox, for example, one could argue that time is not composed of a stream of “duration-less” instants, that each instant – no matter how small – encompasses a vanishingly short but not nonexistent passage of time. With popular science communication (in the limited context of translating something that is untranslatable sans inexactitude and/or incompleteness), I’d contend the following:

  • Awareness: ‘Knowing’ and ‘knowing of’ are significantly different and, I hope, self-explanatory also. Example: I’m not fluent in the physics of cryogenic engines but I’m aware that they’re desirable because liquid hydrogen delivers the highest specific impulse of commonly used rocket propellants.
  • Context: As I’ve written before, a unit of scientific knowledge that exists in relation to other units of scientific knowledge is a different object from the same unit of scientific knowledge existing in relation to society.
  • Abstraction: 1. the perfect can be the enemy of the good, and imperfect knowledge of an object – especially a complicated compound one – can still be useful; 2. when multiple components come together to form a larger entity, the entity can exhibit some emergent properties that one can’t derive entirely from the properties of the individual components. Example: one doesn’t have to understand semiconductor physics to understand what a computer does.

An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. (Sometimes I browse math articles in Wikipedia simply to immerse myself in their majestic incomprehensibility, like visiting a foreign planet.)

Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

Adam Kirsch, ‘The Ontology of Pop Physics’, Tablet Magazine, 2020

But by offering these reasons, I don’t intend to over-qualify science communication – i.e. claim that, given enough time and/or other resources, a suitably skilled science communicator will be able to produce a non-mathematical description of, say, quantum superposition that is comprehensible, exact and complete. Instead, it may be useful for communicators to acknowledge that there is an immutable gap between common English (the language of modern science) and mathematics, beyond which scientific expertise is unavoidable – in much the same way communicators must insist that the farther the expert strays into the realm of communication, the closer they’re bound to get to a boundary beyond which they must defer to the communicator.

Categories: Scicomm

The clocks that used atoms and black holes to stay in sync

You’re familiar with clocks. There’s probably one if you look up just a little, at the upper corner of your laptop or smartphone screen, showing you what time of day it is, allowing you to quickly grasp the number of daytime or nighttime hours, depending on your needs.

There are other clocks that are less concerned with displaying ‘clock time’ and more with measuring the passage of time itself. These devices are useful for applications designed to understand this dimension in a deeper sense. The usefulness of these clocks also depends more strongly on the timekeeping techniques they employ.

For example, consider the caesium atomic clock. Like all clocks, it is a combination of three things: an oscillator, a resonator and a detector. The oscillator is a finely tuned laser that shines on an ultra-cold gas of caesium atoms in a series of pulses. If the laser has the right frequency, an electron in a caesium atom will absorb a corresponding photon and jump to a higher energy level, before jumping back to its original place by emitting radiation of exactly 9,192,631,770 Hz. This radiation is the resonator.

The detector will be looking for radiation of this frequency – and the moment it has detected 9,192,631,770 full waves (crest to crest), it will signal that one second has passed. This is also why, technically, a caesium clock can be used to measure out roughly one nine-billionth of a second.

Scientists need even more precise clocks, clocks that use extremely stable resonators and, increasingly of late, clocks that combine both advantages. This is why scientists developed optical atomic clocks. The caesium atomic clock has a resonant frequency of 9,192,631,770 Hz, which lies in the microwave part of the electromagnetic spectrum. Optical atomic clocks use resonators whose frequencies lie in the optical part, which is much higher.

For example, physicists at the Inter-University Centre for Astronomy and Astrophysics and the Indian Institute of Science Education and Research, both Pune, are building clocks that use ytterbium and strontium ions, respectively, with resonator frequencies of 642,121,496,772,645 Hz and 429,228,066,418,009 Hz. So technically, these clocks can measure out roughly one 642-trillionth and one 429-trillionth of a second, respectively, allowing scientists ultra-precise insights into how long very short-lived events really last or how closely theoretical predictions and experimental observations match up.
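These tick durations follow directly from the resonator frequencies – in principle, each clock can resolve time down to one period of its resonator. A quick back-of-the-envelope calculation in Python, using the frequencies quoted above:

```python
clocks_hz = {
    "caesium (microwave)":  9_192_631_770,
    "ytterbium (optical)":  642_121_496_772_645,
    "strontium (optical)":  429_228_066_418_009,
}

for name, freq in clocks_hz.items():
    # one period of the resonator = smallest interval the clock can resolve
    print(f"{name}: {1 / freq:.2e} s per period")
```

The caesium period works out to about 1.1 × 10⁻¹⁰ s, and the optical clocks to a few femtoseconds – some 70,000 times finer.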

In fact, because we have not managed to measure any other SI unit – the kilogram, the metre and so on – to a comparable precision, time is currently the most precisely measured physical quantity ever.


Sometimes, scientists need to use multiple atomic clocks in the course of an experiment or to ascertain how synchronised they are. This is not a trivial exercise.

For example, say you have two clocks whose performance you need to compare. If they are simple digital clocks, you could check how precisely each one of them records the amount of time between, say, astronomical dawn and astronomical dusk (the moments when the Sun is 18º below the horizon before sunrise and after sunset, respectively). Here, you take the act of looking at each clock face for granted. If the clocks are right in front of you, light travels nearly instantaneously between your eye and the display. And because the clocks tick one second at a time, you can repeat the task of checking their synchronisation as often as you need to just by looking.

What do you do if you need to know how well two optical atomic clocks are matched up continuously and if they are separated by, say, a thousand kilometres? Scientists in Europe demonstrated one solution to this problem in 2015.

They had optical clocks in Paris and Braunschweig connected with fibre optic cables to a processing station in Strasbourg. The resonant frequency of each clock was encoded in a ‘transfer laser’ that was then beamed through the cables to Strasbourg, where a detector measured the two laser pulses to decode the relative beat of each clock in real-time. The total length of the fibre optic cables in this case was 1,415 km. With this “all-optical” setup plus signal processing techniques, the research team reported a precision of three parts in 10¹⁹ after an averaging time of just 1,000 seconds – a cutting-edge feat.

But scientists are likely to need one step better, if only because they also anticipate that the advent of optical atomic clocks at facilities around the world is likely to lead to a redefinition of the SI unit of time. The second’s current definition – “the time duration of 9,192,631,770 periods of the radiation” emitted by electrons transitioning between two particular energy levels of a caesium-133 atom – originated in 1967, when microwave atomic clocks were the state of the art.

Today, optical atomic clocks have this honour – and because they are more stable and use a higher resonator frequency than their microwave counterparts, it only makes sense to update the definition of a second. When this happens, optical clocks around the world will have to speak to each other constantly to make sure what each of them is measuring to be one second is the same everywhere.

Some of these clocks will be a few hundred kilometres apart, and others a lot more. In fact, scientists have figured it would be useful to have a way for two optical atomic clocks located on different continents to be able to work with each other. This represents the current version of the coordination problem, and scientists in Europe and Japan recently demonstrated a solution. It involves astronomy, because astronomy has a similar problem.


Everything in the universe is constantly in motion, which means telling the position of one moving object from another – like that of Venus from Earth – is bound to be more complicated from the start than knowing where your friend lives in a different city.

But astronomers have still figured out a way to establish a fixed reference frame that provides useful information about the location of different cosmic objects through space and time. They call it the International Celestial Reference Frame (ICRF). Its centre is located at the barycentre of the Solar System – the point around which all the planets in the Solar System orbit. Each of its three axes points in the direction of groups of objects called defining sources.

Many of these objects are quasars. ‘Quasar’ is a portmanteau of ‘quasi-stellar’, and is the name of the region at the centre of a galaxy where there is a supermassive black hole surrounded by a highly energised disk of gas and dust. Quasars are thus extremely bright. Astronomers spotted the first of them because they showed up in radio-telescope data as previously unknown star-like sources of radio waves. Because each galaxy can technically have only one quasar, the number of quasars in the sky is not very high (relatively speaking) and most quasars are also located at such great distances that the radio waves they emit become very weak by the time they reach Earth’s radio telescopes.

Different views of the antennae of the Giant Metre-wave Radio Telescope at Khodad, Maharashtra. Credit: NCRA-TIFR

So on Earth, physicists either use very powerful telescopes to detect them or a collection of telescopes that work together using a technique called very-long baseline interferometry (VLBI). The idea is elegant but the execution is complicated.

Say some process in the accretion disk around the black hole at the Milky Way’s centre emits radio waves into space. These waves propagate through the universe. At some point, after many thousands of years, they reach radio telescopes on Earth. Because the telescopes are located at vastly different locations – in Maharashtra, the Canary Islands and Hawaii, say – they will each detect and measure the radio wave signals at slightly different points of time. There may also be slight differences in the waves’ characteristics because they are likely to have moved through different forms and densities of matter in their journey through space.

Computers combine the exact times at which the signals arrive at each telescope and the signals’ physical properties (like frequency, phase, etc.) with a sophisticated technique called cross-correlation to produce a better-resolved picture of the source that emitted them than if they had used data from only one telescope.
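In its simplest form, cross-correlation slides one recorded signal against the other and finds the time offset at which the two line up best. The toy Python sketch below (idealised, noiseless signals; a real correlator also handles noise, fringe rotation and clock drift) recovers a known delay between two recordings of the same burst:

```python
def best_delay(a, b, max_lag):
    """Return the lag d (in samples) at which b[i] best matches a[i - d]."""
    def score(d):
        # inner product of a against b shifted by d samples
        return sum(a[i] * b[i + d] for i in range(len(a)) if 0 <= i + d < len(b))
    return max(range(-max_lag, max_lag + 1), key=score)

# b records the same burst as a, delayed by 3 samples
a = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 3, 1, 0, 0]
print(best_delay(a, b, 5))  # → 3
```

The recovered delay, combined with the known geometry of the telescopes, is what lets the computers reconstruct a single coherent view of the source.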

In fact, the resolving power of a radio telescope is proportional to the telescope’s baseline. If scientists are using only one telescope to make an observation, the baseline is equal to the dish’s diameter. But with VLBI radio astronomy, the baseline is equal to the longest distance between two telescopes in the array. This is why this technique is so powerful.

For example, to capture the first direct image of a black hole – the one at the centre of the galaxy M87, some 55 million lightyears away – astronomers combined an array of eight telescopes located in North America, South America, Hawaii, Europe and the South Pole to form the Event Horizon Telescope. At any given time, the baseline would be determined by two telescopes that could observe the black hole simultaneously. And as Earth rotated, different pairs of telescopes would work together to keep observing the black hole even as their own view of it changed.

Each telescope would record a signal together with a very precise timestamp, provided by an atomic clock installed at the same facility or nearby, on a hard drive. Once an observing run ended, all the hard drives would be shipped to a processing facility, where computers would combine the signal and time data from them to create an image of the source.

A diagram showing the location of the Event Horizon Telescope's participating telescopes on four continents.
Credit: EHT/Harvard University

As it happens, the image of the black hole the Event Horizon Telescope collaboration released in 2019 could have been available sooner if not for the fact that there are no flights from the South Pole between April and October. So astrophysics also has some coordination problems, but astrophysicists have been able to figure them out thanks to tools like VLBI. Perhaps it’s not surprising, then, that scientists have thought to use VLBI to solve optical atomic clocks’ coordination problem as well.

According to a paper published in July 2020, the current version of the ICRF is the third iteration; it was adopted on January 1, 2019, and uses 4,588 sources. Of these, the positions of exactly 500 sources – including some quasars – are known with “extreme accuracy”. Using this information, the European-Japanese team reversed the purpose of VLBI to serve atomic clocks.

Using VLBI to measure the positions and features of distant astronomical objects is called VLBI astrometry. Doing the same to measure distances on Earth, like the European-Japanese team has done, is called VLBI geodesy. In the former, astronomers use VLBI to reduce uncertainties about distant sources of radio waves by being as certain as possible about the distance between the telescopes (and other mitigating factors like atmospheric distortion). Flip this: if you are as certain as possible about the distance from Earth to a particular quasar, you can use VLBI to reduce uncertainties about the distance between two atomic clocks instead.

And the science and technologies we have available today have allowed astronomers to resolve details down to a few billionths of a degree in astrometry – and to a few millimetres in geodesy.

The European-Japanese team implemented the same idea. The team members used three radio telescopes. Two of them, located in Medicina (Italy) and Koganei (Japan), were small, with dishes of diameter 2.4 m, but with a total baseline of 8,700 km. The Medicina telescope was connected to a ytterbium optical atomic clock in Torino and the Koganei telescope to a strontium optical atomic clock in the same facility.

Credit: https://www.nature.com/articles/s41567-020-01038-6

First, the Torino clock’s resonator frequency was converted from the optical part of the spectrum to the microwave part using a device called a frequency comb, like in the schematic shown below.

Credit: NIST

(To quote myself from an older article: “A frequency comb is an advanced laser whose output radiation lies in multiple, evenly-spaced frequencies. This output can be used to convert high-frequency optical signals into more easily countable lower-frequency microwave signals.”)
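A comb’s teeth sit at evenly spaced frequencies f(n) = f_ceo + n × f_rep, where the repetition rate f_rep and the carrier-envelope offset f_ceo are both countable microwave frequencies. Beating the optical signal against the nearest tooth then yields a low, countable radio frequency. A sketch of the arithmetic in Python – the comb parameters here are assumed, illustrative values, not those of the actual experiment:

```python
f_rep = 250e6   # repetition rate, 250 MHz (assumed)
f_ceo = 20e6    # carrier-envelope offset, 20 MHz (assumed)
f_opt = 429_228_066_418_009  # strontium clock frequency, Hz (from the text)

n = round((f_opt - f_ceo) / f_rep)  # index of the nearest comb tooth
f_tooth = f_ceo + n * f_rep
f_beat = abs(f_opt - f_tooth)       # countable radio-frequency beat note

print(n, f"{f_beat / 1e6:.1f} MHz")  # beat is guaranteed below f_rep / 2
```

Counting f_rep, f_ceo and f_beat with ordinary electronics pins down the 429 THz optical frequency without ever counting its oscillations directly.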

This microwave frequency is transferred to a laser that is beamed through a fibre optic cable to the Medicina telescope. Similarly, at Koganei, the strontium clock’s resonator frequency is converted using a frequency comb to a corresponding microwave counterpart. At this point, both telescopes have time readings from optical atomic clocks in the form of more easily counted microwave radiation.

In the second step, the scientists used VLBI to determine as accurately as possible the time difference between the two telescopes. For this, the telescopes observed a quasar whose position was known to a high degree of accuracy in the ICRF system.

Since quasars are inherently far away and the two telescopes are quite small (as radio telescopes go), they were able to detect the quasar signal only weakly. To adjust for this, the team connected both telescopes via high-speed internet links to a large 34-m radio telescope in Kashima, also in Japan. This way, the team writes in its paper published in October 2020,

“the delay observable between the transportable stations can be calculated as the difference of the two delays with the large antenna after applying a small correction factor”.

Once the scientists had a delay figure, they worked backwards to estimate when exactly the two telescopes ought to have recorded their respective signals, based on which they could calculate the ratio of the microwave frequencies, and finally based on which they could calculate the ratio of the two clocks’ optical frequencies – autonomously, in real-time. To quote once again from the team’s paper:

“One node was installed at NICT headquarters in Koganei (Japan) while the other was transported to the Radio Astronomical Observatory operated by INAF in Medicina (Italy), forming an intercontinental baseline of 8,700 km. Observational data at Medicina and Koganei were stored on hard-disk drives at each station and transferred over high-speed internet networks to the correlation centre in Kashima for analysis. Ten frequency measurements were performed via VLBI between October 2018 and February 2019, and from these we calculated the frequency difference between the reference clocks at the two stations: the local hydrogen masers in Medicina and Koganei. Each session lasted from 28 h to 36 h and included at least 400 scans observing between 16 and 25 radio sources in the ICRF list.”

This way, they reported the ability to determine the frequency ratio with an uncertainty of 10⁻¹⁶ after ten thousand seconds, and perhaps as low as 10⁻¹⁷ after a longer averaging time of ten days.

This is very good, but more importantly it’s better than the uncertainty arising from directly comparing the frequencies of two optical atomic clocks by relaying data through satellites. An uncertainty of 10⁻¹⁷ also means physicists can use multiple optical atomic clocks to study extremely slow changes, and potentially be confident about the results down to 0.00000000000000001 seconds.


The architecture of the solution also presents some unique advantages, as well as food for thought.

The setup effectively requires optical atomic clocks to be connected to small, even portable, radio telescopes as long as these telescopes are then connected to a larger one located somewhere else through a high-speed internet connection. These small instruments “can be operated without the need for a radio transmission licence,” the team writes in the paper, and “where laboratories lack the facilities or sky coverage to house a VLBI station, they can be connected by local optical-fibre links” like the one between Medicina and Torino.

The scientists have effectively used existing methods to solve a new problem instead of finding an altogether new solution. This isn’t to say new solutions are disfavoured but only that the achievement, apart from being relatively low cost and well-understood, is ingenious, and keeps the use of optical atomic clocks for all the applications they portend from becoming too resource-intensive.

It’s also fascinating that the clocks participating in this exercise are effectively a group of machines translating between processes playing out at two vastly different scales – one of minuscule electrons emitting tiny amounts of radiation over short distances, and the other of radiation of similar provenance emerging from the extreme neighbourhoods of colossal black holes, travelling through the cosmos for many millennia at the speed of light.

Perhaps this was to be expected, considering the idea of using a clock is fundamentally a quest for a foothold, a way to translate the order lying at the intersection of seemingly chaotic physical processes, all directed by the laws of nature, to a metronome that the human mind can tick to.

Featured image: A simulation of a black hole from the 2014 film ‘Interstellar’. Source: YouTube.

Categories
Life notes

There is more than one thunder

Sunny Kung, a resident in internal medicine at a teaching hospital in the US, has authored a piece in STAT News about her experience dealing with people with COVID-19, and with other people who deal with people with COVID-19. I personally found the piece notable because it describes a sort of experience of dealing with COVID-19 that hasn’t had much social sanction thus far.

That is, when a socio-medical crisis like the coronavirus pandemic strikes, the first thing on everyone’s minds is to keep as few people from dying as possible. Self-discipline and self-sacrifice, especially among those identified as frontline healthcare and emergency services workers, become greater virtues than even professional integrity and the pursuit of individual rights. As a result, these workers incur heavy social, mental and sometimes even physical costs that they’re not at liberty to discuss openly without coming across as selfish at a time when selflessness is precariously close to being identified as a fundamental duty.

Kung’s piece, along with some others like it, clears and maintains a precious space for workers like her to talk about what they’re going through without being vilified for it. Further, I’m no doctor, nurse or ambulance driver but ‘only’ a journalist, so I have even less sanction to talk about my anxieties than a healthcare worker does without inviting, at best, a polite word about the pandemic’s hierarchy of priorities.

But as the WHO itself has recognised, this pandemic is also an ‘infodemic’, and the contagion of fake news, misinformation and propaganda is often as deadly as – if not deadlier than – the effects of the virus itself. However, the amount of work that my colleagues and I need to do, and which we do because we want to, to ensure what we publish is timely, original and verified often goes unappreciated in the great tides of information and data.

This is not a plea for help but an unassuming yet firm reminder that:

  1. Emergency workers come in different shapes, including as copy-editors, camerapersons and programmers – all the sort of newsroom personnel you never see but which you certainly need;
  2. Just because it’s not immediately clear how we’re saving lives doesn’t mean our work isn’t worth doing, or that it’s easy to do; and
  3. Saving lives is not the only outcome that deserves to be achieved during a socio-medical crisis.

A lot of what a doctor like Kung relates to, I can relate to as well – and again, not in an “I want to steal your thunder” sort of way but in a “this is a small window through which I get to shout ‘there are many thunders’” sort of way. For example, she writes,

Every night during the pandemic I’ve dreaded showing up to work. Not because of fear of contracting Covid-19 or because of the increased workload. I dread having to justify almost every one of my medical decisions to my clinician colleagues.

Since the crisis began, I’ve witnessed anxiety color the judgement of many doctors, nurses, and other health care workers — including myself — when taking care of patients.

Many of us simply want to make sure we’re doing the right thing and to the best of our ability, that to the extent possible we’re subtracting the effects of fatigue and negligence from a situation rife with real and persistent uncertainty. But in the process, we’re often at risk of doing things we shouldn’t be doing.

As Kung writes, doctors and nurses make decisions out of fear – and journalists cover the wrong paper, play up the wrong statistic, quote the wrong expert or pursue the wrong line of inquiry. Kung also delineates how simply repeating facts, even to nurses and other medical staff, often fails to convince them. I go through the same thing every week with my colleagues and with dozens of freelancers, who believe ‘X’ must be true and want to anticipate the consequences of ‘X’. I, on the other hand, am more aware that the results of tests and studies are almost never 100% certain – often because the principles of metrology themselves impose limits on confidence intervals, but sometimes because the results depend strongly on the provenance of the input data and/or on the mode of publishing – so I want to play it safe and not advertise results that first seed problematic ideas in the minds of our readers and later turn out to be false.

So they just want to make sure, and I just want to make sure, too. Neither party is wrong but, except with the benefit of hindsight, neither party is likely to be proven right either. I don’t like these conversations because they’re exhausting, but I wouldn’t like to abandon them because it’s my responsibility to have them. And what I need is for this sentiment to simply be acknowledged. While I don’t presume to know what Kung wants to achieve with her article, it certainly makes the case for everyone to acknowledge that frontline medical workers like her have issues that in turn have little to do with the fucking virus.

In yet another reminder that the first six months (if not more) of 2020 will have been the worst infodemic in history, I can comfortably modify the following portions of Kung’s article…

They were clearly disgruntled about my decision not to transfer Mr. M to the ICU. I tried to reassure them by providing evidence, but I could still feel the tension and fear. The nurses wanted another M.D. to act as an arbiter of my decision but were finally convinced after I cited the patient’s stable vital signs, laboratory results, and radiology findings.

Everyone in the hospital is understandably on edge. Uncertainty is everywhere. Our hospital’s policies have been constantly changing about who we should test for Covid-19 and when we should wear what type of protective personal equipment. Covid-19 is still a new disease to many clinicians. We don’t know exactly which patients should go to the ICU and which are stable enough to stay on the regular floor. And it is only a matter of time before we run out of masks and face shields to protect front-line health care workers. …

As a resident in internal medicine and a future general internist, it is my duty to take care of these Covid-19 patients and reassure them that we are here to support them. That’s what I expect to do for all of my patients. What I did not expect from this pandemic is having to reassure other doctors, nurses, and health care workers about clinical decisions that I would normally never need to justify. …

There is emerging literature on diagnosing and treating Covid-19 patients that is easily accessible to physicians and nurses, but some of them are choosing to make their medical decisions based on fear — such as pushing for unnecessary testing or admission to the hospital, which may lead to overuse of personal protective equipment and hospital beds — instead of basing decisions on data or evidence. …

… thus:

The freelancer was clearly disgruntled about my decision not to accept the story for publication. I tried to reassure them by providing evidence, but I could still feel the tension and resentment. The freelancer wanted another editor to act as an arbiter of my decision but was finally convinced after I cited the arguments’ flaws one by one.

Every reporter is understandably on edge. Uncertainty is everywhere. Our newsroom’s policies have been constantly changing about what kind of stories we should publish, using what language and which angles we should avoid. Covid-19 is still a new disease to many journalists. We don’t know exactly which stories are worth pursuing right away and which can wait. And it is only a matter of time before we run low on funds and/or are scooped. …

As a science editor, it is my duty to look out for my readers and reassure them that we are here to support them. That’s what I expect to do for all of my readers. What I did not expect from this pandemic is having to reassure other reporters, editors, and freelancers about editorial decisions that I would normally never need to justify. …

There is emerging literature on diagnosing and treating Covid-19 patients that is easily accessible to reporters and editors, but some of them are choosing to make their editorial decisions to optimise for sensationalism or speed — such as composing news reports based on unverified claims, half-baked data, models that are “not even wrong” or ideologically favourable points of view, which may lead readers to under- or overestimate various aspects of the pandemic — instead of basing decisions on data or evidence. …

More broadly, I dare to presume frontline healthcare workers already have at least one (highly deserved) privilege that journalists don’t, and in fact have seldom had: acknowledgment of their workload. Yes, I want to do the amount of work I’m doing because I don’t see anyone else being able to do it anytime soon (and so I even take pride in it), but it’s utterly dispiriting to be reminded, every now and then, that the magnitude of my commitment doesn’t just languish in society’s blindspot but often has its very existence denied.

Obviously very little of this mess is going to be cleaned up until the crisis is past its climax (although, like ants on a Möbius strip, we might not be able to tell which side of the problem we’re on), at which point the world’s better minds might derive lessons for all of us to learn from. At the same time, the beautiful thing about acknowledgment is that it doesn’t require you to determine, or know, if what you’re acknowledging is warranted or not, whether it’s right or wrong, even as the acknowledgment itself is both right and warranted. So please do it as soon as you can, if only because it’s the first precious space journalists need to clear and maintain.

Categories
Scicomm

The nomenclature of uncertainty

The headline of a Nature article published on December 9 reads ‘LIGO black hole echoes hint at general relativity breakdown’. The article is about the prediction of three scientists that, should LIGO find ‘echoes’ of gravitational waves coming from black-hole mergers, it could be a sign of quantum-gravity forces at play.

It’s an exciting development because it presents a simple and currently accessible way of probing the universe for signs of phenomena that could unite quantum physics and general relativity – phenomena traditionally understood to be beyond the reach of human experiments until LIGO came along.

The details of the pre-print paper the three scientists uploaded on arXiv were covered by a number of outlets, including The Wire. And The Wire‘s and Forbes‘s headlines were both questions: ‘Has LIGO already discovered evidence for quantum gravity?’ and ‘Has LIGO actually proved Einstein wrong – and found signs of quantum gravity?’, respectively. Other headlines include:

  • Gravitational wave echoes might have just caused Einstein’s general theory of relativity to break down – IB Times
  • A new discovery is challenging Einstein’s theory of relativity – Futurism
  • Echoes in gravitational waves hint at a breakdown of Einstein’s general relativity – Science Alert
  • Einstein’s theory of relativity is 100 years old, but may not last – Inverse

The headlines are relevant because, though the body of a piece has the space to craft whatever nuance it needs to present the peg, the headline must cut to it as quickly and crisply as possible – while also catching the eye of a potential reader on social media, an arena where all readers are being inundated with headlines vying for their attention.

For example, with the quantum gravity pre-print paper, the headline has two specific responsibilities:

  1. To be cognisant of the fact that scientists have found gravitational-wave echoes in LIGO data at the 2.9-sigma level of statistical significance. Note that 2.9 sigma is evidently short of the threshold at which data counts as scientific evidence (and well short of that at which it counts as scientific fact – at least in high-energy physics). Nonetheless, it corresponds to only about a 1-in-270 chance that the signal is a statistical fluke – room enough for, as I’ve become fond of saying, an exciting thesis.
  2. To make reading the article (which follows from the headline) seem like it might be time well spent. This isn’t exactly the same as catching a reader’s attention; instead, it comprises catching one’s attention and subsequently holding and justifying it continuously. In other words, the headline shouldn’t mislead, misguide or misinform, as well as remain constantly faithful to the excitement it harbours.
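To put the 2.9-sigma figure in perspective, the 1-in-270 number is just the two-tailed p-value for a Gaussian distribution. A minimal sketch (the function name is mine):

```python
from math import erfc, sqrt

def two_tailed_p(sigma):
    # Two-tailed p-value: the probability that pure noise would produce
    # a result at least `sigma` standard deviations from expectation
    return erfc(sigma / sqrt(2))

print(1 / two_tailed_p(2.9))  # about 1-in-270: the echoes result
print(1 / two_tailed_p(3.0))  # about 1-in-370: the 'evidence' threshold
```

Note that particle physicists often quote the one-tailed version, which halves the p-value; either way, 2.9 sigma falls well short of the 5-sigma discovery standard.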

Now, the thing about covering scientific developments from around the world and then comparing one’s coverage to those from Europe or the USA is that, for publications in those countries, what an Indian writer might see as an international development is in fact a domestic development. So Nature, Scientific American, Forbes, Futurism, etc. are effectively touting local accomplishments that are immediately relevant to their readers. The Wire, on the other hand, has to bank on the ‘universal’ aspect and by extension on themes of global awareness, history and the potential internationality of Big Science.

This is why a reference to Einstein in the headline helps: everyone knows him. More importantly, everyone was recently made aware of how right his theories have been since they were formulated a century ago. So the idea of proving Einstein wrong – as The Wire‘s headline read – is eye-catching. Second, phrasing the headline as a question is a matter of convenience: because the quasi-discovery has a statistical significance of only 2.9 sigma, a question signals doubt.

But if you argued that a question is also a cop-out, I’d agree. A question in a headline can be interpreted in two ways: either as a question that has not been answered yet but ought to be or as a question that is answered in the body. More often than not and especially in the click-bait era, question-headlines are understood to be of the latter kind. This is why I changed The Wire copy’s headline from ‘What if LIGO actually proved Einstein wrong…’ to ‘Has LIGO actually proved Einstein wrong…’.

More importantly, the question is an escape, at least to me, because it doesn’t accurately reflect the development itself. If one accounts for the fact that the pre-print paper explicitly states that gravitational-wave echoes have been found in LIGO data only at 2.9 sigma, there is no question: LIGO has not proved Einstein wrong, and this is established at the outset.

Rather, the peg in this case is – for example – that physicists have proposed a way to look for evidence of quantum gravity using an experiment that is already running. This then could make for an article about the different kinds of physics that rule at different energy levels in the universe, and what levels of access humanity has to each.

So this story – and many others like it in the past year that dealt with observations falling short of the evidence threshold but were worth writing about simply because of the desperation behind them – has, or could have, prompted science writers to think about the language they use. For example, the operative words/clauses in the respective headlines listed above are:

  • Nature – hint
  • IB Times – might have just caused
  • Futurism – challenging
  • Science Alert – hint
  • Inverse – may not

Granted, an informed skepticism is healthy for science, and all science writers must remain as familiar with this notion as with the language of doubt, uncertainty and probability (and wave physics, it seems). But it is still likely that writers grappling with high-energy physics have to be more familiar with it than others, dealing as the latest research does with – yes – hope and desperation.

Ultimately, I may not be the perfect judge of what words work best when it comes to the fidelity of syntax to sentiment; that’s why I used a question for a headline in the first place! But I’m very interested in knowing how writers choose and have been choosing their words, if there’s any friction at all (in the larger scheme) between the choice of words and the prevailing sentiments, and the best ways to deal with such situations.

PS: If you’re interested, here’s a piece in which I struggled for a bit to get the words right (and finally had to resort to using single-quotes).

Featured image credit: bongonian/Flickr, CC BY 2.0