Rapid rotation explains unusual stability of the C2⁻ anion

In various settings, including chemical reactions in the lab, inside nuclear reactors, and in outer space, scientists have found C2⁻ anions living for as long as three milliseconds before decaying to a more stable state — and they haven’t been able to explain why. Normally particles, atoms, and molecules make these transitions to lose energy and become more stable. And normally the neutral C2 molecule has around 4 eV less energy than the C2⁻ anion, so the latter should decay to the former within about a trillionth of a second. The puzzle was that scientists didn’t know of a mechanism that allowed C2⁻ to avoid decaying to C2 for more than 3 ms, a timespan more than a billion times longer.

In two new papers published on October 31 (here and here), researchers from Austria, the Czech Republic, and Germany reported “strong evidence” for an idea scientists first had in the late 1990s: that the delay had something to do with rotation. Scientists have previously found rapidly rotating molecules in space — including when radiation breaks up water molecules in the interstellar medium and in the dynamic neighbourhood of a newborn star.

The study team found that when the C2⁻ anion rotates fast enough for its rotational quantum number 𝑁 to exceed 155, it acquires a “centrifugal potential” that rearranges the lower-energy states to which it can decay. In particular, the team’s theoretical calculations revealed that a state other than the neutral C2 state to which the anion normally decays drops lower in energy, and the usual decay becomes unfavourable. More specifically, if the C2⁻ anion has 𝑁 values in the 165-183 range, the normal decay to C2 requires the ejected electron to carry at least six units of angular momentum. If 𝑁 is lower than 165, the rearrangement of energy states doesn’t forbid the rapid drop to C2.

In other words, the spinning molecule spits out a spinning electron to move to a more stable configuration — and even then not before living to the ripe old age of 3 ms. This so-called rotation-assisted stability of the C2⁻ anion isn’t entirely new. Other scientists have previously found the dihydrogen and dideuterium anions (H2⁻ and D2⁻) to be more stable as well when 𝑁 = 20-40. Using theory and experiments, the European team found C2⁻ acquired the same stability gain only at 𝑁 of 155 or more because it’s heavier and so has a smaller rotational constant (“a fundamental parameter describing the rotational energy levels of a molecule,” per Meta AI).
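The scaling at work here can be sketched with the rigid-rotor formula E(𝑁) = B·𝑁(𝑁+1), where the rotational constant B shrinks as the molecule’s moment of inertia grows. A minimal sketch, assuming approximate rotational constants for the neutral molecules (the exact anion values differ slightly):

```python
# Rigid-rotor sketch: E(N) = B * N * (N + 1).
# B values are approximate rotational constants for the neutral molecules
# (in cm^-1), used here only to illustrate the scaling.
B_H2 = 60.85   # hydrogen: very light, so a large rotational constant
B_C2 = 1.82    # dicarbon: much heavier, so a far smaller constant

CM_TO_EV = 1.239841984e-4  # 1 cm^-1 expressed in eV

def rot_energy_ev(B_cm, N):
    """Rotational energy of a rigid rotor, converted to eV."""
    return B_cm * N * (N + 1) * CM_TO_EV

# H2-like anions pick up rotation-assisted stability around N = 20-40,
# whereas C2- needs N of 155+ to bank comparable rotational energy:
print(f"H2 at N=30:  ~{rot_energy_ev(B_H2, 30):.1f} eV")
print(f"C2 at N=170: ~{rot_energy_ev(B_C2, 170):.1f} eV")
```

The near-equal energies at such different 𝑁 values are the gist: being heavier, C2⁻ must spin much faster before the centrifugal term can rearrange its decay channels.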

The nanoscopes setting the stage for AI in biology

Förster resonance energy transfer (FRET) is a phenomenon in which one molecule transfers electronic energy to another molecule located some distance away. The transfer happens via the small electric field associated with the first molecule interacting with the field of the second.

FRET microscopy takes advantage of this phenomenon to observe objects smaller than 10 nm. When molecules called fluorophores are hit with electromagnetic radiation, they absorb it and glow with light of a lower frequency (including visible light); the latter is called fluorescence. Some fluorophores can be placed in a molecular setting and irradiated. FRET microscopy then estimates where other fluorophores are located, and thus where the parts of the molecule holding them are located, based on how the known fluorophores light them up through FRET.

For the last half century or so, FRET microscopy has been the optical technique of choice to understand how close one molecule is to another and to observe intramolecular interactions. But this may change now with a new method reported by researchers at the Max Planck Institute for Multidisciplinary Sciences in Göttingen, in the journal Science.

For all its usefulness, FRET microscopy has many important limitations. Chief among them is that it’s an indirect way to measure distances, based on two electric fields interacting with each other to cause secondary fluorescence. If the fields are oriented relative to each other in a less than ideal way, or if an electric field from a different source gets in the way, the output quality is diminished.

The new method promises to get rid of these confounding factors and produce a more direct result. It’s based on MINFLUX (short for ‘minimal fluorescence photon flux microscopy’), a technique the same researchers had reported in 2016.

FRET microscopy works better if more fluorophores light up, which means more light is required to first illuminate them. MINFLUX goes the other way. It also begins by embedding fluorophores in a sample, then shining a small dose of UV light on it. But this light beam is shaped like a donut, with zero intensity at the centre and increasing towards the outer edge. When the beam irradiates a probing area, the fluorophores at different locations light up to different extents based on the intensity of the light they receive. Based on this pattern, MINFLUX starts a second scan, this time tightening its beam to increase the odds of lighting up some fluorophores over others. The fluorophores are bound to molecules, so successive tightening basically zeroes in on individual molecules and the distances between them.

Credit: AByolia/Wikimedia Commons, CC BY-SA 4.0

Because the tightening requires fewer fluorophores to light up each time, MINFLUX is more sensitive than FRET. According to Meta AI (subsequently verified on Wikipedia), “the Cramér-Rao limit* provides a fundamental limit on the accuracy of parameter estimation”. For MINFLUX, this means the smallest localisation error it can achieve is directly proportional to the width of the scanning region and inversely proportional to the square root of the number of fluorescence photons collected. So the narrower the beam’s scan range, the better MINFLUX performs for a given amount of light, which is why it can pinpoint molecules while letting fluorophores light up only faintly.

Indeed, with the right equipment and amount of computing resources, the beam can be moved just a few nanometres at a time, locating a molecule within around five microseconds (i.e., five-millionths of a second).
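This scaling can be sketched numerically. The formula below, σ ≈ L/(4√n) for a scan range L and n detected photons, is a commonly quoted idealised approximation, not the full Cramér-Rao calculation:

```python
from math import sqrt

# Idealised MINFLUX localisation precision: sigma ~ L / (4 * sqrt(n)),
# where L is the size of the scan range around the donut's intensity
# zero and n is the number of fluorescence photons detected.
def minflux_precision_nm(L_nm, n_photons):
    return L_nm / (4 * sqrt(n_photons))

# Tightening the scan range from 100 nm to 10 nm buys a tenfold gain
# in precision for the same modest photon budget:
print(minflux_precision_nm(100, 400))  # 1.25 nm
print(minflux_precision_nm(10, 400))   # 0.125 nm
```

The numbers show why the iterative tightening pays off: precision improves linearly with a smaller scan range but only as the square root of extra photons, so shrinking the range is the cheaper route.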


*Aside: The ‘Rao’ in the Cramér-Rao limit is CR Rao, the rockstar statistician from Bellary whose work influenced fields ranging from risk analysis to quantum physics. He passed away in August 2023. The Cramér-Rao bound arose in a paper Rao published in 1945 entitled ‘Information and accuracy attainable in the estimation of statistical parameters’, when he was only 25. Another result from the same work became the Rao-Blackwell theorem.


The new method reported in Science upgrades MINFLUX using better fluorophores. Fluorophores are molecules too, and if they’re too close to each other, they could interact. Microscopy methods based on fluorophores assume they operate independently of each other and respond only to the irradiating light. MINFLUX strains this assumption by expecting the fluorophores to work independently even when they’re less than 10 nm from each other — and this may not always be the case. The researchers therefore used a fluorophore that would be chemically inert around others of its kind and respond only to the excitation beam.

When they tested their idea, the researchers found their version of MINFLUX could measure distances as small as 0.1 nm — about the width of a single phosphorus atom — and as large as 12 nm, an unusually wide range for a microscope operating at this scale.

MINFLUX is an example of super-resolution microscopy, the label for methods that use light while managing to achieve resolutions beyond the diffraction limit. In the late 19th century, the German physicist Ernst Abbe showed that the smallest distance a microscope can resolve is limited to the wavelength of the irradiating light divided by twice the numerical aperture. As a result, optical microscopes were limited for a long time to resolving distances no smaller than about 0.2 micrometres (200 nanometres).
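Abbe’s limit is simple enough to state as a one-liner; the wavelength and numerical aperture below are illustrative values for green light and a good oil-immersion objective:

```python
# Abbe's diffraction limit: d = wavelength / (2 * NA).
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable distance for a conventional optical microscope."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (550 nm) through an NA = 1.4 oil-immersion objective:
print(abbe_limit_nm(550, 1.4))  # ~196 nm: the classic ~0.2 micrometre wall
```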

From the late 1980s, scientists began to develop the new techniques that would become super-resolution microscopy, work for which Eric Betzig, Stefan Hell, and William Moerner were awarded the chemistry Nobel Prize in 2014. This trio (plus many others of course) beat the diffraction limit using fluorescence microscopy.

One of its earliest versions was stimulated emission depletion microscopy, conceived by Hell. A swath of fluorescent molecules was lit up in a sample before a second beam allowed the light from all molecules except those in a small patch at the centre to fade out — somewhat like collecting a large mass of water and draining all but the fish. (The draining allows the microscope to sidestep the diffraction limit.) Hell is also one of the authors of the new study.

Other super-resolution microscopy methods include PALM/STORM and PAINT. PALM, co-developed by Betzig, reconstructs a large molecule’s shape by progressively activating a small number of fluorophores in random patterns until all of them have fluoresced. STORM is PALM with different fluorophores. PAINT localises molecules by allowing fluorophores to randomly attach to and detach from them, producing blinks of light that guide a computer to the molecules. All these methods, including MINFLUX, are supplemented with bespoke single-molecule localisation microscopy algorithms.

That the chemistry Nobel Prizes recognised the advent of super-resolution microscopy and AlphaFold 2 precisely 10 years apart may be no more than a reflection of our choice of a number system (as Kip Thorne quipped here), but the two enterprises are closely, inextricably linked. The new AI-based tools to predict the structures of proteins reinforce the importance of the microscopy techniques, both to generate better data with which to train AI models and to verify the models’ predictions against real-world data.

Why having diverse interests is a virtue

As illustrated by the Marx-Ling-Brown dispute over that Canadaland podcast and Israel’s violence in West Asia

Paris Marx’s recent experience on the Canadaland podcast alerted me to the importance of an oft-misunderstood part of journalism in practice. When Marx and the host, Justin Ling, were recording the podcast, Marx said something about Israel conducting a genocide in Gaza. After the show was recorded, the publisher of Canadaland, Jesse Brown, edited that bit out. When both Marx and Ling complained, Brown reinstated the comment by having Marx re-record it to attribute the claim to specific sources. Now, following Marx’s newsletter and Ling’s statement about Brown’s actions, Brown has been saying on X that Marx’s initial comment, that many people have been saying Israel is conducting a genocide in Gaza, wasn’t specific enough and needed to be attributed to specific sources.

Different publications draw the line in different places on how much of their content they’d like to be attributed. And frankly, there’s nothing wrong, unfair, or unethical about this. As the commentary and narratives around Israel’s violence in West Asia have shown, the facts as we consider them are often not set in stone even when they have very clear definitions. We’re seeing, in a way that is obnoxious from our perspective, many people disputing the claim that Israel is conducting a genocide and contesting whether it’s a fact that Israel’s actions constitute one. Depending on the community to and for which you are being a journalist, it becomes okay for some things to be attributed to no one and just generally considered true, and for others not so much.

This is fundamentally because each of us has a different level of access to all the relevant information, and because the existence of facts other than those we can experience through our senses (i.e. empirically) is controlled by social determinants as well.

This whole Canadaland episode alerted me to the people trying to repudiate the allegation that Israel is conducting a genocide — especially many who are journalists by vocation — by purporting to scrutinise the claims they’re presented with. Now, scrutiny in and of itself is a good thing; it’s one of the cornerstones of scepticism, especially a reasonable exercise of scepticism. But what they choose to scrutinise also matters, and that is a subjective call. I use the word ‘subjective’ with deliberate intent. Scrutiny in journalism is a good thing (I’m treating Canadaland as a journalistic outlet here), yet it’s important to cultivate a good sense of what can and ought to be scrutinised, as opposed to the kind of scrutiny that only suggests the scrutiniser is being obstinate or intends to waste time.

Many, if not all, journalists start off being told it’s important to be alert, to scrutinise all the claims they encounter. Many journalists also cultivate this sense over time, and the process by which they do so allows subjective considerations to seep in — and that is not in and of itself a bad thing. In fact it’s good. I have often come across editors who, based solely on their news sense, predicted a particular story’s popularity where others saw only a dud. This is not a clinical scientific technique; it’s by all means a sense. Informing this sense are, among other things, the pulse of the people to whom you’re trying to appeal, the things they value, the things they used to value but don’t any more, and so forth. In other words this sense has an important socio-cultural component to it, and it is within this milieu that scrutiny happens.

Scrutinising something in and of itself is not always a virtue for this reason: in the process of scrutinising something, it’s possible for you to end up appealing to things that people don’t consider virtues or, worse, which they could interpret to mean you’re vouching for something they consider antithetical to their spirit as a people.

This Marx-Ling-Brown incident is illustrative to the extent that it spotlights the many journalists waking up to a barrage of statements, claims, and assertions both on and off the internet that Israel is conducting a genocide in Gaza. These claims are stinging them, cutting at the heart of something they value, something they hold close to their hearts as a community. So they’re responding by subjecting these claims to some tough scrutiny. Many of us have spent many years applying the same sort of tests to many, many other claims. For example, science journalists had to wade through a lot of bullshit before we could surmount the tide of climate denialism and climate pacifism to get to where we are today.

However, now we’re seeing these other people, including journalists, subjecting of all things the claim that Israel is conducting a genocide in Gaza to especial scrutiny. I think they’re waking up to the importance of scepticism and scrutiny through this particular news incident. Many of us woke up before, and many of us will wake up in future, through specific incidents that are close to us, that we know more keenly than most others will have a very bad effect on society. These incidents are a sort of catalyst but they are also more than that — a kind of awakening.

You learn how to scrutinise things in journalism school; you understand the theory of it very quickly. It’s very simple. But in practice, it’s a different beast. They say you need to fact-check every claim in a reporter’s copy. But over time, what you do is draw the line somewhere and say, “Beyond this point, I’m not going to fact-check this copy because the author is a very good reporter and my experience has been that they don’t make statements or claims that don’t stand up to scrutiny beyond a particular level.” You develop and accrue these habits of journalism in practice because you have to. There are time constraints and mental-bandwidth constraints, so you come up with shortcuts. This is a good thing, but acknowledging it is also important and valuable, rather than sweeping it under the rug and pretending you don’t do it.

If you want to be a good journalist, you have to cultivate for yourself the right conduits of awakening — and by “right” I mean those conduits that will awaken you to the pulses of the people and the beats you’re responsible for rather than serve some counterproductive purpose. These conduits should do two things. One: they should awaken you as quickly and with as much clarity as possible to what it means to fact-check or scrutinise something. They should teach you the purpose of it, why you do it, what good scrutiny looks like, and where the line is between scrutiny and nitpicking or pedantry. Two: they should alert you to, or alert others about, your personal sense of right and wrong, good and bad. That’s why it’s a virtue to cultivate as many conduits as possible, that is, to have diverse interests.

When we’re interested in many things about the world, about the communities and societies we live in, we are awakened again and again over time. We learn how to subject different claims to different levels of scrutiny because that experience empirically teaches us what, when, and how to scrutinise and, importantly, why. Today we’re seeing many of these people wake up and subject the claim that Israel is conducting a genocide to the tests we’ve administered to climate denialism, the anti-vaccine movement, and various other pseudoscientific movements. When we look at them we see stubborn people who won’t admit simple details that are staring us in the face. This disparity arises because of how we construct our facts, the virtues to which we would like to appeal, and the position of the line beyond which we say no further attribution is necessary.

Obviously there is no such thing as the view from nowhere, and I’m clear that I’m almost always appealing to the people who are not right-wingers. So from where I’m standing it seems more often than not as if the tests being administered to, say, the anti-vaccine movement are more valid instances of their use than the tests being administered against claims that Israel is conducting a genocide.

Such divisions arise when we don’t cultivate ourselves as individuals, when we don’t nurture ourselves and the things we’re interested in. Simply put, it speaks to the importance of having diverse interests. It’s like travelling the world, meeting many people, experiencing many cultures. Such experiences teach us about multiculturalism and why it’s valuable, and they teach us the precise ways in which xenophobia, authoritarianism, and nationalism effect their baleful consequences. In a very similar way, diverse interests are good teachers about the moral landscape we all share and the normative standards we co-define within it. They can quickly teach you how far you stand from where you might really like to be.

In fact, it’s entirely possible for a right-winger to read this post and take away the idea that where they stand is right. As I said, there is no view from nowhere. Right and wrong depend on your vantage point, in most cases at least. I wanted to put these thoughts down because it seemed like people who have few or very limited interests are also more likely to disengage from social issues earlier than others. Disengagement is the fundamental problem, the root cause. There are many reasons it arises in the first place, but getting rid of it is entirely possible, and importantly something we need to do. And a good way to do it is to cultivate many interests, to be interested in many problems, so that over time our experiences navigating those interests lead to a good sense of what we should and what we needn’t scrutinise. It will teach us why some particular points of an argument are ill-founded. And if we’re looking for it, it will give us a chance to fix that and even light the way.

PSA about Business Today

If you get your space news from the website businesstoday.in, this post is for you. Business Today has published several articles over the last few weeks about the Starliner saga with misleading headlines and claims blown far out of proportion. I’d been putting off writing about them but this morning, I spotted the following piece:

Business Today has produced all these misleading articles in this format, resembling Instagram reels. This is more troubling because we know tidbits like this are more consumable and more likely to go viral by virtue of their uncomplicated content and simplistic message. Business Today has also been focusing its articles about the saga on Sunita Williams alone, as if the other astronauts don’t exist. This choice is obviously of a piece with Williams’s Indian heritage and Business Today’s intention to maximise traffic to its pages by publishing sensational claims about her experience in space. As I wrote before:

… in the eyes of those penning articles and headlines, “Indian-American” she is. They’re using this language to get people interested in these articles, and if they succeed, they’re effectively selling the idea that it’s not possible for Indians to care about the accomplishments of non-Indians, that only Indians’, and by extension India’s, accomplishments matter. … Calling Williams “Indian-American” is to retrench her patriarchal identity as being part of her primary identity — just as referring to her as “Indian origin” is to evoke her genetic identity…

But something more important than the cynical India connection is at work here: in these pieces, Business Today has been toasting it. This is my term for a shady media practice reminiscent of a scene in an episode of the TV show Mad Men, where Don Draper suggests Lucky Strike should advertise its cigarettes as being “toasted”. When someone objects that all cigarettes are toasted, Draper says they may well be, but by saying publicly that its cigarettes are toasted, Lucky Strike will set itself apart without doing anything new, without lying, without breaking any rules. It’s just a bit of psychological manipulation.

Similarly, Business Today has been writing about Williams as if she’s the only astronaut facing an extended stay in space (and suggesting in more subtle ways that this fate hasn’t befallen anyone before — whereas it has dozens of times), that NASA statements concern only her health and not the health of the other astronauts she’s with, and that what we’re learning about her difficulties in space constitute new information.

None of this is false but it’s not true either. It’s toasted. Consider the first claim: “NASA has revealed that Williams is facing a critical health issue”:

* “NASA has revealed” — there’s nothing to reveal here. We already know microgravity affects various biochemical processes in the body, including the accelerated destruction of red blood cells.

* “Williams is facing” — No. Everyone in microgravity faces this. That’s why astronauts need to be very fit people, so their bodies can weather unanticipated changes for longer without suffering critical damage.

* “critical health issue” — Err, no. See above. Also, perhaps in a bid to emphasise this (faux) criticality, Business Today’s headline begins “3 million per second” and ends by calling the number “disturbing”. You read it, this alarmingly big number is in your face, and you’re asked to believe it’s “disturbing”. But it’s not really a big number in context and certainly not worth any disturbance.

For another example, consider: “Given Williams’ extended mission duration, this accelerated red blood cell destruction poses a heightened risk, potentially leading to severe health issues”. Notice how Business Today doesn’t include three important details: how much of an extension amounts to a ‘bad’ level of extension, what the odds are of Williams (or her fellow Starliner test pilot Barry Wilmore) developing “health issues”, and whether these consequences are reversible. Including these details would deflate Business Today’s ‘story’, of course.

If Business Today is your, a friend’s, or a relative’s source of space news, please switch (or ask them to switch) to any of the following instead for space news coverage and commentary that’s interesting without insulting your intelligence:

* SpaceNews

* Jeff Foust

* Marcia Smith

* Aviation Week

* Victoria Samson

* Jatan Mehta

* The Hindu Science

A spaceflight narrative unstuck

“First, a clarification: Unlike in Gravity, the 2013 film about two astronauts left adrift after space debris damages their shuttle, Sunita Williams and Butch Wilmore are not stuck in space.”

This is the first line of an Indian Express editorial today, and frankly, it’s enough said. The idea that Williams and Wilmore are “stuck” or “stranded” in space just won’t die down because reports in the media — from The Guardian to New Scientist, from Mint to Business Today — repeatedly prop it up.

Why are they not “stuck”?

First: because “stuck” implies that Boeing/NASA are denying them an opportunity to return and that the astronauts wish to return, neither of which is true. What was to be a shorter visit has become a longer sojourn.

This leads to the second answer: Williams and Wilmore are spaceflight veterans who were picked specifically to deal with unexpected outcomes, like what’s going on right now. If amateurs or space tourists had been picked for the flight and their stay at the ISS had been extended in an unplanned way, then the question of their wanting to return would arise. But even then we’d have to check if they’re okay with their longer stay instead of jumping to conclusions. If we didn’t, we’d be trivialising their intention and willingness to brave their conditions as a form of public service to their country and its needs. We should think about extending the same courtesy to Williams and Wilmore.

And this brings us to the third answer: The history of spaceflight — human or robotic — is the history of people trying to expect the unexpected and to survive the unexpectable. That’s why we have test flights and then we have redundancies. For example, after the Columbia disaster in 2003, part of NASA’s response was a new protocol: that astronauts flying in faulty space capsules could dock at the ISS until the capsule was repaired or a space agency could launch a new capsule to bring them back. So Williams and Wilmore aren’t “stuck” there: they’re practically following protocol.

For its upcoming Gaganyaan mission, ISRO has planned multiple test flights leading up to the crewed version. It’s possible this flight or subsequent ones could throw up a problem, causing the astronauts within to take shelter at the ISS. Would we accuse ISRO of keeping them “stuck” there or would we laud the astronauts’ commitment to the mission and support ISRO’s efforts to retrieve them safely?

Fourth: “stuck” or “stranded” implies a crisis, an outcome that no party involved in the mission planned for. It creates the impression that human spaceflight (in this particular mission) is riskier than it actually is and produces false signals about the competence of the people who planned the mission. It also erects unreasonable expectations about the sorts of outcomes test flights can and can’t have.

In fact, the very reason the world has the ISS, and NASA (and other agencies capable of human spaceflight) has its protocol, is that this particular outcome — a crew capsule malfunctioning during a flight — needn’t be a crisis. Let’s respect that.

Finally: “Stuck” is an innocuous term, you say, something that doesn’t have to mean all that you’re making it out to be. Everyone knows the astronauts are going to return. Let it go.

Spaceflight is an exercise in control — about achieving it to the extent possible without also getting in the way of a mission and in the way of the people executing it. I don’t see why this control has to slip in the language around spaceflight.

Did we see the conspiracies coming?

Tweets like this seem on point…

https://twitter.com/NBPTROCKS/status/1818407265335930923

… but I’ve started to wonder if we’re missing something in the course of expressing opinions about what we thought climate deniers would say and what they’re actually saying. That is, we expected to be right about what we thought they’d say but we’ve found ourselves wrong. Should we lampoon ourselves as well? Or, to reword the cartoon:

How we imagined we could react when ‘what we imagined deniers would say when the climate catastrophes came’ came true: “I was so right! And now everyone must pay for their greed and lies! May god have mercy on their soul!”

Followed by:

How we expect we’ll react when we find out ‘what they actually are saying’: “I was so wrong! And now everyone must pay for my myopia and echo chambers! May god have mercy on my soul!”

And finally:

How we actually are reacting: “We’re just using these disasters as an excuse to talk about climate change! Like we did with COVID! And 9/11! And the real moon landings! Screw you and your federal rescue money! You need to take your electric vegan soy beans now!”

People (myself included) generally aren’t very effective at changing others’ attitudes, so it may not seem fair to say we made a mistake in not anticipating how the deniers would react, that we erred by stopping short of understanding why climate denialism really exists and addressing its root cause. But surely the latter sounds reasonable in hindsight? ‘Us versus them’ narratives like the one in the cartoon describe apparent facts very well, but they also reveal a tendency, either on the part of ‘us’ or of ‘them’ but often of both, to sustain this divide instead of narrowing it.

I’m not ignorant of the refusal of some people to change their mind under any circumstances. But even if we couldn’t have prevented their cynical attitudes on social issues — and consensus on climate change is one — maybe we can do better to anticipate them.

Agalloch

Agalloch is a synonym of agarwood. In parallel, Aquilaria agallocha and Agallochum malaccense are synonyms of Aquilaria malaccensis, the accepted scientific name of a tree that produces much of the world’s stock of this wood. When the heartwood (or duramen) of an Aquilaria tree is in the grip of an infection of the fungus Phaeoacremonium parasiticum, the tree secretes a resin to beat the fungus off. The resin is very fragrant; depending on the duration of secretion, the heartwood can become saturated with it, at which point it becomes the very odoriferous agarwood. For centuries people have extracted this agarwood for use in perfumes and incense. We have also found that the oils extracted from the wood, especially of late using steam distillation, are chemically very complex, containing more than 70 terpenoids and more than 150 compounds overall.

This is a fascinating tale for the origin of something beautiful in nature, prompted by a tree’s desperate bid to fight off the advance of a fungal menace. Of course the human beholds this beauty, not the tree and certainly not the fungus — and Aquilaria malaccensis’s wondrous resin hasn’t been able to keep humans at bay. The tree is listed as being ‘critically endangered’ on the IUCN Red List thanks to habitat loss and improper management of the global demand for the resinous agalloch.

‘Animals use physics’

What came first: physics or the world? Obviously the world; physics (as a branch of science) offers ways to acquire information about the world and organise it. To not understand something in this paradigm, then, is to fail to understand the world in terms of physics. While this is straightforward, some narratives lead to confusion.

For example, consider the statement “animals use physics” (these animals exclude humans). Do they? Fundamentally, animals can’t use physics because their brains aren’t equipped to. They also don’t use physics because they’re only navigating the world, they’re not navigating physics and its impositions on the human perception of the world.

On July 10, Knowable published an article describing just such a scenario. The article actually uses both narratives — of humans using physics and animals using physics — and they’re often hard to pry apart, but sometimes the latter makes its presence felt. Example:

“Evolution has provided animals with movement skills adapted to the existing environment without any need for an instruction manual. But altering the environment to an animal’s benefit requires more sophisticated physics savvy. From ants and wasps to badgers and beavers, various animals have learned how to construct nests, shelters and other structures for protection from environmental threats.”

An illustration follows of a prairie dog burrow that accelerates the flow of wind and enhances ventilation; its caption reads: “Prairie dogs dig burrows with multiple entrances at different elevations, an architecture that relies on the laws of physics to create airflow through the chamber and provide proper ventilation.”

Their architecture doesn’t rely on the laws of physics. It’s that we’ve physics-fied the prairie dogs’ empirical senses and the lessons they’ve learnt in their communities, to see physics in the world when in fact it’s not there. Instead, what’s there is evidence of the prairie dogs’ ability to build these tunnels and exploit certain facts of nature, knowledge they’ve acquired with experience.

The rest of the article is actually pretty good, exploring animal behaviour that “depends in some way on the restrictions imposed, and opportunities permitted, by physics”. Also, what’s the harm, you ask, in saying “animals use physics”? I’ve no idea. But I think it matters to describe things as they are rather than as they could be.

Clocks on the cusp of a nuclear age

You need three things to build a clock: an energy source, a resonator, and a counter. In an analog wrist watch, for example, a small battery is the energy source that sends a small electric signal to a quartz crystal, which, in response, oscillates at a specific frequency (piezoelectric effect). If the amount of energy in each signal is enough to cause the crystal to oscillate at its resonant frequency, the crystal becomes the resonator. The counter tracks the crystal’s oscillation and converts it to seconds using predetermined rules.

Notice how the clock’s proper function depends on the relationship between the battery and the quartz crystal and the crystal’s response. The signals from the battery have to have the right amount of energy to excite the crystal to its resonant frequency and the crystal’s oscillation in response has to happen at a fixed frequency as long as it receives those signals. To make better clocks, physicists have been able to fine-tune these two parameters to an extreme degree.

Today, as a result, we have clocks that don’t lose more than one second of time every 30 billion years. These are the optical atomic clocks: the energy source is a laser, the resonator is an atom, and the counter is a particle detector.

An atomic clock’s identity depends on its resonator. For example, many of the world’s countries use caesium atomic clocks to define their respective national “frequency standards”. (One such clock at the National Physical Laboratory in New Delhi maintains Indian Standard Time.) A laser imparts a precise amount of energy to excite a caesium-133 atom to a particular higher energy state. The atom soon after drops from this state to its lower ground state by emitting light of frequency exactly 9,192,631,770 Hz. When a particle detector receives this light and counts out 9,192,631,770 waves, it will report one second has passed.

Caesium atomic clocks are highly stable, losing no more than a second in 20 million years. In fact, scientists used to define a second in terms of the time Earth took to orbit the Sun once; they switched to the caesium atomic clock because “it was more stable than Earth’s orbit” (source).

But there is also room for improvement. The higher the frequency of the emitted radiation, the more stable an atomic clock will be. The emission of a caesium atomic clock has a frequency of 9.19 GHz whereas that in a strontium clock is 429.22 THz and in a ytterbium-ion clock is 642.12 THz — in both cases nearly five orders of magnitude higher. (9.19 GHz is in the microwave frequency range whereas the other two are in the optical range, thus the name “optical” atomic clock.)

Optical atomic clocks also have a narrower linewidth, which is the range of frequencies that can prompt the atom to jump to the higher energy level: the narrower the linewidth, the more precisely the jump can be orchestrated. So physicists today are trying to build and perfect the next generation of atomic clocks with these resonators. Some researchers have said they could replace the caesium frequency standard later this decade.

Yet other physicists have already developed an idea to build the subsequent generation of clocks, which are expected to be at least 10 times more accurate than optical atomic clocks. Enter: the nuclear clock.

When an atom, like that of caesium, jumps between two energy states, the particles gaining and losing the energy are the atom’s electrons. These electrons are arranged in energy shells surrounding the nucleus and interact with the external environment. For a September 2020 article in The Wire Science, Umakant Rapol, an associate professor at IISER Pune and a member of a team building India’s first strontium atomic clock, said the resonator needs to be “immune to stray magnetic fields, electric fields, the temperature of the background, etc.” Optical atomic clocks achieve this by, say, isolating the resonator atoms within oscillating electric fields. A nuclear clock offers to get rid of this problem by using an atom’s nucleus as the resonator instead.

Unlike electrons, the nucleus of an atom is safely ensconced further in, and it is also quite small: its diameter is only around 0.01% of the atom’s. The trick here is to find an atomic nucleus that’s stable and whose resonant frequency is accessible with a laser.

In 1976, physicists studying the decay of uranium-233 nuclei reported some properties of the thorium-229 nucleus, including estimating that the lowest higher-energy level to which it could jump required less than 100 eV of energy. Another study in 1990 estimated the requirement to be under 10 eV. In 1994, two physicists estimated it to be around 3.5 eV. The higher energy state of a nucleus is called its isomer and is denoted with the suffix ‘m’. For example, the isomer of the thorium-229 nucleus is denoted thorium-229m.

After a 2005 study further refined the energy requirement to 5.5 eV, a 2007 study provided a major breakthrough. With help from state-of-the-art instruments at NASA, researchers in the US worked out the thorium-229 to thorium-229m jump required 7.6 eV. This was significant. Energy is related to frequency by the Planck equation: E = hf, where h is Planck’s constant. To deliver 3.5 eV of energy, then, a laser would have to operate in the optical or near-ultraviolet range. But if the demand was 7.6 eV, the laser would have to operate in the vacuum ultraviolet range.
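These conversions are easy to check. A minimal sketch using the Planck relation (the constants are exact CODATA values; the function name is my own):

```python
# Convert a photon energy in eV to its frequency and wavelength,
# using E = h*f and lambda = c/f.
H = 6.62607015e-34    # Planck's constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon(energy_ev):
    """Return (frequency in Hz, wavelength in nm) for a photon of the given energy."""
    freq = energy_ev * EV / H
    wavelength_nm = C / freq * 1e9
    return freq, wavelength_nm

for ev in (3.5, 7.6):
    freq, lam = photon(ev)
    print(f"{ev} eV -> {freq / 1e12:.0f} THz, {lam:.0f} nm")
```

Running this gives roughly 354 nm for 3.5 eV (near-ultraviolet) and 163 nm for 7.6 eV — squarely in the vacuum ultraviolet, where air itself absorbs the light.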

Further refinement by more researchers followed but they were limited by one issue: since they still didn’t have a sufficiently precise value of the isomeric energy, they couldn’t use lasers to excite the thorium-229 nucleus and find out. Instead, they examined thorium-229m nuclei formed by the decay of other elements. So when on April 29 this year a team of researchers from Germany and Austria finally reported using a laser to excite thorium-229 nuclei to the thorium-229m state, their findings sent frissons of excitement through the community of clock-makers.

The researchers’ setup had two parts. In the first, they drew inspiration from an idea a different group had proposed in 2010: to study thorium-229 by placing these atoms inside a larger crystal. The European group grew two calcium fluoride (CaF2) crystals in the lab heavily doped with thorium-229 atoms, at different doping concentrations. In a study published a year earlier, different researchers had reported observing for the first time thorium-229m decaying back to its ground state inside calcium fluoride and magnesium fluoride (MgF2) crystals. Ahead of the test, the European team cooled the crystals to under −93° C in a vacuum.

In the second part, the researchers built a laser with output in the vacuum ultraviolet range, corresponding to a wavelength of around 148 nm, for which off-the-shelf options don’t exist at the moment. They achieved the output instead by remixing the outputs of multiple lasers.

The researchers conducted 20 experiments: in each one, they increased the laser’s wavelength from 148.2 nm to 150.3 nm in 50 equally spaced steps. They also maintained a control crystal doped with thorium-232 atoms. Based on these attempts, they reported their laser elicited a distinct emission from the two test crystals when the laser’s wavelength was 148.3821 nm. The same wavelength when aimed at the CaF2 crystal doped with thorium-232 didn’t elicit an emission. This in turn implied an isomeric transition energy requirement of 8.35574 eV. The researchers also worked out based on these details that a thorium-229m nucleus would have a half-life of around 29 minutes in vacuum — meaning it is quite stable.
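As a sanity check, the quoted transition energy follows directly from the measured wavelength via E = hc/λ (a quick sketch of the arithmetic, not the researchers’ analysis code):

```python
# Photon energy (eV) corresponding to a vacuum wavelength (nm), via E = h*c/lambda.
H = 6.62607015e-34    # Planck's constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def wavelength_to_ev(wavelength_nm):
    """Photon energy in eV for a given vacuum wavelength in nm."""
    return H * C / (wavelength_nm * 1e-9) / EV

# The resonance wavelength the team reported for thorium-229:
print(f"{wavelength_to_ev(148.3821):.5f} eV")
```

The result agrees with the 8.35574 eV figure to five decimal places.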

Physicists finally had their long-sought prize: the information required to build a nuclear clock by taking advantage of the thorium-229m isomer. In this setup, then, the energy source could be a laser of wavelength 148.3821 nm; the resonator could be thorium-229 atoms; and the counter could look out for emissions of frequency 2,020 THz (plugging 8.355 eV into the Planck equation).

Other researchers have already started building on this work as part of the necessary refinement process and have generated useful insights as well. For example, on July 2, University of California, Los Angeles, researchers reported the results of a similar experiment using lithium strontium hexafluoroaluminate (LiSrAlF6) crystals, including a more precise estimate of the isomeric energy gap: 8.355733 eV.

About a week earlier, on June 26, a team from Austria, Germany, and the US reported using a frequency comb to link the frequency of emissions from thorium-229 nuclei to that from a strontium resonator in an optical atomic clock at the University of Colorado. A frequency comb is a laser whose output is at multiple, evenly spaced frequencies. It works like a gear, translating the higher-frequency output of one laser (here, the nuclear clock’s) into the lower-frequency output of another (the optical atomic clock’s). Linking the clocks up in this way allows physicists to understand properties of the thorium clock in terms of the better-understood properties of the strontium clock.

Atomic clocks moving into the era of nuclear resonators isn’t just one more step up on the Himalayan mountain of precision timekeeping. Because nuclear clocks depend on how well we’re able to exploit the properties of atomic nuclei, they also create a powerful incentive and valuable opportunities to probe nuclear properties.

In a 2006 paper, a physicist named VV Flambaum suggested that if the values of the fine structure constant and/or the strong interaction parameter change even a little, their effects on the thorium-229 isomeric transition would be very pronounced. The fine structure constant is a fundamental constant that specifies the strength of the electromagnetic force between charged particles. The strong interaction parameter specifies this vis-à-vis the strong nuclear force, the strongest force in nature and the thing that holds protons and neutrons together in a nucleus.

Probing the ‘stability’ of these numbers in this way opens the door to new kinds of experiments to answer open questions in particle physics — helped along by physicists’ pursuit of a new nuclear frequency standard.

Featured image: A view of an ytterbium atomic clock at the US NIST, October 16, 2014. Credit: N. Phillips/NIST.

Buildings affect winds

A 2022 trip to Dubai made me wonder how much research there was on the effects cities — especially those rapidly urbanising and building taller, wider structures packed more closely together — had on the winds that passed through them. I found only a few studies then. One said the world’s average wind speed had been increasing since 2010, but its analysis was concerned with the output of wind turbines, not the consequences within urban settlements. Another had considered planting more trees to reduce wind speeds within cities, elevated there by the Venturi effect. I also found a New York Times article from 1983 about taller skyscrapers directing high winds downwards, to the streets. That was largely it. Maybe I didn’t look hard enough.

On June 11, researchers in China published a paper in the Journal of Advances in Modelling Earth Systems in which they reported findings based on a wind speed model they’d built for Shanghai city. According to the paper, Shanghai’s built-up area could slow wind speed by as much as 50%. However, they added, the urban heat-island effect could enhance “the turbulent exchange in the vertical direction of the urban area, and the upper atmospheric momentum is transported down to the surface, increasing the urban surface wind speed”. If the heat-island effect was sufficiently pronounced, then, the wind might not slow at all. I imagine the finding will be useful for people considering the ability of winds to transport pollutants to and disperse them in different areas. I’m also interested in what the model shows for Delhi (which can be hotter), Mumbai (wetter), and Chennai (fewer tall buildings). The relationship between heat islands and wind energy is also curious because the windier parts of a city are also less warm.

But overall, even if the population density within skyscrapers is lower than in non-skyscraping buildings and tenements, building them closer together — as is normal in cities like Dubai, where they are almost all concentrated in a “business district” or a “financial district” — could also make it harder for the wind to ventilate these spaces.