Of reason and realism

Laurie Penny writes on Longreads:

Remember the U.S. presidential debates of 2016? Remember how the entire liberal establishment thought Hillary Clinton had won, mainly because she made actual points, rather than shambling around the stage shouting about Muslims? What’s the one line from those debates that everyone remembers now? It’s “Nasty Woman.” What’s the visual? It’s Trump literally skulking around Hillary, dominating her with his body. It’s theatre. And right now the bad actors are winning.

This paragraph is on point. Left-liberal intellectuals frequently pen opinions, editorials and commentaries for the popular press and assume, by the self-assessed weight of their arguments, that the conservative, right-wing reader must be convinced of the superiority of the authors’ philosophies and switch sides. This never happens. It doesn’t happen 90% of the time because the authors aren’t good writers, and the ensuing back-and-forth swiftly descends into semantics. And it doesn’t happen 10% of the time because the bhakt reading the article isn’t there for the points. You can write and write and write but – as Penny argues – the theatre of fascism will always overtake the finest discussion of ideas.

I’m neither a scientist nor a philosopher, but I have often wondered if ideas from scientific realism can help make sense of the empirical information we have. It is possible the liberal intellectual assumes her audience will behave, at the individual level at least, the way she herself does; this is a reasonable assumption that we all make in our day-to-day lives: for example, we excuse a friend’s anger in a moment of frustration because we rationalise it away based on lessons we have learnt from our own experiences. Similarly, the author presumes that, since she believes she can be swayed by reason, the reader will be swayed, too, and the author’s commitment to reason becomes – in the author’s mind, at least – a common platform upon which writer and reader will stage their debate. However, the flaw in this worldview is that the bhakt is, almost by definition, inimical to reason (irrespective of whether he is in all aspects of his life unreasonable) and does not mount the stage with the same aspirations.

Now, scientific realism (in its semantic interpretation) holds that science’s claims about scientific entities – “objects, events, processes, properties and relations” (source) – should be taken literally, as if they correspond to the actual natural universe itself instead of to a natural universe we perceive with our senses. The significance of this statement is better illustrated by a counter-example: anti-realists would contend that if we cannot see electrons with the naked eye, then science’s claims about their existence don’t pertain to their actual existence but instead provide ways to instrumentalise the claims to aid our interactions with observable entities, such as an electric fan.

Similarly, the liberal intellectual behaves like an anti-realist: she seeks to explain deviant social phenomena in terms she can understand and rejects what she cannot observe herself, instead of, as a realist might, allowing ideas that don’t conform to her worldview to exist on their own terms, outside the realm of her scholarship, rather than trivialising them because their rules don’t submit to the logic of hers.

Acknowledgement of the latter kind is important to enable meaningful engagement, insofar as it is willing to look beyond the identity and aspirations of one’s own group. More importantly, classifying what is beyond one’s didactic reach as fictions – even useful fictions, as the committed anti-realist might – is flawed the same way scientism prizes an economic logic at the cost of morals and ethics. The belief that there may be other ways to make informed choices but that they will ultimately have to be subsumed within one’s worldview prevents one from a) designing appropriate policies to govern them, b) expanding one’s own library of knowledge to include what could well be a legitimate alternative, and c) acknowledging the strength of the alternative on its own terms instead of addressing it as a primitive form of one’s own politics.

So unable to see beyond her own allegiance to reason, the scholar assumes constantly that it can and will triumph, while her diminished sense of the external world prevents her from acknowledging a different set of motivations for people on the ‘other side’. Over time, the left-liberal collective begins to reject and ignore their existence altogether, dismissing their motivations and sensibilities with counterparts that the individual rooted in the primacy of logic, reason and civility can actually assimilate. This way, the left-liberal group keeps up its mindless performance of engaging with the right when in fact it is not engaging at all.

I don’t present all of this as criticism, however, because the primary function of an intellectual creature is to intellectualise, in whatever form: through speech, essays, dramatisation, etc. The act of intellectualisation, in turn, presumes that one’s interlocutor is capable of receiving knowledge so organised and assimilating it themselves. Without this presumption, intellectualism becomes solipsistic, and free speech – insofar as it seeks opportunities to change minds and set society on the path of enlightenment – becomes purposeless. So where people are willing to reason, debate and argue, they must do so; but where people resort to whataboutery, shooting the messenger and ad hominem attacks, reason alone cannot hope to succeed.


To explain the world

Simplicity is a deceptively simple thing. Recently, a scientist who was trying to explain something in general relativity to me did so in the following way:

One simple way to understand … is as follows. Imagine that one sets up spherical polar coordinates, so that space is described by r, theta, phi and time is described by t. Then in this frame what one would normally call a non-rotating observer is one who has no angular velocity in theta and phi i.e. if the proper time of the observer is tau, then {d theta over d tau} = {d phi over d tau} = 0.

(Emphasis added)

This is anything but simple, and this problem isn’t limited to this scientist alone. Lots of them regularly conflate explanation with elaboration. More recently, another scientist – by way of describing a peer’s achievements – simply listed them in chronological order. It was the perfect example of ‘tell, don’t show’:

Starting with the discovery of strangeness, called Gell-Mann-Nishijima formula, the Eightfold Way of SU(3), current algebra, he finally reached the theory of strong interactions, namely quantum chromodynamics. So his name is there in all the components of the theory of strong interactions, now a part of Standard Model. His other fundamental contributions are in renormalisation group, an important part of quantum field theory and in the V-A form of weak interaction. He also proposed a mechanism by which neutrinos acquire very small masses, the so called the See-Saw mechanism. He had broad interests going beyond his contributions in theoretical physics.

Explanation requires the explainer to speak multiple languages. For example, explaining the event horizon to someone in class X means translating what you know from the language of graduate-level physics into the language of Newtonian mechanics, the first principles of optics, simple geometric shapes and carefully chosen metaphors. It means enabling the listener to synthesise knowledge in other contexts based on what you have said. Doing none of this – sticking to just one language and using more and more words from it – cannot be an act of explanation, or even simplification, unless your interlocutor also speaks that language fluently.

Ultimately, it seems that while not all scientists can also be good science writers, there is a part of the writing process on display here that precedes the writing itself, and which is less difficult to execute: the way you think. To be able to teach well and explain well, I think one needs to be able to think in ways that will mitigate epistemological disparities between two people such that the person with more knowledge empowers the one with less to climb up the knowledge ladder.

This in turn requires one to examine the precise differences between why you know what you know and why your audience doesn’t. This is not the same as “the difference between what you know and what the audience knows”, because that is simply an exercise in comparison – an exercise in preserving the status quo, even. To know the why of the difference, instead, is also to know how the difference can be bridged – an exercise in eliminating disparity.


NYT on fire

As the world burns, is anyone paying attention to the New York Times? If you’re not, you should be: it’s catching fire as well. On May 23, the grand old newspaper published a report by Maggie Haberman about how former Trump aide Hope Hicks was having an “existential” crisis over complying with a congressional subpoena. Granted, the paper has been full of embers for a while now – as Jay Rosen has been saying for years – but this particular story bares the Times’s ridiculous position vis-à-vis the Trump White House for all to see.

The first giveaway that something is rotten isn’t in the lede but in the hero image, a glamorous photograph of Hicks, as if the words to come were going to discuss her clothes. The words that do come then paint Hicks as an enigmatic ex-administrator caught between a rock and a hard place, when in fact the matter is far simpler: either comply with the subpoena from the House Judiciary Committee or find a legitimate reason to skip it, as (it appears) former White House counsel Donald McGahn II has been able to do. It’s not existentialism; it’s potentially criminal obstruction of justice.

To quote from Rosen’s analysis above:

[Times journalists] want the support, they also want to declare independence from their strongest supporters. … They are tempted to look right and see one kind of danger, then look left to spot another, equal and opposite. They want to push off from both sides to clear a space from which truth can be told. That would make things simpler, but of course things are not that simple. The threat to truth-telling – to journalism, democracy, the Times itself – is not symmetrical. They know this. But the temptation lives.


Science in the face of uncertainty

In 2018, two scientists from IISc announced they’d found a room-temperature superconductor: an exotic material that has zero resistance to electric current in ambient conditions, considered the holy grail of materials science. But in the little data the authors were willing to share with the world, something seemed off.

Within a few days, other scientists in India and around the world began to spot anomalous data points in the preprint paper. The paper had already been vague; now it was also very suspicious. And it was still hard to tell what was going on: the scientists weren’t speaking to the press, IISc kept mum and the narrative was starting to turn smelly.

The duo clearly had to walk a fine line if they wanted their claim, and themselves, to retain legitimacy. They said they wouldn’t talk to the press until their paper had been peer-reviewed. However, others called this a weak excuse, and it was easy to see why: the best way to clear up confusion is to open up, not clam up. But they refused to, as much as they refused to provide any more information about their experiment or to allow academics around India to join in. And the narrative itself had by then become noticeably befouled by suspicion that there was foul play 😱.

In a new effort to beat these dark clouds back, the duo updated their preprint paper on May 22 with a lot more data, apart from tacking on eight more collaborators to their team. (One of them was Arindam Ghosh, a particularly accomplished physicist at IISc.) This was heartening to find out, esp. that they’re receptive to feedback. In fact, they’d also made note of that anomalous data pattern (although they still aren’t able to explain how it got there).

Making the GIANT ASSUMPTION that their claim is eventually confirmed and we have a room-temperature superconductor in our midst, a lot of things about many technologies will change drastically. Theorists will also have a new line of enquiry – though some already do – to find out which materials can be superconductors under what conditions. If we figure this question out, discovering new superconducting materials will become that much easier.

IFF the claim ends up being confirmed, many people will also likely have many different takeaways from what will become encoded as an extended historical moment, the prelude to a major discovery (or invention?). At that time, I think it will be interesting to look back and consider how different scientists respond to something very new in their midst.

To adopt Thomas Kuhn’s philosophy of scientific progress, it will be interesting to examine individual attitudes to paradigm shifts, and the different extents to which skepticism and cynicism dominate the story when the doctrine of incommensurability is in play. After all, a scientific result that has researchers scrambling for an explanation can evoke two kinds of responses – excitement or distrust – and it would be useful to find out if they’re context-specific in a contemporary, Indian setting.

In fact, the addition of Arindam Ghosh to the IISc research team reminds me of a specific incident from the not-so-distant past (and I do NOT suggest Ghosh was included only for scholastic heft). In 1982, Dan Shechtman discovered quasicrystals, whose internal crystal arrangement defied the prevailing wisdom of the time. So Shechtman was ridiculed as a “quasi-scientist” by a person no less in stature than Linus Carl Pauling, the father of molecular biology.

But Shechtman was sure of what he had seen under the microscope, so he attempted a third time to have his claim published by a journal. This time, he improved the manuscript’s presentation and invited Ilan Blech, John Cahn and Denis Gratias to join his team. The latter two lent much weight to the submission – in a discipline the casual historian of science frequently considers an objective and emotionless enterprise! Their paper was finally accepted by Physical Review Letters in November 1984.

Also in the early 1980s, Dov Levine in the US had discovered quasicrystals but without knowing that Shechtman had done the same thing, and Levine was eager to publish his paper. But Paul Steinhardt, his PhD advisor, advised caution because he didn’t want Levine to be proven wrong and his career damaged for it. Wise words – but also interesting words that show science is nothing without the people that practice it, that there’s a lot to it beyond the stony face of immutable facts, etc.

This is something many people tend to forget in favour of uttering pithy statements like “science is objective” and “science is self-correcting”. Scientism frequently goes overboard, and the arc of scientific justice doesn’t bend naturally towards truth. It has to be bent by the people who practice it. Science is MESSY – like pretty much everything else.

The same applies in the case of the IISc superconductivity claim as well. Nobody can respond perfectly in the face of great uncertainty; we can all just hope to do our best. Some ways for non-experts to navigate this would be to a) talk to scientists – I know some who’d surprise you with their willingness to sit down and explain; b) pick out publications you trust and read them (that’s The Wire Science 😄 and The Hindu Science in this specific case) as well as try to discover others; and c) be nice and don’t jump to conclusions, especially within a wider social frame in which self-victimisation and entitlement have often come too easily.

Also, three cheers for preprints!

I turned this post into a Twitter thread on May 26, 2019.


The wind and the wall

I have an undergraduate degree in mechanical engineering but I’ve always struggled with thermodynamics. To the uninitiated: this means much of the knowledge that sets mechanical engineering apart from other branches remains out of my reach. I would struggle even with the simpler concepts, and perhaps one of the simplest among them was pressure.

When a fluid flows through a channel, like water flowing through a pipe, it’s easy to intuit as well as visualise what would happen if it were flowing really fast. For example, you just get that when water flowing like that turns a corner, there’s going to be turbulence at the elbow. In technical parlance, it’s because of the inertia of motion (among other things, perhaps). But I’ve never been able to think like this about pressure, and believed for a long time that the pressure of a fluid should be higher the faster it is flowing.

In my second or third year of college, there was a subject called power-plant engineering, a particularly nasty thing made so because it was essentially the physics of water in different forms flowing through a heat-exchanger, a condenser, a compressor, a turbine, etc. Each of these devices mollified the fluid to perform different services, each of them a step in the arduous process of using coal to generate electricity.

Somewhere in this maze, a volume of steam has to shoot through a pipe. And I would always think – when picturing the scene – that the fluid pressure has to be high because its constituent particles are moving really fast, exerting a lot of force on their surroundings, which in turn would be interpreted as their pressure, right?

It was only two years later, and seven years ago, that I learnt my mistake, when my folks moved to an apartment complex in Bangalore. This building stands adjacent to a much larger one on its right, separated by a distance of about 40 feet, with a wall that rises as high as an apartment on the sixth floor. My folks’ house is on the fourth floor. Effectively, the complex and the wall sandwich a 40-foot-wide, 80-foot-high and 500-foot-long corridor. The whole setup can be surveyed from my folks’ house’s balcony.

When there’s a storm and the wind blows fast, it blows even faster through this corridor because it’s an unobstructed space through which the moving air can build momentum for longer and because its geometry prevents the air from dissipating too much. As a result, the corridor becomes a high-energy wind tunnel, with the wind whistling-roaring through on thunderous nights. When this happens, the curtains against the window on the balcony always billow outwards, not inwards.

This is how I first realised that the pressure outside, in the windy corridor, is lower than it is inside the house. The technical explanation is (deceptively) simple: it’s composed of the Bernoulli principle and the Venturi effect.

The moving wind has some energy, the sum of its kinetic and potential energies. The wind’s speed depends on its kinetic energy, and its pressure on its potential energy. Because the total energy is conserved, an increase in kinetic energy can only come at the expense of the potential energy, and vice versa. This implies that if the wind’s velocity increases, the corresponding increase in kinetic energy will subtract from the potential energy, which in turn will reduce the pressure. So much for the Bernoulli principle.

But why does the wind’s velocity increase at all in the corridor? This is the work of the Venturi effect. When a fluid flowing through a wider channel enters a narrower portion, it speeds up. This is because of an elementary accounting principle: the rate at which mass enters a system is equal to the rate at which mass accumulates in the system plus the rate at which it exits the system.

In our case, this system is composed of the area in front of the apartment complex, which is very wide and wherefrom the wind enters the narrower corridor, the other part of the system. Because the rate at which wind exits the corridor at the far end must equal the rate at which it enters, the wind speeds up inside the corridor.
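The two principles can be sketched numerically. In the snippet below, the air density is the standard textbook figure, but the cross-sections and wind speed are made-up illustrative numbers, not measurements of the actual building:

```python
# A minimal sketch of the Venturi effect and the Bernoulli principle.
# The geometry and wind speed below are illustrative assumptions.

RHO_AIR = 1.2  # approximate density of air near sea level, kg/m^3

def venturi_speed(v_in, area_in, area_out):
    """Continuity: mass flow in equals mass flow out, so for an
    incompressible flow, A_in * v_in = A_out * v_out."""
    return v_in * area_in / area_out

def bernoulli_pressure_drop(v_slow, v_fast, rho=RHO_AIR):
    """Bernoulli: p + 0.5 * rho * v^2 is constant along a streamline,
    so the pressure drop equals the gain in kinetic-energy density."""
    return 0.5 * rho * (v_fast**2 - v_slow**2)

# Wind entering at 5 m/s from a wide approach (say 900 m^2 of frontage)
# and squeezed into the corridor's ~300 m^2 cross-section:
v_corridor = venturi_speed(5.0, 900.0, 300.0)  # triples the speed
dp = bernoulli_pressure_drop(5.0, v_corridor)  # pressure drop, in pascals
```

With these assumed numbers the drop works out to roughly a hundred pascals – tiny next to atmospheric pressure, but more than enough to billow a curtain outward.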

So when the wind starts blowing, the Venturi effect accelerates it through the corridor, the Bernoulli principle causes its pressure to drop, and that in turn pulls the curtains out of my window. If only I’d seen this in my college days, that D might just have been a C. Eh.


A century of the proton

In 1907, a New Zealander named Ernest Rutherford moved from McGill University in Canada to the University of Manchester. There, he conducted a series of experiments in which he fired alpha particles[1] at different materials. When he found that the beams deviated by about 2º when fired through air, he figured that the atomic constituents of air would have to have electric fields as strong as 100 million volts per cm to explain the effect. Over the next decade, Rutherford – with the help of Hans Geiger and Ernest Marsden – would conduct more experiments that ultimately produced two very important results in the history of physics: first, that the atom was not indivisible, and second, the discovery of the proton.

In the last year of the 19th century and the first year of the 20th, Rutherford and Paul Villard had independently isolated and classified radiation into three types: alpha, beta and gamma. Their deeper constituents (as we know them today) weren’t known until much later, and Rutherford played an important role in establishing what they were. By 1911, he had determined that the atom had a nucleus that occupied a minuscule fraction of its volume but contained all the positive charge – the famous Rutherford model of the atom. In 1914, he returned to Canada and then Australia on a lecture tour, and didn’t return to the UK until 1915, after the start of World War I. Wartime activities would delay his studies for two more years, and he could devote his attention to the atom once more only in 1917.

That year, he found that when he bombarded different materials with alpha particles, certain long-range recoil particles called “H-particles” (a term coined by Marsden in 1913) were produced, more so when nitrogen gas was also present. This finding led him to conclude that an alpha particle could have penetrated the nucleus of a nitrogen atom and knocked out a hydrogen nucleus, in turn supporting the view that the nuclei of larger atoms also included hydrogen nuclei. The hydrogen nucleus is nothing but the proton. Rutherford couldn’t publish his papers on this finding until 1919, after the war had ended. He would go on to coin the term “proton” in 1920.

Interestingly, in 1901, Rutherford had participated in a debate, speaking in favour of the possibility that the atom was made up of smaller things, a controversial subject at the time. (His ‘opponent’ was Frederick Soddy, the chemist who proved the existence of isotopes.) It is highly unlikely that he could have anticipated that, only three or so decades later, people would begin to suspect that the proton itself was made up of smaller particles.

By the early 1960s, studies of cosmic rays and their interactions with matter indicated that the universe was made of much more than just the basic subatomic pieces. In fact, there was such a profusion of particles that the idea of a hitherto unknown organisational principle, consisting of fewer, smaller particles, was tempting – albeit only to a few.

In 1964, Murray Gell-Mann and George Zweig independently proposed such a system, claiming that many of the particles could in fact be composed of smaller entities called quarks. By 1965, and with the help of Sheldon Glashow and James Bjorken, the quark model could explain the existence of a variety of particles as well as some other physical phenomena, strengthening their case.

Then, in a series of seminal experiments beginning in the late 1960s, scientists at the Stanford Linear Accelerator Center began to do what Rutherford had done half a century earlier: smash a smaller particle into a larger one with enough energy for the latter to reveal its secrets. Specifically, physicists used the linear accelerator at SLAC to energise electrons to about 21 times the energy contained by a proton at rest, and smashed them into protons. The results were particularly surprising.

A popular way to study particles, then as well as now, has been to beam a smaller particle at a larger one and scrutinise the collision for information about the larger particle. In this setup, physicists expect that the greater the energy of the probing particle, the greater the resolution at which the larger particle will be probed. However, this relationship fails with protons because of scaling: electrons at higher and higher energies don’t reveal more and more about the proton. This is because, at energies beyond a certain threshold, the proton begins to resemble a collection of three point-like entities, and the electron’s interaction with the proton is reduced to its interactions with these entities, independent of its energy.

The SLAC experiments thus revealed that the proton was indeed made up of smaller entities called quarks, of two types – or flavours – called up and down. Gell-Mann and Zweig had proposed the existence of the up, down and strange quarks, and Glashow and Bjorken of the charm quark. By the 1970s, other physicists had proposed the existence of the bottom and top quarks, discovered in 1977 and 1995, respectively. With that, the quark model was complete. More importantly for our story, it also made a complete mess of the proton – literally.

In the 1970s, physicists began to smash protons with neutrinos and antineutrinos to elicit information about the angular distribution of quarks inside particles like protons. They found that a proton in fact contained three valence quarks in a veritable lake of quark-antiquark pairs, as well as that the sum of all their momenta didn’t add up to the total momentum of a proton. This hinted at the presence of another then-unknown particle, which they called the gluon (which is its own mess).

In that decade, particle physicists began to build the theoretical framework called quantum chromodynamics (QCD), to explain the lives and workings of the six quarks, six antiquarks and eight gluons – all particles governed by the strong nuclear force.

Ninety years after Rutherford announced the discovery of the proton by shooting alpha particles through slices of mica and columns of air, scientists switched on the world’s largest physics experiment – the Large Hadron Collider – to study the fundamental constituents of reality by smashing protons into other protons. Using it, they have proved that the Higgs boson is real, studied intricate processes for insights into the very early universe and pursued answers to questions that continue to baffle physicists.

Through all this, scientists have endeavoured to improve our understanding of QCD, especially by studying how quarks, antiquarks and gluons interact during a collision, knowledge that is crucial to ascertain the existence of new particles and deepen our understanding of the subatomic world.

Physicists have also been using collider experiments to examine the properties of exotic forms of matter – such as colour glass condensates, glasma and quark-gluon plasma – narrow the search for proposed particles that could explain some basal discrepancies in the Standard Model of particle physics, make precision measurements of the proton’s properties for their implications for other particles (such as this and this) and explore unsolved problems concerning the proton (like the spin crisis).

And fully – rather only – 100 years after the proton was first sussed out, particle physics itself looks very different from the way it did in Rutherford’s time, and a large part of the transformation can be attributed, one way or another, to the proton. Today, physicists pursue other, very different particles, dream of building even larger proton-smashing machines and are busy knitting together theories that describe a world much smaller than the one of quarks and gluons. It’s a different world of different mysteries, as it should be, but it’s also great that there are mysteries at all.

[1] An alpha particle is actually a clump of two protons and two neutrons – i.e. the nucleus of the helium-4 atom.

Featured image credit: Kjerish/Wikimedia Commons, CC BY-SA 4.0.


The Nehru-Gandhis’ old clothes

The following tweet has been doing the rounds the last few days:

It carries an important message from India’s recent past, that a time of free-as-in-free speech actually did exist only half a century ago. It stands in stark contrast to the public political clime today, where people are jailed for sharing harmless memes and journalists gagged for doing their jobs, not to mention scholars being disinvited from lectures, musicians being prevented from singing and universities becoming less plural and more parochial.

However, Shankar’s cartoon, as depicted above, shouldn’t be paraded as a symbol of an era antithetical to this – 2014-2019 – alone but as one that doesn’t sit well with the politics of 21st century India altogether, including that of the Nehru-Gandhis. It is doubtful that whenever Rahul Gandhi comes to power, if at all he does, he is going to be okay with cartoons showing his great-grandfather’s clenched butt standing outside the doors of the UN, even if he might be willing to brook more dissent than the Bharatiya Janata Party has been.

The party that he leads with his mother has championed sycophancy and nepotism since the 1970s, when Indira Gandhi assumed power. This has often meant that those critical of their family – the First Family, so to speak – have never been able to climb the ranks and/or lead important institutions during Congress rule, even if they are otherwise qualified to do so. Perhaps the most stark example of this in recent memory was when Mridula Mukherjee assumed directorship of the Nehru Memorial Museum and Library in New Delhi after an opaque selection process, and proceeded to turn the institution into a building-sized panegyric for Sonia Gandhi et al.

Indeed, the same can be said for any political organisation that is held together by hero worship, centralisation of power and dynasticism. Some examples from around the country include the Dravida Munnetra Kazhagam, the All India Anna Dravida Munnetra Kazhagam, the Shiv Sena, the Samajwadi Party and the Rashtriya Janata Dal.

Today, Shankar’s illustration seems only to describe the extent to which the BJP has vitiated civil discourse and the need to vote it out. However, the cartoon does not say anything about the party I would like to vote in, because it says everything about what free speech really means, the kind of tolerance that political parties must harbour and, most of all, the fact that there seems to be nobody capable of that anymore. Even should the UPA somehow emerge triumphant on May 23, this cartoon will likely trigger as much wistfulness then as it does today.


The worm and the world

Alanna Mitchell reports in the New York Times that boreal forests in the world’s north are being invaded by worms of the species Dendrobaena octaedra. The worms are decomposing the leaf litter and releasing carbon dioxide into the atmosphere, transforming these carbon-negative forests into carbon-positive ones. In the process, they’re also upending the climate models scientists had prepared to understand how climate catastrophe might pan out in these areas. There’s no question that all of this is a disaster.

If I had written this story, I would have been very tempted to mention Nidhogg, the worm gnawing at one of the roots of Yggdrasil, the sacred tree of Norse mythology, to bring on the end of the world. This root is placed over a hot spring called Hvergelmir in Niflheim, a place of ice and cold, akin to the climate in Alberta and Alaska, where the worms have been found (the place of fire is called Muspelheim); Nidhogg lives within the spring. According to the Völuspá, which describes the creation myths of Old Norse mythology, Nidhogg’s arrival to the surface after breaking through Yggdrasil’s root signals Ragnarök, the Nordic apocalypse.

I’m sure others would have thought of this extended metaphor as well, and probably decided against using it because it doesn’t add anything to the narrative and doesn’t make the story any easier to digest than it already is. Further, the addition – in India at least – would likely have drawn the ire of an entranced bhakt who can’t tell the difference between light-hearted allegory and full-blown prophecy, insisting that some Vedic text knew of the worms before anyone else. It’s the sort of idiocy that easily beats the joy of curiosity, so it’s best kept to oneself, at least until the pall of gloom that many of us seem to be under passes.


The dance of the diamonds

You probably haven’t heard of the Chladni effect but you’ve likely seen it in action. Sprinkle some grains of sand on a thin metal plate and draw a violin bow across its edge, and you’ll notice that the grains bounce around for a bit before settling into a pattern, refusing to budge after that.

This happens because of a phenomenon called a standing wave. When you drop a rock into a pond, it creates ripples on the surface. These are moving waves, carrying the rock’s kinetic energy away in concentric circles. A standing wave, on the other hand (and as its name implies), is a wave that rises and falls in one place instead of moving around.

Such waves are formed when two waves moving in opposite directions bump into each other. For example, in the case of the metal plate, the violin bow sets off a sound wave that travels to the opposite edge of the plate, gets reflected and encounters a new wave on the way back. When these two waves collide, they create nodes – points where their combined amplitude is lowest – and antinodes – points where their combined amplitude is highest.
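To see how two counter-propagating waves add up into a standing wave, here is a minimal Python sketch (the plate length, wavenumber and frequency below are made-up values, not those of any real instrument):

```python
import numpy as np

# Two counter-propagating waves on a plate of length L (hypothetical units).
# Their sum, sin(kx - wt) + sin(kx + wt) = 2*sin(kx)*cos(wt), is a standing
# wave: nodes sit where sin(kx) = 0 and never move, whatever the time t.
L = 1.0
k = 2 * np.pi * 3 / L          # wavenumber: three full wavelengths fit in L
w = 2 * np.pi * 440.0          # angular frequency (an arbitrary 440 Hz tone)
x = np.linspace(0, L, 1201)

def displacement(t):
    return np.sin(k * x - w * t) + np.sin(k * x + w * t)

# Sample many instants and record the largest excursion at each point.
envelope = np.max([np.abs(displacement(t)) for t in np.linspace(0, 1, 500)], axis=0)

nodes = x[envelope < 1e-3]      # points that (almost) never move
antinodes = x[envelope > 1.99]  # points that swing with twice the amplitude
print(len(nodes) > 0, len(antinodes) > 0)  # → True True
```

The identity sin(kx − ωt) + sin(kx + ωt) = 2·sin(kx)·cos(ωt) is the whole trick: the sin(kx) factor pins the nodes in place while cos(ωt) makes everything between them rise and fall.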

In 1866, a German physicist named August Kundt designed an instrument, now called a Kundt’s tube, to demonstrate standing waves. A short demonstration below from user @starwalkingphoenix:

The tube is made of a transparent material and partially filled with a soft, grainy substance like talc. One end of the tube opens up to a source of sound of a single frequency while the other end is stewarded by a piston. As the piston moves, it increases or decreases the total length of the tube. When the sound is switched on, the talc moves and settles down into the nodes. The piston is then used to find the resonant frequency: the tube’s length is adjusted until the volume suddenly increases. That’s the sweet spot.

In the Chladni effect, the sand grains settle down into the nodes of the standing wave formed by the vibrations induced by the violin bow. These nodes are effectively the parts of the plate that are not moving, or are moving the least, even as the plate as a whole hosts vibrations. Here is a nice video showing different Chladni patterns; notice how they get more intricate at the higher frequencies:

The patterns and the effect are named for a German physicist and musician named Ernst Chladni, who experimented with them in 1787 and used what he learned to design violins that produced and emitted sound better. The English polymath Robert Hooke had performed the first such experiments with flour in the late 17th century. However, the patterns weren’t attributed to standing waves until the early 19th century, first by Sophie Germain, followed by Horace Lamb, Michael Faraday and John Strutt, a.k.a. Lord Rayleigh. (The term ‘standing wave’ was itself coined only in 1860 by [yet] another German physicist named Franz Melde.)

Now, both Chladni and Faraday had separately noticed that while the patterns formed most of the time, they did not when finer grains were used.

A group of scientists from a Finnish university recently rediscovered this bit of strangeness and piled some more weirdness on top of it. They immersed a square silicon plate 5 cm to a side in a tank of water and scattered small diamond beads (each 0.75 mm wide) on top. When they applied vibrations at a frequency of 9,575 Hz, the beads moved towards the parts of the plate that were vibrating the most instead of the least – i.e. towards the antinodes instead of the nodes.

A schematic illustration of the experimental setup. Source: PRL 122, 184301 (2019)

This doesn’t make sense – at least not at first, and until you stop to consider what you might be taking for granted. In the case of the metal plate, the sand grains are bounced around by the vibrations, and those that are thrown up do come back down due to gravity – unless they’re too light or the breeze is too strong, and they’re swept away.

Water is over 800 times denser than air and would exert a stronger drag force on the diamond beads, preventing them from moving around easily. Then there are also the forces due to the vibrations and gravity. But here’s the weird part. When the scientists combined the three forces into a single resultant force, they found that it always pushed a bead towards the nearest antinode.

And this was just at the resonant frequency: the frequency at which an object most readily vibrates given its physical properties. In other words, the resonant frequency is the frequency at which a vibration costs the least energy to induce in the body. For example, the silicon plate resonated at 9,575 Hz and 11,175 Hz.

But when the scientists applied vibrations at a non-resonant frequency of 10,675 Hz, the diamond beads moved around in swirling patterns that the scientists call “vortex-like”.

In 2016, another group of scientists – this one from France – had reported this swirling behaviour with polystyrene microbeads on a polysilicon membrane, both suspended in ultra-pure water. On that occasion, they had compared the beads’ paths to those of dancers performing a farandole, a community dance popular in Provence, France (see video below).

Polystyrene beads each 70 μm wide in a cavity rotating in a farandole-like manner at an applied frequency of 61,000 Hz. The time frame between each picture is 0.5 s. Source: PRL 116, 184501 (2016)

The scientists from the Finnish university were able to record over 96,000 data points and used them to figure out whether they could obtain an equation that would fit the data. The exercise was successful: they obtained one that could locate the “nodal, antinodal and vortical regions” on the silicon plate using two mathematical operators commonly used to describe vector fields, such as magnetic fields, called divergence and curl. Specifically, the divergence of the “displacement field” – the expected displacement of all beads from their initial positions when a note is played for 500 milliseconds – denoted the nodal and antinodal regions, and the curl denoted the parts where the diamonds would do the farandole.
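As a rough illustration of what divergence and curl measure, here is a sketch with a made-up two-dimensional field (not the paper’s actual data): a purely ‘spreading’ part shows up in the divergence, a purely ‘swirling’ part in the curl.

```python
import numpy as np

# A toy 2D "displacement field" on a grid: a radial (source-like) part plus
# a rotational (vortex-like) part. The divergence picks out the former, the
# curl the latter — the same decomposition used to tell nodal/antinodal
# regions apart from the swirling 'farandole' regions.
n = 64
y, x = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")

u_radial, v_radial = x, y      # purely diverging flow: div = 2, curl = 0
u_vortex, v_vortex = -y, x     # purely rotating flow:  div = 0, curl = 2

u = u_radial + u_vortex
v = v_radial + v_vortex

dx = 2.0 / (n - 1)
du_dy, du_dx = np.gradient(u, dx)  # np.gradient returns axis-0 (y) then axis-1 (x)
dv_dy, dv_dx = np.gradient(v, dx)

divergence = du_dx + dv_dy         # ≈ 2 everywhere: the radial part
curl = dv_dx - du_dy               # ≈ 2 everywhere: the vortex part

print(round(float(divergence.mean()), 2), round(float(curl.mean()), 2))  # → 2.0 2.0
```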

However, to rephrase what they wrote in their paper, published in the journal Physical Review Letters on May 10, the scientists can’t explain the theory behind the patterns formed. Their equations are based only on experimental data.

The French group was able to advance some explanation rooted in theoretical knowledge for what was happening, although their experimental conditions were different from that of the Finnish group. Following their test, Gaël Vuillermet, Pierre-Yves Gires, Fabrice Casset and Cédric Poulain reasoned in their paper that an effect called acoustic streaming was at play.

It banks on the Navier-Stokes equations, a set of four equations that physicists use to model the flow of fluids. As Ronak Gupta recently explained in The Wire Science, these equations are linear in some contexts and nonlinear in others. When the membrane vibrates slowly, the linear form of these equations can be used to model the beads’ behaviour. This means a certain amount of change in the cause leads to a proportionate change in the effects. But when the membrane vibrates at a frequency like 61,000 Hz, only the nonlinear forms of the equations are applicable: a certain amount of change in the cause precipitates a disproportionate change in the effects.

The nonlinear Navier-Stokes equations are very difficult to solve or model. But in the case of acoustic streaming, scientists know that the result is for the particles to flow from the antinode to the node along the plate’s surface, then rise up and flow from the node to the antinode – in a particulate cycle, if you will.

Derek Stein, a physicist at Brown University in Rhode Island, wrote in an article accompanying the paper:

… this migration towards antinodes is a hallmark of particles being carried in acoustically generated fluid streams, and the authors were able to rule out alternative explanations. … [The] streaming effect in a liquid is only observable within a restricted window of experimental parameters. First, the buoyancy of the beads has to closely balance their weight. Second, the plate has to be sufficiently wide and thin that its resonant vibrations have large amplitudes and produce high vertical accelerations. The authors also noticed that tuning the driving frequency away from a resonance coaxed the particles to move in regular formations. This motion begged to be anthropomorphised, and the authors duly likened it to the farandole…

After this point, both research papers break off into discussing potential applications, but that’s not why I’m here. My favourite part is something the Finnish university group did: they built a small maze and guided a 750-μm-wide glass bead through it simply by vibrating its floor at different frequencies. They just had to ensure that at some frequencies, the node/antinode would be to the left, and at others, to the right.

Credit: K. Latifi et al., Phys. Rev. Lett. (2019)

And because they also possessed the techniques by which they could induce a particle to travel in straight lines or in curves, they could move the beads around to trace letters of the alphabet!

Source: PRL 122, 184301 (2019)

Using ‘science’ appropriately


(Setting aside the use of the word ‘faith’) The work that some parts of the CSIR have done and are doing is indeed very good. However, I feel we are not all properly attuned to the difference between the words “science” and “technology”. I don’t accuse Mande of ignorance; the fault may lie with the New Indian Express, the publisher. In a writer-publisher relationship, the latter usually determines the headlines.

Being more aware of what the words mean is important for us as mediapersons to use them in the right context, and this in turn is consequential because the improper overuse of one term can mask deficiencies in its actual implementation. For example, I would rather have used ‘Technology as saviour’ as the headline for Mande’s piece, and for various pieces in the Indian mainstream news space. But by using science, I fear these publications are giving the impression that Indian science is currently very healthy, effective and true to its potential for improving the human condition.

Quite to the contrary, funding for fundamental research has been dropping in India; translational support is limited to areas of study that can “save lives” and are in line with political goals; and the political perception of science is horribly skewed towards pseudoscience.

Before that one commentator jumps in to say things aren’t all that bad: I agree. There are some pockets of good work. I am personally excited about Indian researchers’ contributions to materials science, solid-state and condensed-matter physics, biochemistry, and experimental astronomy.

However, the fact remains that we are very far from things being as they should be, rather than as political expediency needs them to be. And repeatedly using “science” when in fact we mean “technology” could keep us from noticing that. That is, if we were mindful of the difference and used the words appropriately, I bet the word “science” would only occasionally appear on our timelines and news feeds.


The symmetry incarnations

This post was originally published on October 6, 2012. I recently rediscovered it and decided to republish it with a few updates.

Geometric symmetry in nature is often a sign of unperturbedness, as if nothing has interfered with a natural process and its effects at each step are simply scaled-up or scaled-down versions of each other. For this reason, symmetry is aesthetically pleasing, and often beautiful. Consider, for instance, faces. Symmetry of facial features about the central vertical axis is often read as the face being beautiful, not just by humans but also by monkeys.

This is just one of the many forms in which symmetry manifests. When it involves geometric features, it’s a case of geometrical symmetry. When a process occurs similarly both forward and backward in time, it is temporal symmetry. If two entities that don’t seem geometrically congruent at first sight rotate, move or scale with similar effects on their forms, it is transformational symmetry. Similar definitions apply to theoretical models, musical progressions, knowledge and many other fields besides.


One of the first (postulated) instances of symmetry is said to have occurred during the Big Bang, when the universe was born. A sea of particles was perturbed 13.75 billion years ago by a high-temperature event, setting up anecdotal ripples in their system, eventually breaking their distribution in such a way that some particles got mass, some charge, some spin, some all of them, and some none of them. This event is known as electroweak symmetry-breaking. Because of the asymmetric properties of the resultant particles, matter as we know it was conceived.

Many invented scientific systems exhibit symmetry in that they allow for the conception of symmetry in the things they make possible. A good example is mathematics. On the real-number line, 0 marks the median. On either side of 0, 1 and -1 are equidistant from 0; 5,000 and -5,000 are equidistant from 0; possibly, ∞ and -∞ are equidistant from 0. Numerically speaking, 1 marks the same amount of something that -1 marks on the other side of 0. Characterless functions built on this system also behave symmetrically on either side of 0.

To many people, symmetry evokes the image of an object that, when cut in half along a specific axis, results in two objects that are mirror-images of each other. Cover one side of your face and place the other side against a mirror, and what you hope to see is the other side of the face – despite it being a reflection. (Interestingly, this technique was used by the neuroscientist V.S. Ramachandran to “cure” the pain of amputees who tried to move a limb that wasn’t there.)

An illustration of V.S. Ramachandran's mirror-box technique: Lynn Boulanger, an occupational therapy assistant and certified hand therapist, uses mirror therapy to help address phantom pain for Marine Cpl. Anthony McDaniel. Caption and credit: US Navy

Natural symmetry

Symmetry at its best, however, is observed in nature. Consider germination: when a seed grows into a small plant and then into a tree, the seed doesn’t experiment with designs. The plant is not designed differently from the small tree, and the small tree is not designed differently from the big tree. If a leaf is given to sprout from the node richest in minerals on the stem, then it will. If a branch is given to sprout from the node richest in minerals on the trunk, then it will. So is mineral-deposition in the arbor symmetric? It should be if its transportation out of the soil and into the tree is radially symmetric. And so forth…

At times, repeated gusts of wind may push the tree to lean one way or another, shielding the leaves from the force and keeping them from shedding. The symmetry is then broken, but no matter. The sprouting of branches from branches, and branches from those branches, and leaves from those branches, all follow the same pattern. This tendency to display an internal symmetry is characterised as fractalisation. A well-known example of a fractal geometry is the Mandelbrot set, shown below.

An illustration of recursive self-similarity in Mandelbrot set. Credit: Cuddlyable3/Wikimedia Commons

If you want to interact with a Mandelbrot set, check out this magnificent visualisation by Paul Neave (defunct now 🙁 ). You can keep zooming in, but at each step, you’ll only see more and more Mandelbrot sets. This set is one of a few exceptional sets that are geometric fractals.

Meta-geometry and Mulliken symbols

It seems like geometric symmetry is the most ubiquitous and accessible example to us. Let’s take it one step further and look at the meta-geometry at play when one symmetrical shape is given an extra dimension. For instance, a circle exists in two dimensions; its three-dimensional correspondent is the sphere. Through such an up-scaling, we are ensuring that all the properties of a circle in two dimensions stay intact in three dimensions, and then we are observing what the three-dimensional shape is.

A circle, thus, becomes a sphere. A square becomes a cube. A triangle becomes a tetrahedron. In each case, the 3D shape is said to have been generated by a 2D shape, and each 2D shape is said to be the degenerate of the 3D shape. Further, such a relationship holds between corresponding shapes across many dimensions, with doubly and triply degenerate surfaces also having been defined.
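This dimension-climbing can be sketched by counting: sweeping an (n−1)-cube along a new axis doubles its vertices and adds one new edge per old vertex. A small Python sketch, using made-up helper names:

```python
# Counting sketch of the "up-scaling" described above: moving a shape one
# dimension up by sweeping it along a new axis. An n-cube has 2**n vertices
# and n * 2**(n-1) edges.
def cube_counts(n):
    if n == 0:
        return 1, 0                 # a point: one vertex, no edges
    v, e = cube_counts(n - 1)
    return 2 * v, 2 * e + v         # copy the cube, then connect the copies

for n, name in enumerate(["point", "segment", "square", "cube", "tesseract"]):
    v, e = cube_counts(n)
    print(name, v, e)               # last line: tesseract 16 32
```

The recursion mirrors the generation/degeneracy relationship in the text: each shape’s counts are built entirely out of those of its degenerate one dimension below.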

Credit: Vitaly Ostrosablin/Wikimedia Commons, CC BY-SA 3.0
The three-dimensional cube generates the four-dimensional hypercube, a.k.a. a tesseract. Credit: Vitaly Ostrosablin/Wikimedia Commons, CC BY-SA 3.0

Obviously, there are different kinds of degeneracy, 10 of which the physicist Robert S. Mulliken identified and laid out. These symbols are important because each one defines a degree of freedom that nature possesses while creating entities and this includes symmetrical entities as well. So if a natural phenomenon is symmetrical in n dimensions, then the only way it can be symmetrical in n+1 dimensions also is by transforming through one or many of the degrees of freedom defined by Mulliken.

A – symmetric with respect to rotation around the principal rotational axis (one-dimensional representations)
B – anti-symmetric with respect to rotation around the principal rotational axis (one-dimensional representations)
E – degenerate
Subscript 1 – symmetric with respect to a vertical mirror plane perpendicular to the principal axis
Subscript 2 – anti-symmetric with respect to a vertical mirror plane perpendicular to the principal axis
Subscript g – symmetric with respect to a center of symmetry
Subscript u – anti-symmetric with respect to a center of symmetry
Prime (′) – symmetric with respect to a mirror plane horizontal to the principal rotational axis
Double prime (″) – anti-symmetric with respect to a mirror plane horizontal to the principal rotational axis

Source: LMU Munich

Apart from regulating the perpetuation of symmetry across dimensions, the Mulliken symbols also hint at nature wanting to keep things simple and straightforward. The symbols don’t behave differently for processes moving in different directions, through different dimensions, in different time-periods or in the presence of other objects, etc. The preservation of symmetry by nature is not coincidental. Rather, it is very well-defined.


Now, if nature desires symmetry, if it is not a haphazard occurrence but one that is well orchestrated if given a chance to be, why don’t we see symmetry everywhere? Why is natural symmetry broken? One answer is that it is broken only insofar as it attempts to preserve other symmetries that we cannot observe with the naked eye.

For example, symmetry in the natural order is exemplified by a geological process called anastomosis. This property, commonly of quartz crystals in metamorphic regions of Earth’s crust, allows for mineral veins to form that lead to shearing stresses between layers of rock, resulting in fracturing and faulting. In other terms, geological anastomosis allows materials to be displaced from one location and become deposited in another, offsetting large-scale symmetry in favour of the prosperity of microstructures.

More generally, anastomosis is defined as the splitting of a stream of anything only to reunify sometime later. It sounds simple but it is an exceedingly versatile phenomenon, if only because it happens in a variety of environments and for a variety of purposes. For example, consider Gilbreath’s conjecture. It states that every series obtained by repeatedly applying the forward difference operator – i.e. taking the absolute difference between successive numbers – to the sequence of prime numbers always starts with 1. To illustrate:

2 3 5 7 11 13 17 19 23 29 … (prime numbers)

Applying the operator once: 1 2 2 4 2 4 2 4 6 …
Applying the operator twice: 1 0 2 2 2 2 2 2 …
Applying the operator thrice: 1 2 0 0 0 0 0 …
Applying the operator for the fourth time: 1 2 0 0 0 0

And so forth.
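A quick way to convince yourself is to check the conjecture by brute force; the sketch below (using a simple prime sieve) verifies that the first 100 rows generated from the primes below 10,000 all start with 1:

```python
# A brute-force check of Gilbreath's conjecture: repeatedly take absolute
# differences of consecutive primes and verify each row starts with 1.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

row = primes_up_to(10_000)          # 2, 3, 5, 7, 11, ...
for _ in range(100):                # 100 applications of the operator
    row = [abs(a - b) for a, b in zip(row, row[1:])]
    assert row[0] == 1              # the conjecture: every row leads with 1
print("first 100 rows all start with 1")
```

This is only an empirical check, of course; the conjecture itself remains unproven.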

If each line of numbers were to be plotted on a graph, moving upwards each time the operator is applied, then a pattern for the zeros emerges, shown below.

The forest of stunted trees, used to gain more insights into Gilbreath's conjecture. Credit: David Eppstein/Wikimedia Commons

This pattern is called the forest of stunted trees, as if it were an area populated by growing trees with clearings that are always well-bounded triangles. The numbers from one sequence to the next are anastomosing, parting ways only to come close together after every five lines.

Another example is the vein skeleton on a hydrangea leaf. Both the stunted trees and the hydrangea veins patterns can be simulated using the rule-90 simple cellular automaton that uses the exclusive-or (XOR) function.
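For the curious, here is a minimal sketch of the rule-90 automaton: each cell’s next state is the XOR of its two neighbours, and a single live cell unfurls into the same kind of well-bounded triangular clearings seen in the forest of stunted trees.

```python
# Rule-90 cellular automaton: each cell's next state is the XOR of its two
# neighbours. From a single live cell it draws a Sierpinski-like triangle.
width, steps = 63, 16
cells = [0] * width
cells[width // 2] = 1               # one live cell in the middle

history = []
for _ in range(steps):
    history.append("".join("#" if c else "." for c in cells))
    # periodic boundary: index -1 wraps to the last cell
    cells = [cells[i - 1] ^ cells[(i + 1) % width] for i in range(width)]

print("\n".join(history))
```

Printed out, the `#` characters trace nested triangles with empty triangular clearings between them, which is exactly the stunted-trees pattern.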

Bud and leaves of Hydrangea macrophylla. Credit: Alvesgaspar/Wikimedia Commons, CC BY-SA 3.0

Nambu-Goldstone bosons

While anastomosis may not have a direct relation with symmetry and only a tenuous one with fractals, its presence indicates a source of perturbation in the system. Why else would the streamlined flow of something split off and then have the tributaries unify, unless possibly to reach out to richer lands? Anastomosis is a sign of the system acquiring a new degree of freedom. By splitting a stream with x degrees of freedom into two new streams each with x degrees of freedom, there are now more avenues through which change can occur.

Particle physics simplifies this scenario by assigning all forces and amounts of energy a particle. Thus, a force is said to be acting when a force-carrying particle is being exchanged between two bodies. Since each degree of freedom also implies a new force acting on the system, it wins itself a particle from a class of particles called the Nambu-Goldstone (NG) bosons. Named for Yoichiro Nambu and Jeffrey Goldstone, the presence of n NG bosons in a system means that, broadly speaking, the system has n degrees of freedom.

How and when an NG boson is introduced into a system is not yet well understood. In fact, it was only recently that a theoretical physicist named Haruki Watanabe developed a mathematical model that could predict the number of degrees of freedom a complex system could have given the presence of a certain number of NG bosons. At the most fundamental level, it is understood that when symmetry breaks, an NG boson is born.

The asymmetry of symmetry

That is, when asymmetry is introduced in a system, so is a degree of freedom. This seems intuitive. But at the same time, you would think the reverse is also true: that when an asymmetric system is made symmetric, it loses a degree of freedom. However, this isn’t always the case because it could violate the third law of thermodynamics (specifically, the Lewis-Randall version of its statement).

Therefore, there is an inherent irreversibility, an asymmetry of the system itself: it works fine one way, it doesn’t work fine another. This is just like the split-off streams, but this time, they are unable to reunify properly. Of course, there is the possibility of partial unification: in the case of the hydrangea leaf, symmetry is not restored upon anastomosis but there is, evidently, an asymptotic attempt.

However, it is possible that in some special frames, such as in outer space, where the influence of gravitational forces is very weak, the restoration of symmetry may be complete. Even though the third law of thermodynamics is still applicable here, it comes into effect only with the transfer of energy into or out of the system. In the absence of gravity and other retarding factors, such as distribution of minerals in the soil for acquisition, etc., it is theoretically possible for symmetry to be broken and reestablished without any transfer of energy.

The simplest example of this is of a water droplet floating around. If a small globule of water breaks away from a bigger one, the bigger one becomes spherical quickly. When the seditious droplet joins with another globule, that globule also quickly reestablishes its spherical shape.

Thermodynamically speaking, there is mass transfer but at (almost) 100% efficiency, resulting in no additional degrees of freedom. Also, the force at play that establishes sphericality is surface tension, through which a water body seeks to occupy the shape with the lowest surface area for its volume. Notice how this shape – the sphere – is incidentally also the one with the most axes of symmetry and the fewest redundant degrees of freedom? Manufacturing such spheres is very hard.
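The claim about surface tension can be checked with a line of arithmetic: for the same volume, a sphere has less surface area than, say, a cube.

```python
import math

# For a fixed volume V, compare surface areas: surface tension drives a
# droplet towards the shape with the least surface area for that volume,
# which is the sphere.
V = 1.0
r = (3 * V / (4 * math.pi)) ** (1 / 3)   # sphere radius for volume V
sphere_area = 4 * math.pi * r ** 2        # ≈ 4.836
cube_area = 6 * V ** (2 / 3)              # cube of the same volume: 6.0
print(sphere_area < cube_area)            # → True
```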

An omnipotent impetus

Perhaps the explanation of the roles symmetry assumes seems regressive: every consequence of it is no consequence but itself all over again (i.e., self-symmetry – and it happened again). Indeed, why would nature deviate from itself? And as it recreates itself with different resources, it lends itself and its characteristics to different forms.

A mountain will be a mountain to its smallest constituents, and an electron will be an electron no matter how many of them you bring together at a location (except when quasiparticles show up). But put together mountains and you have ranges, sub-surface tectonic consequences, a reshaping of volcanic activity because of changes in the crust’s thickness, and a long-lasting alteration of wind and irrigation patterns. Bring together an unusual number of electrons to make up a high-density charge, and you have a high-temperature, high-voltage volume from which violent, permeating discharges of particles could occur – i.e., lightning.

Why should stars, music, light, radioactivity, politics, engineering or knowledge be any different?


A tale of two horrors: poop and aliens

I saw this tweet yesterday:

Information like this always reminds me of one fact that awakened me to the behind-the-scenes role that the natural universe plays in our cultural lives. The organic compounds called indole and skatole are what give human poop its unique and uniquely disgusting smell – an odour that our brains have evolved to be repelled by so that humans, by whatever accident of fate, don’t consume the damned thing.

However, indole and skatole are also what make jasmine flowers smell so wonderful. This happens because the two compounds are present in higher concentrations in faecal matter and in lower concentrations in jasmine. This is essentially Lovecraftian horror at its best: like in the tale of Arthur Jermyn and his family, our horrors are not horrific inasmuch as they inhabit us. They don’t harm or pollute us in any sense. It is the interpretation of that information, after realising it, that can be so utterly devastating.

It is a story of the familiar becoming unfamiliar, triggering a sense of our biological identity having deluded our cultural one. In the case of Arthur Jermyn, the man sets himself on fire after realising that one of his paternal ancestors mated with a great ape. In the case of indole and skatole, many are likely to be thrown off their affinity for jasmine flowers. But I prefer thinking about it backwards: I like jasmine all the more for what it is because it redeems the two compounds, freeing them from the poopiness for which only evolution is to blame, not themselves.

Aside 1: I wouldn’t be able to do the same thing with an Arthur-Jermyn-like discovery, however: it is vastly more innate and visceral, and as inescapable for it.

Aside 2: This is the sort of horror I also find in the work of H.R. Giger.

In the same vein, the caste system (the Hindu version of which I am most familiar with) taints its followers with pseudoscience simply because it supersedes the biochemical composition of faecal matter with the inexplicable, immoral and dehumanising pall of untouchability. A person can be a great particle physicist, for example, but the moment he believes there is an untouchable caste whose members are consigned to clean drains, he also disbelieves the cleansing and deodourising potential of antibiotic solutions and chemical disinfectants. He, in effect, has elevated poop into a socio-cultural kryptonite up from the mass of organic compounds that it actually is, and becomes both a scientist and a pseudoscientist at once. There are many such people in India, and they demonstrate what they believe science to be: a separate entity isolated from the rest of society.

To quote Anton Chekhov,

To a chemist nothing on earth is unclean. A writer must be as objective as a chemist, he must lay aside his personal subjective standpoint and must understand that muck heaps play a very respectable part in a landscape, and that the evil passions are as inherent in life as the good ones.

(Thanks to Madhusudhan Raman for pointing this one out to me.)

This must not be construed as an attempt to trivialise the importance of culture, however, as the backward-case of jasmine should demonstrate. It is simply an example to illustrate the weird and fascinating fact that while scientific knowledge that underlies a human phenomenon can inhabit a continuum of possibilities – such as the increasing or decreasing concentrations of indole and skatole – it is entirely possible for the overlying cultural substrate to undergo more drastic, binary transformations – such as from desirable to detestable.

Don’t read beyond this point if you’re yet to watch Love, Death & Robots.


Episode 7 of the Netflix series Love, Death & Robots – entitled ‘Beyond the Aquila Rift’ – captures this transformation very well, albeit in a very material way. When a waylaid spacefarer wakes up after many months in a repair station lightyears away from his original destination, he begins to suspect that something about his ‘reality’ is amiss. He realises that he is in a simulation being fed to his brain by a superior entity and demands that he be allowed to see what his actual surroundings look like.

He suddenly wakes up in a dilapidated place covered entirely in webbing, with no apparent signs of life nearby. Then, the alien presence that was maintaining him in suspended animation shows itself thus:

At first, it is as if a woman is about to emerge from the shadows…
… but the figure quickly turns out to be that of a ghastly monstrosity. In bits and pieces, it bears passing resemblance to parts of a human body but in total, not at all.

The episode’s directors (Léon Bérelle, Dominique Boidin, Rémi Kozyra and Maxime Luère) and animators (Unit Image) did very well to depict this transformation thus. The transition from lady to alien is scarred on my neural circuits, and if I look at it backwards, it only becomes more terrifying, as if it seems to ask: Will glimpses of the familiar suffice anymore?