After you find out that a male writer has been a lesser person than you thought he was, have you found it harder to read and appreciate his work?
I bet you have.
I’m sure it’s the case with female – rather, non-male – writers too, but most examples we know of are those of men.
Now, I don’t read anywhere near as much as many people do and so haven’t been prompted to abandon as many writers as they might have – although I have stopped reading many blogs by scientists for the same reason (e.g. Tommaso Dorigo).
But I just finished reading this essay in the New Republic and now so many writers are written off, including Kurt Vonnegut, whose absurd fiction I’ve loved so much.
I remember when I first experienced what I’m feeling now, a sense of unsurprising surprise, as best as I can put it: in early 2014, Scientific American sacked Bora Zivkovic as the chief of its sprawling blogs network.
I also remember briefly being in quasi-mourning at the time, for having lost the ability, opportunity and whatever else to be able to consume Zivkovic’s work with pleasure.
The sorrow was a proxy for a lingering tension in my mind, one that yearned for a distinction between a creator and his creation, and to prove that a tainted creator could produce untainted work.
This sorrow has only deepened since August or so last year, when wave after wave of #MeToo allegations rocked the Indian literary and journalistic scene.
But what has also deepened is the sense that this is the history of men in modernity, and where men have gone, their poisonous masculinity and patriarchy have gone as well.
That the sadness I felt with Zivkovic and Dorigo and Vonnegut is misplaced because it is rooted in false histories, in stories fabricated by the forces of men.
That the good memories in whose shadows these tragedies purported to thrive didn’t exist either. It was always dark, and the darkness had been impregnated with delusions of blamelessness.
If we learnt to see in the dark, it doesn’t mean there was any more light. It just means we learnt to see in the dark.
The tragedy of men’s names being knocked off our lists of recommendations isn’t separate from the painful failure of readers such as myself to set out and discover new writers who deserve – likely deserve more – to be cherished.
Because unless that happens, the darkness isn’t going to go away.
I shouldn’t be sad, let alone surprised, that Vonnegut was an asshole. His fiction was never actually a pinprick of light, and the claim that it ever was is only more fiction.
To be sure, this isn’t a declaration to write off all male writers inasmuch as admitting some in order to exonerate all of them is simply #NotAllMen by another name.
Writing is my way to make sense of my self, and this has been an exercise towards realising that the elision of one author from the roster must needs be accompanied by the discovery and inclusion of another.
Elision alone would be pointless, even deleterious, much as a smaller dark room is just as full of darkness as a larger dark one.
Rajinikanth’s film 2.0, which was released last year, was recently uploaded to Amazon Prime, and I finally watched it in its entirety. It is a dumpster-fire of masculinity, sexism and misogyny – not surprising after Petta was what it was. But 2.0 goes one step further and confuses fantasy for a license to peddle pseudoscience, ultimately creating a movie that really tests the extent to which its viewers can suspend their disbelief.
One of the movie’s principal claims is that people possess “auras” composed of particles called “micro-photons” and that the “auras” have some kind of energy potential. Rajinikanth’s character then elaborates that a Russian scientist named Frank Baranowski has produced proof of their existence, that these “auras” can be rendered visible through a (simple) technique called Kirlian photography. The problem here is that a) everyone trusts the white guy more, and b) Frank Baranowski actually exists, and he’s been saying that people have “auras” and that godmen have bigger ones!
Fantasy is a form of fiction marked by creative imagination, frequently set in worlds and among peoples whose specific features have been invented to accentuate some narrative element that the author wishes to employ for effect. There are several types of stories within this parent genre that illustrate the different degrees to which fantastic elements make an appearance. But irrespective of their relative extremeness, fantasy stories are not classified as pseudoscience even though they may claim scientific value within the fiction’s narrative because they don’t attempt to explain the fantastic using the real. They explain the fantastic – should they have to – using only the fantastic.
Consider the example of Flatland, first published in 1884. In this book, the author Edwin Abbott Abbott describes a two-dimensional realm populated by men, who are lines, and women, who are points. It was intended as an allegory of life in the Victorian era and did not make specific claims as to the existence of such a realm in our physical universe. It remained allegorical from start to finish.
On the other hand, the Harry Potter series describes a secret world of wizardry hidden from our own by cleverly disguised magical barriers. Its books harbour as significant an element of the real as they do of the imagined, but when the fantastic is employed, the author makes no special effort to ensure it is not mistaken for nonfiction because the distinction is self-evident. Even when the real and the imagined coexist, the author makes no attempt to breach the line that divides them, keeping the series in the same genre as Flatland. So while Harry, Ron and Hermione cross the magical gate onto platform 9¾, the audience is given no reason to assume such a world really exists.
Different works of fantasy do this in different ways. A Song of Ice and Fire preserves the laws of physics – dragons flap their wings like birds do to fly – but is completely uninterested in how they might have evolved. Hulk and Spider-Man resort to ludicrous methods to make heroes of their protagonists, but aside from some gibberish involving the words “radiation” and/or “gamma rays”, it isn’t clear why these men are what they are. Iron Man 2 asked us to believe one man built a particle accelerator in his basement, and pushed right up against the wall between belief and disbelief.
But 2.0 tears this wall down, most pronouncedly in its attempts to explain what it believes is true. It seeks to justify itself and its choices using (questionable) information together with epistemological biases from the real world that make it seem as if its claims are legitimate. This is in bad faith: in the foreseeable future, there are always going to be people in the audience who may not be fully aware of where the real ends and the fantastic begins. But while fantasy fiction – as discussed – has always harboured the necessary implicit safeguards to maintain its qualification as such, S. Shankar – 2.0’s writer and director – has ignored them and cheated.
The times demand pellucidity, so: Auras don’t exist. Micro-photons don’t exist. Neither auras nor micro-photons can be scientifically verified, insofar as science is defined as a way to systematically discover new information about the world and free it from cognitive biases to the extent possible. Frank Baranowski is mistaken. The products of Kirlian photography can be explained using a well-understood phenomenon called coronal discharge.
Indeed, ignoring its abject inability to surprise viewers given its cast of actors, 2.0 would have been a perfectly fine entertainer in the convention of Tamil cinema’s hero-fixated entertainers if it had dispensed with the self-justification. Shankar had to have known this, as much as he had to have known that the silver screen, for all its potential, is not an interface for dialogue. It is a one-way broadcast medium that does not brook disagreement in any forms other than commerce.
And by working his “aura” BS into a feature film in a way that betrays fantasy fiction’s purpose, Shankar has perpetrated what is at best a sleight of hand on 2.0’s viewers, and at worst a fraud. I’m inclined to believe it’s fraud.
A black hole’s gravitational influence is a twisted – and twisting – thing, with many parts to it. We all know about the event horizon because of its wondrous ability to capture ‘even light’ within its envelope, keeping everything within trapped in absolute darkness for as long as the black hole lives. But beyond the event horizon, there is another region with equally – if not more – wondrous abilities that distorts the perception of reality in its own, unique ways. Since both their abilities are enabled by gravity, let’s begin there.
The gravitational force is actually an effect that objects seem to experience because of the shape of the spacetime continuum. All objects move on the continuum’s surface, and when the surface is bent, an observer sees the object moving as if on a curve. Such deformations are caused by massive bodies: the heavier a body, the more it bends the continuum around itself. So to the observer, it seems as if the heavy body is causing the lighter object to orbit itself.
Depending on the mass of the deforming body, this effect can be felt across vast distances. For example, Pluto orbits the Sun at an average distance of 5.9 billion km, so Pluto’s orbit indicates the deformation that an object as heavy as Pluto experiences due to the Sun (and the other planets, as well as the Kuiper belt) at that distance. According to the laws of Newtonian gravitation, the force’s strength falls off with the square of the distance: if the force between two bodies is X at a distance of Y, it will be X/4 at a distance of 2Y. However, the strength never falls to zero unless the objects are infinitely far from each other.
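The inverse-square falloff is easy to check numerically. Here is a minimal Python sketch, using rough published figures for the Sun and Pluto (illustrative values, not precision data):

```python
# Newtonian gravity: F = G * m1 * m2 / r**2
# Doubling the separation quarters the force.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Magnitude of the Newtonian gravitational force between two bodies."""
    return G * m1 * m2 / r**2

m_sun = 1.989e30    # kg
m_pluto = 1.303e22  # kg
r = 5.9e12          # Pluto's average distance from the Sun, in metres

f_at_r = gravitational_force(m_sun, m_pluto, r)
f_at_2r = gravitational_force(m_sun, m_pluto, 2 * r)

print(f_at_r / f_at_2r)  # 4.0 -- the force at distance Y is 4x the force at 2Y
```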
Now, if Pluto wanted (for some fantastical reason) to exit its orbit, it would have to move at a certain velocity to escape it. Say it was the Death Star there instead of Pluto, and the Death Star has thrusters. It would have to fire those thrusters to accelerate to such an extent that its speed grows beyond the limit at which the Sun’s gravity can hold it in orbit.
The fundamental setup is the same when it comes to a black hole, but the numbers are more extreme. When you look at a black hole, you’re actually seeing its event horizon. The black hole’s gravitational pull itself emanates from a point at its centre called the singularity. This singularity deforms the spacetime continuum in unimaginable ways, although it becomes more and more imaginable the farther you get from the centre.
The event horizon is the distance at which the continuum is deformed in such a way that you’d have to travel faster than the speed of light to escape – i.e., if you were caught right at the event horizon, travelling at exactly the speed of light would only keep you on the event horizon, not let you zip off into space. (Put differently: this would allow us to work out the speed of light in a given universe using the rules of basic gravitational physics and the sizes of black holes in that universe.)
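For a non-rotating black hole, this relationship has a simple closed form: the event horizon sits at the Schwarzschild radius, r = 2GM/c². A quick Python sketch (the M87* mass used below is the commonly quoted figure of roughly 6.5 billion solar masses, included only for scale):

```python
# Schwarzschild radius of a non-rotating black hole: r_s = 2GM / c^2
# -- the distance from the singularity at which the escape velocity
# equals the speed of light.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

m_sun = 1.989e30  # kg

# The Sun, if compressed into a black hole, would have an event
# horizon with a radius just under 3 km.
print(schwarzschild_radius(m_sun))          # ~2953 m

# The M87* black hole weighs roughly 6.5 billion solar masses;
# its horizon is bigger than Pluto's orbit around the Sun.
print(schwarzschild_radius(6.5e9 * m_sun))  # ~1.9e13 m
```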
This is also why the event horizon is the thing you see when you see a black hole: it’s a literal horizon of events. Events occurring on one side can’t be seen on the other because the light that carries the information that you ‘see’ can’t cross it or return. This in turn should prompt the question whether there is a region of space around the black hole where its gravitational effects can be felt but which doesn’t demarcate ‘points of no return’. The answer is yes; it’s called the ergosphere.
The name itself casts a very utilitarian gaze upon the idea – that it’s the region of space from which you can extract work from the black hole – but it’s true. The ergosphere is the region wherein the spacetime continuum has been deformed by the black hole to such an extent that you can enter it and leave if you travelled fast enough (but less than at the speed of light). However, even if the black hole’s effects from the singularity to the event horizon are outright warped, and the event horizon itself is an important – albeit arbitrary – boundary, the black hole’s effects in the ergosphere are still mind-bending.
A part of this is due to an effect of rotating black holes called frame-dragging. Imagine you’re (an immortal elf) looking at Pluto orbiting the Sun from somewhere near Mercury, through a stationary window that’s between the orbits of Neptune and Pluto. If you keep looking through the window, you’ll see Pluto pass by once every 248 years. Apart from the fantasy elements, this scenario is also physically possible because the window is practically stationary. The part of the spacetime continuum on which it rests, so to speak, isn’t in motion itself due to the Sun’s rotation. That is, there is a negligible amount of frame-dragging.
But this wouldn’t be possible in the ergosphere of a rotating black hole. Say you’re just above the event horizon, looking through a window in the distance at an object orbiting the black hole at the inner edge of the ergosphere. Frame-dragging would absolutely prevent the window from being stationary, together with you and the object as well. This is because the black hole’s prodigious gravitational pull – i.e. prodigious deformation of the continuum – is such that it doesn’t just deform the continuum but also drags it along as it rotates, in the direction of its rotation, in a very pronounced way.
As a result of such frame-dragging, anything sitting on that part of the continuum also seems to be moved along even if it didn’t have any velocity in that direction to begin with. It would be as if looking at your friend walking west-east on a boat that’s moving east-west at the speed of light: for all practical purposes, she might as well be walking east-west! This is why a rotating black hole will force an object angling in towards the black hole’s ergosphere from the opposite direction to appear to switch and move along in the direction of its rotation.
Note the use of ‘appear’: the object won’t actually be forced to alter its direction towards that of the black hole’s rotation. However, the changing arrangement of spacetime in the region together with the light coming from the object towards the observer will make it seem that way.
What if, by the effect of some compulsion, an object insists on appearing stationary inside the ergosphere? There’s a catch. If it is inside the ergosphere and above the event horizon, the object has no option but to be frame-dragged. But just as the event horizon is the surface you’d travel along for eternity even at the speed of light, the ergosphere has a surface where you can avoid being frame-dragged if you move at the speed of light. This is simply called the ergosurface.
(Trivia: It’s possible to explain the effects of gravity outside the ergosurface using Newtonian physics. Inside it, however, you’ll need the theories of relativity.)
The location of both envelopes – the event horizon and the ergosurface – is determined by the speed of light. Their shapes are also determined by common factors: the black hole’s mass and angular momentum*. However, they aren’t affected similarly. For example, a non-rotating black hole will have a spherical event horizon but a rotating black hole will have an oblate event horizon. On the other hand, a non-rotating black hole will not have an ergosurface whereas a rotating black hole will have anything between an oblate and a pumpkin-shaped ergosurface.
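For the mathematically curious, both surfaces have simple expressions for a rotating (Kerr) black hole in geometric units (G = c = 1), with spin parameter a = J/M. A small sketch illustrating the shapes described above:

```python
import math

# Kerr black hole in geometric units (G = c = 1), spin parameter
# a = J/M between 0 (non-rotating) and M (extremal):
#   outer event horizon: r_H = M + sqrt(M^2 - a^2)
#   ergosurface:         r_E(theta) = M + sqrt(M^2 - a^2 * cos^2(theta))
# theta is the polar angle: 0 at the poles, pi/2 at the equator.

def event_horizon(M, a):
    return M + math.sqrt(M**2 - a**2)

def ergosurface(M, a, theta):
    return M + math.sqrt(M**2 - a**2 * math.cos(theta)**2)

M, a = 1.0, 0.9  # a rapidly rotating black hole

# At the poles the ergosurface touches the event horizon...
print(ergosurface(M, a, 0.0), event_horizon(M, a))   # both ~1.436
# ...while at the equator it bulges out to r = 2M, giving the
# ergoregion its oblate, pumpkin-like shape.
print(ergosurface(M, a, math.pi / 2))                # 2.0

# A non-rotating black hole (a = 0) has no ergoregion: the
# 'ergosurface' coincides with the horizon everywhere.
print(ergosurface(M, 0.0, math.pi / 2), event_horizon(M, 0.0))  # 2.0 2.0
```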
These are just some of the reasons the shadow of the black hole at the centre of the M87 galaxy looked the way it did in the image composed by the Event Horizon Telescope (EHT). Aside from the way it was obtained (using techniques like VLBI), the image contains many distortions that originate from the black hole itself, so interpreting it isn’t a straightforward activity.
The EHT only recorded and studied radiation that could come away from the black hole, with a lot of matter accumulating beyond that point and falling into the hole. So what we’re looking at in sum is that hot and magnetised matter, all their radiation and the Doppler effects on them, the effects of the ergosphere frame-dragging them, and then the shadow of the event horizon.
The idea that you can extract work from within the ergosphere, thus giving the region its current name, can be traced to a few examples that different scientists have spelled out over the years. The three most-well-known examples are the Penrose mechanism, Hawking radiation, and the Blandford-Znajek process. The case of Hawking radiation is easiest to explain (only because it’s been done enough times in the popular press for one to be able to access it immediately), but understanding it provides insights into the Penrose alternative as well.
The vacuum of deep space isn’t a true vacuum: it contains some energy, including electromagnetic energy from distant stars, that is often transformed into a particle-antiparticle pair. That is, these particles are condensations of energy that pop into existence and pop back out as energy again (here’s a more detailed yet accessible primer) in a very short span of time. It’s possible that this process also happens near black holes simply because it can. And when it does, something strange follows.
If such a particle pair pops into existence right above the event horizon, one of them could fall into the black hole while the other is pushed off into the ergosphere. This push-off happens because of the law of conservation of momentum, and the energy carried by the pushed particle comes, in effect, from a teeny, tiny bit of the black hole’s mass. To a distant observer, it will look as if the black hole has just emitted a particle and lost a little bit of its mass to do so. Stephen Hawking predicted this phenomenon, since called Hawking radiation, in 1974, building on Jacob Bekenstein’s work on black hole entropy. When this process happens over and over, across many eons, a black hole could eventually lose all of its mass and evaporate completely into nothingness.
The British mathematical physicist Roger Penrose proposed a somewhat similar idea that was also relatively more practicable (and was used in the film Interstellar as well). As Suvrat Raju, a theoretical physicist at ICTS Bangalore, explained to me: Say an object – like a boulder – is thrown into the ergosphere. When it nears the event horizon, a deliberate mechanism causes it to break up into two pieces such that one piece falls through the event horizon in the direction opposite to the black hole’s rotation. As a result, the other gets accelerated on its journey through the ergosphere by a ‘kick’ from the black hole.
If orchestrated correctly, the kicked piece can emerge from the ergosphere with more energy than it had going in – energy provided by the black hole by converting some of its mass. Scientists have worked out the maximum achievable energy gain in each Penrose mechanism attempt to be around 21%.
“In classical processes, one can never reduce the area of the black hole, but the Penrose process can reduce its mass,” Suvrat further told me. “The science fiction fantasy is that a sufficiently advanced civilisation could use rotating black holes for waste-disposal and even get some energy out in the process through the Penrose process.”
The Blandford-Znajek process is less crude and more… involved. Say a star got a bit too close to a black hole and is being shredded into bits that fall into orbit around the event horizon. Friction between these bits heats them up to a very high temperature, pushing them into a plasma state of matter. These bits also harbour electric and magnetic fields, and the electric and magnetic field lines pass through them even as they swirl around the monster and fall closer and closer.
At this point, let me quote from coursework material written by Daniel Nagasawa at Stanford University in 2011:
The premise itself is that the material accreting around a black hole would probably be magnetised and increasingly so as the material gets closer to the event horizon. In fact, the magnetic field is so large that it will accelerate an electron to the point where it will begin to radiate gamma-rays, provided that the electron is not beyond the event horizon. In essence, the black hole acts as a massive conductor spinning in a very large magnetic field produced by the accretion disk, where there is a voltage induced between the poles of the black hole and its equator. The ultimate result is that power is dissipated by the slowing down of the rotation of the black hole…
To extract energy in this scenario, one way – as posited by user CapnTrippy on Everything2 – is to build a superconductor orbiting over the black hole’s poles such that it can intercept and carry away some of the current flowing from the equator to the poles, instead of letting it be deposited in the plasma in the ergosphere. Vis-à-vis the black hole itself, this electrical energy has two sources: its rotational energy and that imbibed by the plasma. Since a black hole can carry up to 29% of its total mass as its rotational energy, that’s also the maximum possible energy that can be extracted in this process. It’s not great but it’s still fantastic because black holes often weigh enough to be able to supply power for ages on end. According to Nagasawa,
… for a 10⁸ solar mass black hole with a 1 T magnetic field, the power generated is approximately 2.7 × 10³⁸ W. In perspective, the annual energy consumption of the world is estimated around … 5 × 10²⁰ J. The example case presented produces more energy in a single second than the entire globe consumes in a year. While this is a bold claim to make, it is only an example case where not all the energy produced is extractable as usable energy. However, at that point, even a system which is less than 10⁻¹⁵ % efficient would be sufficient to supply enough energy to power the world for a full year.
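These figures are easy to sanity-check with back-of-the-envelope arithmetic. A minimal Python sketch, using Nagasawa’s numbers and the 29% cap mentioned earlier (his estimates, not independent data):

```python
import math

# Two back-of-the-envelope checks on the figures quoted above.

# 1. The '29%' cap on extractable rotational energy: for a maximally
#    rotating (extremal Kerr) black hole, the irreducible mass is
#    M/sqrt(2), so the extractable fraction is 1 - 1/sqrt(2).
max_extractable_fraction = 1 - 1 / math.sqrt(2)
print(max_extractable_fraction)  # ~0.293, i.e. about 29%

# 2. Nagasawa's comparison: one second of Blandford-Znajek output
#    versus a year of global energy consumption.
power_bz = 2.7e38    # W, his figure for a 10^8-solar-mass hole in a 1 T field
world_annual = 5e20  # J, his estimate of annual global consumption

ratio = power_bz * 1.0 / world_annual  # energy in one second / annual use
print(ratio)                           # ~5.4e17

# The efficiency needed to power the world for a year from that one
# second of output -- consistent with his 'less than 10^-15 %':
print(100 * world_annual / power_bz)   # ~1.9e-16 (percent)
```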
The Blandford-Znajek process remains a subject of active research to this day. A part of this is thankfully because of a reason that has little to do with powering Earth: relativistic jets. These are extremely powerful and narrow beams of radiation travelling at nearly the speed of light that astronomers have observed in space. Astrophysicists believe that the Blandford-Znajek process and the Penrose mechanism can together explain how they’re formed and shot off from the poles of supermassive rotating black holes, and travel billions of kilometres. In fact, the galaxy CGCG 049-033, located 680 million lightyears from Earth, is thought to host a black hole weighing 2 billion solar masses that’s shooting jets a staggering 1.5 million lightyears into space.
So next time you read about black holes, don’t let the event horizon steal all the limelight (even literally). There’s action and drama above its surface as well, where things are still visible while behaving in strange ways, where a gallery of plasma, energy fields and a moving continuum exposes the black hole’s gravitational artwork to the full view of the universe. Just remember that what you see is not what you get.
*This is a result of the no-hair conjecture: that all properties of all black holes can be determined by their mass, charge and angular momentum alone. However, because gravity is 10³⁶ times weaker than the electromagnetic force, black holes with significant charge are thought not to exist, leaving only the mass and the angular momentum to influence their physical surroundings.
Featured image: This artist’s concept illustrates a supermassive black hole weighing millions to billions of times the mass of our Sun. Credit: NASA/JPL-Caltech.
Of late, there has been a clutch of Tamil films that have endeavoured to show the Hindu right-wing in poor light, associating its rituals with violence and oppression. The two most notable examples are Kaala and Petta, both starring Rajinikanth. Kaala was a modern adaptation of the Ramayana but told as if from Ravana’s point of view, although far from being an attempt to legitimise a ‘demon’ king, it is a story of a Tamil leader from Dharavi who fights off a Hindu thug. Petta on the other hand was less politically aware and more inclined to be entertaining, and found easy villains in the gau rakshak. So far so good.
However, a problem quickly arises in Petta that doesn’t in Kaala, nor in Kabali – also starring Rajinikanth, also directed by Pa. Ranjith, and Kaala’s thematic predecessor. Both Kabali and Kaala were anti-caste and pointedly targeted Hindutvawadis, who have discriminatory practices hard-coded into their spiritual culture, and so carefully guided their protagonists away from all the markers of conservative Hinduism.
Petta is not so careful. It is not hard to sell the idea that a right-wing extremist is a bad person to an audience in a part of the country that largely thinks of itself as the last bastion of resistance against Hindutva nationalism. However, and like most Tamil movies that feature themes of Hinduism, Petta legitimises astrology. In a scene at the beginning of the film, an astrologer tells a goon that his ‘bad time’ has started because Kaali (Rajinikanth) is en route, referring to ‘astrological conditions’ that are unconducive to success and/or fulfilment.
In so doing, it reveals that it is unmindful of the fact that a) astrology is a form of oppression, and b) astrology and right-wing extremism exist on a continuum. Aside from its pseudoscientific credentials, astrology derives its oppressive power from the following attributes:
It centralises knowledge in the hands of a few practitioners — who tend to be upper caste when they’re also high-profile — who don’t have any kind of accountability
It derives its authority from scriptural utterances whose authority cannot be questioned
It is deterministic and undermines human endeavour
Taken together, astrology is evidently a manifestation of the same superstitions and authoritarian tendencies that make right-wing extremism so potent, and so insidious. In turn, this renders Petta’s positioning of the gau rakshaks hard to believe. If the gau rakshaks are one form of Hindu oppression, then Kaali’s astrology is simply another; the film pretends the difference between them is one of kind when in fact it is one of degree.
To argue that one practice is harmless and the other is harmful would be to actively ignore the harm that festers in both of them, as much as a poisoned tree bears poisonous fruits. And while hypocrisies inhabit all of us, it is important that we acknowledge them instead of denying that they exist.
Late last week, I picked up Ram Guha’s Patriots and Partisans. I know shamefully little about India’s modern political history – before and after Independence – certainly beyond the virtual borders of its scientific and technological endeavours. And to someone as receptive to new ideas on this front as me, Guha’s writing is perfect: he’s lucid, coherent and – with kudos to his editor – well-structured. Two of the most interesting things I’m learning are M.K. Gandhi’s reformist beliefs about what it means to be a Hindu and the Gandhi family’s problems.
On the latter count, in a chapter entitled ‘A short history of Congress chamchagiri’ (Hindi for sycophancy), Guha elaborates:
The dynastic principle has damaged the workings of India’s pre-eminent political party, and beyond, the workings of Indian democracy itself. One manifestation … is the filling of important positions on the basis of [sycophancy] rather than competence. Another is that Mrs Indira Gandhi’s embrace of the dynastic principle for the Congress served as a ready model for other parties to emulate. With the exception of the cadre-based parties of left and right, the CPM and the BJP, all political parties in India have been converted into family firms.*
Here Guha proceeds to provide examples: the DMK, “now the private property of M. Karunanidhi and his children”; Bal Thackeray, who “could look no further than his son” given his “professed commitment to Maharashtrian pride and Hindu nationalism”; and the Samajwadi Party and the Rashtriya Janata Dal, where the mantle of leadership passed “in Mulayam’s party … to his son, and in Lalu’s party to his wife”. He continues:
The cult of the Nehru-Gandhis, dead and alive, is deeply inimical to the practice of democracy. For his part, Jawaharlal Nehru, following Gandhi, tried to base his policies on procedures and principles rather than on the force of his personality. Within the Congress, within the Cabinet, within the Parliament, Nehru worked to further the democratic, cooperative, collaborative ideals of the Indian Constitution. … Loyalty to the Leader, in person, rather than to the policies of her or her government – such was the legacy of Mrs Indira Gandhi, to be furthered and distorted by her progeny, and by leaders of other parties too. [And] What Indira did at the Centre was exceeded in the provinces…
This adherence to the dynastic principle, which Rahul Gandhi reminded us all of when he appointed his sister to lead the Congress’s fight in the BJP’s Uttar Pradesh bastion, leaves a bad taste in the mouth. And as Guha has articulated so well, those who practice it deserve to be suspected of being undemocratic, and have their beliefs and actions similarly tainted. There is no reason why the Congress should not be able to look beyond the immediate members of its core family.
The latest Star Wars teaser, for Episode IX: The Rise of Skywalker, is foreboding for the same reasons. Going beyond the franchise’s fixation on Western characters and insistence on keeping the protagonists white (to the unforgivable extent of casting Lupita Nyong’o and then using only her voice), the teaser suggests that Rey is a/the (?) new Skywalker. I’ll be as thrilled as anyone else if she was a new Skywalker, if the name becomes a label akin to the (clanless) Torchbearer in Star Trek: Discovery.
But if she turns out to be the new Skywalker, then the franchise’s writers will finally have completed their betrayal of the infinite purpose of the fantasy genre itself. They will have been utterly lazy – if not guilty of a form of creative manslaughter – if Rey turns out to be biologically related to the Skywalkers, broadcasting the message that either you’re royalty or you’re not, much like the Gandhis themselves have.
In fact, even if Rey doesn’t carry the Skywalker blood, and ‘Skywalker’ becomes a title that anyone can aspire to, it remains to be seen how Episode IX treats the dynasty itself: if it is afforded a soft landing and the luxury of a dignified exit (which seems likely given Luke’s farewell in The Last Jedi) or if it is brought down hard and blown to smithereens. Rather, and taking a step back, will the franchise endeavour to send any sort of clear message about the pitfalls of dynasty itself?
In my latest op-ed for The Wire, where I took issue with the criticisms of some people who called the EHT’s black hole picture too blurry, there are two lines that aren’t entirely true. This post attempts to clarify the underlying science as well as to defend the op-ed in its immediate context. The lines are in bold (emphasis added):
Compared to pictures of about-to-be-eaten food on Instagram and Hubble Space Telescope’s spectacular shots of distant cosmic events, the EHT’s image of the M87 black hole is blah. But this is a profoundly useless comparison; it wasn’t ever about matching up to the Double Negative gravity-renderer, for example. There is no historical record that anyone cares about that reads “first ever 60 MP image of a black hole”; if they are, then that is a case of the bottom scraping the bottom. One MP or 60 MP or 10 GP is a question of degree. What we have here is a question of kind.
The first part of the line isn’t entirely true. One MP or even maybe 10 GP might be a question of degree, but somewhere along this resolution road, the story becomes a question of kind. This is because – alluding to one of the “cosmic coincidences” that Shep Doeleman mentioned at the NSF presser announcing the image’s release – the black hole itself was of just the right size to be imaged by the EHT in the frequency window that the latter was interested in.
If the black hole had been any smaller, any further away or not emitting the 1.3-mm radiation, or if the EHT’s baseline wasn’t long enough to achieve the necessary resolving power, the resulting image would’ve been even blurrier. A lot of things had to fall in place for it to be possible. If one of them had been out of place, or if the image had to be less blurry by a big enough factor, astrophysicists would’ve been tasked with building an EHT with a baseline greater than Earth’s diameter, which might’ve meant putting one of the telescopes in space. And that could’ve meant a change in kind, not degree. In effect, the lines in bold from my op-ed are out of place.
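The arithmetic behind that coincidence can be sketched with the textbook diffraction limit, θ ≈ 1.22 λ/D, where λ is the observing wavelength and D the baseline. The numbers below are rounded, back-of-the-envelope assumptions (not the EHT collaboration’s actual figures), but they show why an Earth-sized baseline at 1.3 mm was just barely enough for M87*:

```python
import math

# Back-of-the-envelope diffraction-limit estimate (rounded assumptions,
# not the EHT's actual imaging pipeline): theta ~ 1.22 * wavelength / baseline.
wavelength = 1.3e-3   # observing wavelength in metres (1.3 mm)
baseline = 1.2742e7   # roughly Earth's diameter in metres

theta_rad = 1.22 * wavelength / baseline  # angular resolution in radians

# Convert radians to microarcseconds: 180/pi degrees, 3600 arcsec/degree, 1e6 uas/arcsec.
RAD_TO_UAS = (180 / math.pi) * 3600 * 1e6
theta_uas = theta_rad * RAD_TO_UAS

shadow_uas = 42  # reported angular diameter of M87*'s shadow, in microarcseconds

print(f"resolution ~ {theta_uas:.0f} microarcseconds")
print(f"shadow is ~{shadow_uas / theta_uas:.1f}x the resolution element")
```

The estimate lands in the mid-20s of microarcseconds, against a shadow about 42 microarcseconds across: the target spans less than two resolution elements, which is why the image is blurry, and why a meaningfully sharper one would need a baseline longer than Earth’s diameter.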
However, for as long as we’re talking about having an image of a black hole at all – as opposed to having no image – complaining that it was blurry and not sharp is at best a trivial, and at worst an illegitimate, quibble. In this one historical moment alone, the fact that the EHT’s telescopes were each operating at full tilt to obtain their part of the final image shouldn’t matter because we’re crossing over a point of no return. Before this line, such pictures didn’t exist. After the line, there are two kinds of questions of degree: one of high/low resolution images and the other of how we’re organising our telescopes – Earth- or space-based – to acquire them.
The physicist David Thouless passed away earlier this month. I confess I didn’t know much about him or his work until he won a part of the Nobel Prize for physics in 2016. After that, I read up about his work and had my mind blown, mostly by a phenomenon in condensed-matter physics called BKT transitions, where the ‘T’ stands for Thouless, as well as by his contributions to our understanding of superconductivity. I’ve explained BKT transitions before (here); they’re fairly simple to understand. I admired that Thouless was an imaginative physicist with the confidence to admit uncertainty and the conviction to consider possibilities that others might have thought crazy. His biography on the Nobel Prize website is very informative. Gautam Menon at Ashoka University recently shared the following nugget on his Facebook page:
Reading this, I remember thinking that the 2016 Nobel Prizes for physics and chemistry were also particularly interesting, enough for me to temporarily set aside my issues with this quasi-institution. The physics prize went to three men who used deceptively simple ideas from geometry to explain quantum phase transitions. The chemistry prize went to three men who built nano-machines by assembling individual molecules in intricate ways. (And the literature prize went to Bob Dylan.) And like Thouless’s experience at his university narrated above, one of the chemistry laureates had a tough time as well.
J. Fraser Stoddart, who won a share of the chemistry prize for synthesising molecules linked like chain-links and for building a ‘molecular shuttle’, wrote an essay in 2005 that I can’t seem to find now, probably because the website it was hosted on was revamped after he won the prize and the URL was changed as a result. However, I’d quoted some portions of it in my article in The Wire, excerpted below:
In an autobiographical essay written in 2005, Stoddart outlines the way his brand of chemistry research evolved – had to evolve – for scientists to get to molecular machines. … [He] writes how one phase of his work began in 1981 at Sheffield University when he wanted to improve the performance of a herbicide and ended with his entry into molecular electronics – on the way synthesising machines called catenanes and rotaxanes.
A short time later, Jean-Pierre Sauvage and his lab at the Louis Pasteur University in Strasbourg, France, were able to show that there existed a simpler way of synthesising catenanes and rotaxanes en masse. As a result, Stoddart’s lab was able to produce the supramolecules they needed in large quantities as well as build on Sauvage’s method to improve them. This led to a breakthrough in 1991, when Stoddart’s team published a more efficient way to synthesise rotaxanes. And only a year later, the group was involved in the design and construction of bistable mechanical switches – molecules that could exist in two states like ‘on’ or ‘off’ depending on some external conditions.
But in order to get this far, the man had to reinvent himself and seek … international collaborations in the face of much resistance from his colleagues. He wrote, “Any successes, however modest, only seemed to engender envy and resentment amongst some of my senior and influential colleagues who would then go to any lengths to undermine my academic activities.” So in the 1980s, Stoddart started to write to the national press about “the wantonness and waste I witnessed all around me. I questioned the extremely high level of bureaucratic state control that accompanied the hierarchically manipulated allocation of financial and other resources to research in science and engineering in Britain”. … Eventually, he was able to persuade some university administrators to send his grant proposals to be reviewed in the United States. Stoddart’s lab then began actively collaborating with the Americans from around 1984.
However, his tribulations weren’t yet at an end. In the 1990s, Stoddart faced yet more resistance from the wider community of supramolecular chemists who refused to believe that nanoscale machines of the kind Stoddart had helped build could exist. … Though Stoddart doesn’t say he was ridiculed, he does say that “it was in response to a ridiculously high level of skepticism and criticism” that he fell back on the use of less exotic techniques to establish what he had accomplished was legitimate. After that, he was able to set up a lab at the University of California, Los Angeles, where he continued to work on building molecular machines into the late 1990s.
But despite these troubles, Stoddart also did not give up on working on what he found interesting, when he could just as easily have staved off the criticism and ill-will by shifting to research on more conventional topics. He concluded the same essay with the following words about the molecules he had synthesised to build the things he had wanted to build:
What will they be good for? Something for sure, and we still have the excitement of finding out what that something might be. And so the story goes on…
On April 1, a few days after India successfully completed its ‘Mission Shakti’ ASAT test, an editorial in the RSS mouthpiece Organiser read:
In the initial days, scientists had to fight hard to prove their mettle and significance of the research they were undertaking. … In the last few years, whether in space programme or in the case of defence modernisation, political leadership has given a free hand to the scientist to carry out their experiments and scientific fraternity has also responded explosively by giving us, in most cases, more than what was expected. …
As voters, we should also think about the future of Bharat and what is best for the future generations while voting. Instead of getting into rhetorics and sloganeering of yesteryears, who has the vision and constructive programme for the Bharat should be our primary consideration. Who can stand by the conviction of the masses is the key.
Clearly there’s lots to debunk here, and much else to ignore, but the author offers a peek inside a mind that suggests right-wingers as far afield as the RSS believe their government is facilitating the research enterprise more than standing in its way, and that the ASAT test is evidence that the government has allowed scientists a “free hand” to pursue blue-sky research. One cannot facedesk enough.
On April 3, the Indian Express carried a tidbit from this editorial in its ‘View from the right’ section with a surprisingly misguided title: “Scientific voting”. You can see what got my goat. What’s scientific about any of this?
It’s as if the Indian Express read the Organiser‘s drivel and walked away believing the mouthpiece had actually explained what it meant when it wrote, “We as voters should learn a lesson from the scientists which can become a guiding force for us while voting.” It didn’t; if it’s self-evident, it’s certainly not scientific. So I’m more disappointed with the Indian Express, which seems so clueless about what “scientific” actually means – ironically or otherwise – than with the Organiser, which – to be fair – hasn’t let anyone down.
In fact, if we’re looking for “a guiding force” from intellectual quarters, the Organiser will be pleased to know scientists issued a statement on April 3 that concluded thus:
We appeal to all citizens to vote wisely, weighing arguments and evidence critically. We appeal to all citizens to remember our constitutional commitment to scientific temper. We appeal to you to vote against inequality, intimidation, discrimination, and unreason.
(They haven’t said so in as many words but they’re asking the people to boot the BJP/RSS combine from the premises.)
Now, to be clear, nothing about this statement or its cohort of authors is ‘scientific’ either. In fact, it brings to mind a scene from the second season of Star Trek: Discovery, where Sylvia Tilly quips that adding “time” in front of anything makes it sound cooler, like “time bends”. Similarly, folks seem to believe prefixing the title of some activity with “scientific” makes it more… tenable? And such contentions of tenability then come with their own scientistic compulsions, such as to accuse those who have not voted a certain way of having been “unscientific”.
This is nonsensical. Claiming activity X has the optional attribute of being “scientific” is only as tenable as what people already associate with the scientific enterprise, and cultural, political and social forces influence this evaluation, not science or its method itself. So either you learn the historical/logical implications of what it means to be scientific (e.g. Mertonian norms) or you’re stigmatising the word to make a point you believe to be true but are too lazy to find out why.
In this context, it seems the two publications are sloughing scientists off as a discernibly separate section of society. That – ludicrous as it sounds – they have a way of voting that non-scientists don’t, or vice versa, and that according to both the Indian Express and the Organiser, the scientists have something to teach the non-scientists on this count. But these boundaries don’t exist: scientists vote like the rest of us because they’re one of us. And whoever is drawing these lines – whether out of malice or ignorance – should stop.