
New anomaly at the LHC

‘Has new ghost particle manifested at the Large Hadron Collider?’, The Guardian, October 31:

Scientists at the Cern nuclear physics lab near Geneva are investigating whether a bizarre and unexpected new particle popped into existence during experiments at the Large Hadron Collider. Researchers on the machine’s multipurpose Compact Muon Solenoid (CMS) detector have spotted curious bumps in their data that may be the calling card of an unknown particle that has more than twice the mass of a carbon atom.

The prospect of such a mysterious particle has baffled physicists as much as it has excited them. At the moment, none of their favoured theories of reality include the particle, though many theorists are now hard at work on models that do. “I’d say theorists are excited and experimentalists are very sceptical,” said Alexandre Nikitenko, a theorist on the CMS team who worked on the data. “As a physicist I must be very critical, but as the author of this analysis I must have some optimism too.”

Senior scientists at the lab have scheduled a talk this Thursday at which Nikitenko and his colleague Yotam Soreq will discuss the work. They will describe how they spotted the bumps in CMS data while searching for evidence of a lighter cousin of the Higgs boson, the elusive particle that was discovered at the LHC in 2012.

This announcement – of a possibly new particle weighing about 28 GeV – is reminiscent of the 750 GeV affair. In late 2015, physicists spotted an anomalous bump in data collected by the LHC that suggested the existence of a previously unknown particle weighing about 67 times as much as a carbon atom. The data wasn’t of good enough quality for physicists to claim they had evidence of a new particle, so they decided to collect more.

This was in December 2015. By August 2016, before the new data was out, theoretical physicists had written and published over 500 papers on the arXiv preprint server about what the new particle could be and how theoretical models would have to be changed to make room for it. But at the 38th International Conference on High Energy Physics that month, LHC scientists unveiled the new data and said that the anomalous bump had vanished: what physicists had seen earlier was likely a random fluctuation in lower quality observations.

The new announcement of a 28 GeV particle seems set on a similar course. I’m not pronouncing that no new particle will be found – that’s for physicists to determine – but writing in defence of those who would cover this event even though it seems relatively minor and history seems to be repeating itself. Anomalies like these are worth writing about because the Standard Model of particle physics has historically been so good at predicting particles’ properties that even small deviations from it are big news.

At the same time, it’s big news in a specific context and with a specific caveat: that we might be chasing an ambulance here. For example, The Guardian only says that the anomalous signal will have to be verified by other experiments, leaving out the fact that the signal LHC scientists already have is pretty weak: 4.2σ and 2.9σ (both local, as opposed to global, significances) in two analyses of the 8 TeV data, and deficits of 2.0σ and 1.4σ in the 13 TeV data. It also doesn’t mention the 750 GeV affair even though the two narratives already appear to be congruent.
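
For perspective: particle physicists don’t usually claim a discovery below a local significance of 5σ, and local significances flatter a signal anyway because they ignore the look-elsewhere effect. Here is a minimal sketch, assuming scipy is available, of what those σ values mean as one-sided p-values:

```python
# Convert reported local significances (z-scores) to one-sided p-values.
# The significances are the ones quoted above; 5 sigma is the usual
# discovery convention in particle physics.
from scipy.stats import norm

significances = {
    "8 TeV, first test": 4.2,
    "8 TeV, second test": 2.9,
    "discovery convention": 5.0,
}

for label, sigma in significances.items():
    p = norm.sf(sigma)  # survival function: P(Z > sigma) for a standard normal
    print(f"{label}: {sigma} sigma -> p = {p:.1e}")
```

A 5σ result corresponds to a p-value of about 3 × 10⁻⁷; 2.9σ corresponds to about 2 × 10⁻³ – four orders of magnitude less stringent.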

If journalists leave such details out, I’ve a feeling they’re going to give their readers the impression that this announcement is more significant than it actually is. (Call me a nitpicker, but being accurate will let engaged readers set reasonable expectations about the story’s next chapter and keep them from becoming desensitised to journalistic hype.)

Those who’ve been following physics news will be aware of the ‘nightmare scenario’ assailing particle physics, and in this context there’s value in writing about what’s keeping particle physicists occupied – especially in their largest, most promising lab.

But thanks most recently to the 750 GeV affair, we also know that what any scientist or journalist says or does right now is moot until LHC scientists present sounder data + confirmation of a positive/negative result. And journalists writing up these episodes without a caveat that properly contextualises where a new anomaly rests on the arc of a particle’s discovery will be disingenuous if they justify their coverage with the argument that the outcome “could be” positive.

The outcome could be negative, and we need to ensure the reader remembers that. Including the caveat is also a way to do that without eliminating the space for the story itself.

Featured image: The CMS detector, the heaviest of the detectors that straddle the LHC, and which spotted the anomalous signal corresponding to a particle at the 28 GeV mark. Credit: CERN.


57 years after the mad bomb

Fifty-seven years ago, on October 30, the Soviets detonated the most powerful nuclear weapon ever built. The device was designated RDS-220 by the Soviet Union and nicknamed Tsar Bomba – ‘King of Bombs’ – by the US. It had a blast yield of 50 megatonnes (MT) of TNT, making it roughly 1,500 times more powerful than the Hiroshima and Nagasaki bombs combined.
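
That multiple is easy to sanity-check. Assuming the commonly cited yields of about 15 kt for the Hiroshima bomb and 21 kt for the Nagasaki bomb (estimates vary, which is why quoted multiples range from roughly 1,400 to 1,600):

```latex
% Back-of-envelope check, using assumed yields of ~15 kt and ~21 kt:
\[
\frac{50\,\mathrm{MT}}{15\,\mathrm{kt} + 21\,\mathrm{kt}}
  = \frac{50{,}000\,\mathrm{kt}}{36\,\mathrm{kt}} \approx 1{,}400
\]
```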

The detonation was conducted over the island of Novaya Zemlya, about four km above the ground. The Soviets had built the bomb to one-up the US, following through on Nikita Khrushchev’s promise on the floor of the UN General Assembly a year earlier to teach the US a lesson (the B41, the most powerful nuke the US deployed in the early 1960s, had half the yield).

But despite its intimidating features and political context, the RDS-220 produced one of the cleanest nuclear explosions ever, and the device was never tested again. The Soviets had originally intended for the RDS-220 to have a yield equivalent to 100 MT of TNT but decided against it for two reasons.

First: it was a three-stage nuke that weighed 27 tonnes and was only a little smaller than an American school bus. As a result, it couldn’t be delivered using an intercontinental ballistic missile. Maj. Andrei Durnovtsev, a decorated soldier in the Soviet Air Force, modified a Tu-95V bomber to carry the bomb and also flew it on the day of the test. The bomb had been fitted with a parachute (whose manufacture disrupted the domestic nylon hosiery industry) so that, between the bomb’s release and its detonation, the Tu-95V would have enough time to fly 45 km away from the test site. But even then, at the full 100 MT yield, Durnovtsev and his crew would almost certainly have been killed.
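
The timing roughly checks out. Assuming the commonly reported figures – release from about 10.5 km altitude, detonation at about 4 km, and a Tu-95 cruise speed in the neighbourhood of 850 km/h – covering 45 km takes:

```latex
% Assumed figures: 45 km escape distance, ~850 km/h cruise speed.
\[
t_{\text{escape}} \approx \frac{45\,\mathrm{km}}{850\,\mathrm{km/h}}
  \approx 0.053\,\mathrm{h} \approx 3.2\,\mathrm{min}
\]
```

That is about as long as the parachute-retarded fall is reported to have taken.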

To improve the crew’s chances of survival to 50%, engineers halved the yield from 100 MT to 50 MT, which they did by replacing a uranium-238 tamper around the bomb with a lead tamper. In a thermonuclear weapon – which the RDS-220 was – a nuclear fusion reaction is set off inside a container that is explosively compressed by a nuclear fission reaction going off on the outside.

The Soviets took this a step further with Tsar Bomba: the first-stage fission reaction set off a second-stage fusion reaction, which then set off a bigger, third-stage fusion reaction. The original design also included a uranium-238 tamper on the second and third stages, such that fast neutrons emitted by the fusion reactions would’ve kicked off a series of fission reactions in those stages as well. Utter madness. The engineers swapped the uranium-238 tamper for a lead-208 tamper; lead-208 can’t sustain a fission chain reaction and as such has a remarkably low efficiency as a nuclear fuel.

The second reason the RDS-220’s yield was reduced before the test was radioactive fallout. Nuclear fusion is a much cleaner process than nuclear fission (although there are important caveats for fusion-based power generation). If the RDS-220 had gone ahead with the uranium-238 tamper on the second and third stages, its radioactive fallout would’ve accounted for fully one quarter of all the radioactive fallout from all nuclear tests in history, raining down over Soviet territory. The modification resulted in 97% of the bomb’s yield coming from the fusion reactions alone.
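
The 97% figure implies the ‘dirty’ fission share of the detonation was small:

```latex
% The fission share implied by the 97% fusion figure:
\[
(1 - 0.97) \times 50\,\mathrm{MT} = 1.5\,\mathrm{MT}
\]
```

Still about a hundred Hiroshimas’ worth of fission in absolute terms, but remarkably little for a 50 MT device.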

One of the more important people who worked on the bomb was Andrei Sakharov, a noted nuclear physicist and later a dissident from the Soviet Union. Sakharov is credited with developing a practicable design for the thermonuclear weapon, an explosive that could leverage the fusion of hydrogen atoms. In 1955, thanks to Sakharov’s work, the Soviets won the race to detonate a hydrogen bomb dropped from an airplane; until then, the Americans had only detonated hydrogen charges placed on the ground.

It was after the RDS-220 test in 1961 that Sakharov began speaking out against nuclear weapons and the nuclear arms race. He would go on to win the Nobel Peace Prize in 1975. One of his important contributions to the peaceful use of nuclear power was the tokamak, a reactor design he developed with Igor Tamm to undertake controlled nuclear fusion and so generate power. The ITER experiment uses this design.

Source for many details (+ being an interesting firsthand account you should read anyway): here.

Featured image: The RDS-220 hydrogen bomb goes off. Source: YouTube.

Does the neutrino sector violate CP symmetry?

The universe is supposed to contain equal quantities of matter and antimatter. But this isn’t the case: there is way more matter than antimatter around us today. Where did all the antimatter go? Physicists trying to find the answer to this question believe that the universe was born with equal amounts of both. However, the laws of nature that subsequently came into effect were – and are – biased against antimatter for some reason.

In the language of physics, this bias is called CP symmetry violation. CP stands for charge-parity. If every particle in an experiment is substituted with its oppositely charged antiparticle and the whole setup is swapped with its mirror image, then – all other properties being equal – both versions of the experiment should yield the same results. This is what’s called CP symmetry. CPT – charge, parity and time – symmetry is one of the foundational principles of quantum field theory.
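
Schematically – and glossing over quantum-mechanical phases – the combined operation acts on a one-particle state of charge q at position x like so:

```latex
% Schematic action of CP on a one-particle state, phases ignored:
\[
\hat{C}\hat{P}\,\lvert \psi(q, \vec{x}) \rangle = \lvert \psi(-q, -\vec{x}) \rangle
\]
```

CP symmetry holds if the physics of the transformed state is indistinguishable from that of the original.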

Physicists try to explain the antimatter shortage by studying CP symmetry violation because one of the first signs that the universe has a preference for one kind of matter over the other emerged in experiments testing CP symmetry in the mid-20th century. The result of this extensive experimentation is the Standard Model of particle physics, which makes predictions about what kinds of processes will or won’t exhibit CP symmetry violation. Physicists have checked these predictions in experiments and verified them.

However, there are a few processes they’ve been confused by. In one of them, the SM predicts that CP symmetry violation will be observed among particles called neutral B mesons – but it gets the extent of the violation wrong.

This is odd and vexing because as a theory, the SM is one of the best out there, able to predict hundreds of properties and interactions between the elementary particles accurately. Not getting just one detail right is akin to erecting the perfect building only to find the uniformity of its design undone by a misalignment of a few centimetres. It may be fine for practical purposes but it’s not okay when what you’re doing is building a theory, where the idea is to either get everything right or to find out where you’re going wrong.

But even after years of study, physicists aren’t sure where the SM is proving insufficient. The world’s largest particle physics experiment hasn’t been able to help either.

Mesons and kaons

A pair of neutral B mesons can decay into two positively charged muons or two negatively charged muons. According to the SM, the former should be produced in smaller numbers than the latter. In 2010 and 2011, the DØ experiment at Fermilab, Illinois, found that there were indeed fewer positive dimuons being produced – but also that the number deviated from the SM’s prediction by about 1%. Physicists believe this inexplicable deviation could be the result of hitherto undiscovered physical phenomena interfering with the neutral B meson decay process.

This discovery isn’t the only one of its kind. CP violation was first discovered in processes involving particles called kaons in 1964, and has since been found affecting different types of B mesons as well. And just as some processes violate CP symmetry more than the theory says they should, physicists also know of other processes that don’t violate CP symmetry even though the theory allows them to. These are associated with the strong nuclear force, and this difficulty is called the strong CP problem – one of the major unsolved problems of physics.

It is important to understand which sectors, i.e. groups of particles and their attendant processes, violate CP symmetry and which don’t, because physicists need to put all the facts they can get together to find patterns in them: seeds of theories that can explain how the creation of antimatter at par with matter was aborted at the cosmic dawn. This in turn means that we keep investigating all the known sectors in greater detail until we have something that will allow us to look past the SM to a more comprehensive theory of physics.

It is in this context that, in the last few years, another sector has joined this parade: the neutrinos. Neutrinos are extremely hard to trap because they interact with other particles only via the weak nuclear force, which is much weaker than even its name suggests. Though trillions of neutrinos pass through your body every second, perhaps only a handful will interact with its atoms over your lifetime. To surmount this limitation, physicists and engineers have built very large detectors to study neutrinos as they zoom in from all directions: outer space, the Earth’s interior, the Sun, etc.
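
The numbers involved are staggering. A rough estimate, assuming a solar neutrino flux at Earth of about 6 × 10¹⁰ per sq. cm per second and a human body cross-section of roughly 10⁴ sq. cm:

```latex
% Assumed: solar neutrino flux ~6e10 per cm^2 per s; body area ~1e4 cm^2.
\[
6 \times 10^{10}\,\mathrm{cm^{-2}\,s^{-1}} \times 10^{4}\,\mathrm{cm^{2}}
  \approx 6 \times 10^{14}\ \text{neutrinos per second}
\]
```

Almost all of them sail through without leaving a trace.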

Neutrinos exhibit another property, called oscillation. There are three types, or flavours, of neutrinos – called electron, muon and tau (note: an electron neutrino is different from an electron). Neutrinos of one flavour can transform into neutrinos of another flavour at a rate predicted by the SM. The T2K experiment in Japan has been putting this to the test. On October 24, it reported via a paper in the journal Physical Review Letters that it had found signs of CP symmetry violation among neutrinos as well.

A new sector

If neutrinos obeyed CP symmetry, then muon neutrinos should be transforming into electron neutrinos at the same rate at which muon antineutrinos transform into electron antineutrinos. But the two rates seem to differ. Physicists from T2K had reported last year that they had weak evidence of this happening. According to the October 24 paper, the evidence this year is almost twice as strong – but still not strong enough to shake up the research community.
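
T2K actually fits the full three-flavour oscillation model, which contains an explicit CP-violating phase, but the textbook two-flavour approximation is enough to show where the test lies. Here θ is a mixing angle, Δm² a mass-squared difference in eV², L the distance travelled in km and E the neutrino energy in GeV:

```latex
% Two-flavour approximation to the oscillation probability:
\[
P(\nu_\mu \to \nu_e) \approx \sin^2(2\theta)\,
  \sin^2\!\left(\frac{1.27\,\Delta m^2\,L}{E}\right)
\]
```

CP violation would show up as a measurable difference between this probability and the corresponding one for antineutrinos.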

While the trend suggests that T2K will indeed find that the neutrino sector violates CP symmetry as it takes more data, enough experiments in the past have forced physicists to revisit their models after more data punctured an anomaly present in a smaller dataset. We should just wait and watch.

But what if neutrinos do violate CP symmetry? There are major implications, and one of them is historical.

When the C, P and T symmetries were formulated, physicists thought each was absolute: that physical processes couldn’t violate any of them. But in 1956, it was found that the weak nuclear force does not obey C or P symmetry. Physicists were shaken up, but not for long; they quickly rallied and proposed, in 1957, that while C or P symmetry could be broken individually, both together constituted a new and absolute symmetry: CP symmetry. Imagine their heartbreak when James Cronin and Val Fitch found evidence of CP symmetry violation only seven years later.

As mentioned earlier, neutrinos interact with other particles only via the weak nuclear force – which means they don’t abide by C or P symmetries. If within the next decade we find sufficient evidence to claim that the neutrino sector doesn’t abide by CP symmetry either, the world of physics will be shaken up once more, although it’s hard to tell if any more hearts will be broken.

In fact, physicists might just express a newfound interest in mingling with neutrinos because of the essential difference between these particles on the one hand and kaons and B mesons on the other. Neutrinos are fundamental and indivisible whereas both kaons and B mesons are made up of smaller particles called quarks. This is why physicists have been able to explain CP symmetry violations in kaons and B mesons using what is called the quark-mixing model. If processes involving neutrinos are found to violate CP symmetry as well, then physicists will have twice as many sectors as before in which to explore the matter-antimatter problem.

The Wire
October 29, 2018

Train ride

What makes a train ride a train ride? I regularly travel between Bangalore and Chennai, using the morning Shatabdi every time. These train rides are not easy to love even though the Shatabdi’s coaches have giant glass-panelled windows that offer beautiful views at sunrise and sunset.

But the train’s features significantly dent the experience. The seats make the passenger feel like she’s on a flight: they’re arranged two and four to each side with armrests separating each one of them. The tube-lights glow flaccid white and the clientele contains a lot of the corporate class, often combining to give the impression that you’re travelling in a mobile co-working space (the only exception to this rule, from what I’ve seen, is the overnight mail).

Although quite a few families also use the train, its single-day-journey offering often means that you’ve got people travelling light, likely taking the Shatabdi back the next day or the day after, people who – in the Shatabdi’s absence – would likely have flown instead of taking a different train. The tickets aren’t cheap: between 800 and 1,100 rupees for the common class (including catering), so you’re rubbing shoulders with the relatively better off.

I don’t mean here to romanticise the often-poorer conditions in which many of India’s middle and lower classes travel as much as to suggest that the Shatabdi, through the Indian Railways’ efforts to offer a sanitised and expedited experience, simply ends up being clinical in its rendition. Even the Double-decker Express between Bangalore and Chennai, with a travel time only a couple of hours longer than the Shatabdi’s, is more genial. You’ve got tiffin-, snack- and beverage-vendors passing through the aisles every 10 minutes, the train stopping and starting every hour or so, and simply that many more people per coach that it’s compelling to pass the time in conversation with the person sitting next to you. On the Shatabdi, all you want to do is look out the window.

I really miss the trains where you sit by the window on the lower berth, looking out through the powder-blue grills at a blue sky; share food with the people around you (if you’re also carrying Imodium, that is); go to bed grumbling about the berths not being long or wide enough; be careful that your belongings aren’t nicked while you’re dozing; wake up at an ungodly hour to pee, finding your way through the aisle under a dark blue nightlight; and get off at whatever station amid a sea of people instead of at one relatively unpopulated end leading straight to the gate.

Travelling – even in the form of a short journey between two nearby cities – can, and ought to, be a journey of integration, whether with yourself or the world around you. The Shatabdi, though a very useful service, can be isolating to the frequent user. Its utilitarian nature is hard to separate from its form itself, and as a vehicle it is the perfect metaphor for all the things we find undesirable – yet necessary – in our own lives.

Personal notes on the Vinod Dua case

Background information + The Wire‘s statements on the issue:

  1. Panel Headed by Former SC Justice Aftab Alam to Examine Allegation Against Vinod Dua
  2. The Wire’s Handling of the Sexual Harassment Charge Against Vinod Dua

§

  1. I strongly condemn Vinod Dua’s statements vis-à-vis the #MeToo movement and the women who have spoken up against men and toxic masculinity. This is irrespective of The Wire‘s position on this issue.
  2. I deeply resent that Dua has been attempting to defend himself by claiming the allegations against him are efforts to malign his programme, ‘Jan Gan Man Ki Baat’, with The Wire. I hope he understands that I (and I suspect my colleagues, though I do not speak for them) will not defend him or his actions, irrespective of their legitimacy, if he cannot separate himself from his professional responsibilities.
  3. The Wire has not succeeded in claiming ownership of the narrative with the same vehemence that Dua has demonstrated.
  4. The moral and ethical impetus to suspend Dua became overshadowed by a processual constipation. There was no clarity on how to proceed from the beginning, and as the deliberations dragged on, I – as a member of an organisation, not as an individual – found it increasingly difficult to separate right from wrong and/or became increasingly bewildered about whether my own choices were consistently justifiable, from one day to the next. In other words, while an overarching compulsion to act against Dua persisted, I could determine neither its provenance nor its foundation.
  5. The Dua episode highlighted a central quasi-paradox of the #MeToo movement: its calling out of the failure of due process (excluding public naming and shaming in this definition), and therefore its rejection (starting from Raya Sarkar’s List), whereas the institution/reinstitution of due process was the sole recourse readily available to many managers. This conflict is not insurmountable but it required managers – men, in most cases – to introspect through neural pathways that in many cases did not exist.
  6. I will never understand why The Wire allowed Dua to record a video on its platform wherein he would be allowed to speak of the complaints against him. His seniority and his longstanding association with The Wire don’t matter to me and should not, in fact, to anyone in this context.
  7. What The Wire‘s statement denouncing Dua’s words in the video has failed to mention is that the week’s time Dua set for The Wire to conduct its investigation is nonsensical insofar as it wasn’t his place to do so, and it should have been openly refuted.
  8. To be a committed Indian left-liberal is not easy. If you are a man in particular, be ready to regularly confront – and be expected to resolve – cognitive dissonances, (inadvertent) hypocrisies and forgetfulness. If you are not familiar with the lingua franca, invest efforts to master it.
  9. English is an artful language. It is a weapon but it is more resourceful and effective as the sallet, pauldron and sabatons within which you will always be a knight in shining armour. Learn to use it as much as to see past it.
  10. If there is to be one concrete outcome of #MeToo as a sociopolitical movement, though I hope there is more than one, then it must be for employees at all manner of organisations to remake the work-space to eliminate these structural issues, and in the process better organise themselves as units that transcend the formal hierarchies within the organisations themselves.

Covers of the first two books of the Kharkhanas Trilogy. Credit: Wikimedia Commons

Catching up with the Kharkhanas tragedy

Can’t believe I’m so late to the party. It seems that a year ago, Steven Erikson put the Kharkhanas Trilogy on hold, delaying the publication of the third book. The second book, Fall of Light, came out two years ago and was a difficult read in many ways. More than anything else, it contained way more plots than the first book, Forge of Darkness, did, while leaving lots for the last book to explain.

It was like Erikson had lost his way. If he was feeling unsure of himself as a result, I’m glad he’s temporarily shelving the project. It’s not good for readers if books in a series are going to be released with many years in between each instalment but that’s already happened: Forge of Darkness was published in 2012 and Fall of Light, in 2016. Right now, it’s more important for fans like me that Erikson find his mojo and just complete the canon before he dies.

Erikson has also announced (in October 2017) that said mojo quest will take the form of writing the first book in the more-awaited Toblakai (a.k.a. Witness) Trilogy. This is good news because Malazan fans have been more eager to read about the exploits of Karsa Orlong than those of the Tiste races, at least in hindsight and with the hope that the Toblakai story isn’t as frowzy and joyless.

I personally find Karsa to be a dolt and not among my top 50 favourite characters from the series. However, I do find him entertaining and expect the Toblakai Trilogy to be even more so given that the premise is that Karsa is going to rouse the Toblakai in a war against civilisation. Very like the Jaghut story but with less sneering, more cockiness. Hopefully it will prove to be the cure Erikson needs.

Erikson also mentioned that he had been demotivated by the fact that Fall of Light’s sales were lower than those of Forge of Darkness. Though he initially attributed this to readers waiting for him to finish writing the series so they could read it in one go, he found he couldn’t explain the success of Ian Esslemont’s Dancer’s Lament with the same logic: Lament is the first book in the unfinished Path of Ascendancy series. He concluded that readers were simply fatigued by Fall of Light. I wouldn’t blame them: it was even more difficult to read than the midsection of Deadhouse Gates.

I’m also starting to dislike his tendency to include overly garrulous characters whose loquaciousness he seems to use to voice his every thought. After a point (which is quickly reached), it just feels like Erikson is bragging. The Malazan series had the intolerable gas-bags Kruppe and Iskaral Pust. Fall of Light was only made worse by Prazek and Dathenar and their completely unnecessary chapter-long soliloquies; at least Kruppe and Pust did things.

This is another thing I’m wary of in the Toblakai Trilogy, although I doubt my prayers will be answered, because you could see Erikson had fun with Karsa in the Malazan series. In fact, more broadly speaking, I’m wary of any new Erikson epic fantasy book because though I know the world and the stories are going to be fantastic, his writing is tiring and his storytelling is more flawed than it otherwise tends to be when he feels compelled to expose, or soliloquise, rather than narrate.

Actually, forget wary – I’ve almost given up. Shortly before the release of Forge of Darkness, Erikson had written for Tor that he was going to keep the trilogy more traditional and make it less of a critique of the epic fantasy subgenre than the Malazan series was. Look what it turned out to be. And I only say ‘almost’ because I hope Erikson attributes Fall of Light’s tragedy to a different mistake – but then why should he? I found the fencing metaphor from his Tor piece instructive in this regard:

As a long-time fencer I occasionally fight a bout against a beginner. They are all enthusiasm, and often wield their foil like a whip, or a broadsword. Very hard to spar with. Enthusiasm without subtlety is often a painful encounter for yours truly, and I have constant ache in hands from fractured fingers and the like, all injured by a wailing foil or epee. A few of those injuries go back to my own beginning days, when I did plenty of my own flailing about. Believe it or not, that wild style can be effective against an old veteran like me. It’s hard to stay subtle with your weapon’s point when facing an armed Dervish seeking to chop down a tree. The Malazan series wailed and whirled on occasion. But those three million words are behind me now. And hopefully, when looking at my fans, they are more than willing to engage in a more subtle duel, a game of finer points. If not, well, I’m screwed.

On the other hand, I’ve really enjoyed Esslemont’s writing, which thankfully has only improved since Night of Knives. I hope Dancer’s Lament continues this trend. I purchased it this morning and hope I can complete it and the next book, as well as a reread of some of Esslemont’s other books, by the time Erikson’s The God is Not Willing is published.

Credit: Aarón Blanco Tejedor/Unsplash

Climate fear

The Intergovernmental Panel on Climate Change recently published a report exhorting countries committed to the Paris Agreement to limit global warming to 1.5º C above pre-industrial levels by the end of this century. As if this weren’t drastic enough, one study has also shown that if we’re not on track to this target in the next 12 years, we’re likely to cross a point of no return, beyond which we will be unable to keep Earth’s surface from warming by 1.5º C.

In the last decade, the conversation on climate change passed an important milestone: journalists began classifying climate denialism as false balance. After this acknowledgment, editors and reporters would no longer bother speaking to those denying the anthropogenic component of global warming in pursuit of a balanced copy, because climate denial had been recognised as simply wrong. Including such voices wouldn’t add balance to a climate-centred story but in fact remove it.

But with the world inexorably thundering towards warming Earth’s surface by at least 1.5º C, if not more, and with such warming expected to have drastic consequences for civilisation as we know it, I wonder when optimism will also be pulled under the false-balance umbrella. (I have no doubt that it will, so I’m omitting the ‘if’ question here.)

There were a few articles earlier this year, especially in the American media, about whether or not we ought to use the language of fear to spur climate action from people and governments alike. David Biello had excerpted the following line from a new book on the language of climate change in a review for the NYT: “I believe that language can lessen the distance between humans and the world of which we are a part; I believe that it can foster interspecies intimacy and, as a result, care.” But what tone should such language adopt?

A September 2017 study noted:

… the modest research evidence that exists with respect to the use of fear appeals in communicating climate change does not offer adequate empirical evidence – either for or against the efficacy of fear appeals in this context – nor would such evidence adequately address the issue of the appropriateness of fear appeals in climate change communication. … It is also noteworthy that the language of climate change communication is typically that of “communication and engagement,” with little explicit reference to targeted social influence or behaviour change, although this is clearly implied. Hence underlying and intertwined issues here are those of cogent arguments versus largely absent evidence, and effectiveness as distinct from appropriateness. These matters are enmeshed within the broader contours of the contested political, social, and environmental, issues status of climate change, which jostle for attention in a 24/7 media landscape of disturbing and frightening communications concerning the reality, nature, progression, and implications of global climate change.

An older study, from 2009, had it that using the language of fear wouldn’t work because, according to Big Think’s breakdown, it could desensitise the audience, prompt them to trust the messenger less over time, and trigger either denial or some level of nihilism – because what else would you do when “confronted with messages that present risks” that you, individually, can do nothing to mitigate? Most of all, it could distort our (widely) shared vision of a “just world”.

On the other hand, the sheer immediacy of the action needed suggests we should be afraid, lest we become complacent. We need urgent and significant action in both the short and long terms and across a variety of enterprises. Fear also sells: it’s always in demand, irrespective of whether a journalist, a businessman or a politician is selling it. It’s easy, sensational, grabs eyeballs and can be effortlessly communicated. That’s how you get the distasteful maxim “If it bleeds, it leads”.

In light of these concerns, it’s odd that so many news outlets around the world (including The Guardian and The Washington Post) are choosing to advertise the ‘12-year deadline to act’ bit (even Forbes’s takedown piece included the figure in its headline). A deadline is only going to make people more anxious and less able to act. Further, it’s odder that, given the vicious complexities associated with making climate-related estimates, we’re even able to pinpoint a single point of no return instead of identifying a time-range within which we become doomed. And third, I would go so far as to question the ‘doomedness’ itself, because I don’t know if it takes inflections – points after which we lose our ability to make predictions – into account.

Nonetheless, as we get closer to 2030 – the year that hosts the point of no return – and assuming we haven’t done much to keep Earth’s surface from warming by 1.5º C by the century’s close, we’re going to be neck-deep in it. At that point, would it still be fair for journalists, if not anyone else, to remain optimistic and communicate using the language of optimism? And will optimism on our part be taken seriously, considering that, by then, the world will know that Earth’s surface is going to warm by 1.5º C irrespective of everyone’s hopes?

Third: how will we know if optimistic engagement with our audience is even working? Being able to measure this change, and doing so, is important if we are to reform journalism to the extent that newsrooms have a financial incentive to move away from fear-mongering and towards more empathetic, solution-oriented narratives. A major reason “If it bleeds, it leads” holds true is that it makes money; if it didn’t, it would be useless. By measuring changes in audience response, calculating their first-order derivatives and strategising to magnify the desirable trends, newsrooms can also take a step back from the temptations of populism and its climate-unjust tendencies.

Climate change journalism is inherently political and as susceptible to being caught between political faultlines as anything else. This is unlikely to change until the visible effects of anthropogenic global warming are abundant and affecting day-to-day living (of the upper caste/upper class in India and of the first world overall). So between now and then, a lot rests on journalism’s shoulders; journalists as such are uniquely situated in this context because, more than anyone else, we influence people on a day-to-day basis.

Apropos the first two questions: After 2030, I suspect many people will simply raise the bar, hoping that some action can be taken in the next seven decades to keep warming below 2º C instead of 1.5º C. Journalists will make up both the first and last lines of defence in keeping humanity at large from thinking that it has another shot at saving itself. This will be tricky: to inspire optimism and prompt people to act even while constantly reminding readers that we’ve fucked up like never before. I’d start by celebrating the melancholic joy – perhaps as in Walt Whitman’s Leaves of Grass (1891) – of lesser condemnations.

To this end, journalists should also be regularly retrained – say, once every five years – on where climate science currently stands, what audiences in different markets feel about it and why, and what kind of language reporters and editors can use to engage with them. If optimism is to remain effective further into the 21st century, collective action is necessary on the part of journalists around the world as well – just the way, for example, we recognise certain ways to report stories of sexual assault, data breaches, etc.

What the Nobel Prizes are not

The winners of this year’s Nobel Prizes are being announced this week. The prizes are an opportunity to discover new areas of research, and developments there that scientists consider particularly notable. In this endeavour, it is equally necessary to remember what the Nobel Prizes are not.

For starters, the Nobel Prizes are not lenses through which to view all scientific pursuit. It is important for everyone – scientists and non-scientists alike – to not take the Nobel Prizes too seriously.

The prizes have been awarded to white men from Europe and the US most of the time, across the medicine, physics and chemistry categories. This presents a lopsided view of how scientific research has been undertaken in the world. Many governments take pride in the fact that one of their citizens has been awarded this prize, and often advertise the strength of their research community by boasting of the number of Nobel laureates in their ranks. This way, the prizes have become a marker of eminence.

However, this should not blind us to the fact that there are equally brilliant scientists in other parts of the world who have done, and are doing, great work. Even research institutions play this game; for example, this is what the Institute for Advanced Study in Princeton, New Jersey, says on its website:

The Institute’s mission and culture have produced an exceptional record of achievement. Among its Faculty and Members are 33 Nobel Laureates, 42 of the 60 Fields Medalists, and 17 of the 19 Abel Prize Laureates, as well as many MacArthur Fellows and Wolf Prize winners.

What the prizes are

Winning a Nobel Prize may be a good thing. But not winning a Nobel Prize is not a bad thing. That is the perspective often lost in conversations about the quality of scientific research. When the Government of India expresses a desire to have an Indian scientist win a Nobel Prize in the next decade, it passively admits that it does not consider any other marker of quality to be worth endorsing. Otherwise, there are numerous ways to make the statement that the quality of Indian research is on par with the rest of the world’s (if not better in some areas).

In this sense, what the Nobel Prizes afford is an easy way out. Consider the following analogy: when scientists are being considered for promotions, evaluators frequently ask whether a scientist in question has published in “prestigious” journals like Nature, Science, Cell, etc. If the scientist has, it is immediately assumed that the scientist is undertaking good research. Notwithstanding the fact that supposedly “prestigious” journals frequently publish bad science, this process of evaluation is unfair to scientists who publish in other peer-reviewed journals and who are doing equally good, if not better, work. Just the way we need to pay less attention to which journals scientists are publishing in and instead start evaluating their research directly, we also need to pay less attention to who is winning Nobel Prizes and instead assess scientists’ work, as well as the communities to which the scientists belong, directly.

Obviously this method of evaluation is more arduous and cumbersome – but it is also the fairer way to do it. Now the question arises: is it more important to be fair or to be quick? On-time assessments and rewards are important, particularly in a country where resource optimisation carries greater benefits as well as where the population of young scientists is higher than in most countries; justice delayed is justice denied, after all. At the same time, instead of settling for one or the other way, why not ask for both methods at once: to be fair and to be quick at the same time? Again, this is a more difficult way of evaluating research than the methods we currently employ, but in the longer run, it will serve all scientists as well as science better in all parts of the world.

Skewed representation of ‘achievers’

Speaking of global representation: this is another area where the Nobel Foundation has faltered. It has ensured that the Nobel Prizes have accrued immense prestige but it has not simultaneously ensured that the scientists that it deems fit to adorn that prestige have been selected equally from all parts of the world. Apart from favouring white scientists from the US and Europe, the Nobel Prizes have also ignored the contributions of women scientists. Thus far, only two women have won the physics prize (out of 206), four women the chemistry prize (out of 177) and 12 women the medicine prize (out of 214).

One defence often advanced to explain this bias is that the Nobel Prizes typically reward scientific and technological achievements that have passed the test of time – achievements that have been repeatedly validated and whose usefulness for the common people has been demonstrated. As a result, the prizes can be understood to reward research done in the past – and in that past, women did not make up a significant portion of the scientific workforce. Perhaps more women will be awarded in the years ahead.

This argument holds water, but only in a very leaky bucket. Many women have been passed over for the Nobel Prizes when they should not have been, and the Nobel Committee, which finalises each year’s laureates, is in no position to explain why. (Famous omissions include Rosalind Franklin, Vera Rubin and Jocelyn Bell Burnell.) The defence becomes even more meaningless when you ask why so few people from other parts of the world have been awarded the prize. This is because the Nobel Prizes are a fundamentally western – even Eurocentric – institution in two important ways.

First, they predominantly acknowledge and recognise scientific and technological developments that the prize-pickers are familiar with – and the prize-pickers are a group made up of previous laureates and a committee of Swedish scientists. This group is only going to acknowledge research that it is already familiar with, undertaken by people its own members have heard of. It is not a democratic organisation. The same phenomenon has already been documented in the editorial boards of scientific journals, with the effect that scientific research undertaken with local needs in mind often finds dismal representation in those journals.

Second, according to the foundation that awards them, the Nobel Prizes are designated for individuals or groups whose work has conferred the “greatest benefit on mankind”. For the sciences, how do you determine such work? Going one step further, how do we evaluate the legitimacy and reliability of scientific work at all? Answer: we check whether the work has followed certain rules, passed certain checks, received the approval of the author’s peers, etc. All of these are encompassed in the modern scientific publishing process: a scientist describes the work they have done in a paper, submits the paper to a journal, the journal gets the paper reviewed by the scientist’s peers, and once it passes review, the paper is published. It is only when a paper is published that most people consider the research described in it to be worth their attention. And the Nobel Prizes – rather, the people who award them – implicitly trust the modern scientific publishing process even though the foundation itself is not obligated to, essentially as a matter of convenience.

However, what about the knowledge that is not published in such papers? More to the point, what about the knowledge that is not published in the few journals that get a disproportionate amount of attention (a.k.a. the “prestige” titles like Nature, Science and Cell)? Obviously there are a lot of quacks and cranks whose ideas are filtered out in this process, but what about scientists conducting research in resource-poor economies who simply can’t afford the fancy journals?

What about scientists and other academics who are improving previously published research to be more sensitive to the local conditions in which it is applied? What about those specialists who are unearthing new knowledge that could be robust but which is not being considered as such simply because they are not scientists – such as farmers? It is very difficult for these people to be exposed to scholars in other parts of the world and for the knowledge they have helped create/produce to be discovered by other people. The opportunity for such interactions is diminished further when the research conducted is not in English.

In effect, the Nobel Prizes highlight people and research from one small subset of the world. A lot of people, a lot of regions, a lot of languages and a lot of expertise are excluded from this subset. As the prizes are announced one by one, we need to bear these limitations in mind and choose our words carefully, so as not to exalt the prizewinners too much or downplay the contributions of numerous others in the same field as well as in other fields. More importantly, we must not assume that the Nobel Prizes are any kind of crowning achievement.

The Wire
October 1, 2018