It was recently my birthday. I turned 30. The celebrations were muted, if they happened at all, because there’s something of a moment when you exit your twenties and the first digit of your age changes from 2 to 3. On that day, it seemed more pertinent than ever to think of the occasion as ‘just another orbit around the Sun’. To further blunt the moment, I told myself I was only turning 3.94 galactic seconds old, no biggie.

Time is a strange thing, but let’s not belabour the point. Two statements should suffice to spotlight its strangeness. First, mathematics does not cognise time as an entity in and of itself, beyond thermodynamics: heat flows from a hotter object to a cooler one. The universe was really, really hot 13.8 billion years ago. One day, many billions of years from now, it will grow really, really cold and – somewhere in the maze of our equations – time will die. On that day, your birthday will have no meaning. At long last.

Second, there is no absolute time, unless you arbitrarily fix one, because the experience of time is influenced by so many things, such as the speed at which you’re moving and your position in a gravitational field. This experience – which scientists have measured using atomic clocks in space – comes straight from Albert Einstein’s special and general theories of relativity.

The last twelve months witnessed a lot of discussion among scientists on time’s nature and properties. As at the start of 2018, the arrow of time remains just as mysterious, and time-travel just as fascinating. It also matters that our experience of time is so essentially subjective, so much so that we wouldn’t have to measure time if we weren’t also trying to keep track of something else… Of what value is an 8 am on a Monday if it didn’t portend the opportunities of the next 14 hours? Of what value is life if you’re not going to die?

Of course, when almost every encounter with this dazzling subject ends in poignant moments of wonder, there’s a good chance the other encounters end in confusion.

Every object that exists experiences a moment called ‘now’. But you’re not always going to be able to have all the information about all those experiences simultaneously in your ‘now’. This condition owes itself to the speed of light: a fixed constant throughout the whole universe.

If you’re looking at a tree 10m away, light scattered by the tree is going to reach your eyes in 0.0000000333564095 s (assuming the speed of light is the same in the troposphere and in a vacuum). In other words, you can get status updates about the tree once every 0.0000000333564095 s. This delay is practically meaningless and can be neglected without consequence.

But when you’re corresponding with a spacecraft billions of kilometres away, the signals are going to take many hours each way. Case in point: the New Horizons space-probe, a NASA mission that flew past Pluto in 2015. It’s currently 6.6 billion km from Earth. The one-way signal time – i.e. the time taken by signals sent from Earth to reach the probe, or by signals sent from the probe to reach Earth – is a little over 6 hours and 6 minutes. In this picture, the probe sends an update and receives instructions on what to do next 12 hours and 12 minutes later.

It’s a life lived in the 12-hour long moment.
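Both delays – the tree’s tens of nanoseconds and the probe’s six-odd hours – fall out of the same division of distance by the speed of light. A minimal sketch (the distances are the illustrative figures used above, not live ephemeris data):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def one_way_light_time(distance_m: float) -> float:
    """Seconds taken by light to cover the given distance in vacuum."""
    return distance_m / C

tree = one_way_light_time(10)              # a tree 10 m away
new_horizons = one_way_light_time(6.6e12)  # ~6.6 billion km, per the figure above

print(f"Tree: {tree:.10f} s")                        # ~3.34e-8 s
print(f"New Horizons: {new_horizons / 3600:.2f} h")  # ~6.1 hours
```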

How do you measure time here? You’ve got two frames of reference: Earth and the probe, tracked in Coordinated Universal Time (UTC) and spacecraft-event time (SCET). These two timezones, in a manner of speaking, can be converted into each other by adding or subtracting the time taken by light to travel between them. For example, if mission control transmits a signal to New Horizons at 12 am UTC, it’s going to reach the probe at 6:06 am UTC.
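In code, the conversion is nothing more than adding or subtracting the one-way light time. A sketch using the 6 h 6 min figure for New Horizons (the transmission date is made up for illustration):

```python
from datetime import datetime, timedelta

# One-way light time between Earth and the probe, per the figure above.
ONE_WAY = timedelta(hours=6, minutes=6)

def utc_to_scet(transmit_utc: datetime) -> datetime:
    """When a signal transmitted at transmit_utc (Earth) reaches the probe."""
    return transmit_utc + ONE_WAY

def scet_to_utc(event_scet: datetime) -> datetime:
    """When an event that occurred at event_scet (probe) is observed on Earth."""
    return event_scet + ONE_WAY

sent = datetime(2019, 1, 1, 0, 0)           # 12 am UTC
print(utc_to_scet(sent).strftime("%H:%M"))  # 06:06
```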

Where it gets a bit trickier is when a probe records an event in SCET, and mission control has to figure out when exactly the event occurred in UTC. On January 24, 1986, the Voyager 2 probe studied Uranus (from a distance 11.5x the planet’s radius), and recorded a Bernstein emission at 1315 SCET. I’ll let you figure out the exact time at which this event occurred from the point of view of an astronomer working in Ooty.

Evidently, we’re always somewhere in between confusion and wonder and, to be honest, it’s not a bad place to be at all. But on an orthogonal axis, we’re between the profound and the mundane at the same time. You can “O, wonder! How many goodly creatures are there here! How beauteous mankind is!” all you want, it’s still going to be 11 pm and time to catch the last metro home.

These are two different universes of discourse, though to their credit they’re not mutually exclusive. And the only choice you’re likely to have is between being condemned to visit all its states or celebrating the inherently unknowable adventure it could be.

Here’s hoping your 2019 goes all over this graph.


The Print should not have published its list of “intellectuals pick their successors” at all. Its editors knew that it had no women and were aware of how that was a problem. They also had to have known that the list was predominantly Hindu and upper-caste. But by publishing it, The Print signalled that it still wanted to attract responses, to display to the world that it had attempted such an exercise, to salvage from its complete failure something that it could still publish and draw attention to itself, and to finally broadcast its atonement by publishing other pieces that corrected its mistakes. The whole exercise stinks.

The latest such piece of atonement is a list of women intellectuals curated by Salil Tripathi. He writes in his piece:

I am glad ThePrint produced its list; it made us think of what such a list should look like. The lists won’t change anything. But if such a list leads us to step out of our comfort zones and read—or familiarise ourselves with—the works of those we haven’t known, it would have made an interesting contribution.

I’m curious about how the women on Tripathi’s list feel about their inclusion in such an exercise, considering its flawed provenance. I myself smell something patronising, though I’m unable to put my finger on it.

To be sure, Tripathi’s is a resourceful compilation. I’ve read the writing of some of the women listed there and they’re all must-reads. But it is disappointing that Tripathi’s list didn’t exist until The Print’s did, and The Print’s list wouldn’t have existed if not for the apathy of its editor(s). Even by publishing pieces that call out its own mistakes, The Print hasn’t exonerated itself. It is still only engaging in a profoundly useless exercise: the cycle it has initiated and is participating in is of its own making, a bad case of a journalism platform fabricating the news instead of reporting it.

I can think of at least four different words newsrooms use to describe the bundles of content they work with: story, piece, article and copy (‘content’ itself isn’t one of them). With a few exceptions, all four labels are used interchangeably. ‘Copy’ is perhaps the most common, especially since most copy-editors use that name for the thing they work their magic on, but so is ‘story’, for its gentle glamour. The question is whether this orgy of labels is actually a problem or just a triviality.

(A lot more people in J-schools say they “want to produce longform journalism” than the number engaged in finding good things to write about first. That an aspiration like this even exists – and was nurtured without question at the J-school I attended – was the first sign that the glamour had overtaken the substance.)

Put another way: is dissecting the labels a useful way of looking at the world?

Some time in 2012, my heresy about the ‘Columbia style’ took root – my name for a ‘narrative technique’ that begins by introducing a protagonist, follows them for three or four big paragraphs, and only then introduces the meat of the matter through the protagonist’s pain. I called it the ‘Columbia style’ at the time because many of those who practised it in India had graduated from the Columbia School of Journalism.

When a piece is written like this, it is decidedly a story, at least in the mind of the writer. As many people have pointed out, stories have a beginning, a middle and an end. ‘Columbia style’ pieces do too, as well as a protagonist through whose eyes the reader is to find footing and guidance, and a conclusion that I bet ends with eyes set on the horizon with a heart full of hope.

However, most practitioners of this form don’t use it right. To work the ‘Columbia style’, you need a very, very good story first – one that lends itself to dramatic narratives. E.g., I’ve always thought stories of agrarian distress shouldn’t be laid out this way but they often are. Second, the ‘Columbia style’ implicitly demands that the piece be a feature article, 2,000 words or longer (in The Wire’s lexicon). So it would be technically sound if a writer adopted the ‘Columbia style’ after they’d found a suitable narrative, but what usually happens is that journalists adopt the style first and then set about reporting, eventually producing something with little substance and a lot of fluff.

Earlier today, Jay Rosen – with a short thread on Twitter – laid out his issues with framing the news as stories. He argued that when journalists become storytellers, they open themselves up to being lured by the “seduction of the narrative” and let the story’s needs edge out space that should be devoted to other “central” components of good journalism: “truth-telling, grounding public conversation in fact, verification, listening”, etc. To be sure, and at the risk of repetition, this criticism is directed at those who report to write stories and not at those who set out to report, find a story and then determine if it can be narrated in a certain form.

To paraphrase Jeff Jarvis from his Christmas Day article that Rosen cites, we must ask ourselves “whether our compulsion to make news compelling (yes, entertaining) leads us astray.” He elaborates: 

The real problem is that we have let our means of production determine our mission rather than the other way around. I hear journalists say their primary role is as storytellers. No. I hear them say their task is to fill a product – a newspaper or magazine or show. No. Our job is to inform the public conversation. And now that we can hear people talking and join in with them, I’ve updated my definition of journalism to this: to convene communities into civil, informed, and productive conversation. This means our first job is not to write but to listen to that conversation so we can find what it needs to function. Then we report. Then we write – or convene or teach or use other forms now available to us. First listener, not storyteller.

In short: when we intend to tell stories, we close ourselves off to parts of the conversation that we don’t think can fulfil the story’s needs.

I find some consolation in this conclusion because I’ve felt similarly before and it’s nice to discover Rosen and Jarvis have as well: science stories – particularly in the ‘Columbia style’ – are typically about science’s connection to the human condition in some form or other. It’s quite difficult to frame a science-related issue as a story that doesn’t have this connection. Stories are fundamentally human. So when reporters preemptively gravitate towards this narrative style, they also preemptively – and often heedlessly – begin to ignore details that may not help them write these stories, details that have nothing to do with the human condition. A lot more on this here.

There are of course those labelled ‘great storytellers’, and you might argue that even if you didn’t have an awesome story to tell, you could spin an ordinary one that way if you excelled at the telling. This argument seems to have some logic in theory but I’ve never seen it done in practice.

Postscript: I’m also glad that Jarvis said the following (emphasis added): “… our first job is not to write but to listen to that conversation so we can find what it needs to function. Then we report. Then we write – or convene or teach or use other forms now available to us.”

In his definition, a lot more people become journalists because it doesn’t presuppose novelty of information and because it decouples the activity from how we choose to disseminate what we’ve found. The latter isn’t only about multimedia journalism but also about context-specific measures. For example, I consider science explainers to be a form of science journalism because science education is a bit of a disaster in India.

In all the DnD games I’ve played, I’ve felt there’s a tension between allowing the story to progress and the characters all helping each other participate in that progression. For example, we as players play a game because we want to enact a story even while we as a group compose its microscopic details. So the players’ intention is aligned with the DM’s.

However, I often feel that stories go very quickly from the introductory session to their more important parts, forcing characters to cooperate for the story to progress, irrespective of whether the characters have sufficient incentive to do so. In other cases, the story might be solid but the other players will have essayed their characters in such a way that your character begins to resemble a tool to solve a problem rather than persist as a person of feelings.

One way or another, either the character feels clumsy to the player or the player begins to inject their resentment into their character – and ultimately both are opposed to the DM’s intention.

Role-playing is fun because it is – among other things – an exercise in handling this tension well. Successful role-playing is not just wearing the skin of your character but also contributing to the game without tripping it up. Some players are better at it than others.

One way this proficiency could shine through is when you’re good at keeping your in-character responses to in-game stimuli both appropriate and useful at the same time. Other players – such as myself – reveal they might be struggling with it when their spontaneous responses are out of line and they don’t realise it until after a session concludes.

Recently, I was able to identify a few attributes that could be interfering with how I handle this tension, to the extent that my characters often seem difficult to play with. One of them is my cynicism. I’m more cynical in real-life than a person of my age ought to be, a tendency fuelled by the way I think about the world as a journalist. It’s difficult to hope and hope again when cynicism pays off as often as it does. It’s also made for a surprisingly successful way to cope with the near-constant influx of bad news.

Some would argue that we must be skeptical instead of cynical, and nurture the ability to hope even in the face of a shit-stream. I think that’s simply using language to blunt our sharper edges. I have hope, too, for better days but I’m not surprised when they don’t turn up.

Anyway, such cynicism doesn’t directly drip down into my characters – it’s the way I handle situations as a result that does. My characters don’t purport to be cynical people but they inadvertently behave like cynical people do.

Often this manifests in two specific ways: a) unrelenting mistrust in the face of novelty, and b) evaluating each situation from scratch without factoring in the character’s emotional trajectory thus far. They regularly conflict with the DM’s intentions a) by conjuring mini-instances in the game where other players have to expend time to persuade me of what was always going to be the outcome, and b) by rendering my character somewhat, if not entirely, unpredictable (thanks to Chitralekha Manohar for helping flesh this out).

I never doubted that playing DnD would contribute to my overall character development (pun obviously intended), so this is an opportunity to figure out how I can chamfer my cynicism – and some ego with it – in the real and the fantastic at once. The question is how.

On a more superficial level, it requires acknowledging that DnD is a distinctly different form of storytelling than reading a novel is, and more similar to writing one. A player might not know what will come next – maybe not even what ought to come – as much as the DM will. But this information shortage doesn’t free characters from the responsibility to keep the plot moving.

As a result, there’s no way their responses to different stimuli are going to be 100% optimal all the time. They’re likely to be mostly suboptimal, and that’s okay. It’s like a novel-writing exercise but it’s never going to be identical to it. As Chitralekha said,

Because the dice play such a big role in DnD, meaning is made retrospectively, and events themselves are not logical. If you’re supposed to be super-stealthy but roll a 1, you retrospectively explain it saying, for example, that your shoe laces weren’t tied.
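For what it’s worth, that dice mechanic can be sketched in a few lines. This is a toy skill check, not any official ruleset – the DC, the modifier and the ‘natural 1 always fails’ rule are all assumptions for illustration:

```python
import random

def skill_check(modifier: int, dc: int, rng=random) -> tuple[int, bool]:
    """Toy d20 skill check: roll 1d20, add the character's modifier,
    compare against the difficulty class (DC). Here a natural 1 fails
    outright -- the roll you then explain retrospectively ('my
    shoelaces weren't tied')."""
    roll = rng.randint(1, 20)
    if roll == 1:
        return roll, False
    return roll, roll + modifier >= dc

# A super-stealthy character (+5) sneaking past a guard (DC 15) can
# still blow it on a bad roll -- the meaning is made after the dice.
roll, success = skill_check(modifier=5, dc=15)
print(roll, success)
```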

On a deeper level, the process of being a good player and a predictable character (as a proxy for an understandable character) can be catalysed if there exists a fixed set of rules to play ‘right’. Resolving the overall tension itself (as opposed to solving any other player/character problems) holds some clues to this. To recall:

[Mistrust in the face of novelty] regularly conflicts with the DM’s intentions … by conjuring mini-instances in the game where other players have to expend time to persuade me of what was always going to be the outcome.

Chitralekha described this situation as one in which the character who needs to be persuaded hijacks the DM’s control of the game and becomes a DM themselves. I like this way of framing the problem because it doesn’t propose to ‘solve’ the tension by getting rid of it, but acknowledges its existence and hints at a reasonable way out. For instance, as she said, a character can’t assume this mantle without also assuming the DM’s traditional responsibilities: to keep the story moving, to encourage the characters to act in a way that the DM wants them to but without forcing them to do so, and to ensure the players are having a good time.

Everyone must acknowledge that these conditions actively disqualify characters from being asocial, antisocial, attention-seeking or engaging in any socially non-cooperative behaviour in general. These are all flaws – except maybe the first one – that we encounter in other people in many ways on a regular basis. But in a DnD game, they give rise to decisions on the part of characters that are counterproductive to the DM’s wishes.

Let’s abstract this: is there a guiding rationale that unites the DM’s responsibilities? Chitralekha identified one: DMs always reward good behaviour and punish – or at least not reward – bad behaviour in fair and appropriate ways. Are there others, and if so, what could they be?

I haven’t ever been more interested in anything than physics and epic fantasy. So I thought it might be interesting to think about whether they complement each other.

Being a science writer writing about things like condensed-matter physics and high-energy physics has taught me a lot about these subjects. But more importantly, in the course of repeatedly interrogating new findings in these subjects and explaining them from the ground up, the vocation has allowed me a glimpse at what the foundations of reality might look like.

Many areas of scientific endeavour, theoretical and experimental, are way more precise than others. We know more about how a chemical reaction between two well-characterised compounds will proceed in different environments than about what the wave-function defining their most fundamental constituents – individual particles – actually is. However, this disparity doesn’t usually prevent us from working with the objects to which their underlying theories apply.

Whether a wave-function is a mathematical object or not doesn’t matter to an engineer building a bridge. It’s not a useful way for them to look at the world. Put another way, science doesn’t provide – or hasn’t yet provided – a single, unified way to make sense of reality. Where we choose to draw a line in the sand between ‘true’ and ‘not yet true’ varies from one setting to another.

These lines aren’t always drawn simply according to the availability and quality of data, and that’s not a bad thing either. At their roots, our choices about what could be ‘useful’ are guided also by whether what we’ve found is tractable in our theories, abides by conditions like falsifiability, corresponds well with older ideas used to study the problem, maybe even how far the finding is removed from subjective judgments of plausibility. Many physicists also use aesthetic tests like naturalness and beauty to determine if what they’ve found is the proverbial it.

Sheldon Cooper, one of the protagonists of the TV sitcom The Big Bang Theory, agonises over the perfect spot available to sit in his living room. He considers the glare from the TV, heat sources in winter, breeze from open windows and the distances he’d have to walk to the door and the kitchen. After optimising for all of them, he picks a spot, calling it his (0,0,0,0) – the origin of his coordinate system.

Imagine a world bereft of any of these requirements. How would you determine where your (0,0,0,0) is? Without the stronger, more certain constraints typical of the world dominated by the gravitational force, the world of the really small – ruled by the other three forces and the quantum mechanics of particles – doesn’t offer such easy grip. Objectivity alone doesn’t help draw the lines here. There could be many (0,0,0,0)s, and just as many paths to them.

Of course, there are some facts, fixed and immutable and holding up a guiding light for theories marching in the darkness. And some theories do rise up to meet them, like a tangled mass of fairy lights resolving into clearer view when they’re strung between hooks. Obviously there’s a line somewhere in between the wave-function and the bridge where reality starts to make more sense.

But on both sides of this line, perhaps one side more than the other, there’s some jugaad at work that keeps the daydreaming idealism of objectivity at bay. The world is what we’ve made work – not something that fell into our collective lap. And that it doesn’t simply condense out of the fog of war is much the way fantasy works, too, although many of us find that easier to believe about science than fantasy itself.

Dredge this ‘make it happen’ mantra up to the macroscopic realm guided by classical physics and apply it to everything you see around you. It should be obvious then. The objects populating our view of reality are there because we’ve made them work. They’re not unique, they’re not irreplaceable. We use them – certain avatars of them – the way we do because they’re just what we need to make the world work the way we want it to.

The base-10 number system is an illustrative example. You’re so used to counting in multiples of 10 and finding it easier to remember 160 and 1,800 instead of 162 and 1,794 that you often don’t stop to think there are other ways to measure things – maybe even easier ones. Look at base-6: you already brush up against it when reading a clock, where the tens digit of the minutes only ever runs from 0 to 5. And instead of involving all your fingers at once, counting in base-6 takes up only one hand for the units and one finger from the other for each six.
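For the curious, here’s a minimal sketch of what those ‘awkward’ base-10 numbers look like when rendered in another base:

```python
def to_base(n: int, base: int) -> str:
    """Represent a non-negative integer as a string of digits in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)       # peel off the least significant digit
        digits.append(str(r))
    return "".join(reversed(digits))

# The two examples above, rendered in base 6:
print(to_base(162, 6))   # 430
print(to_base(1794, 6))  # 12150
```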

It’s the Copernican principle all over again: just like Earth isn’t the centre of the universe, carrying special or significant value simply by virtue of its location, the ways we’ve developed to study the universe aren’t especially meaningful simply by virtue of our choices. This is uniquely – if not especially – true with research exploring the smallest constituents of reality.

Humans’ relationship with science is humans’ relationship with fantasy as well. Internal consistency and coherence matter in fantasy as much as they do in science – and ‘anything goes’ is equally antithetical to both. Their proponents ensure this is the case by following some rules, identifying those mechanisms that keep these rules from being broken, and deploying them over and over in the investigation of new possibilities.

Most of all, we use only that which we need and discard the rest because it’s important that we make do (à la the jugaad of the imagination) to make it work. Physics just happens to be a more useful way to study the natural realm because one of its rules is to submit to information gleaned from empirical interactions with that realm. But what we’ve seen thus far of the foundations of physics should remind us that that doesn’t make it more virtuous.

Physics’s guiding lights lead us through one darkness. Fantasy simply, yet importantly, assails another. The greatest thing about fantasy fiction, distinct from all other forms, is that it allows us to create new worlds completely divorced from our own, and lets us make of them what we want to. And in populating these worlds, we’re confronted with a variety of choices familiarly distanced from objectivism – a moment in which we begin a journey inward, into the maze of our memories, aspirations and the human condition we inhabit, inasmuch as the fundament of physics beckoned us on a journey towards a truth that existed outside of us.

It’s only important that they’re invoked in their respective domains and not outside of them – at least not to the extent that they interfere with each other’s purposes. Some creative thinking is important in physics as well, especially when you’re looking for your (0,0,0,0) with no constraints whatsoever, a.k.a. groping in the dark. And some physics is important in fantasy as well. Otherwise, why would dragons flap their wings?

John Horgan asked 15 people – scientists, social psychologists, philosophers – one question, in a seemingly clever effort to mark the end of 2018:

Unless you are too stoned or enlightened to care, you are probably dissatisfied with the world as it is. In that case, you should have a vision of the world as you would like it to be. This better world is your utopia. That, at any rate, is the premise of a question I’ve been asking scientists and other thinkers lately: What’s your utopia?

Some of the answers are insipid, others are quite revealing and most of them are somewhere in between. But look closer and you might notice that all of them engage with the possibilities in front of them largely on one of three levels: really personal (Solomon, Woit, Maudlin, Volk, Holt), from a great distance (Hossenfelder, Aaronson, Wolfram, Rees, Herbert) or in abstract terms (Chomsky, Dawkins, Deutsch, Hanson).

Some of them also talk about climate change and economic distress insofar as day-to-day issues are concerned. But by and large – with the exceptions of Hossenfelder and Aaronson – there seems to be no deeper reflection on sociopolitical issues, and whether the utopias they seek will make the world a better place for them alone or for all of us.

To be fair, it’s probably the format that doesn’t lend itself to lengthy analyses of our times – what exactly they’d like to improve, why, and how they’d go about it. Most answers to Horgan’s question are pretty short; it would be fair to assume Horgan gave his interlocutors a small word-limit so that 15 such answers wouldn’t be that long a read. More importantly, the reason I want to cut the answers any slack is that all the people on the list are (or have been) smart cookies.

Slack for what, eh? At this point, look even closer and tell me you don’t find it odd that there’s just one woman in the list of 15 intellectuals, odder still that all men and women on the list are white people, and odder yet that they’re all from developed nations.

Now ask yourself whether this could be why none of the utopias seems concerned with issues that assail non-white, non-male, non-first-world scholars, that too not because they’re scholars but at a more essential level: because they’re non-white, non-male, non-first-world people. Apart from Hossenfelder and Aaronson (and maybe Chomsky), I don’t even find reason to believe that the intellectuals quoted were thinking of a world beyond their neighbourhood.

I’m aware my anger is more entropy than heat in this context. Horgan probably simply asked 15 famous people and requested they keep their replies short. The famous people responded, and Horgan compiled the responses into an interesting article for Scientific American. The article isn’t going to change the world, influence leaders (I think) or contribute to governance and policymaking. An interesting read is all it is. But even then it’s not okay that the list has zero cultural diversity and the absolute bare minimum of gender diversity.

If anything, the list could be useful as ‘Exhibit A’ in favour of those with the energy and articulacy to repeatedly push back against the dispiriting assertions of biologist-blogger Jerry Coyne. ICYMI, Coyne recently ridiculed a Princeton University course called ‘Science After Feminism’, which – among other things – proposes to answer two questions:

Is science gendered, racialized, ableist, and classist?

Does the presence or absence of women (and other marginalized individuals) lead to the production of different kinds of scientific knowledge?


These questions have come to symbolise a kind of detector. You hold it up to a person and, depending on how they answer, you can tell which of the following groups they belong to:

  1. ‘No’ and ‘no’ because there’s no evidence to back these claims up ⇒ you’re one of the devout quants who lives and dies in a data bubble, refusing to acknowledge the effect of cultural forces in our lives
  2. ‘No’ and ‘no’ because science is not the same as scientists ⇒ you’re one of the rationalists who believes science exists as an absolute truth incorruptible by the practice of some humans
  3. ‘Yes’ and ‘yes’ because science is meaningless outside of its practice ⇒ you’re one of the rationalists who believes science’s relationship with humans goes deeper than just being a source of knowledge

Coyne is of the second type. (His post even exemplifies the sort of pedantry the people of this group resort to in arguments.)

Horgan’s list goes to show what a difference the representation of non-male and marginalised members of society in the scientific enterprise can make. They don’t simply improve nominal diversity and affirmative action. More seriously, their inclusion influences what knowledge we do and don’t produce through time, and that in turn affects the power-relations within and between different societies. Coyne fails to see that while there could be a scientific ideal for each scientist to aspire towards, the history of science reveals that what we’ve known as science has been inseparable from the people we’ve called scientists at the time.

On December 18, Manmohan Singh took a jibe at Narendra Modi for not holding any press conferences in his term as PM. To get in on the action, 15 people at The Wire (including myself) pitched 15 questions we’d like to ask Modi if we ever got the chance. The full list is here to view.

My question, eleventh on the list, has been whittled down – understandably so, since the original text was 182 words long. If I’d asked it during a presser or wherever, I’d have sounded like one of those gasbags we love to hate: the guy hogging the mic who sounds like he may not have a question at all and is really just bragging about how much he knows.

With apologies for that, this is the question I’d like to ask Modi should I get the chance (as of this moment). I’ve presented it in full.

Many scientists and science academies have protested that lawmakers’ words and actions – including your own – are negating India’s efforts to improve scientific temper in society. Your government is increasing spending for ‘conventional’ science and for research on gaumutra at the same time.

Government-funded research on these projects presents neither accessible evidence for claims nor sources of data, and experiments don’t have any protocols to follow. This is especially dangerous in healthcare (ayurveda, BGR 34, homeopathy, etc.). On the other hand, your government constantly wants scientists to deliver more and win Nobel Prizes while also working towards “national priorities” that you refuse to set in stone.

Your ministers say absolute spending on R&D has been the highest in your term but forget that it’s an abysmal fraction (0.7%) of the GDP. All science departments received more money in the latest Union budget – but while the MST got a 6.1% hike, the AYUSH ministry got a 13% hike, and postdocs around the country have protested at least twice for better stipends.

How would you address these contradictions?

James English had a wonderful piece in Public Books recently, discussing how the Nobel Prize for literature:

  1. Is a prize that has always struggled to be meaningful, given how its laureates are shortlisted, the capital that incentivises its exercise and the historical Eurocentric elitism of its adjudicators
  2. Had been irreversibly diminished by the controversies surrounding Jean-Claude Arnault and his apologist Horace Engdahl, and the disgusting “horse trading” that followed (Sara Danius for Katarina Frostenson)
  3. Had only made itself more interesting by having had its inherent politics and drama exposed to the wider world (“The Nobel Prize in Literature thrived in the 20th century not despite eruptions of outrage over the judgments of the Swedish Academy but because of them”)

What will it take for everyone to see that the Nobel Prizes for physics, chemistry and medicine work the same way? And that they don’t have to be assailed by public controversies to be acknowledged as imperfect prizes, whose status was seeded by a similar, if not the same, “admixture of capitals”?

There’s nothing to these prizes if not their prestige. But while that’s something any prize should aspire to have, it’s the wider zeitgeist of the Nobel Prizes’ appreciation that makes them interesting. Perhaps, as English argues, accepting this brokenness could pave the way to a more culturally appropriate celebration of what the prizes stand for, one that doesn’t quietly raise its glass to traditionalism on every December 10.

At first glance, this tweet appears to state something obvious:

Of course the three-dimensional arrangement of links and nodes, and the space they occupy, influences the ultimate form of the network. But being a tweet, it doesn’t capture a pivotal detail in the paper: the scientists who authored it aren’t talking about the length of the links but the girth. The girth appears to have a nontrivial impact on the network to the extent that, if you discounted it, the network would look significantly different.

In fact, this paper presents some interesting consequences of link thickness that can’t be intuited from first principles.

The paper’s authors are all network theorists; one of them, Albert-László Barabási, is among the world’s foremost experts on the subject.

Scientists study networks by organising their internal components as nodes and links. In the example of a triangle, there are three nodes – a.k.a. vertices – and three links – a.k.a. sides. In a tetrahedral network, there are four nodes and six links. Using this simple classification scheme, scientists have found that certain things about a network can be explained only by analysing it in terms of its connections, not by its overall geometry.

As a result, network theory looks at networks using some parameters that don’t exist in the study of other systems. These include the number of connections at a node, network centrality, link betweenness, etc. And because the geometric properties are no longer of interest, network theorists assume that the nodes are infinitesimally small and the links are one-dimensional.
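To make these parameters concrete, here’s a minimal sketch in pure Python of two of them – a node’s degree (number of connections) and its closeness – on a tiny made-up network. The graph and its node labels are invented for illustration; real analyses would typically use a library such as NetworkX.

```python
from collections import deque

# A small undirected network stored as an adjacency list.
# Nodes are labelled a-e; 'c' acts as a hub.
edges = [("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def degree(node):
    """Number of links meeting at a node."""
    return len(adj[node])

def closeness(node):
    """Inverse of the average shortest-path distance to every other node,
    computed with a breadth-first search."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return (len(adj) - 1) / sum(d for n, d in dist.items() if n != node)

print(degree("c"))                      # 3: the hub touches three links
print(closeness("c") > closeness("e"))  # True: the hub is 'closer' to everyone
```

Note that nothing here refers to where the nodes sit in space or how long the links are – exactly the abstraction the theorists rely on.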

For example, a 2013 study used network analysis to determine the performance of batsmen who played in the Ashes that year. Satyam Mukherjee, the study’s author, built a network where each node was a batsman and each link was a partnership between two batsmen. The link length scaled according to the number of runs they scored together.

In this setup, Mukherjee found that the in-strength parameter denoted a player’s contributions in partnerships; closeness, their ability to play in different positions in the batting lineup; and centrality, the degree of their involvement in partnerships. The Google PageRank algorithm could be used to determine each player’s overall importance.

Mukherjee found the following English players scored the highest on each count (England won the Ashes 3-0 that year):

  • Centrality – Ian Bell
  • Closeness – Matt Prior
  • In-strength – Jonathan Trott
  • PageRank – Graeme Swann
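One of Mukherjee’s measures, in-strength, is easy to sketch: it’s the sum of the weights of a node’s incoming links. Below is a toy version in Python with entirely made-up partnership totals (not the real Ashes data); the directed link (u, v, w) credits w runs toward v’s in-strength in this simplified scheme.

```python
# Toy partnership network: (partner, batsman, runs) with invented numbers.
partnerships = [
    ("Cook", "Trott", 80),
    ("Trott", "Bell", 120),
    ("Bell", "Prior", 60),
    ("Trott", "Prior", 45),
]

def in_strength(player):
    """Sum of runs flowing into a player's node across all partnerships."""
    return sum(w for _, v, w in partnerships if v == player)

print(in_strength("Prior"))  # 60 + 45 = 105
```

Closeness, centrality and PageRank are computed over the same weighted graph, just with more involved algorithms.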

However, the current paper argues that by ignoring the geometric characteristics of the links and nodes, theorists might have been missing out on an important feature that determines why networks take the forms that they do. This may not apply to the analysis of cricket statistics, but it certainly does to “neurons in the brain, three-dimensional integrated circuits and underground hyphal networks,” where “the nodes and links are physical objects that cannot intersect or overlap with each other.”

The one geometric characteristic found to have a defining influence on the ways in which the network was and wasn’t allowed to grow was the link’s width.

Using computer simulations, the scientists found that there was a link thickness threshold. Below the threshold – with small link width, called the weakly interacting regime – the network was able to keep links from crossing each other by small rearrangements that didn’t affect its overall form. Above the threshold, in the strongly interacting regime, the link thickness began to have an effect on link length and curvature, and the links also became more closely packed together.

This is fascinating in two ways.

First: The study shows that the way the network grows depends not just on which nodes it wants to connect but also on how they’re connected. This could mean, for example, that network theorists will have to factor in the physical properties of materials involved in link-building to fully understand the network itself.

Second: Link thickness also affects the space between links, with links placed closer to each other as they become thicker. As a result, networks in the strongly interacting regime will be harder to construct using 3D-printing than those in the weakly interacting regime, in which links and nodes are more clearly separated.

Where does the threshold itself lie? It’s not a single, fixed link-width value such that the network properties on either side of it are markedly different. Instead, it’s more like a transition that occurs across a predictable range of values. In general terms, the threshold is the zone where the total volume occupied by the links approaches the total volume occupied by the nodes.

One chart from the paper (below) visualises it well. The first box on top shows the number of links that cross each other as link thickness increases. Note the box below it, which shows how the average link length changes as link thickness increases.

Source: https://doi.org/10.1038/s41586-018-0726-6

The colours represent two different network models. Orange lines denote a network in the elastic-link model, where the nodes’ positions are fixed and the links are free to move. The blue lines denote a network in the fully elastic model, in which both nodes and links can move freely.

To quote from the paper:

We determine the origin of the transition in the geometry of the networks by estimating the transition point r_L^c. When the links are much thinner than the node repulsion range r_N, the layout is dominated by the repulsive forces between the nodes, which together occupy the volume V_N = 4√2·N·r_N³/3. When the volume occupied by the links becomes comparable to V_N, the layout must change to accommodate the links. This change induces the transition from the weakly interacting regime to the strongly interacting regime.
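The quoted criterion is just a volume comparison, so it can be illustrated with back-of-the-envelope arithmetic. In the Python sketch below, every number (node count, repulsion range, link count, link length) is invented for illustration; links are treated as simple cylinders.

```python
import math

# Illustrative numbers only (not from the paper): N nodes of repulsion
# range r_N, and L cylindrical links of radius r_L and average length ell.
N, r_N = 100, 1.0
L, ell = 300, 4.0

# Node volume V_N = 4*sqrt(2)*N*r_N^3/3, per the quoted passage.
V_nodes = 4 * math.sqrt(2) * N * r_N**3 / 3

def link_volume(r_L):
    """Total volume of L cylindrical links of radius r_L and length ell."""
    return L * math.pi * r_L**2 * ell

# Scan link radii: the transition zone is roughly where the two volumes match.
for r_L in (0.05, 0.2, 0.35):
    regime = "strongly" if link_volume(r_L) >= V_nodes else "weakly"
    print(f"r_L={r_L}: links {link_volume(r_L):.1f} vs nodes {V_nodes:.1f} -> {regime} interacting")
```

With these toy numbers the crossover happens somewhere between r_L = 0.2 and 0.35 – a zone, not a sharp point, just as the paper describes.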

The interestingness parade from this study has one more float, which draws from the bottommost box in the chart above. The scientists found that a network behaved more like a solid in the weakly interacting regime and like a gel in the strongly interacting one. This isn’t immediately evident because thicker links usually suggest higher robustness and structural rigidity. However, because they also influence link length and curvature, the network responds differently to external (physical) forces compared to networks with more slender, straighter links.

Specifically, the corresponding stress response was measured using the Cauchy stress tensor – a matrix of nine numbers used to calculate the stress at a single point in three-dimensional space. In the weakly interacting regime, the stress response was dominated by node-node and link-link interactions, and the network prevented the stress due to the external force from spreading evenly in all directions. This is a feature of solid materials.

Conversely, in the strongly interacting regime, the stress spreads more uniformly through the network because the volume is dominated by links. So the stress response is dictated by the elastic properties of individual links and by link-link interactions, mimicking those of gels.
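For a sense of what the Cauchy stress tensor does: given the nine numbers and the unit normal n of a cut plane through the point, the force per unit area (traction) on that plane is the matrix-vector product t = σ·n. The numbers below are illustrative, not from the paper.

```python
# A symmetric 3x3 Cauchy stress tensor at a point (illustrative values).
sigma = [
    [10.0,  2.0,  0.0],
    [ 2.0,  5.0,  1.0],
    [ 0.0,  1.0,  3.0],
]
n = [1.0, 0.0, 0.0]  # unit normal of a plane facing along the x axis

# Traction t = sigma . n: the force per unit area acting on that plane.
traction = [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]
print(traction)  # [10.0, 2.0, 0.0] - the first column of sigma
```

Probing the network with such cuts in different directions is what reveals whether stress stays confined (solid-like) or spreads evenly (gel-like).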

In sum, as a network transitions from the weakly interacting to the strongly interacting regime, the link length begins to increase faster, links become more curved, and the network begins to behave more like a gel even as it becomes less amenable to 3D-printing.