Pandemic: A world-building exercise

First, there is news of a vaccine against COVID-19 nearing the end of its phase 3 clinical trials with very promising results, accompanied by breezy speculation (often tied to the stock price of a certain drug-maker) about how it’s going to end the pandemic in six months.

An Indian disease-transmission modeller – of the sort who often purport to be value-free ‘quants’ interested in solving mathematical puzzles that don’t impinge on the real world – reads about the vaccine and begins to tweak his models accordingly. Soon, he has a projection that shines bright in the dense gloom of bad news.

One day, as the world is surely hurtling towards a functional vaccine, it becomes known that some of the world’s richest countries – representing an eighth of the planet’s human population – have secured more than half of the world’s supply of the vaccine.

Then, a poll finds that over half of all Americans wouldn’t trust a COVID-19 vaccine when it becomes available. The poll hasn’t been conducted in other countries.

A glut of companies around the world have invested heavily in various COVID-19 vaccine candidates, even as these candidates are yet to complete phase 3 clinical trials. Should a candidate fail its trial, the company backing it could lose its investment unless it is insured or underwritten in some form by its government.

Taken together, these scenarios portend a significant delay between a vaccine successfully completing its clinical trials and becoming available to the population, and another delay between general availability and adoption.

The press glosses over these offsets, fostering among its readers a distorted impression of the pandemic’s progression – an awkward blend of two images, really: one in which the richer countries are rapidly approaching herd immunity, and another in which the poorer countries face a persistent shortage of vaccines.

Sooner or later, a right-wing commentator notices there is a commensurately increasing risk of these poorer countries ‘re-exporting’ the virus around the world. Politicians hear him and further stigmatise these countries, and build support for xenophobic and/or supremacist policies.

Meanwhile, the modeller notices the delays as well. When he revises his model, he finds that as governments relax lockdowns and reopen airports for international travel, differences in screening procedures between countries could allow the case load to rise and fall around the world in waves – in effect ensuring the pandemic will take longer to end.

His new paper isn’t taken very seriously. It’s near the end of the pandemic, everyone has been told, and he’s being a buzzkill. (It’s also a preprint, and that, a senior scientist in government nearing his retirement remarks, “is all you need to know”.) Distrust of his results slowly morphs into distrust of scientists’ predictions in general, and becomes grounds to dismiss most discomfiting findings.

The vaccine is finally available in middle- and low-income countries. But in India, this bigger picture plays out at smaller scales, like a fractal. Neither the modeller nor the head of state has accounted for the social realities of Indian society in his plans – but no one notices because both have conducted science by press release.

As they scratch their heads, they also swat away the people at the outer limits of the country’s caste and class groups who clutch at them in desperation. A migrant worker walks past unnoticed. One of them wonders if he needs to privatise healthcare further. The other is examining his paper for arithmetic mistakes.

The number of deaths averted

What are epidemiological models for? You can use models to inform policy and other decision-making. But you can’t use them to manufacture a number that you can advertise in order to draw praise. Yet that appears to be exactly what the government has done with the number of deaths averted by India’s nationwide lockdown.

When the government says 37,000 deaths were averted, how can we know if this figure is right or wrong? A bunch of scientists complained that the model wasn’t transparent, so its output had to be taken with a cupful of salt. But as an article published in The Wire yesterday noted, these scientists were asking the wrong questions: the number of deaths averted is only a decoy.

So say the model had been completely transparent. Even then, I don’t see why we should care about the number of deaths averted. First, such a model is trying to determine the consequences of an action that was not performed, i.e. the number of people who might have died had the lockdown not been imposed.
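
To make the counterfactual concrete, here is a minimal sketch, in Python, of how such a figure is typically produced: run the same compartmental model twice, once with a lockdown-reduced transmission rate and once without it, and subtract the projected deaths. Every parameter and value below is a hypothetical placeholder for illustration, not the government’s model.

```python
# Illustrative only: a bare-bones discrete-time SIR model used to 'manufacture'
# a deaths-averted figure. All parameters are made-up placeholders.

def projected_deaths(beta, gamma=1/14, ifr=0.01, population=1.38e9,
                     initial_infected=1_000, days=180):
    """One SIR step per day; deaths approximated as IFR x cumulative infections."""
    s, i = population - initial_infected, float(initial_infected)
    for _ in range(days):
        new_infections = beta * s * i / population
        s -= new_infections
        i += new_infections - gamma * i   # recoveries leave the infectious pool
    return ifr * (population - s)         # everyone ever infected x fatality rate

deaths_without_lockdown = projected_deaths(beta=0.30)  # assumed 'no lockdown' transmission
deaths_with_lockdown = projected_deaths(beta=0.15)     # assumed 'lockdown' transmission

deaths_averted = deaths_without_lockdown - deaths_with_lockdown
print(f"Deaths averted under these assumptions: {deaths_averted:,.0f}")
```

The output is entirely a function of the transmission rate assumed for the world in which the lockdown never happened, a world nobody observed, so the figure is only as good as that assumption.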

This scenario is reminiscent of a trope in many time-travel stories. If you went back in time and caused someone to do Y instead of X, would your reality change or stay the same, given that it now lies in the future of Y rather than of X? Or as Ronald Bailey wrote in Reason, “If people change their behaviour in response to new information unrelated to … anti-contagion policies, this could reduce infection growth rates as well, thus causing the researchers to overstate the effectiveness of anti-contagion policies.”

Second, a model to estimate the number of deaths averted by the lockdown will in effect attempt to isolate a vanishingly narrow strip of the lockdown’s consequences to cheer about. This would be nothing but extreme cherry-picking.

A lockdown has many effects, including movement restrictions, stay-at-home orders, disrupted supply of essential goods, closing of businesses, etc. Most, if not all, of them are bound to exact a toll on one’s health. So the number of deaths the lockdown averted should be ‘adjusted’ against, say, the number of people who couldn’t get life-saving surgeries, the number of migrant labourers who died of heat exhaustion, the number of TB patients who developed MDR-TB because they couldn’t get their medicines on time, even the number of daily-wage earners’ children who died of hunger because their parents had no income.
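
To see what that adjustment would look like in the barest terms, here is a sketch of the accounting. Apart from the government’s advertised 37,000, every figure below is a made-up placeholder, since none of these tolls has actually been estimated.

```python
# Hypothetical net accounting of a lockdown's health toll. All figures except
# the advertised 37,000 are placeholders; the point is the structure of the sum,
# not the numbers.
deaths_averted_by_lockdown = 37_000

lockdown_attributable_deaths = sum([
    5_000,  # hypothetical: life-saving surgeries postponed or cancelled
    1_000,  # hypothetical: migrant labourers who died of heat exhaustion
    2_000,  # hypothetical: interrupted TB treatment leading to MDR-TB deaths
      500,  # hypothetical: children of daily-wage earners who died of hunger
])

net_deaths_averted = deaths_averted_by_lockdown - lockdown_attributable_deaths
print(net_deaths_averted)  # meaningless until each term is actually estimated
```

The arithmetic is trivial; the point is that the advertised figure reports only the first term.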

So the only models that can hope to estimate a meaningful number of deaths averted by the lockdown will also have simplified the context so much that the mathematical form of the lockdown will be shorn of all practical applicability or relevance – a quantitative catch-22.

Third, the virtue of the number of deaths averted is a foregone conclusion. That is, whatever its value is, it can only be a good thing. So as an indisputable – and therefore unfalsifiable – entity, there is nothing to be gained or lost by interrogating it, except perhaps to elicit a clearer view of the model’s innards (if possible, and only relative to the outputs of other models).

Finally, the lockdown will by design avert some deaths – i.e. D > 0 – but D being greater than zero wouldn’t mean the lockdown was a success so much as it would be a self-fulfilling prophecy, whatever D’s value turns out to be. And since no one knows what the value of D is or what it ought to be, much less what it could have been, a model can at best come up with a way to estimate D – but not claim a victory of any kind.

So it would seem the ‘number of deaths averted’ metric is a ploy disguised as a legitimate mathematical problem whose real purpose is to lure the ‘quants’ towards something they think challenges their abilities without realising they’re also being lured away from the more important question they should be asking: why solve this problem at all?

The Wolfram singularity

I got to this article about Stephen Wolfram’s most recent attempt to “revolutionise” fundamental physics quite late, and I’m sorry for it because I had no idea Wolfram was the kind of guy who could be a windbag. I haven’t ever had cause to interact with his work or his software company (which produces Wolfram Mathematica and Wolfram Alpha), so I didn’t really know much about him to begin with. But I expected him, for reasons I can’t explain, to be more modest than he comes across as in the article.

The article was prompted in the first place by a preprint paper Wolfram and a colleague published earlier this year, in which they claimed to have plotted a route to a fundamental theory of everything. Physics currently explains the universe with a combination of multiple theories that don’t really fit together. A ‘theory of everything’ is the colloquial name for a universal theory that many physicists argue must exist, one that could explain everything about the universe in a self-consistent manner.

Wolfram’s preprint paper was startling, as these things go, not because of its substance but because a) he made no attempt to engage with the wider community of physicists that has been working on the same problem for decades, and b) he insisted that those dismissing its conclusions were simply out to dismiss him. Consider the following portions:

“I do fault myself for not having done this 20 years ago,” the physicist turned software entrepreneur says. “To be fair, I also fault some people in the physics community for trying to prevent it happening 20 years ago. They were successful” [emphasis added].

“The experimental predictions of [quantum physics and general relativity] have been confirmed to many decimal places—in some cases, to a precision of one part in [10 billion],” says Daniel Harlow, a physicist at the Massachusetts Institute of Technology. “So far I see no indication that this could be done using the simple kinds of [computational rules] advocated by Wolfram. The successes he claims are, at best, qualitative.” …

“Certainly there’s no reason that Wolfram and his colleagues should be able to bypass formal peer review,” Katie Mack says. “And they definitely have a much better chance of getting useful feedback from the physics community if they publish their results in a format we actually have the tools to deal with.”

Reading of this attitude brought to mind an episode from six or seven weeks ago, after a pair of physicists had published a preprint paper modelling the evolution of the COVID-19 epidemic in India and predicting that multiple lockdowns instead of just one would work better. The paper was one of many that began to show up around that time, each set of authors fiddling with different parameters according to their sense of the world to reach markedly different conclusions (a bit of ambulance-chasing if you ask me).

The one by the two physicists was singled out for bristling criticism by other physicists because – quite like the complaints against Wolfram – it allegedly described a model that seemed able to reach any conclusion if you tweaked its parameters enough, and because the duo hadn’t clarified this and other important caveats in their interviews with journalists.

Aside 1 – In physics at least, it’s important for theories to be provable in some domains and falsifiable in others; if a theory of the world is non-falsifiable, it’s not considered legitimate. In Wolfgang Pauli’s famous words, it becomes ‘not even wrong’.

Aside 2 – Incidentally, Harlow – quoted above from the article – was one of the physicists defending physicists’ freedom to model what they will but agreed with the objection that they also need to be honest with journalists about their assumptions and caveats.

In a lengthy Facebook discussion that followed this brouhaha, someone referred to a Reddit post created three days earlier in which a physicist appealed to his peers to stop trying to model the pandemic – in his words, to “cut that shit out” – because a) no physicist could hope to do a better job than a trained epidemiologist, and b) every model a physicist attempted could actually harm lives if it wasn’t done right (and there was a good chance it would be at least incomplete).

Wolfram is guilty of the same thing: his preprint paper won’t harm lives, but the mortal threat is the only thing missing from his story; it’s otherwise rife with the same problems. His hubristic remark in the article’s denouement – that he deserves “better” questions than the ones other physicists were asking him in response to his “revolutionary” paper – indicates Wolfram thinks he’s done a great job. But it’s impossible to see people like him as anything more than windbags convinced of their intellectual superiority and of their ability to singlehandedly wrestle hideously intractable problems to the ground. I, and likely other editors as well, have glimpsed this attitude in some authors who attribute criticism of their pieces to anything but their own unclear writing, and in others who refuse to be disabused of the conviction that their conclusion is particularly fascinating.

I’d like to ask Wolfram what I’d like to ask these people as well: What have you hit on that you think others haven’t in all this time, and why do you think all of them missed it? Granted, everyone is allowed their ‘eureka’ moment, but anyone who claims it on the condition that he not be criticised is not likely to be taken seriously. More importantly, he may not even deserve to be taken seriously if only because, to adapt Mack’s line of reasoning, he undermines the very things on which modern science, the science he claims to be improving, is founded: processes, not outcomes; communities, not individuals.

Featured image credit: Anna Shvets/Pexels.