The last two episodes of the second season of House, the TV series starring Hugh Laurie as a misanthropic doctor at a facility in Princeton, have been playing on my mind off and on during the COVID-19 pandemic. One of the arc’s principal points (insofar as Dr Gregory House can admit points to the story of his life) is that it’s ridiculous to expect the families of patients to make informed decisions about whether to sign off on a life-threatening surgical procedure, say, within a few hours, when medical workers might struggle to make those choices even after many years of specific training.
The line struck me as describing a chasm stretching between two points on the healthcare landscape – one so wide as to be insurmountable by anything except magic, in the form of decisions that can never be grounded entirely in logic and reason. Families of very sick patients are frequently able to conjure a bridge out of thin air with the power of hope alone, or – more often – desperation. As such, we all understand that these ‘free and informed consent’ forms exist to protect care-providers against litigation as well as, by the same token, to allow them to freely exercise their technical judgment – somewhat like how it’s impossible to physically denote an imaginary number (√-1) while still understanding why it must exist, for completeness.
Sometimes, it’s also interesting to ask whether anything meaningful could get done without these bridges, especially since they’re fairly common in the real world and people tend to overlook them.
I’ve had reason to think of these two House episodes because one of the dominant narratives of the COVID-19 pandemic has been one of uncertainty. The novel coronavirus is, as the name suggests, a new creature – something that evolved in the relatively recent past and assailed the human species before the latter had time to understand its features using techniques and theories honed over centuries. This in turn precipitated a cascade of uncertainties as far as knowledge of the virus was concerned: scientists knew something, but not everything, about the virus; science journalists and policymakers knew a subset of that; and untrained people at large (“the masses”) knew a subset of that.
But even though more than a year has passed since the virus first infected humans, the forces of human geography, technology, politics, culture and society have together ensured that not everyone knows what there currently is to know about the virus, even as the virus’s interactions with these forces in different contexts continue to birth even more information, more knowledge, by the day. As a result, when an arbitrary person in an arbitrary city in India has to decide whether they’d rather be inoculated with Covaxin or Covishield, they – and in fact the journalists tasked with informing them – are confronted by an unusual, if also conceptual, problem: having to make a rational choice where one is simply and technically impossible.
How then do they and we make these choices? We erect magic bridges. We think we know more than we really do, so even though the bridge we walk on is made of nothing, our belief in its existence holds it firm beneath our feet. This isn’t as bad as I’m making it sound; it seems like the human thing to do. In fact, I think we should be clearer about the terms on which we make these decisions, so that we can improve on them.
For example, all frontline workers who received Covaxin in the first phase of India’s vaccination drive had to read and sign off on an ‘informed consent’ form that listed the potential side effects of receiving a dose of the vaccine, its basic mechanism of action and how it was developed. These documents tread a fine line between being informative and being useful (in the specific sense that informing too much risks paralysing action, while withholding important information skips ahead to seemingly useful ‘advice’): they don’t tell you everything they can about the vaccine, nor can they dictate the decision you should make.
In this context, and assuming the potential recipient of the vaccine doesn’t have the education or training to understand how exactly vaccines work, a magic bridge is almost inevitable. The recipient would then be better served by a bridge erected on the right priorities and principles, instead of one thrown up willy-nilly, without thought for medium- or long-term consequences.
There’s perhaps an instructive analogy here with software programming, in the form of the concept of anti-patterns. An anti-pattern is a counterproductive solution to a recurrent problem. Say you’ve written some code that generates a webpage every time a user selects a number from a list of numbers. The algorithm is dynamic: the script takes the user-provided input, performs a series of calculations on it and, based on the results, produces the final output. However, you notice that your code has a bug because of which one particular element on the final webpage is always 10 pixels to the left of where it should be. Unable to identify the problem, you take the easy way out: you add a line at the end of the script to shift that element 10 pixels to the right once it has been rendered.
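To make the example concrete, here is a minimal, hypothetical sketch in TypeScript; the function names, the layout arithmetic and the 10-pixel figure are all illustrative inventions, not anyone’s real code. The calculation inside computeLeftOffset carries the unseen bug, and the final nudge in renderElement is the anti-pattern.

```typescript
// Hypothetical rendering routine: computes where an element should sit,
// based on a number the user picked from a list.
function computeLeftOffset(userChoice: number): number {
  // A chain of calculations drives the layout; somewhere in here
  // lurks the bug that pushes the element 10 pixels too far left.
  const base = userChoice * 24;
  const padding = Math.floor(base / 6);
  return base + padding; // subtly wrong, but we can't see why
}

function renderElement(userChoice: number): { left: number } {
  const element = { left: computeLeftOffset(userChoice) };

  // The anti-pattern: rather than finding the mistake in the
  // calculation above, we nudge the rendered element 10 pixels
  // to the right after the fact. It "works", but no principle of
  // the layout system explains why this line exists.
  element.left += 10;

  return element;
}

console.log(renderElement(3)); // e.g. { left: 94 }
```

The page now looks right, but nothing in the layout’s own logic predicts that last adjustment; the moment someone fixes the underlying calculation, the hack silently breaks the page in the other direction.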
This is a primitive example of an anti-pattern: an action that can’t be derived from the principles governing the overall system but which exists nonetheless because you put it there. Andrew Koenig introduced the concept in 1995 to identify software programs that are unreliable in some way, and which could be made reliable by ensuring the program conforms to known principles. Magic bridges are currently such objects, whose existence we often deny because we think they’re non-magical. However, they shouldn’t have to be anti-patterns so much as precursors of a hitherto unknown design en route to completeness.