
Remember that paper about cognitive flexibility and nationalism? The one that said people who are more nationalistic in their politics tend to have lower cognitive flexibility? I’d blogged about it here. I hadn’t read the study’s paper, published in the Proceedings of the National Academy of Sciences, because I didn’t think I had to to be able to call the study’s conclusions into question. An excerpt from my previous post:

… ideological divisions, imagined in the form of political polarisation, are bad enough as it is without people on one side of the aisle being able to accuse those on the other side of having “low cognitive flexibility”. The nuance can be worded as prosaically as the neuroscientists would prefer but this won’t – can’t – stop the less-nationalistic from accusing the more-nationalistic of simply being stupid, now with a purported scientific basis.

This is why I believe something has to be off about the study. The people on the right, as it were on the political spectrum, are not stupid. They’re smart just the way those of us on the left imagine ourselves to be. Now, one defence of the study may be that it attempts to map a hallmark feature of the global political right, sort of a rampant anti-intellectualism and irrationality, to its neurological underpinnings – but nationalism is more than its endorsement of traditions or traditional values.

As it turned out, if I had read the paper, it would have revealed more problems with the study and let me make a stronger case that it is quite likely a product of the “publish or perish” kind of thinking. The reason I revisit this study now is an interesting conversation I had with Shruti Muralidhar, a cortical and hippocampal neuroscientist, currently a postdoc at the Massachusetts Institute of Technology. Before I’d written my post, I’d asked Shruti if she could read the paper and possibly critique it.

My primary concern was basically about assigning a kind of “hierarchy of cognitive abilities” to the political spectrum – that sounds dangerous. By saying the political right has less cognitive flexibility, I’d felt like the study was reaching the conclusion that there might be a purely biological explanation for why people behave the way they do. This kind of reductionism is eminently dangerous.

According to Shruti, “This understanding of the paper is not far from what they want the reader to take away – but sadly, they have little or no backing to actually prove or disprove this claim.” She summed up her observations in a few points (quoted verbatim):

  1. Cognitive flexibility is simply just that. It doesn’t mean more or less intelligence, smarts or anything like that. In fact, it might not even be a “positive” trait depending on the situation at hand.
  2. The study’s authors have administered only two cognitive tests, and one of them clearly gives counterintuitive, unexpected or, one might even say, “wrong” results, as in goes against the study’s primary hypothesis.
  3. These are correlation studies, which usually are to be taken with a bagful of salt.

The first question that arises then is why the authors – or PNAS – decided to publish their study when 50% of their tests turned in results that opposed their hypothesis: that the more nationalistic are less flexible, cognitively speaking.

She also pointed out many issues with the language in the paper, especially lines that could be misinterpreted easily. Some in particular stuck out because they revealed a deeper epistemological issue with the study.

Shruti said, “The authors clearly admit that cognitive flexibility is a multi-dimensional beast and that it is difficult to understand it completely,” and often suggest that they don’t understand it completely themselves. One giveaway is that they keep saying variations of “We need more and better tests”.

A bigger giveaway is a line on page 6: “However, it is also conceivable that immersing oneself in strongly ideological environments may encourage psychological inflexibility and promote a preference for routines and traditions.” In other words, if A stood for “more nationalistic” and B for “less cognitive flexibility”, then the authors were saying A therefore B while also admitting B therefore A. Put another way, their correlation was in doubt, let alone causation. This portion concludes thus:

Nevertheless, more research is necessary to understand the nature of cognitive flexibility and the various ways in which it manifests in relation to ideological thinking.

The authors haven’t defined cognitive flexibility explicitly in their paper, instead referencing older studies on the subject. Even so, Shruti said that those papers might not be able to provide the final word either because, as one of her peers had pointed out, “Since this study is EU/Britain-specific, their idea of what ideological inflexibility is might also be different from, say, India’s or the rest of the world’s. Europe thrived on systems and thinking-within-the-box for centuries.”

Altogether, the paper appears to describe a study of the “low-hanging fruit” variety. Its central hypothesis has been neither proved nor disproved, the reader is left in doubt about whether the tests were properly chosen (and why more tests weren’t performed), and the paper is strewn with admissions that the authors don’t claim to understand what one of the more important keywords in the study really means.

Worst of all (to me) is that the paper has been published with a misleading headline, and the university press release with an incredibly misleading one that should take all the responsibility for the fake news born as a result (and that strengthens the case that press releases shouldn’t be trusted). And there’s quite a bit of it:

  • PsychCentral has an article that only quotes Leor Zmigrod, the lead author of the study and a psychologist at the University of Cambridge.
  • The same is true of an article in The Guardian by Nicola Davis. The headline goes ‘Brexiters tend to dislike uncertainty and love routine, study says’ – more of the reductionism at work.
  • Andrew Brown of The Guardian takes the study’s conclusions at face value, writing in his column:

… some kinds of political argument are going to be literally interminable. Obviously this isn’t true of any particular issue. Even the question of our relations with Europe will be settled some time before the heat death of the universe. But it may be replaced by something else which arouses the same passions and splits the population in the same way, because the cognitive traits [Zmigrod] is analysing are all part of the normal variation of humanity.

In fact, it seems no prominent coverage of the paper has invited an independent researcher to comment on its findings. I concede that I myself didn’t speak to a psychologist – Shruti is a neuroscientist – but all of Shruti’s observations are hard to ignore.

Finally, if I were looking to publish a paper right now, I’d hypothesise that flattering, non-critical coverage of scientific papers – peer-reviewed or otherwise – is more common among news publishers if each paper makes it easier for the publication to maintain its political position.

Featured image credit: mwewering/pixabay.


My feeling is that as far as creativity is concerned, isolation is required. … The presence of others can only inhibit this process, since creation is embarrassing.

– Isaac Asimov (source)

Far be it from me to fall for a behavioural studies paper that’s not yet been replicated, and much farther to do so based on a university press release, but this one caught my attention because it suggests something completely opposite to my experience: “when there’s an audience, people’s performance improves”. Sure enough, four full paras into the piece there’s a qualification:

Vikram Chib, an assistant professor of biomedical engineering at Johns Hopkins … who has studied what happens in the brain when people choke under pressure, originally launched this project to investigate how performance suffers under social observation. But it quickly became clear that in certain situations, having an audience spurred people to do better, the same way it would if money was on the line. (emphasis added)

The situation in question involved 20 participants playing a videogame in front of an audience of two and, in a different ‘act’, in front of no audience at all. If a participant played the game better, he/she received a higher reward. Brain activity was monitored at all times using an fMRI machine.

You realise now that the press release’s headline is almost criminally wrong, considering it’s likely been vetted by some scientists, if not those who conducted the study themselves. It suggests that people’s performance improves in all circumstances; however, a videogame is nothing like writing, for example. In fact, you’d be hard-pressed to find someone who can write when they’re being watched. This is because writing isn’t a performance art whereas a videogame could be. And when executing a performance, having an audience helps.

According to Chib and the press release, this is the mechanism of action:

When participants knew an audience was watching, a part of the prefrontal cortex associated with social cognition, particularly the thoughts and intentions of others, activated along with another part of the cortex associated with reward. Together these signals triggered activity in the ventral striatum, an area of the brain that motivates action and motor skills.

While this is interesting, a sample of 20 people is small, the task is too simple and definitely not generalisable, and the audience is too small. Playing a videogame in front of two strangers (presumably) is nothing like playing a videogame in a room chock full of people, or when the stakes are higher. In fact, in real life, you’re almost certainly being judged if there’s an audience watching you as you conduct a task, and your stress levels are going to be far higher than when you’re playing something on your Xbox in front of two people.

A final quibble is more a wondering about the takeaway. The study seems to have focused on a very narrowly defined task while one of its authors – Chib – freely acknowledges its various shortcomings. Why weren’t these known issues addressed in the same paper instead of angling for a follow-up? I suspect future studies will also perform the same experiment multiple times with different kinds of tasks.

But if the audience was a lot bigger, and the stakes higher, the results could have gone the other way. “Here people with social anxiety tended to perform better,” Chib said, “but at some point, the size of the audience could increase the size of one’s anxiety but we still need to figure that out.”

Perhaps this is a case of someone trying to jack up their publication count.

Featured image credit: Skitterphoto/pixabay.

In early 2015, I developed an unlikely hobby: tinkering around with hosting solutions on the web, specifically providers of infrastructure as a service (IaaS). It’s unlikely because it’s not something I consciously inculcated; it just happened. Three years later, this hobby has morphed into a techno-garden of obsessions that I tend to on the side, in between the hours of my day-job editing science pieces.

In college, I worked a little with Google App Engine – a PaaS (platform as a service) popular at the time for hosting apps but not so much now. I followed that up with Linode in 2012 after Posterous shut down, and then Digital Ocean in 2015.

Linode and Digital Ocean both provide virtual private servers (VPSs). A VPS is a virtual server installed on a physical server that utilises a specified fraction of the server’s resources. For example, one of Digital Ocean’s ‘popular’ VPS configurations comes with 4 GB RAM, 80 GB SSD and 4 TB bandwidth (for $40/mo). Another VPS config has 2 GB RAM, 50 GB SSD and 2 TB bandwidth (for $20/mo). Both these VPSs could be running on the same physical server, with a type of software called a hypervisor installed on it to partition and manage VPSs according to users’ requirements.
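To put those two quoted plans side by side, here’s a back-of-the-envelope comparison in Python. The figures are the ones quoted above; the per-unit breakdown is my own arithmetic, not anything Digital Ocean publishes:

```python
# The two Digital Ocean VPS plans quoted above, normalised per unit
# of each resource, to see which one gives more for the money.

PLANS = {
    "4gb": {"price": 40, "ram_gb": 4, "ssd_gb": 80, "bw_tb": 4},
    "2gb": {"price": 20, "ram_gb": 2, "ssd_gb": 50, "bw_tb": 2},
}

def unit_costs(plan):
    """Return dollars per GB of RAM, per GB of SSD and per TB of bandwidth."""
    p = plan["price"]
    return {
        "per_gb_ram": p / plan["ram_gb"],
        "per_gb_ssd": p / plan["ssd_gb"],
        "per_tb_bw": p / plan["bw_tb"],
    }

for name, plan in PLANS.items():
    print(name, unit_costs(plan))
```

Running this shows both plans cost the same per GB of RAM ($10) and per TB of bandwidth ($10), but the smaller plan is slightly cheaper per GB of SSD ($0.40 vs $0.50) – the kind of detail worth checking before provisioning if storage is what you’re optimising for.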

Other options include shared hosting, where you have access to a part of a server’s resources (RAM, SSD and bandwidth) but not full control over how you use them. This is encapsulated by saying you don’t have root-level access. Shared hosting is preferred for small blogs and websites because it’s low-priced (starting at ~$3/mo). Then there’s bare-metal hosting, whereby you take charge of an entire server and all its resources.

Digital Ocean was a godsend because of the one-click installs it provided. You purchase a VPS config – a.k.a. provision a VPS – such that it comes pre-installed with software of your choice, chosen from a menu. The Digital Ocean UI made the offering look much less like the intimidating cPanel and more like a fun testing area, considering VPSs were available for just $5. I think that’s how my interest truly took off.
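Provisioning doesn’t have to go through the UI, either: Digital Ocean exposes the same operations over its v2 REST API. Here’s a minimal sketch of building a droplet-creation request – the endpoint and field names follow the public API docs, but the region/size/image slugs are illustrative examples that may not match current offerings, and the actual network call is deliberately left out:

```python
import json

API_URL = "https://api.digitalocean.com/v2/droplets"

def droplet_request(name, region, size, image, token):
    """Build the headers and JSON body for a droplet-creation request."""
    headers = {
        "Content-Type": "application/json",
        # authenticate with a personal access token from the DO control panel
        "Authorization": "Bearer " + token,
    }
    body = json.dumps({
        "name": name,       # hostname for the new VPS
        "region": region,   # datacentre slug, e.g. "nyc3"
        "size": size,       # plan slug, e.g. a $5/mo tier
        "image": image,     # OS or one-click-app image slug
    })
    return headers, body

headers, body = droplet_request(
    "test-box", "nyc3", "s-1vcpu-1gb", "ubuntu-18-04-x64", "YOUR_TOKEN")
# POSTing `body` with `headers` to API_URL (with urllib or requests)
# would create the server; the one-click installs correspond to image slugs.
```

This is also a painless way to pick up the “APIs” item on the list below: the entire control panel is just a friendlier skin over calls like this one.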

Thanks to Digital Ocean, I was able to quickly learn the basics of working with cloud computing, SSH, Linux-like operating systems, security auditing, webservers, content delivery networks, VPNs, firewalls, SSL/TLS and APIs. I don’t think the whole enterprise cost me more than $10. Additionally, both Digital Ocean and Linode offer excellent documentation; if you don’t find answers there, you will at Stack Overflow. So there’s really no excuse not to start learning these things right away, especially if you’re so inclined.

Actually, you should probably pick up on these things even if you’re not so inclined because these are the basic technologies through which humanity engages the Information Age’s most powerful medium of communication: the internet. Their architecture, technical specifications and functional affordances make up the framework in which we conduct our techno-politics. What they allow us to do become freedoms and violations; what they don’t allow us to do become safeguards and restrictions.

Extending the importance of understanding how they work to one higher level of abstraction – we have the foundations of online commerce, digital art and information sharing protocols. Going even further, we start to bump into questions about memory, persistence, intelligence and immortality.

Every one of us is situated somewhere on this beanstalk, and with each passing day, there are fewer ways as well as fewer reasons to get off. (Even those who reject the internet must engage with it – either to implement their rejection or to engage with others who continue to use the internet.) As one developer wrote:

Ignoring the cloud or web services because they are out of your comfort zone is no longer an option. The app economy is shifting. Adapt or die.

As I tried to learn more about how these technologies impacted our daily lives – an nth or zeroth level of abstraction depending on your POV – I also realised the world’s foremost interpreters of the internet’s implications were white men. They’re too numerous to list but sample these authors of my bookmarked blogs: Sam Altman, Marco Arment, Andy Baio, John Gruber, Jason Kottke, Jay Rosen, Bruce Schneier, Ben Thompson and Jeffrey Zeldman, among others.1 Even to begin to decide whether the privilege enjoyed by this coterie biases their aspirations vis-à-vis the internet, you will need to pick up the basics.

Fortunately, the cost of acquiring this knowledge has been falling. Tending to my garden of obsessions has meant surfing the interwebs for hours on end, checking out different IaaS providers and the various features they offer (usually the same, but every once in a while something new comes up) and – interestingly – comparing their Terms of Service. During one such excursion recently, I came upon two great forums: LowEndTalk (LET) and WebHostingTalk (WHT). If you’re looking for cheap but reliable hosting providers, especially of the shared or VPS variety, LET and WHT have got you covered.

For example, this is how I came upon some hosts – esp. RamNode, KnownHost, WebFaction and SecureDragon – that provide infra at costs you will find not just low but altogether “cheap” if you’re coming in from the world of Amazon Web Services, Microsoft Azure, etc. If you’ve picked up the basics of server management and security, the prices drop further (sub-$5). Even managed WordPress hosting hasn’t been spared; compare the prices of LightningBase and, say, Pressable.

(Managed hosting is a form of shared hosting where the hosting provider manages an application installed on the machine for the user, such that the user will have to be concerned only with using the application rather than also maintaining it. WordPress is a popular application for which managed options are abundant because those who use WordPress often have a very different skillset than that required to maintain WordPress.)

In all, you will need to spend about five hours a week for a month and a total of $10 to unlock a whole new, and very socially and politically relevant, world. If you want to do more, check out Slashdot Deals for amazing learning ‘bundles’.


  1. The only exception I’ve been able to think of is Om Malik. Then again, all the white people I’ve mentioned, and Malik, are also all American, and perhaps I’m focusing on American interpretations of the internet’s implications. I can think of a few people who operate out of India – Pranesh Prakash, Srinivas Kodali, Malavika Jayaram, Anuj Srivas, Kiran Jonnalagadda – but none of them are recognised worldwide whereas the white men all are. This, of course, isn’t surprising.

An image from Yuri Shwedoff's 'Space' series. Credit: Yuri Shwedoff

I found this evocative image on Twitter today. It’s by a Russian artist named Yuri Shwedoff and the image is part of his ‘Space Series’, available to view and appreciate on Behance. I don’t know the provenance of the overlaid text though.

At a glance, it’s clear the image depicts a future where we’ve abandoned all space launches and have regressed to a more primitive form of life.

But then you realise the last NASA Space Shuttle launch was in July 2011. Perhaps some kind of Space Shuttle museum became abandoned as the world carried on? Doesn’t seem likely – the artist probably chose to depict the Space Shuttle because everyone recognises it.

Further, the rectangular beam-like structure below the Space Shuttle indicates the location is the Kennedy Space Center Launch Complex 39A.

Another interesting feature is that the fuel tanks of earlier rockets had thinner walls than they do today, so the tank could be erected to an upright position only after being loaded with fuel and pressurised. So in this image, the Space Shuttle was ready for launch, and not just standing there waiting to be prepared for launch.

The crenellated mounds of earth and flora also suggest the 39A launchpad, with the rocket on it, has been abandoned for many centuries.

The weather is also curious because launchpads are usually located at sites above which there is often clear sky. But in this image, the sky is overcast. It could just be a rainy day – or it could be that the world has experienced some kind of catastrophe that has either precipitated weird weather patterns or, in the more dystopian view, clouded all of Earth à la a nuclear holocaust.

The greater catastrophe would also explain the primitive nature of technology in the image, in the form of a human riding horseback with what seems like arrows strapped to his back. The text, “It’s who we were…”, also suggests the same thing.

In all, the artist seems to say that in the early 21st century, something happened that caused us to abandon space launches, altered the world’s weather and, in time, left us technologically backward.

This is why I think the image is a bit confused. Gazing up at a Space Shuttle on the launchpad and saying “It’s who we were…” says nothing at all because, in a world with frequent spacefaring missions, something happened anyway. Our ambitions unto the final frontier didn’t change anything.

If anything, this accidental monument should’ve been for the now-hollow nuclear missile launch silo, or in fact a statue of a human itself.

Alternatively, I’d replace “It’s who we were…”, and its inherent sense of pride and longing, with a phrase that evokes shame and regret: “It’s who we will always be”.

(The original image by Shwedoff doesn’t have the text, so whoever put it on there has effectively defaced the image.)

K. VijayRaghavan, the new principal scientific advisor to the Government of India, has brought a lot of hope with him into the role as a result of his illustrious career as a biologist and his former secretaryship of the Department of Biotechnology. Many stakeholders of the scientific establishment are already looking to him for positive changes in S&T policy, funding and administration in India under a government that, on matters of research and education, has focused on applications and translational research while actively ignoring the spread of superstitious ideas in society.

In a recent interview, VijayRaghavan was asked about R&D funding in India. His response is worth noting against the backdrop of a ‘March for Science’ planned across India on April 14. As the interviewer reminds the reader, the 2018 Economic Survey bluntly acknowledged that India was underspending on research. This has also been one of the principal focus areas of the ‘March for Science’ organisers and participants: they have demanded that the Centre hike R&D spending to 3% and education spending to 10%, both as fractions of the GDP, apart from asking the government to stop the spread of superstitious beliefs.

Q: Getting funding for research is widely considered to be a prickly issue. The 2018 Economic Survey stated that India underspends on R&D. Is this a concern at the administration level?

A: These are wrongly posed questions, because it says that should magically the amount of funding go up, then science’s problems would be solved. Or that this is the key impediment. There’s no questions that there’s a correlation between increased R&D funding and innovation in many economies. South Korea is a striking example how high-tech R&D has resulted in transformation in their industries… Have we analysed, bottom-up, what Korea’s spending goes into and what we can learn from that and do afresh? Have we analysed our contest and learnt? …

Now interestingly, top-down this analysis has been done long ago. We as scientists, individuals and as journalists need to see that. The DST, and the DBT, the CSIR, the ICMR all have their plans should they get more resources. You can’t have a top-down articulation of how the resources can come and be used, unless that is also dynamically connected bottom-up.

When I look at 100 cases of why fund-flow is gridlocked, in about 70 cases, it’s poor institutional processes.

March for more than science

After the first Indian ‘March for Science’ happened in August 2017, the government showed no signs of having heard the participants’ claims, or even acknowledged the event. This was obviously jarring but it also prompted conversations about whether the march’s demands were entirely reasonable. Most news reports, including The Wire’s, had focused on how this was the first outpouring of scientists, school-teachers and students, particularly at this scale. Scrutinising it deeply was taboo because there was some anxiety about jeopardising the need for such a march itself. However, ahead of the second march planned for April 14, it’s worth revisiting.

Sundar Sarukkai, the philosopher, had penned an oped the day after the 2017 march, asking scientists whether they had thought to climb down from their ivory towers and consider that the spread of superstitions in society under the Narendra Modi government may have been because of sociological and cultural reasons, and wasn’t simply a matter of spending more on R&D. Following a rebuttal from Rahul Siddharthan, Sarukkai clarified in The Wire:

Whenever ideal images are constructed (like ideal of woman, ideal of nation, etc.), one should be wary, since any such act is often driven by considerations of power. This ideal image of science too is used to establish science as a powerful agent within modern societies. The use of this ideal image to solve social problems related to caste, religion or hatred of any kind is a red herring. It is like using a hammer to fix a bulb. When we do that, it only means that we are not really interested in solving the problem (fixing the bulb) but more invested in using the method (the hammer) – irrespective of whether it is suitable for the task or not.

The terrible cases of lynching, hatred, oppression and misuse of religion must be unequivocally opposed. For those who are serious about that task, the solution is more important than the method used to achieve it. The categories of the ideal notion of science are applicable primarily to non-human systems. So even if they work well within such systems, there is no reason why they should do so within human systems.

A physicist said something similar to me around the time: that the old uncle preaching the benefits of homeopathy in his living room is doing so not because he doesn’t have access to scientific knowledge. That may be true, but what’s more conspicuous by its absence is someone in the same room challenging his views, communicating with him without being intimidating or patronising, and having a discussion with him about what’s right, what’s wrong and the methods we use to tell the difference. Focusing only on making it easier for scientists to become and remain scientists will not take us closer to achieving the outcomes the ‘March for Science’ desires.

Sarukkai echoed this point in a comment to The Print: that scientists who march only for science are not doing anything useful, and that they must march against casteism and sexism as well (and social ills outside their labs). Without real change in these social contexts, it’s going to be near-impossible for those deemed less powerful by structures in place in these contexts to challenge the beliefs of those afforded more social authority. Ultimately, effecting such change is not going to be all about money – just as much as more money alone won’t solve anything, just as much as imploring the government to “fix” all these issues by itself will not work either.

This is where VijayRaghavan’s comments about R&D spending fit in. Before we throw more money in the general direction of supporting R&D, its Augean stables will have to be cleaned out and inefficiencies eliminated. One example, apropos VijayRaghavan’s comment about 70% of funds being gridlocked due to “poor institutional processes”, comes immediately to mind.

Sunil Mukhi, a theoretical physicist, wrote in 2008 that when he had been a member of the faculty at the Tata Institute of Fundamental Research, Mumbai, his station afforded him a variety of privileges even as there was “no clear statement of our responsibility or duty to perform, and no consequences for failing to do so”. While he has since acknowledged a potential flaw in his suggested solution, the fact remains that many researchers often laze in prized research positions at well-funded institutes instead of also having to grapple with the teaching and mentorship load prevalent at state universities and colleges.

Additionally, though most people have directed their ire at the government for underfunding R&D, 55% of our R&D expenditure is from the public kitty. Among the ‘superpowers’, China is a distant second at less than 20%. So the marches for science should also ask the private sector to cough up more.

One for all

When the government pulled the financial carpet out from under the feet of the Council of Scientific and Industrial Research in 2014 and asked its 38 labs to “go fund themselves”, many scientists were aghast that the council was being handicapped even as more money was being funnelled into pseudo-research on cow urine. But there were also many other scientists who said that the CSIR had it coming, that – as a network of labs set up to facilitate applied and translational research – it was bloated, sluggish and ripe for a pruning. Perhaps similar audits, though with ample stakeholder consultations (not the RSS) and without drastic consequences, are due for the national scientific establishment as a whole.

As a corollary, it is also true that every march, protest or agitation undertaken against casteism, sexism, patriarchy, bigotry and zealotry can work in favour of the scientific establishment, since what ‘they’ are fighting against is also what scientists, and science journalists, should be fighting against. Access to bonafide scientific ideas should not come solely through textbooks, news articles and freewheeling chats on Twitter. They should also be accessible through the many day-to-day interactions in which we confront structures of caste and class.

For example, there is no reason the person who cleans your toilet should not also cook your dinner. To institute this dumb restriction is to perpetuate caste/class divisions as well as to reject science in the form of hand-wash fluids. For another, there is no reason an employer shouldn’t let their domestic help use the toilet when they need to. However, the practice of expecting those who work in our homes to use separate toilets or be fired still persists, even in a society as ostensibly post-caste as West Bengal’s, demonstrating “the extent to which employer relations with domestic workers continue to be flavoured by caste” – as well as the extent to which we falsely attribute different human bodies with irrational biological threats.

These problems are also relevant to scientists, and must be solved before we can confront the bigger, and more nebulous, order of scientific temper in the country. However, such problems can’t be fixed by scientists and science alone.

It is worth reiterating that the ‘March for Science’ tomorrow is not a lost cause; far from it, in fact. The demand that 3% of GDP be spent on R&D is entirely valid – but it also needs to be accompanied by structural reforms to be completely meaningful. So the march, in effect, is an opportunity to examine the checks and balances of science’s administration in the country, the place of science in society, and introspect on our responsibility to confront a protean problem and not back down in the face of easy solutions. If the solution was as easy as ramping up spending on R&D and education, the problem would have been solved long ago.

The Wire, 13 April 2018.

There’s something off about a new study that attempts to map the cognitive flexibility of people to their ideological preferences. To quote from the study’s ‘Significance’ section:

We found that individuals with strongly nationalistic attitudes tend to process information in a more categorical manner, even when tested on neutral cognitive tasks that are unrelated to their political beliefs. The relationship between these psychological characteristics and strong nationalistic attitudes was mediated by a tendency to support authoritarian, nationalistic, conservative, and system-justifying ideologies.

The intensity and extent of ideological divisions are being deepened across the world. This study examined over 300 citizens of the UK for “whether strict categorisation of stimuli and rules in objective cognitive tasks would be evident in strongly nationalistic individuals” – a nationalism indicated, for example, by these individuals being pro-Brexit. The results of the study could ostensibly apply to how certain groups around the world think: the extreme right in the US, the neo-Nazis in Germany, the National Front in France and the so-called “bhakts” in India.

These ideological divisions, imagined in the form of political polarisation, are bad enough as it is without people on one side of the aisle being able to accuse those on the other side of having “low cognitive flexibility”. The nuance can be worded as prosaically as the neuroscientists would prefer but this won’t – can’t – stop the less-nationalistic from accusing the more-nationalistic of simply being stupid, now with a purported scientific basis.

This is why I believe something has to be off about the study. The people on the right, as it were on the political spectrum, are not stupid. They’re smart just the way those of us on the left imagine ourselves to be. Now, one defence of the study may be that it attempts to map a hallmark feature of the global political right, sort of a rampant anti-intellectualism and irrationality, to its neurological underpinnings – but nationalism is more than its endorsement of traditions or traditional values.

While the outcomes of many socio-political actions may seem to promote irrational beliefs and practices, these actions are carefully engineered by very smart people and executed to perfection. One example that comes immediately to mind is the Bharatiya Janata Party’s social media strategy. Another is the resounding victories it achieved in the Lok Sabha and Uttar Pradesh elections in 2014 and 2017, respectively.

(Both these enterprises are well-documented in the form of books – this and this, e.g. – and in fact make the less-nationalistic look quite silly for their sluggish group response. Would that say something about “our” cognitive abilities as well?)

Finally, a note about labels. Following astronomy research for half a decade has taught me that when stars explode, there is a tremendous variety of things that happen, such that it’s impossible for a five-century-old human enterprise to possibly identify, label, and categorise all of them within a small, finite group of processes. Similarly, trying to associate the symptoms of one infinite set (human socio-politics) with a finite-but-large set (human neurology) can be fraught with many mistakes.

I was once stupid too, and still am in many ways. One of the instances when I was more stupid than usual was when I wrote an article about the now-infamous BICEP2 ‘discovery’ of evidence of cosmic inflation in 2014. The ‘discovery’ eventually turned out to be a non-discovery because the scientists behind it had acted too soon with their announcement, overlooking a serious gap in their data.

As a science journalist, I’d failed because I hadn’t solicited independent comments for my piece, as a result letting The Hindu (where I worked at the time) publish an eminently wrong article. I will never forget that this happened, if only to remind myself of the importance of soliciting independent comments on all science articles, no matter how mundane the peg.

The BICEP2 instrument studies the cosmic microwave background (CMB) radiation. Some scientists were using BICEP2 to detect the imprint of gravitational waves on the magnetic component of the CMB radiation. Specifically, they were looking for some curling patterns in the magnetic mode associated with a rapid expansion of the universe thought to have happened between 10⁻³⁶ and 10⁻³³ seconds after the universe was born.

This expansion has been called cosmic inflation, and the period in which it happened, the inflationary epoch. Cosmic inflation was a hypothesis that sought to explain why parts of today’s universe seem to have similar physical features despite being separated by billions of lightyears. If cosmic inflation did happen, the explanation would be that, once upon a time, the universe was very small and these distant parts were in fact more closely packed together then.

The first announcement, on March 17, 2014, was marked with a lot of fanfare. It was cosmology’s big day, and news publications around the world covered the announcement. Most of them included comments from scientists not involved in the data-taking, scientists who said something about the results was suspicious. That suspicion snowballed over time into a full-blown rebuttal that, within a few months, torpedoed the original study and forced the authors to apologise.

The problem turned out to be that gravitational waves could cause the curling pattern on the magnetic mode of the CMB – and so could radiation emitted by cosmic dust, as seen by BICEP2. And the BICEP2 data was found to have recorded only the effects of cosmic dust.

In the last four years, I’ve realised how I had acted stupidly and learnt an important lesson the hard way. However, I was still curious why the BICEP2 team had acted stupidly. And though it seemed obvious, I had trouble accepting that the team had behaved the way it had simply because it was so excited, because it wanted to become famous.

On April 19 this year, Nautilus published an essay by Brian Keating, adapted from a book he has written about the BICEP2 fiasco. Keating was one of the leaders of the collaboration behind the announcement, which was made at the Harvard-Smithsonian Centre for Astrophysics (CfA). The essay provides a behind-the-scenes look at how the scientists had missed the cosmic dust signal in their data analysis.

By the end of the essay, Keating appears to try to assuage readers that this was how science worked, that “you put out a result, and other scientists work to test the result”. However, the essay in toto highlights that this is not how science works, and that this image of scientific endeavour is far too idealistic.

For example, a constant undercurrent throughout the enterprise seems to have been a rush to scoop. Keating et al had their eyes on a Nobel Prize, and wanted to be the first group to make the announcement that they’d seen the remains of the universe’s “birth pangs”.

He says this rush is why his team decided to present their BICEP2 results to the press even before the corresponding paper was peer-reviewed and published in a science journal. He writes:

… we feared that sending the paper to a journal would be unfair, giving a particular group – referees and their friends – a head start on proposal submission. My field is so competitive that the only people who weren’t on BICEP2 who could have reviewed the highly technical aspects of the paper were competitors. Our first priority was to make a scientific presentation to communicate our results to all our peers in the cosmology community.

Next, it seems the CfA team had been aware that dust in the Milky Way could play spoilsport to their apparent discovery, so they tried to get data from the team operating the Planck satellite. This satellite measures electromagnetic radiation across a wide swath of the sky, much larger than the BICEP2 survey area, and in a larger range of frequencies as well.

One of these frequencies was 353 GHz, at which Planck was able to study the effect of cosmic dust exclusively. The CfA team needed this data – but despite multiple requests, the Planck team refused to share the data. This is big news to me because I had no idea the CfA and the Planck teams treated each other as competitors! If only they’d worked together, the BICEP2 fiasco might never have happened.

… such a map [of cosmic dust] did exist, one with the exact high-frequency data we needed. There was only one catch: It belonged to our competitor, the Planck satellite. And in early 2014, the Planck team hadn’t yet released their B-mode polarization data. We were scared Planck might not only hold the key to proving our measurement right, but might have already glimpsed the inflationary B-mode signal before we did. … We desperately tried to work with the Planck team, while being careful not to tip them off as to what we’d found … [but they] wouldn’t cooperate. Either they didn’t have the data we wanted, or they did have it and they were going to scoop us. We had to go it alone.

Soon after, Keating and his team found a picture of a PowerPoint slide posted online that appeared to be from a talk given by one of the Planck team members. They decided to use the information presented in the slide, which suggested that BICEP2 had good and legitimate data, even though they weren’t sure if the slide was meant for quantitative analysis.

Thus, March 17 came and went, then June did too, when the CfA team’s paper was published in the journal Physical Review Letters. Then, around November, the Planck team had their paper published. As Keating writes,

With the Planck 353 GHz paper appearance came the beginning of the end of the BICEP2 team’s inflation elation. Although the Planck team was careful to release no data for the Southern Hole, the field where BICEP2 observed—perhaps out of fear we would digitize it—they made a blunt assessment of the potential amount of dust polarization contamination in the Southern Hole, saying it was of “the same magnitude as reported by BICEP2.” This meant dust was as likely a culprit for our B-modes as were inflationary gravitational waves.

The BICEP2 story well elucidates how science really works.

“Scientists are people too” is one way to put it. Another, and possibly better, way is to remember that institutionalised tendencies like torturing the data to yield more papers, conducting research to attract a Nobel Prize and scooping the competition aren’t one-offs, and that it’s foolish to think they wouldn’t percolate through the scientific community to create flawed ambitions.

These are all essential components of how humanity produces its knowledge. In other words, the scientific enterprise isn’t one that’s free of human foibles.

Featured image: The BICEP2 telescope (right) in Antarctica. Credit: Amble/Wikimedia Commons, CC BY-SA 3.0.

A friend of mine got harem pants and was talking about how much more comfortable they were than a lungi in Chennai’s current weather. A lungi is a long cylinder made of cloth (open at both ends of course) commonly worn by men in South India.

Five minutes later, our conversation included this statement:

2-manifolds with the same genus are homeomorphic.

Here’s how we got there, and a little more.

My friend’s a theoretical physicist. He works on string theory, which is a set of mathematical tools physicists use to solve problems about space and time.

To a physicist, a manifold is any surface. There are some specially defined manifolds that physicists use to understand how forces work.

For example, we’ve heard so much talk about Albert Einstein’s general theory of relativity, which describes how gravity works. When working with this theory, physicists assume that gravity is acting on the surface of the spacetime continuum. This surface is in the form of a so-called Lorentzian manifold.

A numerical prefix to the manifold indicates the number of dimensions the surface has.

Say there’s an ant moving around on a sheet of paper. You can describe the ant’s position on the paper using two numbers: its distance from one edge of the paper and its distance from an adjacent edge.

For the ant, the surface it’s on has two dimensions – so it’s called a 2-manifold.

For humans, the surface of Earth is a 2-manifold. Humans can describe any point on Earth’s surface using two numbers: the latitude and the longitude coordinates.
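To make the two-numbers idea concrete, here’s a small Python sketch (the function name `earth_point` is mine, and the Earth is idealised as a perfect sphere) that turns a latitude and longitude into a point in 3D space – two numbers are enough to pin down any location on the sphere’s surface:

```python
from math import cos, sin, radians

def earth_point(lat_deg, lon_deg, radius=1.0):
    """Map (latitude, longitude) to a point on a sphere of the given radius.
    Two coordinates suffice because the sphere's surface is a 2-manifold."""
    lat, lon = radians(lat_deg), radians(lon_deg)
    return (radius * cos(lat) * cos(lon),
            radius * cos(lat) * sin(lon),
            radius * sin(lat))

# The equator at the prime meridian lands on the x-axis:
print(earth_point(0, 0))  # (1.0, 0.0, 0.0)
```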

Let’s take a slightly different shape called the torus.

A torus. Credit: Wikimedia Commons, CC BY-SA 3.0

A torus is a tube connected on itself, with a hole in the middle. Its surface is also a 2-manifold. According to the picture below, you can tell where you are on the torus by specifying the position of the red circle and your position on the red circle.

The surface of a torus. Credit: Wikimedia Commons

Now, let’s stick three toruses together like a fidget-spinner:

A triple torus. Credit: Wolfram Mathworld

Its surface is still a 2-manifold because you still need only two numbers to describe your position on it: the position of a circle moving across the entire triple torus and your position on the circle.

Both a normal torus and a triple torus are 2-manifolds. However, they have an important difference: one has one hole and the other, three. This difference is important to physicists who study manifolds.

The number of holes in an object, as far as the physicist is concerned, is called the genus. The normal torus has genus 1. The triple torus has genus 3. A sphere has genus 0.
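For closed, orientable surfaces like these, the genus is tied to another number, the Euler characteristic χ = V − E + F of any cell decomposition of the surface, by χ = 2 − 2g. A minimal Python sketch (the function name is mine, and the formula assumes the surface is closed and orientable):

```python
def genus_from_euler(vertices, edges, faces):
    """Genus of a closed orientable surface from a cell decomposition,
    using chi = V - E + F = 2 - 2g."""
    chi = vertices - edges + faces
    assert (2 - chi) % 2 == 0, "chi must be even for a closed orientable surface"
    return (2 - chi) // 2

# A cube is a deformed sphere: 8 vertices, 12 edges, 6 faces.
print(genus_from_euler(8, 12, 6))  # 0, i.e. genus 0, like the sphere

# A torus built from a square with opposite edges glued:
# 1 vertex, 2 edges, 1 face.
print(genus_from_euler(1, 2, 1))   # 1
```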

Let’s revisit the statement from above:

2-manifolds with the same genus are homeomorphic.

If two solids are homeomorphic, then one solid can be deformed into the other. One example is a lungi with its two open ends sewn together, which can be deformed into a torus.

So what my friend’s saying with his statement is that if two solids whose surfaces are 2-manifolds also have the same number of holes, then one solid can be deformed into the other.

A famous example of this is the torus and the coffee mug. Both their surfaces are 2-manifolds. Both of them have the same genus, 1. (The coffee mug’s opening at the top is not considered a hole because it is closed at the other end.)

Credit: Wikimedia Commons

This is where the conversation between my friend and myself took an interesting turn.

The reason 2-manifolds with the same genus are homeomorphic is that all of them can be constructed using a combination of objects shaped like a pair of pants.

A pair of pants in topology. Credit: Jean Raimbault/Wikimedia Commons, CC BY-SA 4.0

Mathematicians don’t have a different name for these objects. They are, in fact, called a pair of pants.

If you closed up the waist-rim of the pants and joined the two cuffs together, you’d get a normal torus. If you joined two pairs of pants by their waist-rims and joined the cuffs together at their respective ends, you’d get a double torus. And with four pairs of pants, their remaining rims glued in pairs, you’d get a triple torus.

Some combination of these ‘pair of pants’ objects can be used to yield all the different kinds of 2-manifolds you can think of. So each pair of pants is like a nuclear unit, just like different combinations of protons and neutrons make up the nucleus of every different kind of atom in the world.
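The bookkeeping here can be made precise. A standard fact in topology is that a closed orientable surface of genus g ≥ 2 decomposes into exactly 2g − 2 pairs of pants. A quick sketch, with a function name of my own choosing:

```python
def pants_count(genus):
    """Number of pairs of pants in a pants decomposition of a closed
    orientable surface of genus g >= 2."""
    if genus < 2:
        raise ValueError("a pants decomposition needs genus >= 2")
    return 2 * genus - 2

# Sanity check against the Euler characteristic: each pair of pants
# contributes chi = -1, gluing along circles adds nothing, and the
# closed genus-g surface has chi = 2 - 2g.
for g in range(2, 8):
    assert pants_count(g) * (-1) == 2 - 2 * g

print(pants_count(2))  # 2: the double torus from two pairs of pants
print(pants_count(3))  # 4: the triple torus
```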

At this point, I asked my friend what kind of nuclear units make up 3-manifolds – spaces in which you’d need three numbers to pinpoint your location.

He told me that it was a big unsolved problem in mathematics and physics, that mathematicians and physicists actually didn’t know.

The issue is with knowing how many different kinds of 3-manifolds there are. According to my friend, there could be millions upon millions – and if you came up with a number, someone else would find a different 3-manifold that isn’t included in your set.

But there must be some way, some lead or indication of how we could go about it, I asked.

He said that mathematicians had been able to come up with a partial solution.

In our example, we used the genus as a differentiator. That is, 2-manifolds with different genuses were considered to be different kinds of 2-manifolds.

Instead, he said, mathematicians have used differentiators other than the genus to describe the types of 3-manifolds.

They’ve found that if two 3-manifolds can be described by a fixed group of differentiators, then they may or may not be homeomorphic.

However, if two 3-manifolds can’t be described by the same group of differentiators, then they’re definitely not homeomorphic.

It’s a sort of definition by exclusion, and that’s the best we have.
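That “definition by exclusion” has a simple logical shape, which this toy Python sketch illustrates (the spaces and invariant values here are just labels of my own, not real computations): invariants that differ prove two spaces distinct, while matching invariants prove nothing either way.

```python
def definitely_not_homeomorphic(invariants_a, invariants_b):
    """True if some invariant both spaces share takes different values,
    which proves they are distinct. False means 'no proof either way'."""
    common = invariants_a.keys() & invariants_b.keys()
    return any(invariants_a[k] != invariants_b[k] for k in common)

torus = {"dimension": 2, "genus": 1, "orientable": True}
sphere = {"dimension": 2, "genus": 0, "orientable": True}
mug_surface = {"dimension": 2, "genus": 1, "orientable": True}

print(definitely_not_homeomorphic(torus, sphere))       # True: the genus differs
print(definitely_not_homeomorphic(torus, mug_surface))  # False: could still be the same shape
```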

The Lorentzian manifold I mentioned above – the surface of the spacetime continuum on which gravity is thought to act – has four dimensions. It’s a 4-manifold. We have absolutely no idea how many types of 4-manifolds there are.

As I wonder on that, I’m going to get out my pair of pants, into my lungi and crash for the night. It’s so hot out here…

Since news of the Cambridge Analytica scandal broke last month, many of us have expressed apprehension – often on Facebook itself – that the social networking platform has transformed since its juvenile beginnings into an ugly monster.

Such moral panic is flawed and we ought to know that by now. After all, it’s been 50 years since 2001: A Space Odyssey was released, and 200 since Frankenstein – both cultural assets that have withstood the proverbial test of time only because they managed to strike some deep, mostly unknown chord about the human condition, a note that continues to resonate with the passions of a world that likes to believe it has disrupted the course of history itself.

Gary Greenberg, a mental health professional and author, recently wrote that the similarities between Viktor Frankenstein’s monster and Facebook were unmistakable except on one count: the absence of a conscience was a bug in the monster, and remains a feature in Facebook. As a result, he wrote, “an invention whose genius lies in its programmed inability to sort the true from the false, opinion from fact, evil from good … is bound to be a remorseless, lumbering beast, one that does nothing other than … aggregate and distribute, and then to stand back and collect the fees.”

However, it is 2001‘s HAL 9000 that continues to be an allegory of choice in many ways, not least because it’s an artificial intelligence the likes of which we’re yet to confront in 2018 but have learnt to constantly anticipate. In the film, HAL serves as the onboard computer for an interplanetary spaceship carrying a crew of astronauts to a point near Jupiter, where a mysterious black monolith of alien origin has been spotted. Only HAL knows the real nature of the mission, which in Kafkaesque fashion is never revealed.

Within the logic-rules-all-until-it-doesn’t narrative canon that science fiction writers have abused for decades, HAL is not remarkable. But take him out into space, make sure he knows more than the humans he’s guiding and give him the ability to physically interfere in people’s lives – and you have not a villain waylaid by complicated Boolean algebra but a reflection of human hubris.

2001 was the cosmic extrapolation of Kubrick’s previous production, the madcap romp Dr Strangelove. While the two films differ significantly in the levels of moroseness on display as humankind confronts a threat to its existence, they’re both meditations on how humanity often leads itself towards disaster while believing it’s fixing itself and the world. In fact, in both films, the threat was weapons of mass destruction (WMDs). Kubrick intended for the Star Child in 2001‘s closing scenes to unleash nuclear holocaust on Earth – but he changed his mind later and chose to keep the ending open.

This is where HAL has been able to step in, in our public consciousness, as a caution against our over-optimism towards artificial intelligence and reminding us that WMDs can take different forms. Using the tools and methods of ‘Big Data’ and machine learning, machines have defeated human players at chess and go, solved problems in computer science and helped diagnose some diseases better. There is a long way to go for HAL-like artificial general intelligence, assuming that is even possible.

But in the meantime, we come across examples every week that these machines are nothing like what popular science fiction has taught us to expect. We have found that their algorithms often inherit the biases of their makers, and that their makers often don’t realise this until the issue is called out.

According to (the modified) Tesler’s theorem, “AI is whatever hasn’t been done yet”. When overlaid on optimism of the Silicon Valley variety, AI in our imagination suddenly becomes able to do what we have never been able to ourselves, even as we assume humans will still be in control. We forget that for AI to be truly AI, its intelligence should be indistinguishable from that of a human’s – a.k.a. the Turing test. In this situation, why do we expect AI to behave differently than we do?

We shouldn’t, and this is what HAL teaches us. His iconic descent into madness in 2001 reminds us that AI can go wonderfully right but it’s likelier to go wonderfully wrong if only because of the outcomes that we are not, and have never been, anticipating as a species. In fact, it has been argued that HAL never went mad but only appeared to do so because of the untenability of human expectations.

This is also what makes 2001 all the more memorable: its refusal to abandon the human perspective – noted for its amusing tendency to be tripped up by human will and agency – even as Kubrick and Arthur C. Clarke looked towards the stars for humankind’s salvation.

In the film’s opening scenes, a bunch of apes briefly interacts with a monolith just like the one near Jupiter and quickly develops the ability to use commonplace objects as tools and weapons. The rest is history, so the story suddenly jumps four million years ahead and then 18 months more. As the Tool song goes, “Silly monkeys, give them thumbs, they make a club and beat their brother down.”

In much the same way, HAL recalls the origins of mainstream AI research as it happened in the late 1950s at the Massachusetts Institute of Technology (MIT), Boston. At the time, the linguist and not-yet-activist Noam Chomsky had reimagined the inner workings of the human brain as those of a computer (specifically, as a “Language Acquisition Device”). According to anthropologist Chris Knight, this ‘act’ inspired cognitive scientist Marvin Minsky to wonder if the mind, in the form of software, could be separated from the body, the hardware.

Minsky would later say, “The most important thing about each person is the data, and the programs in the data that are in the brain”. This is chillingly evocative of what Facebook has achieved in 2018: to paraphrase Greenberg, it has enabled data-driven politics by digitising and monetising “a trove of intimate detail about billions of people”.

Minsky founded the AI Lab at MIT in 1959. Less than a decade later, he joined the production team of 2001 as a consultant to design and execute the character called HAL. As much as we’re fond of celebrating the prophetic power of 2001, perhaps the film was able to herald the 21st century as well as it has because we inherited it from many of the men who shaped the 20th, and Kubrick and Clarke simply mapped their visions onto the stars.