How small universities can game rankings

The Wire
June 28, 2015

Gandhi Bhawan at Panjab University. Credit: Wikimedia Commons

A recent survey by the influential science journal Nature ranked Panjab University first in India, with the Tata Institute of Fundamental Research a close second, based on the citations earned by papers from each institution. Panjab University is a public university, and its pride of place in the rankings raised many eyebrows. Its contributions to the wider scientific community, as well as to government projects, are less well known than those of, say, the Indian Institute of Science, which boasts a far broader and more visible body of work yet was ranked a few rungs below.

A good placement in these rankings draws favourable attention to a university, attracting researchers and students and improving its chances of raising funds. At the same time, reducing universities to a single number carries its own dangers – much as the impact factor has been derided for its inability to account for the qualitative contributions of a researcher’s work. P. Sriram, the dean of administration at IIT-Madras, suggests that universities could be biasing rankings in their favour through technical loopholes. They would still be doing good work, he says, but the methods of organizations like Nature would remain blind to the resulting inequity: other universities end up farther down the list than they deserve to be.

For example, Panjab University has a strong physics department that has been associated with the Large Hadron Collider experiments in Europe. In scientific publishing, these experiments are known for listing every member of the collaboration as an author on every paper based on their results. In May 2015, for instance, the ATLAS and CMS collaborations published a 33-page article in the journal Physical Review Letters with 5,154 authors – a world record. If a team from University X was part of this group, the paper – and the citations it accumulates – will count toward X’s research output that year.
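
How the databases attribute that credit matters. A plausible assumption, and the one Prof. Sriram’s concern implies, is ‘full counting’: every institution on the author list is credited with the whole paper and all of its citations. The Python sketch below uses invented numbers (both the institution and citation counts are hypothetical) to contrast full counting with ‘fractional counting’, where credit is divided among the participating institutions.

```python
# Invented numbers illustrating full vs. fractional counting of one
# mega-author paper; only the mechanism, not the figures, is from the article.

institutions_on_paper = 200  # hypothetical: roughly the scale of a large LHC collaboration
citations = 500              # hypothetical citations collected by the single paper

# Full counting: every institution is credited with the whole paper and all
# its citations, so one paper yields 200 * 500 = 100,000 institution-citations.
full_credit = citations

# Fractional counting: credit is divided among the institutions,
# so each receives 500 / 200 = 2.5 citations' worth.
fractional_credit = citations / institutions_on_paper

print(f"Full counting:       {full_credit} citations per institution")
print(f"Fractional counting: {fractional_credit} citations per institution")
```

Under full counting, a single heavily cited collaboration paper can dominate the bibliometric profile of every member institution at once – which is exactly the distortion at issue here.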

“Looking at the example of Panjab University, the contributors to the ATLAS collaboration are unquestionably respected researchers,” Prof. Sriram said, “but the issue is whether their contributions should be counted against the ATLAS collaboration or Panjab University.”

He recommended that these ‘mega-author’ papers be considered more carefully when assembling university rankings, especially when a university produces far fewer papers outside such collaborations. “If the volume of publications and citations outside of the collaboration is not significant, what is reported as the institutional bibliometric information is actually the bibliometric of the collaborations,” he clarified.

Indeed, rankings like Nature’s or the more widely used Times Higher Education list have become susceptible to being gamed. It isn’t very hard for a university to become part of a sufficient number of international collaborations “as the scientific requirements are fairly modest”. Its ranking, Prof. Sriram explained, would then give the impression of it being far ahead of even the Massachusetts Institute of Technology when in reality it would be far behind – the result of conflating citation counts with research quality.

A telling sign of this emerged in the Times Higher Education report published in June 2015. It showed that Panjab University had the lowest research score, 10.5, of all the Asian universities surveyed; overall, the university still came out on top with a total score of 84.4, thanks to better numbers for teaching, industrial income and international outlook. Prof. Sriram’s calculations point the same way. To quote from a critical letter (paywall) he sent to Nature: “about 20% of the publications attributed to the highly rated Panjab University have long author lists and contribute almost two-thirds of the citations. Excluding these papers reduces Panjab University’s citation impact ratio from 1.4 to 0.7, causing it to drop out of [Nature’s] top ten”.
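
The arithmetic in the letter is worth making explicit. The sketch below uses hypothetical paper and citation counts; since the published figures are rounded, they cannot all be matched exactly, so the numbers are chosen to reproduce the 1.4 and 0.7 endpoints, with the mega-author papers’ share of citations landing near the quoted ‘almost two-thirds’.

```python
# Hypothetical reconstruction of the letter's arithmetic. The absolute
# numbers are invented; only the proportions (~20% of papers supplying
# a majority of citations) follow Prof. Sriram's rounded figures.

BASELINE = 1.0  # assumed world-average citations per paper, used to normalise

total_papers = 2000      # hypothetical output over 2010-2014
total_citations = 2800   # hypothetical; chosen to give the reported 1.4 ratio

mega_papers = 400        # ~20% of publications carry long author lists...
mega_citations = 1680    # ...and supply ~60% of all citations here

def impact_ratio(papers, citations):
    """Citations per paper, normalised by the assumed world average."""
    return (citations / papers) / BASELINE

print(f"With mega-author papers:    {impact_ratio(total_papers, total_citations):.1f}")    # 1.4
print(f"Without mega-author papers: "
      f"{impact_ratio(total_papers - mega_papers, total_citations - mega_citations):.1f}") # 0.7
```

Because the mega-author papers hold a disproportionate share of the citations, removing one-fifth of the papers halves the citation rate of everything that remains.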

Even so, the jostling for a good place in the rankings is only a symptom of a larger problem: how do you evaluate a country’s research output? The Nature ranking was based on citations data in the SCOPUS database, filtered to Indian institutions that had produced at least 2,000 papers between 2010 and 2014. The Times Higher Education rankings are based on the Thomson Reuters Web of Science database. For Prof. Sriram, neither is able to account for the contexts in which the universities function.

“If a common man asks me what I have done in return for his tax money, I can tell him about the work I’ve done for the government,” Prof. Sriram said, adding that, in the eyes of his funders, such services should place him and his institution above those doing less or no such work. In fact, the Ministry of Human Resource Development has set up a small committee headed by Bhaskar Ramamurthi, Director of IIT-Madras, to develop an India-specific ranking system for just this purpose.