A caveat for peer review

Now that researchers are finding more and more holes in the study in The Lancet which claimed that hydroxychloroquine, far from being a saviour of people with COVID-19, actually harms them, I wonder where the people are who have been hollering for preprint servers to be shut down on the grounds that they harm people during a pandemic.

The Lancet study, entitled ‘Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis’, was published online on May 22, 2020. Quoting from the section on its findings:

After controlling for multiple confounding factors (age, sex, race or ethnicity, body-mass index, underlying cardiovascular disease and its risk factors, diabetes, underlying lung disease, smoking, immunosuppressed condition, and baseline disease severity), when compared with mortality in the control group, hydroxychloroquine, hydroxychloroquine with a macrolide, chloroquine, and chloroquine with a macrolide were each independently associated with an increased risk of in-hospital mortality. Compared with the control group, hydroxychloroquine, hydroxychloroquine with a macrolide, chloroquine, and chloroquine with a macrolide were independently associated with an increased risk of de-novo ventricular arrhythmia during hospitalisation.

I assume it was peer-reviewed. According to the journal’s website, researchers can have their submission fast-tracked if it is eligible (emphasis added):

All randomised controlled trials are eligible for Swift+, our fastest route to publication. Our editors will provide a decision within 10 working days; if sent for review this will include full peer review. If accepted, publication online will occur within another 10 working days (10+10). Research papers, which will usually be randomised controlled trials, judged eligible for consideration by the journal’s staff will be peer-reviewed within 72 h and, if accepted, published within 4 weeks of receipt.

The statistician Andrew Gelman featured two critiques on his blog, both by a James Watson, on May 24 and May 25. There are many others, including from other researchers, but these two give a good sense of the extent to which, and the ways in which, the results could be wrong. On May 24:

… seeing such huge effects really suggests that some very big confounders have not been properly adjusted for. What’s interesting is that the New England Journal of Medicine published a very similar study a few weeks ago where they saw no effect on mortality. Guess what, they had much more detailed data on patient severity. One thing that the authors of the Lancet paper didn’t do, which they could have done: If HCQ/CQ is killing people, you would expect a dose (mg/kg) effect. There is very large variation in the doses that the hospitals are giving … . Our group has already shown that in chloroquine self-poisoning, death is highly predictable from dose. No dose effect would suggest it’s mostly confounding. In short, it’s a pretty poor dataset and the results, if interpreted literally, could massively damage ongoing randomised trials of HCQ/CQ.
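
An aside to make Watson’s dose-effect point concrete: the regression he describes is a few lines of code. The sketch below uses Python with statsmodels on entirely simulated data; the dose range, the ‘severity’ variable and every coefficient are my own illustrative assumptions, not figures from the study. It shows how confounding by severity can manufacture an apparent drug effect, and how adjusting for it collapses that effect:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Simulated patients: sicker patients receive higher doses, and sicker
# patients die more often, but the dose itself has no effect on death.
severity = rng.normal(0, 1, n)
dose = 10 + 2 * severity + rng.normal(0, 1, n)   # hypothetical mg/kg doses
p_die = 1 / (1 + np.exp(-(-2.0 + 1.2 * severity)))
df = pd.DataFrame({
    "severity": severity,
    "dose_mg_per_kg": dose,
    "died": rng.binomial(1, p_die),
})

# Naive dose-response model: dose looks "harmful" only because it proxies
# for how sick the patient already was.
naive = smf.logit("died ~ dose_mg_per_kg", data=df).fit(disp=False)

# Adjusting for severity collapses the dose coefficient towards zero.
adjusted = smf.logit("died ~ dose_mg_per_kg + severity", data=df).fit(disp=False)

print("naive dose coefficient:   ", naive.params["dose_mg_per_kg"])
print("adjusted dose coefficient:", adjusted.params["dose_mg_per_kg"])
```

In this simulation the naive dose coefficient comes out positive while the adjusted one sits near zero. Watson’s argument runs the same logic in reverse: if HCQ/CQ were genuinely killing patients, the dose term should survive adjustment, and its absence would point to confounding.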

On May 25:

The study only has four authors, which is weird for a global study in 96,000 patients (and no acknowledgements at the end of the paper). Studies like this in medicine usually would have 50-100 authors (often in some kind of collaborative group). The data come from the “Surgical Outcomes Collaborative”, which is in fact a company. The CEO (Sapan Desai) is the second author. One of the comments on the blog post is “I was surprised to see that the data have not been analysed using a hierarchical model”. But not only do they not use hierarchical modelling, they do not appear to be adjusting by hospital/country, and they give almost no information about the different hospitals: which countries (just continent level), how the treated vs not treated are distributed across hospitals, etc.
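
For context, the hierarchical model the commenter asks about is not exotic: a random intercept per hospital is a few lines in standard statistical software. Here is a rough sketch, again in Python with statsmodels and simulated data; the hospital structure, the zero true treatment effect and the linear-probability simplification are all my assumptions for illustration, not anything from the paper:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_hospitals, per_hospital = 30, 200

rows = []
for h in range(n_hospitals):
    # Each hospital has its own baseline mortality, and hospitals with
    # sicker patients also prescribe the drug more often (the confounder).
    base_mortality = float(np.clip(rng.normal(0.10, 0.04), 0.01, 0.5))
    treated_share = float(np.clip(0.5 + 4 * (base_mortality - 0.10), 0.05, 0.95))
    for _ in range(per_hospital):
        treated = rng.random() < treated_share
        died = rng.random() < base_mortality   # true treatment effect is zero
        rows.append({"hospital": f"H{h:02d}",
                     "treated": int(treated),
                     "died": int(died)})
df = pd.DataFrame(rows)

# Pooled model that ignores hospital: between-hospital confounding shows up
# as an apparent "treatment effect" on mortality.
pooled = smf.ols("died ~ treated", data=df).fit()

# Hierarchical alternative: a random intercept per hospital, so comparisons
# are made largely within hospitals and the spurious effect shrinks.
hier = smf.mixedlm("died ~ treated", data=df, groups=df["hospital"]).fit()

print("pooled estimate:      ", pooled.params["treated"])
print("random-intercept est.:", hier.fe_params["treated"])
```

Because sicker hospital populations in this simulation both receive the drug more often and die more often, the pooled model reports a spurious ‘treatment effect’, while the random-intercept model shrinks it towards zero. That is the kind of adjustment Watson says the paper never made.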

(Gelman notes in a postscript that “we know from experience that The Lancet can make mistakes. Peer review is nothing at all compared to open review.” So I’m confident the study’s paper was peer-reviewed before it was published.)

Perhaps it’s time we attached a caveat to claims drawn from peer-reviewed papers: that “the results have been peer-reviewed, but that doesn’t mean they’re right”, just as journalists are already expected to note that “preprint papers haven’t been peer-reviewed yet”.