This is another Ranga Dias and superconductivity post, so please avert your gaze if you’re tired of it already.
According to a September 27 report in Science, the journal Nature plans to retract the latest Dias et al. paper, published in March 2023, claiming to have found evidence of near-room-temperature superconductivity in an unusual material, nitrogen-doped lutetium hydride (N-LuH). The heart of the matter seems to be, per Science, a plot showing a drop in N-LuH’s electrical resistance below a particular temperature – a famous sign of superconductivity.
Dias (University of Rochester) and Ashkan Salamat (University of Nevada, Las Vegas), the other lead investigator in the study, measured the resistance in a noisy setting and then subtracted the noise – or what they claimed was the noise. The problem, apparently, is that the subtracted plot in the published paper differs from the plot assembled from the raw data Dias and Salamat submitted to Nature; the latter doesn’t show the resistance dropping to zero. That is, together with the noise, the paper’s authors subtracted some other information as well, and whatever was left behind suggested N-LuH had become superconducting.
A little more than a month ago, Physical Review Letters officially retracted another paper from a study led by Dias and Salamat, which it had published last year – and notably after a similar dispute (on both occasions, Dias opposed the retraction). The narrative was more dramatic then, with Physical Review Letters accusing Salamat of obstructing its investigation by passing off some other data as the raw data for its independent probe.
Then again, even before Science‘s report, other scientists in the field had said they weren’t bothering to replicate the N-LuH results because they had already tried, in vain, to replicate Dias’s previous work.
Now, in the last year alone, three of Dias’s superconductivity-related papers have been retracted. But as on previous occasions, the new report also raises questions about Nature‘s pre-publication peer-review process. To quote Science:
In response to [James Hamlin and Brad Ramshaw’s critique of the subtracted plot], Nature initiated a post-publication review process, soliciting feedback from four independent experts. In documents obtained by Science, all four referees expressed strong concerns about the credibility of the data. ‘I fail to understand why the authors … are not willing or able to provide clear and timely responses,’ wrote one of the anonymous referees. ‘Without such responses the credibility of the published results are in question.’ A second referee went further, writing: ‘I strongly recommend that the article by R. Dias and A. Salamat be retracted.’
What was the difference between this review process and the one that happened before the paper was published, in which Nature‘s editors would have written to independent experts asking them for their opinions on the submitted manuscript? Why didn’t they catch the problem with the electrical resistance plot?
One possible explanation is a sampling problem. When I write an article as a science journalist, the views it expresses are a function of which scientists I have sampled from within the scientific community. To capture the consensus view, I need to sample a sufficiently large number of scientists (or a small number of representative ones, such as those I know have their finger on the pulse of the community). Otherwise, there’s a nontrivial risk that some view in my article will be over- or under-represented.
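This sampling risk is easy to quantify with a quick simulation. The numbers below are hypothetical – assume a community in which 70% of scientists hold one view, and ask how often a random sample of a given size reproduces that majority:

```python
import random

random.seed(42)

def sample_shows_majority(population_share, sample_size, trials=10_000):
    """Estimate how often a random sample of scientists yields a majority
    that matches the community's actual majority view."""
    matches = 0
    for _ in range(trials):
        # Each sampled scientist holds the majority view with the given probability.
        votes = sum(random.random() < population_share for _ in range(sample_size))
        if votes > sample_size / 2:
            matches += 1
    return matches / trials

# Hypothetical: 70% of the community holds the majority view.
for n in (3, 5, 15, 45):
    print(f"sample of {n}: majority reproduced "
          f"{sample_shows_majority(0.7, n):.1%} of the time")
```

With a sample of three, the simulated majority misrepresents the community roughly a fifth of the time; with a few dozen, almost never. The same logic applies whether the sampler is a journalist or a journal’s editor.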
Similarly, during its pre-publication peer-review process, did Nature not sample the right set of reviewers? I’m unable to think of other explanations because the sampling problem accounts for many alternatives. Nor did Hamlin and Ramshaw necessarily have access to more data than Dias et al. submitted to Nature: their criticism emerged as early as May 2023 and was based on the published paper. Nature also hasn’t disclosed the pre-publication reviewers’ reports or explained whether its sampling process differed between the pre- and post-publication phases.
So, short of a good explanation, as much as we have a scientist who has seemingly been crying wolf about room-temperature superconductivity, we also have a journal whose peer-review process produced two different results on two separate occasions. Unless it can clarify why this isn’t so, Nature also shares the blame for the paper’s fate.