Of all the scientific journals in the wild, there are a few I keep a closer eye on: they publish interesting results but, more importantly, they have been forward-thinking on matters of scientific publishing, and they have also displayed a tendency to think out loud (through blog posts, say) and to actively consider public feedback. Reading what they publish in these posts, and following the discussions that envelop them, has given me many useful insights into how scientific publishing works and, perhaps more importantly, how the perceptions surrounding this enterprise are shaped and play out.
One such journal is eLife. All their papers are open access, and they also publish the authors’ notes and the reviewers’ comments alongside each paper. They also have a lively ‘magazine’ section in which they publish articles and essays by working scientists – especially younger ones – about the extended social environments in which knowledge-work happens. Now, for some reason, I’d cast PeerJ in a similarly progressive light, even though I hadn’t visited their website in a long time. But on August 16, PeerJ published the following tweet:
It struck me as a weird decision (not that anyone cares). Since the article explaining the journal’s decision appears to be available under a Creative Commons Attribution license, I’m reproducing it here in full so that I can annotate my way through it.
Since our launch, PeerJ has worked towards the goal of publishing all “Sound Science”, as cost effectively as possible, for the benefit of the scientific community and society. As a result we have, until now, evaluated articles based only on an objective determination of scientific and methodological soundness, not on subjective determinations of impact, novelty or interest.
At the same time, at the core of our mission has been a promise to give researchers more influence over the publishing process and to listen to community feedback over how peer review should work and how research should be assessed.
Great.
In recent months we have been thinking long and hard about feedback, from both our Editorial Board and Reviewers, that certain articles should no longer be considered as valid candidates for peer review or formal publication: that whilst the science they present may be “sound”, it is not of enough value to either the scientific record, the scientific community, or society, to justify being peer-reviewed or be considered for publication in a peer-reviewed journal. Our Editorial Board Members have asked us that we do our best to identify such submissions before they enter peer review.
This is the confusing part. To the uninitiated: in one common model of scientific publishing, scientists write up a paper and submit it to a journal for consideration. An editor, or a group of editors, at the journal checks the paper and then invites a group of independent experts on the same topic to review it. These experts are expected to provide comments to help the journal decide whether it should publish the paper and, if yes, how the paper can be improved. Note that they are usually not paid for their work or time.
Now, if PeerJ’s usual reviewers are unhappy with how many papers the journal’s asking them to review, how does it make sense to impose a new, arbitrary and frankly counterproductive test of “value” on submissions instead of increasing the number of reviewers the journal works with?
I find the journal’s decision troublesome because it leaves out some important context – context that includes borderline-unethical practices by some other journals, practices that have only undermined the integrity and usefulness of the scientific literature. For example, the “high impact factor” journal Nature has asked its reviewers in the past to prioritise sensational results over unremarkable ones, overlooking the fact that sensational results are also likelier to be wrong. For another example, the concept of pre-registration has become more popular recently simply because most journals used to refuse (and many still refuse) to publish negative results. That is, if a group of scientists set out to check if something was true – and it’d be amazing if it was true – and found that it was false instead, they’d have a tough time finding a journal willing to publish their paper.
And third, preprints have become an acceptable way of publishing research only in the last few years, and even then only in a few branches of science (especially physics). Most grant-giving and research institutions still prefer papers published in journals to papers uploaded on preprint repositories, not to mention that the dominant research culture in many countries – including India – still favours arbitrarily defined “prestigious journals” over others when it comes to picking scientists for promotions, etc.
For these reasons, any decision by a journal that says sound science and methodological rigour alone won’t suffice to ‘admit’ a paper into its pages risks reinforcing – directly or indirectly – a bias in the scientific record that many scientists are working hard to move away from. For example, if PeerJ, in order to ease the burden on its reviewers, rejects a solid paper, so to speak, because it ‘only’ confirms a previous discovery, improves its accuracy, etc., and doesn’t fill a knowledge gap per se, the scientific record still stands to lose an important submission. (It pays to review journals’ decisions assuming that each journal is the only one around – à la the categorical imperative – and that other journals don’t exist.)
So what are PeerJ’s new criteria for rejecting papers?
As a result, we have been working with key stakeholders to develop new ways to evaluate submissions and are introducing new pre-review evaluation criteria, which we will initially apply to papers submitted to our new Medical Sections, followed soon after by all subject areas. These evaluation criteria will define clearer standards for the requirements of certain types of articles in those areas. For example, bioinformatic analyses of already published data sets will need to meet more stringent reporting and data analysis requirements, and will need to clearly demonstrate that they are addressing a meaningful knowledge gap in the literature.
We don’t know yet, it seems.
At some level, of course, this means that PeerJ is moving away from the concept of peer reviewing all sound science. To be absolutely clear, this does not mean we have an intention of becoming a highly-selective “glamour” journal publisher that publishes only the most novel breakthroughs. It also does not mean that we will stop publishing negative or null results. However, the feedback we have received is that the definition of what constitutes a valid candidate for publication needs to evolve.
To be honest, this is a laughable position. The journal admits in the first sentence of this paragraph that no matter where it goes from here, it will only recede from an ideal position. In the next sentence it denies (vehemently, considering that in the article on its website this sentence was in bold) that its decision will transform it into a “glamour” journal – like Nature, Science, NEJM, etc. have been – and, in the third sentence, that it will stop publishing “negative or null results”. Now I’m even more curious what these heuristics could be, which must a) require submissions to present “sound science”, b) require them to “address a meaningful knowledge gap”, and c) not exclude negative/null results. It’s possible to see some overlap between these requirements, which some papers will occupy – but it’s also possible to see many papers that won’t tick all three boxes yet still deserve to be published. To echo PeerJ itself, being a “glamour” journal is only one way to be bad.
We are being influenced by the researchers who peer review our research articles. We have heard from so many of our editorial board members and reviewers that they feel swamped by peer review requests and that they – and the system more widely – are close to breaking point. We most regularly hear this frustration when papers that they are reviewing do not, in their expert opinion, make a meaningful contribution to the record and are destined to be rejected; and should, in their view, have been filtered out much sooner in the process.
If you ask me (as an editor), the first sentence’s syntax seems to suggest PeerJ is being forced by its reviewers, not influenced. More importantly, I haven’t seen these bespoke problematic papers that are “sound” yet don’t make a meaningful contribution. An expert’s opinion that a paper on some topic should be rejected (even though, again, it’s “sound science”) could be rooted either in an “arrogant gatekeeper” attitude or in valid reasons, and PeerJ’s rules should be good enough to differentiate between the two without simultaneously allowing ‘bad reviewers’ to over-“influence” the selection process.
More broadly, I’m a science journalist looking into science from the outside, seeing a colossal knowledge-producing machine situated on the same continuum as the one I occupy. If I receive too many submissions at The Wire Science, I don’t make presumptuous comments about what I think should and shouldn’t belong in the public domain. Instead, first, I pitch my boss on hiring one more person for my team and, second, I’m honest with each submission’s author about why I’m rejecting it: “I’m sorry, I’m short on time.”
Such submissions, in turn, impact the peer review of articles that do make a very significant contribution to the literature, research and society – the congestion of the peer review process can mean assigning editors and finding peer reviewers takes more time, potentially delaying important additions to the scientific record.
Gatekeeping by another name?
Furthermore, because it can be difficult and in some cases impossible to assign an Academic Editor and/or reviewers, authors can be faced with frustratingly long waits only to receive the bad news that their article has been rejected or, in the worst cases, that we were unable to peer review their paper. We believe that by listening to this feedback from our communities and removing some of the congestion from the peer review process, we will provide a better, more efficient, experience for everyone.
Ultimately, it comes down to the rules by which PeerJ’s editorial board is going to decide which papers are ‘worth it’ and which aren’t. And admittedly, without knowing these rules, it’s hard to judge PeerJ – except on one count: “sound science” is already a good enough rule by which to determine the quality of a scientist’s work. To say it doesn’t suffice – for reasons unrelated to the science itself, and given the publishing apparatus’s dangerous tendency to gatekeep based on factors that have little to do with science – sounds precarious at the least.