You want to submit a paper to a journal for consideration. It takes, say, 3 months on average for a paper to go through peer-review, so if your paper gets rejected, you’ve lost 3 months of your time. Wouldn’t it be better to hedge your bets? Submit the paper to multiple journals all at once. Then, hopefully, one will accept the paper.
Duplicate submission is a very simple form of misconduct. It’s where an author of a research manuscript submits that manuscript for peer-review to 2 or more journals at the same time. It sounds almost sensible — wouldn’t it be a more efficient way to get papers through peer-review quickly?
What’s wrong with this hedge?
I’ll give you the list!
- Think about the costs. Make a toy model for our journal: say $1,000 in costs for each peer review, an acceptance rate of 50%, and a payment of $3,000 for each accepted article. Under this model, every published paper carries $2,000 in costs (2 papers reviewed per acceptance) and yields $1,000 in profit. Now assume papers are duplicate-submitted to just 1 other journal. All things being equal, half the papers we would have accepted get accepted elsewhere first, so our effective acceptance rate halves again, to 25%. Now we need to process 4 papers, at a cost of $4,000, for every $3,000 we make. The reality is more complex than this: different journals have different costs, acceptance rates and charges, so duplicate submission might not always cause a loss. But it clearly increases costs, and that could do very significant harm.
- Referee-time. The problem here is obvious: each paper goes to 2 or more referees, so duplicating that process starts to absorb a lot of the available referee-time. If everyone starts doing it, we eventually reach a point where the resources don’t exist to deal with new submissions. (And there are already reports of journals struggling to find peer-reviewers for submitted manuscripts.)
- Duplicate submission is used routinely by papermills to get fake papers through peer-review. It’s actually a requirement of their business model. They need to deliver published papers to authors quickly. If they have to submit to multiple journals consecutively, they can’t do that.
- Duplicate submission sometimes leads to the same paper being published in 2 different places. That’s bad. The author will almost certainly end up with 1 or both papers retracted if that happens.
- Duplicate submission is a breach of contract with the publisher. It’s not a breach of some clause buried deep in a list of terms & conditions that no one reads: it’s usually an explicit check-box on the submission form, so the author clearly knows it isn’t considered ok. We therefore know that duplicate submission is dishonest behaviour, not simply a naïve mistake.
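The toy cost model in the first bullet can be sketched in a few lines of Python. All of the numbers here are the illustrative assumptions from that bullet (review cost, acceptance rate, article charge), not real journal figures, and the halving of the acceptance rate per duplicate copy is the same simplifying assumption made above.

```python
# Toy model of journal economics under duplicate submission.
# All figures are illustrative assumptions, not real data.

REVIEW_COST = 1_000      # cost to the journal of peer-reviewing one paper ($)
ACCEPTANCE_RATE = 0.5    # fraction of reviewed papers accepted
ARTICLE_CHARGE = 3_000   # payment received per accepted article ($)

def profit_per_published_paper(duplicate_copies: int = 0) -> float:
    """Profit per published paper, assuming each duplicate copy halves the
    effective acceptance rate (the paper may be accepted elsewhere first)."""
    effective_rate = ACCEPTANCE_RATE / (2 ** duplicate_copies)
    papers_reviewed = 1 / effective_rate   # papers processed per acceptance
    return ARTICLE_CHARGE - papers_reviewed * REVIEW_COST

print(profit_per_published_paper(0))  # no duplication: 2 papers reviewed, $1,000 profit
print(profit_per_published_paper(1))  # 1 duplicate copy: 4 papers reviewed, $1,000 loss
```

Under these assumptions, one round of duplication is enough to turn a $1,000 profit per published paper into a $1,000 loss.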
So, let’s think about the value of cracking-down on duplicate submission:
- It would increase the profitability and sustainability of journals.
- It would prevent low-quality work being published.
- It would allow identification of a large swathe of the fake research published by papermills. That would help to characterise the problem and determine the best reaction.
But how do we do that? It turns out it’s easy.
As I said last week:

> I’ve got a few plans for blog posts about papermills. I’m being very careful about what I post — I think it’s important to avoid posting anything that could be used by papermills to avoid getting caught.
>
> However, if I think of something valuable to share, I’ll do that.
- If you’re a publisher reading this, tell your friends.
- Also… find the person in your organisation who uploads your data to Crossref. Find them and tell them to follow this blog.
- You might also particularly recommend this blog to your data scientists, developers, engineers and any other technical folks with an interest in research integrity. There will be things for them here.