Is science getting slower?

Adam Day
Sep 18, 2023

TL;DR: peer-review times appear to have been growing for a long time. The effect of COVID lockdowns on peer-review times is surprising.

A few months ago, I was invited to referee a research paper. So, I guess the editor thought that I was one of the 2 best people in the entire world to review this thing. That’s how it works, right?

Flattered, I flagged the editor’s email to signal its importance and I got straight to work!

Stopwatch (image CC-0 by Jean Beaufort)

A few days before the deadline (and after expending considerable time on the task), I received another email from the editor which I opened excitedly. The email said that he had now received THREE referee reports on the manuscript and no longer needed mine…

So, in this case, there must have been at least 4 referees (including me) considering the manuscript at the same time. (It’s anyone’s guess how many invitations were sent out.)

I dragged the editor’s emails over to my junk folder.

It’s obvious that journals can reduce processing times by inviting more reviewers than they need. But you only need to think about the consequences of doing this to see that it leads to rapid exhaustion of the journal’s referee pool and, therefore, causes longer processing times on average. Referee time is a finite resource!

Anecdotally, we hear that researchers (i.e. referees) are overstretched with increasing admin work. We also see a rapidly growing number of articles written and submitted for review each year. So, if referee time is indeed a finite resource, we should expect to see peer-review times rising.

In my work at Clear Skies, I’ve heard from publishers that fake papers from papermills often get through because the publisher can’t find reviewers for them. All the best reviewers are busy, so editors fall back on reviewers who aren’t well suited to the task, or even on the authors’ own recommended referees, and we’ve seen how that leads to manipulation of the peer-review process. I’ve said before that peer-review is actually a very powerful tool for dealing with papermills. So, in these cases, simply having the right referees available could have prevented publication of these papers.

As far as I can see, there has never been a published guideline on how many referees journals should contact concurrently. There are, of course, times when more than 2 opinions are required. So perhaps we need such guidelines. Introducing them is a simple measure that could do a lot of good.

And the data bears this out

Submission dates are available in most article XML and are also available in Crossref data. The data isn’t perfect. A lot of it is missing and it isn’t created in a consistent way. (Reminder: there are some very good reasons for publishers to deposit this data!) However, there’s enough that I think we can use it to get a good idea of what is going on.
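To sketch what this kind of measurement looks like in practice: in JATS article XML, submission and acceptance dates typically live in the `<history>` element, and the processing time is just the difference between them. The snippet below is a minimal, illustrative example (the XML is invented for demonstration; it is not from any real article, and this is not necessarily the pipeline used for the chart here):

```python
import xml.etree.ElementTree as ET
from datetime import date

# Minimal JATS-style <history> block, invented for illustration.
JATS = """
<article>
  <front><article-meta><history>
    <date date-type="received">
      <day>03</day><month>01</month><year>2022</year>
    </date>
    <date date-type="accepted">
      <day>20</day><month>06</month><year>2022</year>
    </date>
  </history></article-meta></front>
</article>
"""

def history_date(root, date_type):
    """Extract a <date> of the given type from the JATS <history> block."""
    el = root.find(f".//history/date[@date-type='{date_type}']")
    if el is None:
        return None  # many real records are missing one or both dates
    return date(int(el.findtext("year")),
                int(el.findtext("month")),
                int(el.findtext("day")))

root = ET.fromstring(JATS)
received = history_date(root, "received")
accepted = history_date(root, "accepted")
if received and accepted:
    processing_days = (accepted - received).days  # 168 days here
    print(processing_days)
```

In real data, the `None` branch matters: as noted above, a lot of records are missing one or both dates, so any aggregate has to be computed over the subset where both are present.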

Here are those processing times for the available data.

What are we seeing here?

First of all, it appears that processing times are rising. There are a lot of possible explanations for this, but lack of available referee-time seems like a strong candidate (along with all of its potential causes).

But do you notice something else?

During the COVID pandemic, submissions to all publishers rose sharply.

So, peer-review was already stretched, submissions were up, and scientists were all working hard trying to do whatever they could to help deal with the pandemic. So peer-review times went up, right?


They went down!

They went down in 2020, in step with the first period of global lockdowns, and then down again in 2021, in step with the second round of major global lockdowns. If you break the data down by publisher, you see the same pattern in almost every publisher’s data.**

Why did that happen? I can think of 2 possible explanations:

  1. Overtime. Referees and editors had little else to do in the evenings and at weekends during lockdowns, so they did peer review.
  2. Working from home more generally. Fewer meetings, fewer distractions, and a preferred working environment might all help a lot with productivity.

So, in summary, the data seems to support the idea that referee-time is indeed a scarce resource and that, when there is more of it, processing times fall.

In this climate, anything we can do to reduce the burden of peer-review on reviewers would be a good thing.


** It’s worth adding that, if you break this data down by publisher, you will see various different patterns for different organisations.

At this level, the data should be taken with a big pinch of salt.

  • Publishers may record dates in different ways. For example, one publisher might log the date a manuscript was first submitted, while another logs the date it was re-submitted after rejection or revision. The latter approach leads to much shorter recorded processing times. So, if 2 publishers differ in how they record dates, there could be a big disparity in their figures.
  • Some publishers publish large amounts of externally reviewed content like proceedings or special issues. Those will have abnormally short processing times and will skew the data for an individual publisher.

So, in short, we probably can’t use this to compare publishers accurately. But it certainly is an interesting thing to do despite the aforementioned salinity.



Adam Day

Creator of Clear Skies, the Papermill Alarm and other tools #python #machinelearning #ai #researchintegrity