Doing less

“The case for doing less in our peer reviews” by Kate Derickson is an interesting essay on scientific peer review.

While it is a luxury to receive thorough and carefully thought-out comments from a colleague, the nature of blind peer review means that the author cannot know who is making suggestions […] And yet, the author is often relying on the paper being published for professional security or advancement. This puts the author in the position of being obligated to rework their arguments according to constructive suggestions made by an anonymous person whose credibility or self-interest they cannot assess. Moreover, while reviewers often identify similar issues in a paper, they often propose a variety of different approaches to addressing them, many of which work at cross purposes. Authors can be overwhelmed by the range of suggestions, feeling obligated to split the difference and cover all the bases in case the paper goes back to all three reviewers. While papers generally get better through the review process, authors often have a difficult time navigating contradictory reviewer suggestions.

But wait, there is also a point where I disagree (in light of the recent eLife decision).

we think carefully about what we decide to send out for peer review, in order to enable us to curate a table of contents that we think is at the cutting edge of our disciplines and of interest to our readership.

Creating the most cited journal? Curating the cutting edge? This is a pre-internet, 1980s attitude of a journal editor trying to gain higher citation impact in the competition with other journals. It simply devalues everything that Derickson does not understand or does not want to promote.

So my initial enthusiasm for the essay finally dies with “the biggest scientific experiment”.

Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure” […]

Yes, this is about the end of scholarly peer review, as peer review fails to catch major errors in about one third of all papers.

In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward.

The focus is on “cutting edge” and “interest”, i.e. impact points, but neither on ingenious minds nor on brilliant discoveries.