Tag Archives: peer_review

Peer Review Lottery

From a recent call for papers for a conference in my mailbox (July 17th, Orlando, Florida, KGCM 2012):

Richard Smith also affirmed that regarding peer review there is “more evidence of harm than benefit…[and] Studies so far have shown that it is slow, expensive, ineffective, something of a lottery, prone to bias and abuse, and hopeless at spotting errors and fraud.”

Smith, R. (2006), “The trouble with medical journals,” Journal of the Royal Society of Medicine, Vol. 99, March 2006, p. 116 (accessed at http://jrsm.rsmjournals.com/content/99/3/115.full.pdf)

Original research in blogs

It does not seem very unusual to have original research in blog posts. Evolgen is doing just that, currently publishing a series exploring the evolution of a duplicated gene in the genus Drosophila. So finally science is more than knowing which journal has a high impact factor – peer review may be replaced by the comments below the post.

Less is more

–Day 3 of Just Science Week–

Peer review certainly plays a major role in assuring the quality of science. There are many positive aspects of peer review (plus a few disadvantages, like favoring mainstream work). Systematic research on peer review, however, was largely absent until about two decades ago; after five international conferences on peer review, there is now also the WAME association of journal editors. Over the years, I have experienced the “cumulative wisdom” thrown at my own papers and, of course, developed my own style when doing reviews. Last week PLoS Medicine published an interesting study on what makes a good peer reviewer:

These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal’s editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.

The first finding may be unimportant for non-medics, but the second may apply to a larger audience. What I fear – and this is usually not mentioned in the current discussion – is that the peer review system is slowly suffocating. The willingness to do this (unpaid and extra) work is going down as papers (at least in my field) are produced more and more at an industrial mass-production level. I am getting a review request nearly every second day, while I need between 30 minutes and 3 hours per paper. So, less is more.
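The workload implied by those figures is easy to make concrete. A minimal back-of-the-envelope sketch, assuming one request every second day and that every request were accepted (both hypothetical extrapolations, not reported data):

```python
# Hypothetical reviewing workload implied by "a request every second day"
# at 30 minutes to 3 hours per paper, if every request were accepted.
requests_per_year = 365 // 2      # roughly one request every second day
hours_min, hours_max = 0.5, 3.0   # time spent per review, in hours

print(requests_per_year)                  # -> 182 requests per year
print(requests_per_year * hours_min)      # -> 91.0 hours per year
print(requests_per_year * hours_max)      # -> 546.0 hours per year
```

Even the lower bound is more than two full working weeks of unpaid labor per year, which is why declining most requests ("less is more") is the only sustainable response.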


For a follow-up, go to sciencesque, a scenario of how science in the post-review phase will work.

Open peer review failed

It was an interesting experiment that started on June 1 in the Nature office: a first trial of open peer review. Of the 10,000 papers received every year, 6,000 are immediately rejected and about 700 are eventually published after peer review. The result of the trial, however, is disappointing:

We sent out a total of 1,369 papers for review during the trial period. The authors of 71 (or 5%) of these agreed to their papers being displayed for open comment. Of the displayed papers, 33 received no comments, while 38 (54%) received a total of 92 technical comments.
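The reported percentages are internally consistent, as a quick check of the trial's own numbers shows (assuming the percentages were rounded to the nearest integer):

```python
# Sanity check of the Nature open peer review trial figures.
sent_for_review = 1369   # papers sent out during the trial
displayed = 71           # authors agreeing to open display
no_comments = 33         # displayed papers with no comments
with_comments = 38       # displayed papers receiving comments

# Displayed papers split cleanly into the two groups.
assert no_comments + with_comments == displayed

# Share of authors agreeing to open display (reported as 5%).
print(round(100 * displayed / sent_for_review))  # -> 5

# Share of displayed papers that drew comments (reported as 54%).
print(round(100 * with_comments / displayed))    # -> 54
```

So only about one author in twenty opted in, and nearly half of the displayed papers received no comments at all.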

The trial provoked some web traffic, with approximately 800 page views per day. Welcome back to the altruism thread; the discussion may be followed at their blog, yea, yea.