Tag Archives: peer review

Peer review is science roulette

One of the best essays that I have read about current science.

Ricky Lanusse. How Peer Review Became Science’s Most Dangerous Illusion. https://medium.com/the-quantastic-journal/how-peer-review-became-sciences-most-dangerous-illusion-54cf13da517c

Peer review is far from a firewall. In most cases, it’s just a paper trail that may have even encouraged bad research. The system we’ve trusted to verify scientific truth is fundamentally unreliable — a lie detector that’s been lying to us all along.
Let’s be bold for a minute: If peer review worked, scientists would act like it mattered. They don’t. When a paper gets rejected, most researchers don’t tear it apart, revise it, rethink it. They just repackage and resubmit — often word-for-word — to another journal. Same lottery ticket in a different draw mindset. Peer review is science roulette.
Once the papers are in, the reviews disappear. Some journals publish them. Most shred them. No one knows what the reviewer said. No one cares. If peer review were actually a quality check, we’d treat those comments like gospel. That’s what I value about RealClimate [PubPeer, my addition]: it provides insights we don’t get to see in formal reviews. Their blog posts and discussions — none of which have been published behind paywalls in journals — often carry more weight than peer-reviewed science.

CC-BY-NC

Is it a crime to use AI for peer review?

I consult the almighty ChatGPT frequently for additional information, as this saves me hours of wading through my own database, PubMed, Scholar and Google hits.

But I have my own rule: I never cut & paste, as this always runs the risk of (1) plagiarizing unknowingly and (2) producing nonsense.

Miryam Naddaf has an article about this

In a survey of nearly 5,000 researchers, some 19% said they had already tried using LLMs to ‘increase the speed and ease’ of their review. But the survey, by publisher Wiley, headquartered in Hoboken, New Jersey, didn’t interrogate the balance between using LLMs to touch up prose, and relying on the AI to generate the review.

And well, maybe I am already sticking to the NEJM, which said

Although human expert review should continue to be the foundation of the scientific process, LLM feedback could benefit researchers


Double blind peer review – well-intentioned but too many side effects

A new paper by CA Mebane in Environmental Toxicology and Chemistry now makes six arguments for why double-blind peer review practices increase vulnerability to scientific integrity lapses:

(1) obscuring data from reviewers is detrimental
(2) obscuring sponsorship makes bias harder to detect
(3) author networks can be revealing
(4) undue trust and responsibility are placed on editors
(5) double-blind reviews are not really all that blind
(6) willful blindness is not the answer to prestige bias.

And here are his five recommendations for improving scientific integrity:

(1) Require persistent identifiers, i.e., ORCIDs, and encourage durable email addresses from all authors, not just the corresponding author
(2) Withhold author information from the review invitation emails
(3) Conduct the review in the usual single-blind style, with reviewers having full access to all the same manuscript materials as the editors, except the cover letter
(4) Cross-review and drop the ‘confidential comments to the editor’ option
(5) Open review reports: publish the anonymous peer review reports and author responses as online supplements.


eLife delisted

eLife is one of the most interesting scientific journals, with a full history at Wikipedia. The eLife board announced in 2022 that

From next year, eLife is eliminating accept/reject decisions after peer review, instead focusing on public reviews and assessments of preprint

with the unfortunate but foreseeable consequence that eLife no longer receives an impact factor

Clarivate, the provider of the Web of Science platform, said it would not provide impact factors to journals that publish papers not recommended for publication by reviewers.

I don’t care about impact factors. I also do not care about Clarivate or any other private-equity company, as we don’t need this kind of business in science. eLife, however, will lose its value, in particular as the system still has some flaws.

DeevyBee already commented on this a year ago

there is a fatal flaw in the new model, which is that it still relies on editors to decide which papers go forward to review, using a method that will do nothing to reduce the tendency to hype and the consequent publication bias that ensues. I blogged about this a year ago, and suggested a simple solution, which is for the editors to adopt ‘results-blind’ review when triaging papers. This is an idea that has been around at least since 1976 (Mahoney, 1976) which has had a resurgence in popularity in recent years, with growing awareness of the dangers of publication bias (Locasio, 2017). The idea is that editorial decisions should be made based on whether the authors had identified an interesting question and whether their methods were adequate to give a definitive answer to that question.

So the idea is that the editors get only the title and a modified abstract, without author names and without results.


MDPI, Frontiers and Hindawi now being blacklisted

According to a Chinese blogger, three publishers (not journals!) are now being blacklisted

On January 3rd, Zhejiang Gonggong University, a public university in Hangzhou, announced that all the journals of the three largest Open Access (OA) publishing houses were blacklisted, including Hindawi (acquired by Wiley in early 2021), MDPI founded by a Chinese businessman Lin Shukun, and Frontiers, which has become very popular in recent years. The university issued a notice stating that articles published by Hindawi, MDPI and Frontiers will not be included in research performance statistics.


Country analysis of PubPeer annotated articles

Just out of curiosity, after Sci-Hub, here is now an analysis of papers commented on at the PubPeer website. PubPeer is now also screened on a regular basis by Holden Thorp, the editor-in-chief of Science…

Unfortunately I am losing many records to incomplete or malformed addresses, but some preliminary conclusions can already be made when looking at my world map.

pubpeer.R: grey indicates no data, black only a few entries, red numerous entries.

A further revision will need to include more addresses and also overall research output as a reference.



Doing less

“The case for doing less in our peer reviews” by Kate Derickson is an interesting essay on scientific reviews.

While it is a luxury to receive thorough and carefully thought-out comments from a colleague, the nature of blind peer review means that the author cannot know who is making suggestions […] And yet, the author is often relying on the paper being published for professional security or advancement. This puts the author in the position of being obligated to rework their arguments according to constructive suggestions made by an anonymous person whose credibility or self-interest they cannot assess. Moreover, while reviewers often identify similar issues in a paper, they often propose a variety of different approaches to addressing them, many of which work at cross purposes. Authors can be overwhelmed by the range of suggestions, feeling obligated to split the difference and cover all the bases in case the paper goes back to all three reviewers. While papers generally get better through the review process, authors often have a difficult time navigating contradictory reviewer suggestions.

But wait, there is also a point where I do not agree (in the light of the recent eLife decision).

we think carefully about what we decide to send out for peer review, in order to enable us to curate a table of contents that we think is at the cutting edge of our disciplines and of interest to our readership.

Creating the most cited journal? Creating cutting edge? This is a pre-internet 1980s attitude of a journal editor trying to get a higher citation impact in competition with other journals. It simply devalues everything that Derickson does not understand or does not want to promote.

So my initial enthusiasm for the paper finally dies with “the biggest scientific experiment”

Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure

Yes, this is about the end of scholarly peer review, as peer review fails to catch major errors in about one-third of all papers.

In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward.

The focus is on “cutting edge” and “interest”, a.k.a. impact points, but neither on ingenious minds nor on brilliant discoveries.


Academic freedom

Peer review can also prevent science, as we saw yesterday with the Cosmos article and a few days ago with eLife.

And it is a huge problem, as I just found in another essay, by Sandra Kostner: “Disziplinieren statt argumentieren. Zur Verhängung und Umsetzung intellektueller Lockdowns”, APuZ, 71. Jahrgang, 46/2021, 15 November 2021.



Too many complaints about eLife

Following the recent announcement by eLife that it would abandon accept/reject decisions

We have found that these public preprint reviews and assessments are far more effective than binary accept or reject decisions ever could be at conveying the thinking of our reviewers and editors, and capturing the nuanced, multidimensional, and often ambiguous nature of peer review.

there are now many complaints

Destroying eLife’s reputation for selectivity does not serve science. Changes that pretend scientists do not care about publishing in highly selective journals will end eLife’s crucial role in science publishing, says long-time supporter Paul Bieniasz

While the announcement could have come in a more polite way – creating a second tier of an eLife archive – I believe this is a good decision. The rejection attitude is basically driven by the notion that “your inferior paper would harm my journal impact”, while the paper just goes to another journal. Publication is seldom stopped; rejection merely produces workload at other journals and for other reviewers, in particular when the initial reviews are not public.

The eLife decision therefore breaks a vicious circle.

 

27.11.2024

Unfortunately, eLife is now starting to reject papers again. From an email that I received this month:

In this case the editorial team felt that the manuscript should be reviewed by a more specialized community. Where results are principally useful within a specialised community, then it is likely that this audience can evaluate the paper themselves, so the public reviews and assessments carry less value. We also think that in these cases more specialised journals are likely to be able to find more suitable technical reviewers than eLife.
We wish you good luck in getting your work reviewed and published by another journal.

eLife has also been delisted now; maybe it wasn’t a good idea to fire Michael Eisen.


PubPeer should be merged into Pubmed (at some time point)

PubMed had its own comments feature, “PubMed Commons”, which was shut down in 2018.

NIH announced it will be discontinuing the service — which allowed only signed comments from authors with papers indexed in PubMed, among other restrictions — after more than four years, due to a lack of interest.

But there is no lack of interest if we look at the ever-increasing rates at PubPeer – the counter today stands at 122,000.

The main difference between PubMed Commons and PubPeer is the possibility of submitting anonymous comments. While I also see a risk of unjustified accusations or online stalking, I believe that the current PubPeer coordinators handle this issue very well: we can post only issues that are obvious, directly visible or backed up by another source.


Formal peer review may come to an end

Mark Humphries. The Absurdity of Peer Review. What the pandemic revealed about… Elemental, June 2021

I was reading my umpteenth news story about Covid-19 science, a story about the latest research into how to make indoor spaces safe from infection, about whether cleaning surfaces or changing the air was more important. And it was bothering me. Not because it was dull (which, of course, it was: there are precious few ways to make air filtration and air pumps edge-of-the-seat stuff). But because of the way it treated the science.
You see, much of the research it reported was in the form of pre-prints, papers shared by researchers on the internet before they are submitted to a scientific journal. And every mention of one of these pre-prints was immediately followed by the disclaimer that it had not yet been peer reviewed. As though to convey to the reader that the research therein, the research plastered all over the story, was somehow of less worth, less value, less meaning than the research in a published paper, a paper that had passed peer review.

I expect the business of scientific publishers is slowly coming to an end. Maybe other businesses as well?

We will of course need peer evaluation, but maybe not in the sense that scientific publication is suppressed by peer review at a few elite journals. Some arXiv-type PDF deposit plus some eLife/Twitter/PubPeer score would be fully sufficient – for me, and maybe also for many other people in the field.


Grant preparation costs may exceed grant given

FYI – a citation from “Accountability in Research”:

Using Natural Science and Engineering Research Council Canada (NSERC) statistics, we show that the $40,000 (Canadian) cost of preparation for a grant application and rejection by peer review in 2007 exceeded that of giving every qualified investigator a direct baseline discovery grant of $30,000 (average grant). This means the Canadian Federal Government could institute direct grants for 100% of qualified applicants for the same money. We anticipate that the net result would be more and better research since more research would be conducted at the critical idea or discovery stage.
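The quoted comparison is a simple back-of-envelope calculation. Only the two per-unit figures come from the quote; the applicant count below is hypothetical, chosen just to show that the conclusion holds for any number of applicants:

```python
# Per-unit figures from the NSERC 2007 statistics quoted above (CAD).
prep_and_review_cost = 40_000   # cost of preparing and peer-reviewing one application
baseline_grant = 30_000         # average direct discovery grant

applicants = 1_000              # hypothetical number of qualified applicants

current_system = applicants * prep_and_review_cost  # money burned on the application round
direct_grants = applicants * baseline_grant         # cost of funding everyone directly

# The direct-grant scheme is cheaper regardless of the applicant count,
# because the per-unit cost is lower.
assert direct_grants < current_system
```

The inequality holds per applicant (30,000 < 40,000), so the total always favours direct grants under these figures.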

Will that ever be read by our governments? Nay, nay.


Publishing on the recommendations of the head of the authors’ lab

Campbell writing at Edge about Maddox

Despite his original establishment of the peer-review process at Nature, Maddox always had strong reservations about its conservatism. These were perhaps best reflected in his view that the Watson and Crick paper on the structure of DNA wouldn’t pass muster under the current system. That paper was published as a result of recommendations by Lawrence Bragg


Truthiness in science

Truthiness was the 2005 neologism in the large country somewhere over/under our horizon (depending on which horizon you are looking at).
