Tag Archives: peer review

MDPI, Frontiers and Hindawi now being blacklisted

According to a Chinese blogger, three publishers (not journals!) are now being blacklisted

On January 3rd, Zhejiang Gonggong University, a public university in Hangzhou, announced that all journals of the three largest Open Access (OA) publishing houses are now blacklisted: Hindawi (acquired by Wiley in early 2021), MDPI, founded by the Chinese businessman Lin Shukun, and Frontiers, which has become very popular in recent years. The university issued a notice stating that articles published by Hindawi, MDPI and Frontiers will no longer be included in research performance statistics.

Country analysis of PubPeer annotated articles

Just out of curiosity, and following the earlier Scihub analysis, here is an analysis of the papers commented on at the PubPeer website. PubPeer is now also screened on a regular basis by Holden Thorp, the editor-in-chief of Science…

Unfortunately I am losing many records to incomplete or malformed addresses, but some preliminary conclusions can already be drawn from my world map.

World map produced by pubpeer.R: grey indicates no data, black only a few entries, red numerous entries.

A further revision will need to include more addresses and also relate the counts to overall research output as a reference.
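A minimal sketch of the counting step could look like the following R snippet; the file name pubpeer_affiliations.csv, the "affiliation" column and the last-token heuristic are only placeholders, not the actual pubpeer.R code:

# Minimal sketch, not the original pubpeer.R: guess the country from the last
# comma-separated part of each affiliation string and tally the entries.
# Incomplete or malformed addresses are simply dropped, as noted above.
recs <- read.csv("pubpeer_affiliations.csv", stringsAsFactors = FALSE)

guess_country <- function(addr) {
  parts <- trimws(strsplit(addr, ",")[[1]])
  last  <- if (length(parts) > 0) parts[length(parts)] else NA_character_
  if (is.na(last) || last == "") NA_character_ else last   # malformed address: record is lost
}

recs$country <- vapply(recs$affiliation, guess_country, character(1))
counts <- sort(table(recs$country), decreasing = TRUE)
print(head(counts, 20))   # top 20 countries by number of PubPeer entries

The resulting country counts could then be joined to a world map, for example with the rworldmap package, to reproduce the grey/black/red colouring above.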

Continue reading Country analysis of PubPeer annotated articles

Doing less

“The case for doing less in our peer reviews” by Kate Derickson is an interesting essay on scientific reviews.

While it is a luxury to receive thorough and carefully thought-out comments from a colleague, the nature of blind peer review means that the author cannot know who is making suggestions […] And yet, the author is often relying on the paper being published for professional security or advancement. This puts the author in the position of being obligated to rework their arguments according to constructive suggestions made by an anonymous person whose credibility or self-interest they cannot assess. Moreover, while reviewers often identify similar issues in a paper, they often propose a variety of different approaches to addressing them, many of which work at cross purposes. Authors can be overwhelmed by the range of suggestions, feeling obligated to split the difference and cover all the bases in case the paper goes back to all three reviewers. While papers generally get better through the review process, authors often have a difficult time navigating contradictory reviewer suggestions.

But wait, there is also a point where I do not agree (in light of the recent eLife decision).

we think carefully about what we decide to send out for peer review, in order to enable us to curate a table of contents that we think is at the cutting edge of our disciplines and of interest to our readership.

Creating the most cited journal? Creating the cutting edge? This is a pre-internet 1980s attitude of a journal editor trying to gain higher citation impact in competition with other journals. It simply devalues everything that Derickson does not understand or does not want to promote.

So my initial enthusiasm for the paper finally dies with "the biggest scientific experiment":

Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure…”

Yes, this is about the end of scholarly peer review, as peer review fails to catch major errors in about one third of all papers.

In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward.

The focus is on “cutting edge” and “interest”, aka impact points, but neither on ingenious minds nor on brilliant discoveries.

Academic freedom

Peer review can also prevent science, as we saw yesterday with the Cosmos article and a few days ago at eLife.

And it is a huge problem, as I just found in another essay by Sandra Kostner, “Disziplinieren statt argumentieren. Zur Verhängung und Umsetzung intellektueller Lockdowns” (“Disciplining instead of arguing. On the imposition and implementation of intellectual lockdowns”), in ApuZ, vol. 71, no. 46/2021, 15 November 2021.

Continue reading Academic freedom

Too many complaints about eLife

Following the recent announcement by eLife that it will move beyond the binary accept/reject decision

We have found that these public preprint reviews and assessments are far more effective than binary accept or reject decisions ever could be at conveying the thinking of our reviewers and editors, and capturing the nuanced, multidimensional, and often ambiguous nature of peer review.

there are now many complaints:

Destroying eLife’s reputation for selectivity does not serve science. Changes that pretend scientists do not care about publishing in highly selective journals will end eLife’s crucial role in science publishing, says long-time supporter Paul Bieniasz

While the announcement could have come in a more polite way (for example by creating a second tier of an eLife archive), I believe this is a good decision. The rejection attitude is basically driven by the notion that “your inferior paper would harm my journal impact”, while the paper simply goes to another journal. Publication is seldom stopped, so rejection just produces workload at other journals and for other reviewers, in particular when the initial reviews are not public.

The eLife decision therefore breaks a vicious circle.

PubPeer should be merged into PubMed (at some point)

PubMed had its own comment feature, “PubMed Commons”, which was shut down in 2018.

NIH announced it will be discontinuing the service — which allowed only signed comments from authors with papers indexed in PubMed, among other restrictions — after more than four years, due to a lack of interest.

But there is no lack of interest if we look at the ever increasing numbers at PubPeer: the counter today stands at 122,000.

The main difference between PubMed Commons and PubPeer is the possibility of submitting anonymous comments. While I also see a risk of unjustified accusations or online stalking, I believe that the current PubPeer coordinators handle this issue very well: only issues that are obvious, directly visible or backed up by another source can be posted.

Continue reading PubPeer should be merged into PubMed (at some point)

Formal peer review may come to an end

“The Absurdity of Peer Review: What the pandemic revealed about…” by Mark Humphries, Elemental, June 2021

I was reading my umpteenth news story about Covid-19 science, a story about the latest research into how to make indoor spaces safe from infection, about whether cleaning surfaces or changing the air was more important. And it was bothering me. Not because it was dull (which, of course, it was: there are precious few ways to make air filtration and air pumps edge-of-the-seat stuff). But because of the way it treated the science.
You see, much of the research it reported was in the form of pre-prints, papers shared by researchers on the internet before they are submitted to a scientific journal. And every mention of one of these pre-prints was immediately followed by the disclaimer that it had not yet been peer reviewed. As though to convey to the reader that the research therein, the research plastered all over the story, was somehow of less worth, less value, less meaning than the research in a published paper, a paper that had passed peer review.

I expect that the business of scientific publishers is slowly coming to an end. Maybe other businesses as well?

https://twitter.com/OdedRechavi/status/1454834378845167618

Of course we will still need peer evaluation, but maybe not in the sense that scientific publication is suppressed by the peer review of a few elite journals. Some arXiv-type PDF deposit plus some eLife/Twitter/PubPeer score would be fully sufficient, for me and maybe also for many other people in the field.

Grant preparation costs may exceed grant given

FYI – a quotation from “Accountability in Research”:

Using Natural Science and Engineering Research Council Canada (NSERC) statistics, we show that the $40,000 (Canadian) cost of preparation for a grant application and rejection by peer review in 2007 exceeded that of giving every qualified investigator a direct baseline discovery grant of $30,000 (average grant). This means the Canadian Federal Government could institute direct grants for 100% of qualified applicants for the same money. We anticipate that the net result would be more and better research since more research would be conducted at the critical idea or discovery stage.
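To make the arithmetic explicit, here is a back-of-envelope sketch in R; the number of qualified applicants is a made-up figure for illustration, only the two per-application amounts come from the quotation above:

applicants     <- 1000     # hypothetical number of qualified applicants
cost_per_appl  <- 40000    # quoted cost of preparing and peer reviewing one application (CAD, 2007)
baseline_grant <- 30000    # quoted average discovery grant (CAD)

competition <- applicants * cost_per_appl    # total cost of running the competition
direct      <- applicants * baseline_grant   # cost of funding every qualified applicant directly
cat("competition:  ", format(competition, big.mark = ",", scientific = FALSE), "\n")
cat("direct grants:", format(direct, big.mark = ",", scientific = FALSE), "\n")

Whatever the exact number of applicants, as long as the per-application cost of the competition exceeds the average grant, funding every qualified applicant directly is the cheaper option.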

Will that ever be read by our governments? Nay, nay.

Publishing on the recommendations of the head of the authors’ lab

Campbell, writing at Edge about Maddox:

Despite his original establishment of the peer-review process at Nature, Maddox always had strong reservations about its conservatism. These were perhaps best reflected in his view that the Watson and Crick paper on the structure of DNA wouldn’t pass muster under the current system. That paper was published as a result of recommendations by Lawrence Bragg…

Continue reading Publishing on the recommendations of the head of the authors’ lab

Truthiness in science

Truthiness was the 2005 neologism in the large country somewhere over/under our horizon (depending on which horizon you are looking at).

Continue reading Truthiness in science