Tag Archives: PubPeer

Country analysis of PubPeer annotated articles

Just out of curiosity, after Sci-Hub, here is an analysis of papers commented on at the PubPeer website. PubPeer is now also screened on a regular basis by Holden Thorp, the editor-in-chief of Science…

Unfortunately I am losing many records to incomplete or malformed addresses, but some preliminary conclusions can already be drawn from my world map.

pubpeer.R – grey indicates no data, black only a few entries, red numerous entries.

A further revision will need to include more addresses and also overall research output as a reference.
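The actual pubpeer.R script is not reproduced here; purely as a hedged illustration of why malformed affiliation strings drop out of such an analysis, a naive country-extraction step might look like the following Python sketch (the country list, aliases, and sample records are invented for the demo):

```python
# Toy illustration only (not the actual pubpeer.R logic): pull a country
# from the tail of a free-text affiliation string; malformed entries
# yield None and are lost, as described above.
COUNTRIES = {"USA", "China", "Germany", "France", "India"}   # tiny demo list
ALIASES = {"United States": "USA", "PR China": "China"}      # demo aliases

def extract_country(affiliation):
    if not affiliation:
        return None
    # take the text after the last comma, e.g. "..., Beijing, PR China"
    tail = affiliation.rsplit(",", 1)[-1].strip().rstrip(".")
    tail = ALIASES.get(tail, tail)
    return tail if tail in COUNTRIES else None

records = [
    "Dept. of Biology, Some University, Beijing, PR China",
    "Institute of Things, Berlin, Germany",
    "incomplete address",                 # malformed -> record is lost
]
parsed = [extract_country(r) for r in records]
print(parsed)  # ['China', 'Germany', None]
```

Real affiliation data is far messier than this, which is exactly why so many records are lost before they reach the map.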



I confess: I worked with the founder of ImageTwin some years ago and even encouraged him to found a company. I would have been interested in further collaboration, but unfortunately the company has cut all ties (except maybe to Bik, Christopher, Cheshire…).

Should we really pay 25€ now to test a single PDF?

price list 2023

My proposal in 2020 was to build an academic community around ImageTwin's keypoint matching. The recent addition of "AI" seems to be more of a marketing buzzword, at least judging from what is known of the basic theory behind keypoint matching. A real AI analysis would be a nice core function, along with a more comprehensive review than just drawing boxes around duplicated image areas.
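ImageTwin's actual algorithm is not public, so nothing here claims to reproduce it. As a minimal toy illustration of the general idea of duplication screening (exact tile hashing, deliberately simpler than real keypoint matching), consider this self-contained Python sketch:

```python
import numpy as np
from collections import defaultdict

def find_duplicate_tiles(img, tile=8):
    """Group coordinates of byte-identical, non-flat tiles in a 2-D array."""
    seen = defaultdict(list)
    h, w = img.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = img[y:y + tile, x:x + tile]
            if patch.std() == 0:          # skip flat background tiles
                continue
            seen[patch.tobytes()].append((y, x))
    # any hash bucket with more than one location is a candidate duplication
    return [locs for locs in seen.values() if len(locs) > 1]

# Demo: plant the same 8x8 patch twice in an otherwise empty "figure"
img = np.zeros((32, 32), dtype=np.uint8)
patch = np.arange(64, dtype=np.uint8).reshape(8, 8)
img[0:8, 0:8] = patch
img[16:24, 16:24] = patch
print(find_duplicate_tiles(img))  # [[(0, 0), (16, 16)]]
```

Exact hashing only catches verbatim copies; the point of keypoint matching is precisely that it also survives rotation, scaling, and recompression, which this sketch does not.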

A recent research paper by new ImageTwin collaborators now finds

Duplicated images in research articles erode integrity and credibility of biomedical science. Forensic software is necessary to detect figures with inappropriately duplicated images. This analysis reveals a significant issue of inappropriate image duplication in our field.

Unfortunately, the authors of this paper lack a basic understanding of integrity nomenclature, flagging images that are expected to look similar. Even worse, they miss real duplications, as ImageTwin is notoriously bad with Western blots.

Sadly, this paper erodes the credibility of image analysis. Is ImageTwin running out of control now just like Proofig?


Oct 4, 2023

The story continues. Instead of working on a well-defined data set and determining the sensitivity, specificity, etc. of the ImageTwin approach, some preprint research by “sholto.david@gmail.com” (bioRxiv) aka “Mycosphaerella arachidis” (PubPeer) aka “ncl.ac.uk” (Scholar) shows that

Toxicology Reports published 715 papers containing relevant images, and 115 of these papers contained inappropriate duplications (16%). Screening papers with the use of ImageTwin.ai increased the number of inappropriate duplications detected, with 41 of the 115 being missed during the manual screen and subsequently detected with the aid of the software.
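The reported numbers do allow a back-of-the-envelope check. Treating the combined manual-plus-software count as ground truth (a strong assumption, since neither screen was independently validated), the manual screen's apparent sensitivity works out to about 64%; specificity cannot be computed from these figures at all, which is exactly the missing validation:

```python
# Figures reported in the Toxicology Reports screen: 715 papers with
# relevant images, 115 flagged overall, 41 of those missed by the
# manual screen and caught only with ImageTwin's help.
papers_screened = 715
flagged_total = 115                     # manual + software, treated as ground truth
missed_by_manual = 41
found_by_manual = flagged_total - missed_by_manual    # 74

prevalence = flagged_total / papers_screened          # the quoted 16%
manual_sensitivity = found_by_manual / flagged_total  # ~0.64

print(f"prevalence ≈ {prevalence:.0%}, manual sensitivity ≈ {manual_sensitivity:.0%}")
```

Note that nothing in the preprint tells us the software's own false-positive rate, so the symmetric question – ImageTwin's sensitivity and specificity against an independent standard – remains open.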

It is a pseudoscientific study, as nobody knows the capacity of “Mycosphaerella arachidis” to detect image duplications. Nor can we verify what ImageTwin does, as it now sits behind a paywall (while I still maintain a collection of ImageTwin failures here).

Unfortunately, a news report by Anil Oza, “AI beats human sleuth at finding problematic images in research papers”, makes it even worse. The report is trivial at best and misses the main point: the need for an independent study. It is simply wrong with “working at two to three times David’s speed” (it is 20 times faster, but produces numerous false positives) and with “Patrick Starke, one of its developers” (Starke is a salesperson, not a developer).

So in the end, the Oza news report is just a PR stunt, as confirmed on Twitter the next day.



PubPeer Statistics

For a forthcoming article, I need some statistics to illustrate how PubPeer performs. As far as I know, there is only one report from 2021, so I had to try something on my own.

PubPeer submissions until 23 April 2022


PubPeer Pearls I

It is always interesting when a discussion under a PubPeer article runs to more than three comments. Elisabeth Bik has collected some of these interesting #PubPeer Pearls on Twitter, while I am starting a new collection here.

The longest thread that I remember is this one, with 290 comments around a retracted article, while my most appreciated PubPeer author …

PubPeer should be merged into PubMed (at some point)

PubMed had its own comments feature, “PubMed Commons”, which was shut down in 2018.

NIH announced it will be discontinuing the service — which allowed only signed comments from authors with papers indexed in PubMed, among other restrictions — after more than four years, due to a lack of interest.

But there is no lack of interest if we look at the ever-increasing rates at PubPeer – the counter today stands at 122,000.

The main difference between PubMed Commons and PubPeer is the possibility of submitting anonymous comments. While I do see a risk of unjustified accusations or online stalking, I believe that the current PubPeer coordinators handle this issue very well. We can post only issues that are obvious, directly visible, or backed up by another source.

Anonymous Science

Good science does not generally depend on whether it can be linked to a particular person – findings should, after all, be objectively reproducible (even if science functionaries see it differently).

In any case, it is interesting what a new journal now allows: anonymous publication.

Cut to 180 years later, and philosophers are again asserting the right to publish under made-up names. But these philosophers, it seems, want to use pseudonyms to do the very thing Kierkegaard accused his contemporaries of doing: abstracting authors out of ethical reality … An “open access, peer-reviewed, interdisciplinary journal specifically created to promote free inquiry on controversial topics,” it will give authors the option to publish their work under a pseudonym “in order to protect themselves from threats to their careers or physical safety.”

Indeed, not all scientific findings can be published under the given political conditions. In any case, it will be exciting.

Pseudonymity is not an inherently bad thing. Apart from focusing the reader on the argument rather than the author, it can, in many cases, give a say to people who could otherwise not participate in public discourse.

A cloud hangs over the paper in question

The title is taken from an essay by Dan Bolnick about current PubPeer practices.

That said, there is some question about the proper procedure for answering these criticisms. Yes, PubPeer itself leaves room for comments (interestingly, journal editors like myself must pay money to reply to comments, even if to acknowledge them and state we are evaluating the issue). But, this process bypasses the journal that publishes the paper, and bypasses the normal scientific tradition of external review by experts in the field chosen by the journal editor for their knowledge and hopefully objectivity. For this reason, I want to really encourage people with substantial concerns about a paper (e.g., which may appreciably alter the results and conclusions), to submit formal “Comments” (different journals call these different things) to the journal.

I agree, letters or comments would be preferable. In practice, however, comments are largely ignored. I can provide numerous examples where nothing happened.

But, scientific traditions are fluid and we are in an era of increasing speed and openness: Preprint servers, open peer review, open data, et cetera. We therefore also recognize that PubPeer is an active tool in science conversations. The criticisms posted there can be valid identification of genuine problems that need to be evaluated formally and corrected. If valid well-justified and substantial concerns exist and are published on PubPeer, then the affected journal should respond.

Yea, yea. And are there caveats? Bolnick thinks

it is important that these not be used as a mechanism for pursuing personal vendettas. Excessive targeting of an author with multiple minor complaints can constitute a kind of harassment, and may be viewed as such by University Equity officers or equivalent. The anonymous nature of many PubPeer comments makes it easier for impacted authors to feel like (and, argue that) they are the target of personal vendettas and harassment. Second, the existence of PubPeer comments can cast a long shadow over a paper whether the comments are profound or minor. This shadow can affect an author’s career prospects (fellowship applications, job applications, etc) even before the matter is resolved and judged to be valid or not. The result can be inappropriate damage to an innocent authors’ career, which in turn may have grave consequences for mental health. Third, there is an established mechanism for voicing complaints about papers in science: Contact the author to request clarification, or contact the Editor, or submit a Comment.

Well, it depends. What is a minor complaint? One wrong label, two, or three? Mixing up figures and data? Nirvana references? Intentionally wrong statements? Ignorance of the literature? Whether a comment is profound or minor can be decided by any PubPeer reader. And grave consequences for the mental health of a fraudulent or careless author?

I believe that if there were any working mechanism for voicing complaints about scientific integrity (or even just minor corrections), PubPeer would not exist. Kudos, Brandon Stell.