Loss of credibility

The loss of credibility is the worst thing that can happen to an individual scientist, but also to a commission. It happened to the COVID-19 expert commission (when Drosten left and Stöhr arrived). Unfortunately, it is now also happening to the Stiko with Mertens et al.

Unfortunately, this now has fatal consequences, according to first reports from clinical practice. Vaccinations are becoming more difficult across the board, not only for “COVID22” but for ALL vaccines, although their benefit-risk relation is unchanged.

Hygiene hypothesis hyperbole

Although I have written about the hygiene hypothesis before, I missed a PNAS News feature from some years ago.

Again: the hygiene hypothesis did not originate with David Strachan, and there are more cracks in it as well.

“The trouble is, as soon as you use the words ‘hygiene hypothesis,’ the word hygiene prejudges what the cause is,” says Bloomfield. To the public, “hygiene” is interpreted as personal cleanliness: washing hands, keeping food clean and fresh, sanitizing the home. However, because the hypothesis has been largely uncoupled from infections, the idea that we need to be less hygienic is wrong. Relaxing hygiene standards would not reverse the trend but only serve to increase the risks of infectious disease, says Bloomfield. The term “hygiene hypothesis” also fails to incorporate all of the other factors now linked to the increase in immunoregulatory diseases.

I expect that five years after mandating “super hygiene” during COVID-19 we can finally bury the hygiene hypothesis.

Big Data Paradox: quality beats quantity

(via @emollick)

Surveys are a crucial tool for understanding public opinion and behaviour, and their accuracy depends on maintaining statistical representativeness of their target populations by minimizing biases from all sources. Increasing data size shrinks confidence intervals but magnifies the effect of survey bias: an instance of the Big Data Paradox … We show how a survey of 250,000 respondents can produce an estimate of the population mean that is no more accurate than an estimate from a simple random sample of size 10
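The quoted paradox is easy to demonstrate in a few lines. Below is a minimal sketch with entirely hypothetical numbers (a simulated population of one million and an invented response bias, numpy assumed): a large but non-random survey lands far from the truth with a deceptively narrow confidence interval, while tiny simple random samples have a smaller typical error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of one million people; the quantity of
# interest is a binary trait with a true prevalence of about 50%.
N = 1_000_000
x = rng.integers(0, 2, N)
true_mean = x.mean()

# Non-random "big data" survey: carriers of the trait are assumed to be
# five times more likely to respond, yielding roughly 300,000 respondents.
p_respond = np.where(x == 1, 0.5, 0.1)
respond = rng.random(N) < p_respond
big_n = int(respond.sum())
big_mean = x[respond].mean()   # biased upwards, despite the huge n

# Tiny simple random samples (n = 10), repeated to estimate their
# typical error (root mean squared error).
errs = [x[rng.choice(N, size=10, replace=False)].mean() - true_mean
        for _ in range(2000)]
rmse_srs = float(np.sqrt(np.mean(np.square(errs))))

print(f"true mean: {true_mean:.3f}")
print(f"biased survey estimate (n={big_n}): {big_mean:.3f}")
print(f"typical error of a random sample with n=10: {rmse_srs:.3f}")
```

With this (deliberately strong) response bias, the 300,000-respondent estimate misses the truth by more than the typical error of a ten-person random sample, and no amount of extra biased data would fix it.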

It basically confirms my earlier observation in asthma genetics:

this result was possible with just 415 individuals instead of 500,000 individuals nowadays

Not too bad: Citation Gecko

Just tried it on a topic that I have been working on for two decades. It finds the source paper rather quickly, much faster than reading through all the literature. Unfortunately, reviews are rated as more influential than original data, as Citation Gecko favours articles with many references.

It is only Monday but already depressing

A comment on the PaLM paper by u/Flaky_Suit_8665, via @hardmaru:

67 authors, 83 pages, 540B parameters in a model, the internals of which no one can say they comprehend with a straight face, 6144 TPUs in a commercial lab that no one has access to, on a rig that no one can afford, trained on a volume of data that a human couldn’t process in a lifetime, 1 page on ethics with the same ideas that have been rehashed over and over elsewhere with no attempt at a solution – bias, racism, malicious use, etc. – for purposes that who asked for?

Climate endgame?

Prudent risk management requires consideration of bad-to-worst-case scenarios. Yet, for climate change, such potential futures are poorly understood. Could anthropogenic climate change result in worldwide societal collapse or even eventual human extinction? At present, this is a dangerously underexplored topic.



50-year anniversary: More is different

“Intensive research goes for fundamental laws … there is always much less intensive research going on”.

hinted by @spornslab

I would also like to apply for the Elsevier bug bounty program

a new proposal by Ivan Oransky:

Retractions must be supported as an essential part of healthy science. Sleuths should be compensated and given access to tools to improve the hunt for errors and fraud — not face ridicule, harassment and legal action. Publishers could create a cash pool to pay them, similar to the ‘bug bounties’ that reward hackers who detect flaws in computer security systems. At the same time, institutions should appropriately assess researchers who honestly aim to correct the record. Retractions should not be career killers — those correcting honest errors should be celebrated.

(replication crisis)^2

We always laughed at the papers in the “Journal of Irreproducible Results”. Then we had the replication crisis, and nobody laughed anymore.

And today? It seems that irreproducible research is set to reach new heights. Elizabeth Gibney discusses an arXiv paper by Sayash Kapoor and Arvind Narayanan basically saying that

reviewers do not have the time to scrutinize these models, so academia currently lacks mechanisms to root out irreproducible papers, he says. Kapoor and his co-author Arvind Narayanan created guidelines for scientists to avoid such pitfalls, including an explicit checklist to submit with each paper … The failures are not the fault of any individual researcher, he adds. Instead, a combination of hype around AI and inadequate checks and balances is to blame.

Algorithms getting stuck on shortcuts that don’t always hold have been discussed here earlier. Data leakage (good old confounding) due to proxy variables also seems to be a common issue.
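One of the leakage patterns Kapoor and Narayanan describe is doing preprocessing, such as feature selection, on the full dataset before splitting off the test set. A minimal sketch of that failure mode (invented toy data, numpy assumed): with purely random labels no classifier should beat chance, yet the leaky pipeline appears to.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: 2,000 pure-noise features, labels (+1/-1) drawn
# independently of all features, so real accuracy is 50%.
n, d, k = 400, 2000, 10
X = rng.standard_normal((n, d))
y = rng.integers(0, 2, n) * 2 - 1
tr, te = slice(0, 300), slice(300, 400)   # train/test split

def select_features(Xs, ys, k):
    """Pick the k features most correlated with the labels."""
    corr = (Xs * ys[:, None]).mean(axis=0)
    idx = np.argsort(np.abs(corr))[-k:]
    return idx, np.sign(corr[idx])

def held_out_accuracy(idx, sgn):
    """Classify held-out rows by the signed sum of selected features."""
    scores = X[te][:, idx] @ sgn
    preds = np.where(scores >= 0, 1, -1)
    return float((preds == y[te]).mean())

# Leaky pipeline: feature selection sees ALL rows, test labels included.
leaky_acc = held_out_accuracy(*select_features(X, y, k))

# Correct pipeline: feature selection sees the training rows only.
clean_acc = held_out_accuracy(*select_features(X[tr], y[tr], k))

print(f"leaky pipeline accuracy:   {leaky_acc:.2f}")
print(f"correct pipeline accuracy: {clean_acc:.2f}")
```

Because the labels are random noise, the correct pipeline hovers around 50% accuracy, while the leaky one typically scores far higher, purely from having selected features on the test labels. Reviewers rarely have the time to spot this in a 30-line methods section.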

This error does not affect the results or conclusions

Please see also an earlier comment on image duplications: while results do not change if authors repeat images of the same object without notice, such duplication does not provide independent evidence and therefore does affect the conclusions.

JACI – retractions overdue

JACI is the journal where I had the poorest experience I have ever encountered, both as an author and as a reviewer. The editors never adequately responded to numerous errors in an earlier paper, although I sent a long letter describing all the details.

And it is a nightmare, even now with more than 100 corrigenda in this journal, as the editorial office even modified correctly submitted images. Yes, the JACI editor also published falsified data.

Only recently I found another strange retraction note:

The Publisher regrets that this article is an accidental duplication of an article that has already been published in J Allergy Clin Immunol

while the link in this retraction note leads to a different paper.

It seems that the journal has already lost track of its own record…

Parental allergy history at farms

A recent paper on bias in farming studies did not discuss a healthy worker effect, although this is a reasonable explanation.

So let’s have a more detailed look here at farm parents. It is an important question whether parents are also “protected” (or whether some affected parents just moved away).

AFAIK there are 7 studies dealing with this question.