Tag Archives: bias

elife delisted

eLife is one of the most interesting scientific journals, with a full history at Wikipedia. The eLife board announced in 2022 that

From next year, eLife is eliminating accept/reject decisions after peer review, instead focusing on public reviews and assessments of preprint

with the unfortunate but foreseeable consequence that eLife no longer receives an impact factor

Clarivate, the provider of the Web of Science platform, said it would not provide impact factors to journals that publish papers not recommended for publication by reviewers.

I don’t care about impact factors. I also do not care about Clarivate or any other private-equity company, as we don’t need this kind of business in science. eLife, however, will lose its value, particularly as the system still has some flaws.

DeevyBee already commented on them a year ago

there is a fatal flaw in the new model, which is that it still relies on editors to decide which papers go forward to review, using a method that will do nothing to reduce the tendency to hype and the consequent publication bias that ensues. I blogged about this a year ago, and suggested a simple solution, which is for the editors to adopt ‘results-blind’ review when triaging papers. This is an idea that has been around at least since 1976 (Mahoney, 1976) which has had a resurgence in popularity in recent years, with growing awareness of the dangers of publication bias (Locasio, 2017). The idea is that editorial decisions should be made based on whether the authors had identified an interesting question and whether their methods were adequate to give a definitive answer to that question.

So the idea is that the editors receive the title and a modified abstract, with no author names and no results.


CC-BY-NC

It is an ethical, not a mathematical problem

An excellent insider comment on Gebru and Mitchell, the woman who was fired by Google over her AI paper.

https://twitter.com/mmitchell_ai/status/1362885356127801345

Michaela Menken writes about it:

Bias in, bias out. That is, the learned models amplify the positions and voices that are most represented in the training data. And these are rarely the positions and voices of minorities …

Reporting bias … implicit, ontological knowledge of the world is not fully available to the AI …

Selection bias, that is … biases that can arise from the way the data are selected …

Confirmation bias … whether the output is judged to be of good or poor quality naturally corresponds with the worldview and expectations of whoever performs the evaluation.

Automation bias. People tend to accept results that were produced algorithmically more quickly and readily.
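The first point, bias in/bias out, can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration (the data and the majority-vote "model" are my own invention, not from the commentary): a model fitted to skewed data simply reproduces the skew.

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: 95 documents voicing
# the majority position, 5 voicing a minority position.
training_opinions = ["majority"] * 95 + ["minority"] * 5

def predict(training_data):
    # The simplest possible "learned model": always output the label
    # that is most frequent in the training data.
    return Counter(training_data).most_common(1)[0][0]

# Whatever the minority voices said, the model's output is the
# majority position -- bias in, bias out.
print(predict(training_opinions))  # prints "majority"
```

Real language models are vastly more complex, but the mechanism is the same in kind: frequency in the training corpus translates into dominance in the output.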

So the arrival of AI brings massive ethical problems. Not for nothing was a “Center for Responsible AI Technologies” founded in Augsburg last year. In addition, 100 new professorships were created, and even the Bavarian Ethics Council commented on the subject two weeks ago, albeit in little concrete terms.


CC-BY-NC

Give us our daily bias

While working on COVID-19 mortality today, I fell back on the survivorship bias that is nicely illustrated at Wikipedia and which is just another type of the selection bias that I explained in my last talk.

During World War II, the statistician Abraham Wald took survivorship bias into account in his calculations when considering how to minimize bomber losses to enemy fire. The Statistical Research Group (SRG) at Columbia University, of which Wald was a part, examined the damage done to aircraft that had returned from missions and, following his reasoning, recommended adding armor to the areas that showed the least damage. This contradicted the US military’s conclusion that the most-hit areas of the plane needed additional armor.
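Wald’s insight can be made concrete with a toy simulation. The numbers and sections below are entirely hypothetical (not historical data): each bomber takes one hit in a random section, hits to the engine are assumed fatal, and the analysts only ever see the planes that return.

```python
import random

random.seed(0)

# Hypothetical model: one hit per bomber, uniformly at random,
# in one of four sections. Engine hits are assumed fatal.
SECTIONS = ["fuselage", "wings", "tail", "engine"]
FATAL = {"engine"}

returned_damage = {s: 0 for s in SECTIONS}
lost = 0

for _ in range(10_000):
    hit = random.choice(SECTIONS)
    if hit in FATAL:
        lost += 1                  # never observed by the analysts
    else:
        returned_damage[hit] += 1  # observed on the returning planes

# The surviving sample shows zero engine damage -- precisely the area
# that most needs armor. Naively armoring the most-damaged visible
# areas would protect exactly the wrong places.
print(returned_damage, "lost:", lost)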


CC-BY-NC

Cognitive bias codex

I am sure I wrote or talked about it before, but I cannot find it.

Maybe all mission-critical hypotheses should undergo a bias check?

So here again – a link to the cognitive bias codex plot by Terry Heick from https://www.teachthought.com (which is itself based on Wikipedia).


CC-BY-NC

A biased scientific result is no different from a useless one

is a quote from a recent Nature column, “Beware the creeping cracks of bias”. A great article that summarizes why impact-oriented biomedical science is at risk of producing meaningless and useless results: Continue reading A biased scientific result is no different from a useless one


CC-BY-NC

Clearly biased: Maternal recall of asthma in the family

I have heard it many times at congresses, and there now even seems to be a meta-analysis of a possible preferential maternal transmission of asthma to children. And of course there are important biological questions behind it (imprinting? maternal antibody transfer?), but unfortunately this is nothing else than a spurious effect.
The author’s view is well taken that we did the first modern family study in 1992 Continue reading Clearly biased: Maternal recall of asthma in the family


CC-BY-NC

When controls are no controls

So far in epidemiology, case–control studies are defined by an approach where

… the past histories of patients (the cases) suffering from the condition of interest are compared to the past histories of persons (the controls) who do not have the condition of interest, but who otherwise resemble the cases in such particulars as age and sex ….

I usually explain controls as non-cases in the same overall environment Continue reading When controls are no controls


CC-BY-NC