The Bill Gates problem

The Bill Gates problem – billionaire philanthropists investing only in their own interests – is a real problem

Similarly restricted views exist in other areas, too. In the energy sector, for instance, Gates flouts comparative performance trends to back exorbitantly expensive nuclear power instead of much more affordable, reliable and rapidly improving renewable sources and energy storage. In agriculture, grants tend to support corporate-controlled gene-modification programs instead of promoting farmer-driven ecological farming, the use of open-source seeds or land reform. African expertise in many locally adapted staples is sidelined in favour of a few supposedly optimized transnational commodity crops.

On the other hand, billionaires hardly pay any tax – which adds even more weight to the Nature commentary. But what are the alternatives – “tax the rich”? One remarkable woman, Marlene Engelhorn, is now showing how this could work.

Marlene Engelhorn, who is 31 and lives in Vienna, wants 50 Austrians to determine how €25m (£21.5m) of her inheritance should be redistributed. “I have inherited a fortune, and therefore power, without having done anything for it,” she said.
“And the state doesn’t even want taxes on it.”

Political opinion: the more convinced, the less the knowledge

Epidemiology has rather little to do with politics, even though political convictions are undeniably linked to living conditions. I was all the more surprised to see how strongly individual political attitudes influenced COVID-19 infection rates, and thereby also mortality – see our study in ZRex, which three days ago even made it into the Bundestag.

I am now also surprised by a new study in Sci Rep that relates political and historical knowledge to political orientation.

Contrary to the dominant perspective, we found no evidence that people at the political extremes are the most knowledgeable about politics. Rather, the most common pattern was a fourth-degree polynomial association in which those who are moderately left-wing and right-wing are more knowledgeable than people at the extremes and center of the political spectrum.

The more extreme the conviction, the less the knowledge? For Germany this holds only to a limited extent, even though a recent SZ article suggests as much.

The best informed were those who leaned moderately to the left or right. In the very middle of the political river the researchers observed a small shoal; here, too, knowledge was rather shallow.

This over-interprets the article's poor chart.

The differences are at best borderline significant at the 0.05 level, and it is questionable anyway whether a knowledge gain of 0.05 points is relevant at all.
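
Whether such a fourth-degree bump is real could in principle be checked with a simple nested-model test. Here is a minimal sketch with simulated data (variable names and effect sizes are my own invention, not the study's):

```python
# Minimal sketch with simulated data (not the study's): test whether a
# fourth-degree polynomial in political orientation explains knowledge
# better than a simple linear trend.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
orientation = rng.uniform(-1, 1, 2000)                      # -1 = far left, +1 = far right
knowledge = 0.05 * orientation**2 + rng.normal(0, 1, 2000)  # toy outcome with a tiny bump

def design(x, degree):
    """Design matrix with polynomial terms x, x^2, ..., x^degree plus intercept."""
    return sm.add_constant(np.column_stack([x**d for d in range(1, degree + 1)]))

linear = sm.OLS(knowledge, design(orientation, 1)).fit()
quartic = sm.OLS(knowledge, design(orientation, 4)).fit()

# Nested-model F-test: does the quartic shape add anything beyond the linear fit?
f_stat, p_value, df_diff = quartic.compare_f_test(linear)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
print(f"adjusted R² gain: {quartic.rsquared_adj - linear.rsquared_adj:.4f}")
```

Even where such a test scrapes below 0.05, the adjusted R² gain tells you whether the curve is worth talking about.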

In other countries, however, the situation looks completely different…

Pixel metrics in image analysis

A new paper in Nature Methods offers an interesting, first-of-its-kind comparison of the

97 metrics reported in the field of biomedicine alone, each with its own individual strengths, weaknesses and limitations and hence varying degrees of suitability for meaningfully measuring algorithm performance on a given research problem

Forming an international multidisciplinary consortium of 62 experts, they performed a multistage Delphi process identifying pitfalls related to an inadequate choice of the problem category (P1), to poor metric selection (P2) and to poor metric application (P3). Here is one P1 example from this highly recommended paper.

The pixel metrics are on GitHub, and the code from the paper is also online. And do not miss the sister publication by Maier-Hein L. et al., “Metrics reloaded: recommendations for image analysis validation”, also in Nat. Methods 2024.
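
To see why the choice of metric matters so much, here is a toy example of my own (not taken from the paper): the same two-pixel localisation error barely dents the Dice score of a large structure but halves it for a small one – one of the effects these pitfall catalogues warn about.

```python
# Toy example (mine, not from the paper): the Dice score reacts very differently
# to the same 2-pixel error depending on the size of the segmented structure.
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient for two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return 2 * intersection / (pred.sum() + target.sum())

# Large structure: 40 x 40 pixels, prediction misses a 2-pixel strip at one edge
target_large = np.zeros((100, 100), dtype=bool)
target_large[10:50, 10:50] = True
pred_large = np.zeros((100, 100), dtype=bool)
pred_large[10:50, 12:50] = True

# Small structure: 4 x 4 pixels, prediction shifted by the same 2 pixels
target_small = np.zeros((100, 100), dtype=bool)
target_small[10:14, 10:14] = True
pred_small = np.zeros((100, 100), dtype=bool)
pred_small[10:14, 12:16] = True

print(f"Dice, large structure: {dice(pred_large, target_large):.2f}")  # ~0.97
print(f"Dice, small structure: {dice(pred_small, target_small):.2f}")  # 0.50
```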

Review mills

It is hard to believe – but after research paper mills there are now also review mills

What I eventually found was a Review Mill, a set of 85 very similar review reports in 23 journals published by MDPI (Agronomy, Antibiotics, Applied Sciences, Atoms, Biomimetics, Biomolecules, Cancers, Catalysts, Chemistry, Coatings, Electronics, International Journal of Molecular Sciences, Journal of Clinical Medicine, Journal of Personalized Medicine, Materials, Metals, Molecules, Nutrients, Pathogens, Polymers, Prosthesis, Sensors and Water) from August 2022 to October 2023, most of the time with coercive citation, that is, asking authors to “cite recently published articles” which were always co-authored by one or more reviewers of the Review Mill.

Parallelized computer code and DNA transcription

At stackexchange there is a super interesting discussion on parallelized computer code and DNA transcription (which is different from the DNA-based molecular programming literature…)

IF : Transcriptional activator; when present a gene will be transcribed. In general there is no termination of events unless the signal is gone; the program ends only with the death of the cell. So the IF statement is always a part of a loop.

WHILE : Transcriptional repressor; the gene will be transcribed as long as the repressor is not present.

FUNCTION: There are no equivalents of function calls. All events happen in the same space and there is always a likelihood of interference. One can argue that organelles act as compartments that may have function-like properties, but they are highly complex and are not just some kind of input-output devices.

GOTO is always dependent on a condition. This can happen in case of certain network connections such as feedforward loops and branched pathways. For example if there is a signalling pathway like this: A → B → C and there is another connection D → C then if somehow D is activated it will directly affect C, making A and B dispensable.
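
For readers who think in code, here is a toy sketch of the analogy only (my own, not from the thread, and certainly not a biological model): the activator "IF" and the repressor condition sit inside an endless loop that only terminates with the death of the cell.

```python
# Toy sketch of the analogy only (not a biological model): transcription gated
# by an activator ("IF") and a repressor, re-evaluated as long as the cell lives.
import random

def cell_alive():
    return random.random() > 0.01      # the "program" only ends with cell death

def transcribe(gene):
    print(f"transcribing {gene} -> mRNA")

activator_present = True               # hypothetical signals, just for illustration
repressor_present = False

for step in range(20):                 # bounded here only so the demo terminates
    if not cell_alive():
        break
    if activator_present and not repressor_present:
        transcribe("geneX")            # IF: activator present -> gene is read
    repressor_present = random.random() < 0.3   # signals change between "iterations"
```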

Of course these are completely different concepts. I fully agree with the further stackexchange discussion that

it is the underlying logic that is important and not the statement construct itself and these examples should not be taken as absolute analogies. It is also to be noted that DNA is just a set of instructions and not really a fully functional entity … However, even being just a code it is comparable to a HLL [high level language] code that has to be compiled to execute its functions. See this post too.

Please forget everything you read from Francis Collins about this.

When AI results cannot be generalized

There is a new Science paper that shows

A central promise of artificial intelligence (AI) in healthcare is that large datasets can be mined to predict and identify the best course of care for future patients.  … Chekroud et al. showed that machine learning models routinely achieve perfect performance in one dataset even when that dataset is a large international multisite clinical trial … However, when that exact model was tested in truly independent clinical trials, performance fell to chance levels.

This study predicted antipsychotic medication effects in schizophrenia – admittedly not a trivial task given the high individual variability (there are no extensive pharmacogenetic studies behind it). But why did it fail so completely? The authors highlight two major points in the introduction and detail three more in the discussion (see the toy sketch after the list):

  • models may overfit the data by fitting the random noise of one particular dataset rather than a true signal
  • poor model transportability is expected due to patients, providers, or implementation characteristics that vary across trials
  • in particular, patient groups may be too different across trials, and this heterogeneity is not captured by the model
  • missing outcomes and covariates: psychosocial information and social determinants of health were not recorded in all studies
  • patient outcomes may be too context-dependent, as trials may have subtly important differences in recruitment procedures, inclusion criteria and/or treatment protocols
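
The first two points are easy to reproduce with simulated data. Here is a minimal sketch (toy data, not the Chekroud et al. models): a classifier that looks excellent under internal cross-validation drops to chance in an "external" trial where the same features no longer carry the signal.

```python
# Minimal sketch with toy data (not the Chekroud et al. models): internal
# cross-validation looks excellent, a truly independent "trial" falls to chance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def make_trial(n, informative):
    """Toy 'trial': in the development trial the outcome follows feature 0,
    in the independent trial the same features carry no signal at all."""
    X = rng.normal(0, 1, size=(n, 20))
    if informative:
        y = (X[:, 0] > 0).astype(int)
        flip = rng.random(n) < 0.05              # a little label noise
        y = np.where(flip, 1 - y, y)
    else:
        y = rng.integers(0, 2, n)                # outcome unrelated to the features
    return X, y

X_dev, y_dev = make_trial(500, informative=True)     # development trial
X_ext, y_ext = make_trial(500, informative=False)    # independent trial

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("internal 5-fold CV accuracy:", cross_val_score(model, X_dev, y_dev, cv=5).mean())
print("accuracy in independent trial:", model.fit(X_dev, y_dev).score(X_ext, y_ext))
```

In real clinical data the failure modes are of course subtler (case mix, unmeasured covariates, protocol differences), but the pattern is the same.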

So are we left now without any clue?

I remember another example from Gigerenzer's “Click”: misclassification of chest X-rays caused by the imaging device (mobile or stationary), which correlates with more or less serious cases (page 128 refers to Zech et al.). So we need to know the relevant co-factors first.

There is even a first understanding of what goes on inside the black box of the neural network. Using LRP (Layer-wise Relevance Propagation), the contribution of the input features to the final recognition can already be visualized as a heatmap.
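
For a single dense layer, the underlying redistribution rule (the LRP epsilon rule) fits into a few lines. A minimal numpy sketch of the published rule, not the code behind any particular heatmap:

```python
# Minimal numpy sketch of the LRP epsilon rule for one dense layer
# (illustration only, not the code behind any particular published heatmap).
import numpy as np

def lrp_epsilon(a, W, relevance_out, eps=1e-6):
    """Redistribute the relevance of the outputs back onto the inputs.

    a             : input activations, shape (n_in,)
    W             : layer weights, shape (n_in, n_out)
    relevance_out : relevance assigned to each output neuron, shape (n_out,)
    """
    z = a[:, None] * W                 # contribution of input j to output k
    s = z.sum(axis=0)                  # total pre-activation per output neuron
    s = s + eps * np.sign(s)           # epsilon stabiliser against near-zero sums
    return (z / s * relevance_out).sum(axis=1)

# Toy layer: 3 input features, 2 outputs, all relevance placed on output 0
a = np.array([1.0, 0.5, 0.0])
W = np.array([[ 2.0, -1.0],
              [ 1.0,  1.0],
              [-3.0,  0.5]])
R_in = lrp_epsilon(a, W, relevance_out=np.array([1.0, 0.0]))
print(R_in)   # per-input relevance; reshaped to pixel space this becomes the heatmap
```

Stacking this rule layer by layer, from the output back to the input pixels, is what produces the heatmaps mentioned above.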

In the end, it is not about what opinions other people hold

but about how things stand with the truth, says Thomas Aquinas

Quia studium philosophiae non est ad hoc quod sciatur quid homines senserint sed qualiter se habeat veritas rerum.

De caelo et mundo, lib. I, lect. 22, nr. 228


Data voids and search engines

An interesting Nature editorial reports on a recent study:

A study in Nature last month highlights a previously underappreciated aspect of this phenomenon: the existence of data voids, information spaces that lack evidence, into which people searching to check the accuracy of controversial topics can easily fall…
Clearly, copying terms from inaccurate news stories into a search engine reinforces misinformation, making it a poor method for verifying accuracy…
Google does not manually remove content, or de-rank a search result; nor does it moderate or edit content, in the way that social-media sites and publishers do.

So what could be done?

There’s also a body of literature on improving media literacy — including suggestions on more, or better education on discriminating between different sources in search results.

Sure, increasing media literacy on the consumer side would be helpful. But should Google be allowed to earn all that money without any further curation effort? The original study found

Here, across five experiments, we present consistent evidence that online search to evaluate the truthfulness of false news articles actually increases the probability of believing them.

So why not put up red flags? Or de-rank search results?

Fake screenshot


The end of the bachelor thesis

has apparently already begun, at least in business administration in Prague. Quote:

Texts written with artificial intelligence can hardly be distinguished from those written by humans. For universities, examining them is therefore very difficult, says Dean Hnilica. "We have other parts of our degree programme in which students can demonstrate their learning outcomes or expected learning outcomes. The bachelor thesis is therefore superfluous."