Tag Archives: ai

Death by AI

spiegel.de reports a fatal accident involving a self-driving car:

Drifting into the oncoming lane in a curve
One dead and nine seriously injured in an accident with a test vehicle
Four rescue helicopters and 80 firefighters were deployed: a young man died in an accident on the B28 in the district of Reutlingen, and several people were taken to hospital with serious injuries.

Is there any registry of these kinds of accidents?

https://twitter.com/ISusmelj/status/1558912252119482368

and see the discussion on responsibility:

The first serious accident involving a self-driving car in Australia occurred in March this year. A pedestrian suffered life-threatening injuries when hit by a Tesla Model 3, which the driver claims was in “autopilot” mode.
In the US, the highway safety regulator is investigating a series of accidents where Teslas on autopilot crashed into first-responder vehicles with flashing lights during traffic stops.

Big Data Paradox: quality beats quantity

https://www.nature.com/articles/s41586-021-04198-4 (via @emollick)

Surveys are a crucial tool for understanding public opinion and behaviour, and their accuracy depends on maintaining statistical representativeness of their target populations by minimizing biases from all sources. Increasing data size shrinks confidence intervals but magnifies the effect of survey bias: an instance of the Big Data Paradox … We show how a survey of 250,000 respondents can produce an estimate of the population mean that is no more accurate than an estimate from a simple random sample of size 10
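To see the mechanism, here is a toy simulation of my own (the response rates are made up for illustration; this is not code from the paper): a huge survey whose respondents are only mildly unrepresentative ends up no more accurate than ten people drawn at random.

```python
# Big Data Paradox in miniature (my own sketch, illustrative numbers only).
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
y = rng.random(N) < 0.6              # trait with true population mean 0.6
truth = y.mean()

big_biased, tiny_srs = [], []
for _ in range(200):
    # Biased survey: carriers of the trait respond twice as often
    responds = rng.random(N) < np.where(y, 0.50, 0.25)
    big_biased.append(y[responds][:250_000].mean())
    # Simple random sample of just 10 people
    tiny_srs.append(y[rng.choice(N, 10, replace=False)].mean())

rmse = lambda est: np.sqrt(np.mean((np.array(est) - truth) ** 2))
print(f"biased survey, n=250,000: RMSE {rmse(big_biased):.3f}")
print(f"random sample, n=10:      RMSE {rmse(tiny_srs):.3f}")
# Both land around 0.15: the error is dominated by bias, not sample size.
```

More biased data only shrinks the confidence interval around the wrong answer.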

This paradox basically confirms my earlier observation in asthma genetics:

this result was possible with just 415 individuals, instead of the 500,000 individuals used nowadays

It is only Monday but already depressing

Comment on the PaLM paper by u/Flaky_Suit_8665 via @hardmaru

67 authors, 83 pages, 540B parameters in a model, the internals of which no one can say they comprehend with a straight face, 6144 TPUs in a commercial lab that no one has access to, on a rig that no one can afford, trained on a volume of data that a human couldn’t process in a lifetime, 1 page on ethics with the same ideas that have been rehashed over and over elsewhere with no attempt at a solution – bias, racism, malicious use, etc. – for purposes that who asked for?

(replication crisis)^2

We always laughed at the papers in the “Journal of Irreproducible Results”.

https://www.thriftbooks.com/w/the-best-of-the-journal-of-irreproducible-results/473440/item/276126/#idiq=276126&edition=1874246

Then we had the replication crisis, and nobody laughed anymore.

And today? It seems that irreproducible research is set to reach new heights. Elizabeth Gibney discusses an arXiv paper by Sayash Kapoor and Arvind Narayanan, basically saying that

reviewers do not have the time to scrutinize these models, so academia currently lacks mechanisms to root out irreproducible papers, he says. Kapoor and his co-author Arvind Narayanan created guidelines for scientists to avoid such pitfalls, including an explicit checklist to submit with each paper … The failures are not the fault of any individual researcher, he adds. Instead, a combination of hype around AI and inadequate checks and balances is to blame.

Algorithms getting stuck on shortcuts that don’t always hold have been discussed here earlier. Data leakage (good old confounding) due to proxy variables also seems to be a common issue, as in the sketch below.
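To make the leakage point concrete, here is a minimal sketch of my own (synthetic data, not an example from the Kapoor & Narayanan paper): selecting features on the full data set before cross-validation produces impressive accuracy on pure noise, while doing the selection inside each training fold brings it back to chance.

```python
# Classic leakage pitfall, illustrated on pure noise (my own toy example).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10_000))   # pure noise features
y = rng.integers(0, 2, size=100)     # random labels: true accuracy is 50%

# Leaky: the feature selector has already seen the test folds
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(), X_sel, y, cv=5).mean()

# Correct: selection happens inside each training fold only
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
clean = cross_val_score(pipe, X, y, cv=5).mean()

print(f"with leakage: {leaky:.2f}   without: {clean:.2f}")  # ~0.7+ vs ~0.5
```

The same mechanism hides behind proxy variables: a feature that quietly encodes the outcome makes any model look brilliant until it meets new data.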

More about the AI winter

towardsdatascience.com

In the deep learning community, it is common to retrospectively blame Minsky and Papert for the onset of the first ‘AI Winter,’ which made neural networks fall out of fashion for over a decade. A typical narrative mentions the ‘XOR Affair,’ a proof that perceptrons were unable to learn even very simple logical functions as evidence of their poor expressive power. Some sources even add a pinch of drama recalling that Rosenblatt and Minsky went to the same school and even alleging that Rosenblatt’s premature death in a boating accident in 1971 was a suicide in the aftermath of the criticism of his work by colleagues.


Allergy research – waste of time?

“A waste of time” – this has been said about other fields, but it also applies to allergy research, judging from the review request from “Allergy” today. I have to keep its content confidential, but not this comment by AI expert Jeremy Howard:

It’s a problem in science in general. Scientists need to be published which means they need to work on things that their peers are extremely familiar with and can recognize an advance in that area. So, that means that they all need to work on the same thing. The thing they work on… there’s nothing to encourage them to work on things that are practically useful so you get just a whole lot of research which is minor advances and stuff that’s been very highly studied and has no significant practical impact.

Deep fake image fraud

Working right now on another image integrity study, I fear that we may already have deep fake images in current scientific papers. I have never spotted one in the wild, which doesn’t mean they don’t exist…

Here are some T cells that I produced this morning.

https://huggingface.co/spaces/dalle-mini/dalle-mini


Responsibility for algorithms

An excellent paper at towardsdatascience.com about the responsibility for algorithms, including a

broad framework for involving citizens to enable the responsible design, development, and deployment of algorithmic decision-making systems. This framework aims to challenge the current status quo where civil society is in the dark about risky ADS.

I think the responsibility lies not primarily with the developer but with the user and the social and political framework (SPON warns about the numerous crazy errors made when AI is left to decide about human behaviour, while I can also recommend “Weapons of Math Destruction” here).

Now, in the third wave of machine learning, the question is already being discussed (Economist & Washington Post) whether AI has a personality of its own.

The dialogue sounds slightly better than ELIZA, but it is again way off.

We clearly need to regulate this gold rush to avoid further car crashes like this one in China and this one in France.

Another problem in AI: Out-of-distribution generalization

Not sure if it is really the biggest, but it is certainly one of the most pressing problems: out-of-distribution generalization. It is explained as follows:

Imagine, for example, an AI that’s trained to identify cows in images. Ideally, we’d want it to learn to detect cows based on their shape and colour. But what if the cow pictures we put in the training dataset always show cows standing on grass? In that case, we have a spurious correlation between grass and cows, and if we’re not careful, our AI might learn to become a grass detector rather than a cow detector.

As an epidemiologist, I would simply have said that this is collider bias or confounding – every new field rediscovers the same problems over and over again.

It is not unexpected that an AI just running randomly over pixels picks up spurious associations. Once the shape and colour of cows have been detected, the surrounding environment, like grass or a stable, is irrelevant. This means that after getting initial results we have to step back and simulate different lighting conditions, from sunlight to light bulb, and different environments, from grass to slatted floor (invariance principle). Shape and size also matter: cow spots keep their size and form to some extent, irrespective of whether they belong to a real animal or a children’s toy (scaling principle). I am a bit more sceptical about including multimodal data (e.g. a smacking sound), as the absence of such features is no proof of non-existence, and the sound can also be imitated by other animals.
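A toy version of the cow-and-grass story (my own construction; features and numbers are hypothetical): the training environment couples label and background, the test environment decouples them, and comparing accuracy across environments exposes the shortcut.

```python
# Shortcut learning in miniature: the model becomes a "grass detector".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_env(n, p_grass_given_cow):
    cow = rng.integers(0, 2, n)                 # label: cow yes/no
    shape = cow + rng.normal(0, 2.0, n)         # weak but causal feature
    grass = np.where(rng.random(n) < p_grass_given_cow, cow, 1 - cow)
    background = grass + rng.normal(0, 0.1, n)  # strong spurious feature
    return np.column_stack([shape, background]), cow

X_tr, y_tr = make_env(10_000, 0.95)  # training: cows almost always on grass
X_te, y_te = make_env(10_000, 0.50)  # OOD test: background uninformative

clf = LogisticRegression().fit(X_tr, y_tr)
print("in-distribution accuracy :", clf.score(X_tr, y_tr))  # high
print("out-of-distribution acc. :", clf.score(X_te, y_te))  # near chance
```

This is essentially the invariance check described above: an estimator that survives the switch from grass to slatted floor must have picked up the shape, not the background.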

And yes, less is more.

Deep fake science

I am currently working on a lecture on how I make up my mind when approaching a new scientific field. Of course, we get a first orientation from proven experts, proven journals and textbooks; then we randomly collect additional material just to increase confidence.

But what happens if there is deep fake science? A new Forbes article highlights how deep fakes are going to wreak havoc on society and why “we are not prepared”:

While impressive, today’s deepfake technology is still not quite to parity with authentic video footage—by looking closely, it is typically possible to tell that a video is a deepfake. But the technology is improving at a breathtaking pace. Experts predict that deepfakes will be indistinguishable from real images before long. “In January 2019, deep fakes were buggy and flickery,” said Hany Farid, a UC Berkeley professor and deepfake expert. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”

It means that empirical science can be manipulated as well, and this will be even harder to detect.

Is AI able to classify supporting and contradicting references?

To answer that question, I am examining one of my own papers in which I contradict most earlier research.

Unfortunately, the scite results are disappointing, not to say useless …

https://scite.ai/reports/10.1111/j.1399-3038.2006.00456.x (accessed 21/3/20)

The correlation mania

Collection of material on bioinformatics / big data / deep learning / AI

Fitting in here: the CCC talk by Nadja Geisler / Benjamin Hättasch on 28 December 2019

Deep learning has gone from a dead end to the ultimate solution for all machine learning problems. Yet the meaningfulness and the quality of the solutions seem to be swallowed up more and more by buzzword bingo.
Does it make sense to keep throwing deep learning at every problem? How good are these approaches really? What could happen if we carry on like this? And can these approaches help us live more sustainably? Or do they just fuel the warming of the planet even further?

Add to this the gigantic energy consumption of the required computing power.

Where it all leads: nothing but meaningless correlations


https://www.technologyreview.com

Hundreds of AI tools have been built to catch covid. None of them helped.