Tag Archives: ai

Allergy research – waste of time

A waste of time – this has been said about other fields, but it applies to allergy research as well, as I realized when reading a review request from “Allergy” today. I have to keep the content confidential, but not this comment by AI expert Jeremy Howard:

It’s a problem in science in general. Scientists need to be published which means they need to work on things that their peers are extremely familiar with and can recognize an advance in that area. So, that means that they all need to work on the same thing. The thing they work on… there’s nothing to encourage them to work on things that are practically useful so you get just a whole lot of research which is minor advances and stuff that’s been very highly studied and has no significant practical impact.

Responsibility for algorithms

Excellent paper at towardsdatascience.com about the responsibility for algorithms, proposing a

broad framework for involving citizens to enable the responsible design, development, and deployment of algorithmic decision-making systems. This framework aims to challenge the current status quo where civil society is in the dark about risky ADS.

I think that the responsibility lies not primarily with the developer but with the user and with the social and political framework (SPON warns about the numerous crazy errors that happen when AI is allowed to decide about human behaviour; I can also recommend “Weapons of Math Destruction” here).

Now that we are in the third wave of machine learning, the question is already being discussed (Economist & Washington Post) whether AI has a personality of its own.

The dialogue sounds slightly better than ELIZA, but is again way off. We clearly need to regulate that gold rush…
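
For anyone who does not remember ELIZA: it was nothing more than keyword matching with canned reflections. A minimal sketch (the rules below are invented for illustration, not Weizenbaum’s original script) shows how little “understanding” is needed to keep a dialogue going:

```python
import re

# a few ELIZA-style rules: regex trigger -> canned reflection
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(utterance):
    """Return the canned reflection of the first matching rule."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please tell me more."  # fallback when nothing matches

print(respond("I feel ignored by my machine"))
# -> Why do you feel ignored by my machine?
```

Sixty years later the pattern matcher has become a billion-parameter language model, but the step from fluent dialogue to “personality” is just as unwarranted.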

Another problem in AI: Out-of-distribution generalization

Not sure if it is really the biggest, but it is certainly one of the most pressing problems: out-of-distribution generalization. It is explained as follows:

Imagine, for example, an AI that’s trained to identify cows in images. Ideally, we’d want it to learn to detect cows based on their shape and colour. But what if the cow pictures we put in the training dataset always show cows standing on grass? In that case, we have a spurious correlation between grass and cows, and if we’re not careful, our AI might learn to become a grass detector rather than a cow detector.

As an epidemiologist I would have simply said that this is collider bias or confounding – every new field rediscovers the same problems over and over again.

Not unexpectedly, an AI just running randomly over pixels leads to spurious associations. Once the shape and colour of cows have been detected, the surrounding environment, like grass or a stable, is irrelevant. That means that after getting initial results we have to step back and simulate different lighting conditions, from sunlight to lightbulb, and different environmental conditions, from grass to slatted floor (invariance principle). Shape and size also matter – cow spots will keep their size and form to some extent, irrespective of whether it is a real animal or a children’s toy (scaling principle). I am a bit more sceptical about also including multimodal data (e.g. a smacking sound), as the absence of these features is no proof of non-existence, while such a sound can also be imitated by other animals.
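
To make both the cow/grass problem and the “stepping back” concrete, here is a minimal simulation – hypothetical binary features and scikit-learn, not any particular vision pipeline – in which a classifier trained on confounded data turns into a grass detector, and in which pooling several simulated environments pushes it back towards the stable feature:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

def make_data(p_grass_given_cow):
    """Toy features: 'shape' is the true but noisy cow signal (80%
    agreement with the label); 'grass' agrees with the label with the
    given probability and is otherwise random - a tunable spurious
    correlation."""
    cow = rng.integers(0, 2, n)
    shape = np.where(rng.random(n) < 0.8, cow, rng.integers(0, 2, n))
    grass = np.where(rng.random(n) < p_grass_given_cow, cow,
                     rng.integers(0, 2, n))
    return np.column_stack([shape, grass]), cow

X_train, y_train = make_data(0.95)  # training: cows almost always on grass
X_test, y_test = make_data(0.0)     # deployment: grass unrelated to cows

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # looks excellent
print("OOD accuracy  :", clf.score(X_test, y_test))    # drops sharply
print("weights [shape, grass]:", clf.coef_.round(2))   # grass dominates

# the 'step back': simulate environments from meadow to slatted floor,
# in which the spurious correlation varies, and pool them - the crudest
# form of the invariance principle
X_a, y_a = make_data(0.95)  # meadow
X_b, y_b = make_data(0.05)  # stable / slatted floor
clf_inv = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                   np.concatenate([y_a, y_b]))
print("OOD accuracy after pooling:", clf_inv.score(X_test, y_test))
print("weights [shape, grass]    :", clf_inv.coef_.round(2))
```

Pooling environments is of course only the bluntest version of the invariance idea – approaches such as invariant risk minimization formalize it – but even the toy shows the direction.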

Deep fake science

I am currently working on a lecture on how I make up my mind whenever I approach a new scientific field. Of course we get a first orientation from proven experts, proven journals and textbooks; then we randomly collect additional material just to increase confidence.

But what happens if there is some deep fake science? A new Forbes article now highlights deep fakes, how they are going to wreak havoc on society, and why “we are not prepared”:

While impressive, today’s deepfake technology is still not quite to parity with authentic video footage—by looking closely, it is typically possible to tell that a video is a deepfake. But the technology is improving at a breathtaking pace. Experts predict that deepfakes will be indistinguishable from real images before long. “In January 2019, deep fakes were buggy and flickery,” said Hany Farid, a UC Berkeley professor and deepfake expert. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”

It means that empirical science can be manipulated as well – and this will be even harder to detect.

Is AI able to classify supporting and contradicting references?

To answer that question, I am examining one of my own papers, in which I contradict most of the earlier research.

Unfortunately, the scite results are disappointing, not to say useless…

https://scite.ai/reports/10.1111/j.1399-3038.2006.00456.x 21/3/20
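
What scite attempts here is citation stance classification: deciding from the citing sentence whether it supports, mentions, or contradicts the cited work. A toy sketch – invented example sentences and a plain bag-of-words model, certainly not scite’s actual pipeline – shows why surface features alone are not enough:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# invented citation contexts; a real system would need thousands of
# expert-annotated citing sentences
train_texts = [
    "Our results confirm the association reported previously.",
    "These findings are in line with earlier work.",
    "In contrast to earlier reports, we found no such effect.",
    "We could not replicate the previously described association.",
]
train_labels = ["supporting", "supporting", "contradicting", "contradicting"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# a negated 'confirm' - exactly the kind of sentence that trips up
# models relying on surface cues
print(model.predict(["Our data do not confirm the reported association."]))
```

Whether the negation flips the label here depends on accidental details of the tiny training set – precisely the brittleness visible in the scite report above.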

The correlation mania

A collection of material on bioinformatics / big data / deep learning / AI

Fitting this: the CCC talk by Nadja Geisler / Benjamin Hättasch on 28.12.2019

Deep learning has gone from a dead end to the ultimate solution for all machine learning problems. Yet the meaningfulness and the quality of the solutions seem to be swallowed more and more by buzzword bingo.
Does it make sense to keep throwing deep learning at every problem? How good are these approaches really? What could happen if we carry on like this? And can these approaches help us live more sustainably? Or do they just further fuel the warming of the planet?

Add to this the gigantic energy consumption caused by the required computing power.

Where it all leads: nothing but meaningless correlations.
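
How quickly the correlation mania produces “findings” can be simulated in a few lines: correlate enough purely random variables with each other and significant p-values appear by chance alone:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(size=(100, 50))  # 50 independent random 'variables'

significant, tests = 0, 0
for i in range(50):
    for j in range(i + 1, 50):
        _, p = stats.pearsonr(data[:, i], data[:, j])
        tests += 1
        significant += p < 0.05

# 1225 pairwise tests: about 5% are expected to come out 'significant'
# by pure chance - dozens of publishable non-findings
print(f"{significant} of {tests} correlations significant at p < 0.05")
```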

https://www.technologyreview.com

Hundreds of AI tools have been built to catch covid. None of them helped.