Tag Archives: ai

Who shall survive?

published 1934

The eugenic doctrine, similarly to the technological process, is another promiser of extreme happiness to man. The eugenic dreamer sees in the distant future the human race so changed through breeding that all men will be born well, the world populated with heroes, saints, and Greek gods, and all that accomplished by certain techniques through the elimination and combination of genes. If this should really come to pass the world would be at once glorious, beautiful, and God-like. But it may be reached at the cost of man as a creator from within himself; it would have, like Siegfried in the myth, a vulnerable spot into which the thorn of death could enter – a tragic world, a world in which beauty, heroism, and wisdom are gained without effort, in which the hero is in want of the highest reward, the opportunity to rise from the humblest origin to a supreme level. It sums up to the question whether creation in its essence is finished with conception or whether creation does not continue or cannot be continued by the individual after he is born.
The eugenic dreamer and the technological dreamer have one idea in common: to substitute and hasten the slow process of nature. Once the creative process is encapsuled in a book it is given; it can be recapitulated eternally by everybody without the effort of creating anew. Once a machine for a certain pattern of performance is invented a certain product can be turned out in infinite numbers practically without the effort of man. Once that miraculous eugenic formula will be found a human society will be given at birth perfect and smooth, like a book off the press.

Is ChatGPT outperforming Google search?

Unfortunately, it is useless to enter any research question that I am interested in, so let's go to some more trivial examples.

Alberto Romero

It’s still quite apparent that ChatGPT lacks reasoning abilities and doesn’t have a great memory window (Gary Marcus wrote a great essay on why it “can seem so brilliant one minute and so breathtakingly dumb the next”).
Like Galactica, it makes nonsense sound plausible. People can “easily” pass its filters and it’s susceptible to prompt injections.

ChatGPT may be a jump forward. But it is a jump into nowhere.

Time to cite the Gwern essay

Sampling can prove the presence of knowledge but not the absence

which is basically my problem from the beginning: it is useless to enter any search question that I am interested in.

Fortunately Galactica is down

I had just started a review of Galactica, but today the search bar is already gone. So what happened here? CNET knows more:

Galactica is an artificial intelligence developed by Meta AI (formerly known as Facebook Artificial Intelligence Research) with the intention of using machine learning to “organize science.” It’s caused a bit of a stir since a demo version was released online last week, with critics suggesting it produced pseudoscience, was overhyped and not ready for public use.

There is no need to make any further comment.

https://twitter.com/JoeBHakim/status/1592621465018720256?s=20&t=LrwiBQLg4qK_zBUofTRXPA


Determined by algorithms

increasing complication

Another famous article from the past: P. W. Anderson “More is different” 50 years ago

… the next stage could be hierarchy or specialization of function, or both. At some point we have to stop talking about decreasing symmetry and start calling it increasing complication. Thus, with increasing complication at each stage, we go up on the hierarchy of the sciences. We expect to encounter fascinating and, I believe, very fundamental questions at each stage in fitting together less complicated pieces into a more complicated system and understanding the basically new types of behavior which can result.

Overfitting and model degradation

My beginner experience here isn't exhilarating – maybe others suffer from poor models as well but never report it?

During the training phase the model tries to learn the patterns in the data, based on algorithms that deduce the probability of an event from the presence and absence of certain data. What if the model is learning from noisy, useless or wrong information? The data may be too small or not representative, and the model too complex. As shown in the article linked above, increasing the depth of a classification tree beyond a certain cut point increases only the training accuracy, not the test accuracy – overfitting! Avoiding both under- and overfitting therefore takes a lot of experience.
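
A minimal sketch of that cut point (assuming scikit-learn is available; the dataset and depth values are arbitrary illustrations, not taken from the linked article):

```python
# Minimal sketch: training vs. test accuracy as tree depth grows.
# Dataset and depth range are arbitrary choices for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, 2, 4, 8, 16, 32):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(f"depth={depth:2d}  train={tree.score(X_train, y_train):.2f}"
          f"  test={tree.score(X_test, y_test):.2f}")

# Beyond some depth, training accuracy approaches 1.0 while test accuracy
# stalls or drops – the overfitting cut point described above.
```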

What is model degradation or concept drift? It means that the statistical properties of the predicted variable change over time in an unforeseen way. As the real world changes – politically, climatically or otherwise – the data used for prediction change as well, making the model less accurate. The computer model is static, representing the point in time when the algorithm was developed; empirical data, however, are dynamic. Model fit needs to be reviewed at regular intervals, and again this takes a lot of experience.
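
One simple way to watch for such drift is to compare a feature's live distribution against what the model saw at training time. A minimal sketch (assuming scipy/numpy; the data, shift, and threshold are invented for illustration):

```python
# Minimal drift check: two-sample Kolmogorov-Smirnov test between a
# feature's training-time distribution and its current distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # world at training time
live_feature = rng.normal(loc=0.5, scale=1.2, size=1000)   # world has shifted since

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # illustrative threshold
    print(f"drift suspected (KS={stat:.3f}, p={p_value:.1e}) – review model fit")
else:
    print("no evidence of drift in this feature")
```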

Death by AI

spiegel.de reports a fatal accident involving a self-driving car.

Veered into oncoming traffic in a curve
One dead and nine seriously injured in accident involving a test vehicle
Four rescue helicopters and 80 firefighters were deployed: a young man died in an accident on the B28 in the district of Reutlingen; several people were taken to hospital with serious injuries.

Is there any registry of these kinds of accidents?

https://twitter.com/ISusmelj/status/1558912252119482368

and the discussion on responsibility

The first serious accident involving a self-driving car in Australia occurred in March this year. A pedestrian suffered life-threatening injuries when hit by a Tesla Model 3, which the driver claims was in “autopilot” mode.
In the US, the highway safety regulator is investigating a series of accidents where Teslas on autopilot crashed into first-responder vehicles with flashing lights during traffic stops.

Big Data Paradox: quality beats quantity

https://www.nature.com/articles/s41586-021-04198-4 (via @emollick)

Surveys are a crucial tool for understanding public opinion and behaviour, and their accuracy depends on maintaining statistical representativeness of their target populations by minimizing biases from all sources. Increasing data size shrinks confidence intervals but magnifies the effect of survey bias: an instance of the Big Data Paradox … We show how a survey of 250,000 respondents can produce an estimate of the population mean that is no more accurate than an estimate from a simple random sample of size 10

It basically confirms my earlier observation in asthma genetics

this result was possible with just 415 individuals instead of 500,000 individuals nowadays
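
A toy simulation makes the paradox tangible (numbers are invented for illustration, not taken from the Nature paper): a sample of 250,000 with a small response bias lands no closer to the true mean than a random sample of 10.

```python
# Toy illustration of the Big Data Paradox: a large but slightly biased
# sample is no more accurate than a tiny simple random sample.
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(loc=50.0, scale=10.0, size=1_000_000)
true_mean = population.mean()

# 250,000 respondents, but people with higher values answer slightly more often
weights = np.clip(1.0 + 0.02 * (population - true_mean), 0.01, None)
big_biased = rng.choice(population, size=250_000, p=weights / weights.sum())

small_random = rng.choice(population, size=10)  # simple random sample

print(f"true mean           {true_mean:.2f}")
print(f"n=250,000 (biased)  {big_biased.mean():.2f}")
print(f"n=10 (random)       {small_random.mean():.2f}")

# The biased estimate stays off by a fixed amount however large n gets,
# while its tiny confidence interval suggests false certainty.
```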

It is only Monday but already depressing

Comment on the PaLM paper by u/Flaky_Suit_8665 via @hardmaru

67 authors, 83 pages, 540B parameters in a model, the internals of which no one can say they comprehend with a straight face, 6144 TPUs in a commercial lab that no one has access to, on a rig that no one can afford, trained on a volume of data that a human couldn’t process in a lifetime, 1 page on ethics with the same ideas that have been rehashed over and over elsewhere with no attempt at a solution – bias, racism, malicious use, etc. – for purposes that who asked for?

(replication crisis)^2

We always laughed at the papers in the “Journal of Irreproducible Results”

https://www.thriftbooks.com/w/the-best-of-the-journal-of-irreproducible-results/473440/item/276126/?gclid=EAIaIQobChMI3NnCm72l-QIVpHNvBB1nIwSWEAQYAiABEgK6__D_BwE#idiq=276126&edition=1874246


then we had the replication crisis and nobody laughed anymore.


And today? It seems that irreproducible research is set to reach new heights. Elizabeth Gibney discusses an arXiv paper by Sayash Kapoor and Arvind Narayanan basically saying that

reviewers do not have the time to scrutinize these models, so academia currently lacks mechanisms to root out irreproducible papers, he says. Kapoor and his co-author Arvind Narayanan created guidelines for scientists to avoid such pitfalls, including an explicit checklist to submit with each paper … The failures are not the fault of any individual researcher, he adds. Instead, a combination of hype around AI and inadequate checks and balances is to blame.

Algorithms being stuck on shortcuts that don’t always hold has been discussed here earlier. Data leakage (good old confounding) due to proxy variables also seems to be a common issue.
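
A minimal sketch of how such leakage arises in practice (assuming scikit-learn; the sizes are arbitrary): selecting features on the full dataset before cross-validation lets test information leak into training, so pure noise suddenly looks predictive.

```python
# Leakage via preprocessing: feature selection before the CV split vs.
# feature selection inside each training fold.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))   # pure noise features
y = rng.integers(0, 2, size=100)   # random labels: nothing real to learn

# Wrong: the selector sees all labels before cross-validation
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(), X_sel, y).mean()

# Right: selection happens inside each training fold only
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
honest = cross_val_score(pipe, X, y).mean()

print(f"leaky CV accuracy:  {leaky:.2f}")   # well above chance – an artifact
print(f"honest CV accuracy: {honest:.2f}")  # ~0.5, as it should be
```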

More about the AI winter

towardsdatascience.com

In the deep learning community, it is common to retrospectively blame Minsky and Papert for the onset of the first ‘AI Winter,’ which made neural networks fall out of fashion for over a decade. A typical narrative mentions the ‘XOR Affair,’ a proof that perceptrons were unable to learn even very simple logical functions as evidence of their poor expressive power. Some sources even add a pinch of drama recalling that Rosenblatt and Minsky went to the same school and even alleging that Rosenblatt’s premature death in a boating accident in 1971 was a suicide in the aftermath of the criticism of his work by colleagues.
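
The XOR point itself is easy to reproduce. A minimal sketch (assuming scikit-learn) of a single-layer perceptron failing where one hidden layer succeeds:

```python
# The 'XOR affair' in four data points: XOR is not linearly separable,
# so a single-layer perceptron cannot fit it, while one hidden layer can.
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR

p = Perceptron(max_iter=1000).fit(X, y)
print("perceptron:", p.score(X, y))  # at most 0.75 – no linear separator exists

mlp = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=5000, random_state=0).fit(X, y)
print("one hidden layer:", mlp.score(X, y))  # typically 1.0
```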


Allergy research – waste of time?

“A waste of time” – this has been said about other fields, but it applies to allergy research as well, as became clear when reading a review request from “Allergy” today. I have to keep the content confidential, but not the comment of AI expert Jeremy Howard:

It’s a problem in science in general. Scientists need to be published which means they need to work on things that their peers are extremely familiar with and can recognize an advance in that area. So, that means that they all need to work on the same thing. The thing they work on… there’s nothing to encourage them to work on things that are practically useful so you get just a whole lot of research which is minor advances and stuff that’s been very highly studied and has no significant practical impact.

Deep fake image fraud

Doing another image integrity study now, I fear that we may already have deep fake images in current scientific papers. I have never spotted one in the wild, which of course doesn’t mean that they don’t exist…

Here are some T cells that I produced this morning.

https://huggingface.co/spaces/dalle-mini/dalle-mini


Responsibility for algorithms

Excellent paper at towardsdatascience.com about the responsibility for algorithms, including a

broad framework for involving citizens to enable the responsible design, development, and deployment of algorithmic decision-making systems. This framework aims to challenge the current status quo where civil society is in the dark about risky ADS.

I think that the responsibility is not primarily with the developer but with the user and the social and political framework (SPON has a warning about the numerous crazy errors when letting AI decide about human behaviour, while I can also recommend here “Weapons of Math Destruction”).

Being now in the third wave of machine learning, the question is already being discussed (Economist & Washington Post) whether AI has a personality of its own.


The dialogue sounds slightly better than ELIZA but is again way off.

We clearly need to regulate that gold rush to avoid further car crashes like this one in China and this one in France.