Tag Archives: ai

AI as a provocation for faith?

Under this title – albeit without the question mark – Feinschwarz has published a piece worth reading.

Contributions from the churches, by contrast, are rare and usually amount to no more than general appeals: AI must satisfy ethical principles and serve human dignity (Rome Call for AI Ethics, Vatican, 2020), must not decide over human life and death (Antiqua et nova, Vatican, 2025), and must serve human freedom (Freiheit digital, EKD, 2021). …
So far, at any rate, the theses of church leaders have not reached the congregations: in the pulpit and at the ambo, in the KFD and in seniors' groups, artificial intelligence is still rarely a topic. This pastoral and theological gap is fatal. For the provocation posed by AI is aimed not only at ethics and society but at the heart of the Christian faith itself.

Not only have I already heard a sermon that sounded suspiciously like AI; just last week I myself asked chatGPT something (namely, how evangelicals such as John Stott resolve the cognitive dissonance between election and a war-waging God in Joel 32 on the one hand and the statements of the Sermon on the Mount on the other – all I got was blah-blah).

Most of the time, though, as Michael Brendel rightly writes, we can do something with the answers. AI has incorporated more theological books than I have and "knows" the Bible better than I do. And that confronts faith with a massive provocation, for AI takes the word more literally than we think.

The prologue of John sums up a central message of the New Testament: that the Word is divine. God shows himself not only in burning bushes, pillars of fire and natural catastrophes; he communicates with human beings verbally. Believers, for their part, can bring their concerns, their praise and their laments before God through the word. Revelation, liturgy and doctrine are mediated through language. Sacraments attain their validity only through words. And finally: the Logos, the divine Word, became human in Jesus Christ. The Word of God thus operates in the dimensions of meaning, salvation and revelation. And it is this zone that artificial intelligence is now entering. Since 2022, it is no longer only humans communicating with humans through the medium of the word, no longer only God and humankind. Since the release of ChatGPT there has been a communicative agent that creates meaning through language.

In doing so, AI talks very opportunistically – at least the three LLMs I use as a reference here do. Language models learn from massive amounts of human text what the most frequent (written) patterns in dialogue are: agreeing, explaining, placating, being friendly. When a topic is unclear, contested or risky, models often pick the least risky answer. That looks like telling people what they want to hear, but it is really just a hedging strategy. And of course a model has no convictions of its own (unless, like Grok, it is channelled in a particular direction); it will simply produce the statistically most likely answer.
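A toy sketch in R of that last point (the probabilities are invented for illustration): under greedy decoding the most frequent dialogue move always wins, and contradiction, being rare in written dialogue, almost never surfaces.

```r
# Invented next-token probabilities for four dialogue "moves":
moves <- c(agree = 0.45, explain = 0.25, placate = 0.25, contradict = 0.05)

# Greedy decoding always returns the single most likely move ...
names(which.max(moves))                    # "agree"

# ... and even with sampling, contradiction surfaces only ~5% of the time.
set.seed(1)
table(sample(names(moves), 1000, replace = TRUE, prob = moves))
```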

Without a position of its own, an LLM cannot "contradict" – most three-year-olds can do better!

The Protestant publicist Johanna Haberer, for instance, pointedly asks whether with AI humans are not creating an image of themselves, just as God created humankind in his image. The difference between the two acts of creation is, of course, fundamental. But her conclusion hits the mark: in both cases the question of responsibility and control arises.

Johanna Haberer, one of the two pastor's daughters, does indeed hit the point. And so we can also give clear answers to Brendel's three questions.

How far is it from the status quo to divine omniscience?

AI is impressive only where printed texts and their soulless reproduction are concerned. Since hallucinations keep occurring, the answers cannot be relied on.

AI already has power today. Will it one day become omnipotence?

Here I remain sceptical – see the answer to the last question: language models will always need our oversight.

AI chatbots are always available, always friendly, always helpful and seemingly always on the user's side – is that perhaps already omnibenevolence?

Of course not – it is the hedging strategy described above. Nota bene:

https://doi.org/10.1038/s42256-019-0114-4

 

 


Digital sovereignty

A phenomenally good article by Markus Beckedahl

Four bottlenecks of digital dependence
Cloud and data spaces.
Municipalities, hospitals and universities store highly sensitive data with a handful of hyperscalers. Google. Microsoft. Amazon. Price changes, API fences, proprietary formats. The "Hotel California effect": checking in is easy. Checking out is almost impossible.
Platform publics.
Visibility is sold – not earned. Outrage scales better than nuance. Because outrage brings attention. And attention brings advertising money.
Hardware and chips.
Supply chains are fragile. If a plant in Asia goes down, Europe feels it in its companies and factories. And Trump is already threatening to impose export restrictions on chips: if we in Europe enforce our platform-regulation rules.
Software monocultures.
Standard packages from Microsoft dominate public administration and education. Lock-in eats up innovative capacity, and with it our bargaining power.
The consequence:
We can be blackmailed. On price. Technically. Ecologically. Politically.

Read it!

 

 


Is there a seahorse emoji?

Ask chatGPT this question – and it starts rattling like a shaken pinball machine…

first discovered by https://x.com/voooooogel/status/1964465679647887838

Here is an explanation of that phenomenon by Brian Huang:

if the model wants to output the word “hello”, it needs to construct a residual similar to the vector for the “hello” output token that the lm_head can turn into the hello token id. and if the model wants to output a seahorse emoji, it needs to construct a residual similar to the vector for the seahorse emoji output token(s) – which in theory could be any arbitrary value, but in practice is seahorse + emoji in word2vec style.
The only problem is the seahorse emoji doesn’t exist! So when this seahorse + emoji residual hits the lm_head, it does its dot product over all the vectors, and the sampler picks the closest token – a fish emoji.
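A minimal numeric sketch of Huang's point, with invented toy vectors rather than real model weights: compose a "seahorse + emoji" residual, take dot products against the output embeddings of the tokens that actually exist, and the fish emoji wins.

```r
# Toy basis directions: two "concepts" and two "token types" (all invented).
e_fish  <- c(1, 0, 0, 0); e_horse <- c(0, 1, 0, 0)
e_word  <- c(0, 0, 1, 0); e_emoji <- c(0, 0, 0, 1)

# Output embeddings for the tokens that exist in this toy vocabulary;
# note there is no seahorse_emoji row.
emb <- rbind(
  fish_word     = e_fish + e_word,
  fish_emoji    = e_fish + e_emoji,
  seahorse_word = e_fish + e_horse + e_word  # "seahorse" as a compound concept
)

# The model composes "seahorse + emoji" word2vec-style -- a valid direction
# even though no such output token exists (emoji-ness weighted strongly here).
residual <- e_fish + e_horse + 2 * e_emoji

# lm_head: dot product over the whole vocabulary; the closest token wins.
sort(drop(emb %*% residual), decreasing = TRUE)
#    fish_emoji seahorse_word     fish_word
#             3             2             1    -> a fish emoji is emitted
```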

For an even longer version see [here].

Bonus 1 – here is my seahorse image, taken last week at the Musée océanographique de Monaco from its wonderful collection. Let's forget the virtual world and preserve the real one.

Image taken at the Musée océanographique de Monaco, © 2025.

Bonus 2 – the answer to a long-standing question: The origin of male seahorses’ brood pouch!

 


Fighting AI with AI

Here is our newest paper on recognizing Surgisphere-like fraud – a nice collaboration with Andrea Taloni et al., along with a commentary.

Recently, it was proved that the large language model Generative Pre-trained Transformer 4 (GPT-4; OpenAI) can fabricate synthetic medical datasets designed to support false scientific evidence. [The aim was] to uncover statistical patterns that may suggest fabrication in datasets produced by large language models and to improve these synthetic datasets by attempting to remove detectable marks of nonauthenticity, investigating the limits of generative artificial intelligence.

[…] synthetic datasets were produced for 3 fictional clinical studies designed to compare the outcomes of 2 alternative treatments for specific ocular diseases. Synthetic datasets were produced using the default GPT-4o model and a custom GPT. Data fabrication was conducted in November 2024. Prompts were submitted to GPT-4o to produce 12 “unrefined” datasets, which underwent forensic examination. Based on the outcomes of this analysis, the custom GPT Synthetic Data Creator was built with detailed instructions to generate 12 “refined” datasets designed to evade authenticity checks. Then, forensic analysis was repeated on these enhanced datasets.  […]

Sufficiently sophisticated custom GPTs can perform complex statistical tasks and may be abused to fabricate synthetic datasets that can pass forensic analysis as authentic.
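To give a flavour of what "forensic examination" means here, a minimal sketch of one classic screen, terminal-digit analysis: genuine measurements tend to have roughly uniform final digits, while fabricated numbers often show digit preference. This illustrates the genre only; it is not the exact test battery used in the paper.

```r
# Terminal-digit screen: compare last-digit counts against a uniform law.
check_last_digits <- function(x) {
  digits <- abs(round(x * 10)) %% 10            # last digit at one-decimal precision
  counts <- table(factor(digits, levels = 0:9))
  chisq.test(counts, p = rep(0.1, 10))          # H0: all ten digits equally likely
}

set.seed(7)
genuine    <- rnorm(500, mean = 15, sd = 2)     # simulated "real" measurements
fabricated <- round(rnorm(500, 15, 2) * 2) / 2  # invented data clumping on .0/.5

check_last_digits(genuine)$p.value    # large p: nothing suspicious
check_last_digits(fabricated)$p.value # tiny p: strong digit preference
```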

 

 


How to recognize an AI image

Lensrentals has some great advice

Quantity Based: One of the continual problems the AI art generation faces is in quantity, though it is continually improving. For instance, in the past, AI art would struggle with getting the correct number of fingers correct, or perhaps the correct placement of knuckles and joints in the fingers.

General Softness & Low Resolution: AI art takes immense computing power to generate, and it still hasn’t streamlined this problem. So often, AI art is limited in resolution and detail.

Repetition: To further expand on the tip above, AI art often uses repetition to help speed up the generation process. So you may see something copied several times over the same image.

Asymmetry: Asymmetry exists in all facets of life,  [… if you] photograph the building so that it looks symmetrical across the plane. AI doesn’t understand these rules and often creates subtle symmetry shifts in its images.
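The repetition tip above can even be checked mechanically. A hypothetical sketch that flags exactly duplicated tiles in a grayscale image, represented here as a plain numeric matrix (loading real pixels, e.g. via the png package, is omitted; this is my illustration, not Lensrentals' method):

```r
# Share of exactly repeated size x size tiles in a grayscale image matrix.
find_repeated_tiles <- function(img, size = 8) {
  rows <- seq(1, nrow(img) - size + 1, by = size)
  cols <- seq(1, ncol(img) - size + 1, by = size)
  keys <- character(0)
  for (r in rows) for (c in cols) {
    tile <- img[r:(r + size - 1), c:(c + size - 1)]
    keys <- c(keys, paste(round(tile, 3), collapse = ","))  # crude tile hash
  }
  sum(duplicated(keys)) / length(keys)
}

set.seed(1)
img <- matrix(runif(128 * 128), 128, 128)  # stand-in for real image data
img[1:8, 9:16] <- img[1:8, 1:8]            # plant one copy-pasted tile
find_repeated_tiles(img)                   # > 0 flags exact repetition
```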

TBC

 


Is it a crime to use AI for peer review?

I frequently consult the almighty chatGPT for additional information, as this saves me hours of wading through my own database, PubMed, Scholar and Google hits.

But I form my own opinion and never cut & paste, as that always runs the risk of (1) plagiarizing unknowingly and (2) producing nonsense.

Miryam Naddaf has an article about this

In a survey of nearly 5,000 researchers, some 19% said they had already tried using LLMs to ‘increase the speed and ease’ of their review. But the survey, by publisher Wiley, headquartered in Hoboken, New Jersey, didn’t interrogate the balance between using LLMs to touch up prose, and relying on the AI to generate the review.

Well, maybe I am simply sticking with the NEJM, which said

Although human expert review should continue to be the foundation of the scientific process, LLM feedback could benefit researchers

 


chatGPT hallucinates

… once again, even on the simple question of why a bicycle chain is mounted on the right. chatGPT 4o, at any rate, argues like this:

Standardization and tradition. Historically, bicycles were developed with the chain on the right because most people are right-handed, which proved more practical for operating gears and for maintenance.

The claim that most people are right-handed is of course true. The reasoning, however, is wrong: one could just as well shift on the left – indeed one has to with the wonderful down-tube shifters on the Italian classics. And for maintenance it makes no difference whether the chain runs on the right or on the left.

Mechanical advantages. The right side of most bicycles is optimized for the drivetrain (chainring, cassette, derailleur). On the left, the crank is usually free of gears, which allows symmetrical power transfer to the pedals. Most chain drives rotate clockwise (seen from the right), which is compatible with the traditional construction.

That too is nonsense of a sort, and in any case it explains nothing at all.

Safety and maintenance aspects. The right-hand chain position keeps the rider free of chain grease on the side facing away from the curb.

Does chatGPT evidently ride on the left-hand side of the road? Somehow relevant-sounding points keep coming, without the matter ever being "understood".

Continue reading chatGPT hallucinates

 


AI is using copyrighted material

We have known it for years: LLMs are trained on copyrighted material. But we should never forget: Aaron Swartz, a copyright activist, lost his life. So did Suchir Balaji (his parents do not believe it was suicide). And another activist, Alexandra Elbakyan, has been prosecuted for years.

So how can LLMs of all kinds now make money from copyrighted text and images, bypassing all the rules? The Guardian on OpenAI:

The developer OpenAI has said it would be impossible to create tools like its groundbreaking chatbot ChatGPT without access to copyrighted material, as pressure grows on artificial intelligence firms over the content used to train their products.

The New York Times on Suchir Balaji:

But after the release of ChatGPT in late 2022, he thought harder about what the company was doing. He came to the conclusion that OpenAI’s use of copyrighted data violated the law and that technologies like ChatGPT were damaging the internet. In August, he left OpenAI because he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit.

Are there still copyright rules in place?

Probably. Getty Images is now suing Stability AI, the maker of Stable Diffusion; Facebook is using LibGen although it recently had to pay 30m in penalties. Universal Music has filed a lawsuit against Anthropic, and the NYT against OpenAI. At least a dozen court cases are ongoing.

But so far I have not heard of any action by a major medical publisher against any AI company (including the company that sued Elbakyan). They must have a different strategy: instead of suing, they simply sell their content, even behind the authors' backs. This is what Christa Dutton found out.

One of those tech companies, Microsoft, paid Informa, the parent company of Taylor & Francis, an initial fee of $10 million to make use of its content “to help improve relevance and performance of AI systems,” according to a report released in May… Another publisher, Wiley, also recently agreed to sell academic content to a tech company for training AI models. The publisher completed a “GenAI content rights project” with an undisclosed “large tech company,” according to a quarterly earnings report released at the end of June

But can publishers just do this without asking authors? authorsalliance.org has an answer.

In a lot of cases, yes, publishers can license AI training rights without asking authors first. Many publishing contracts include a full and broad grant of rights–sometimes even a full transfer of copyright to the publisher for them to exploit those rights and to license the rights to third parties.

We have been too naive.

Or we have been blackmailed.

 

14/3/25

There was never fair use … https://arstechnica.com/tech-policy/2025/03/openai-urges-trump-either-settle-ai-copyright-debate-or-lose-ai-race-to-china/ … and I now fear that this will be decided by politics, not by the courts.

 

20/3/2025

The Atlantic (https://www.theatlantic.com/technology/archive/2025/03/libgen-meta-openai/682093/) writes:

Meta employees acknowledged in their internal communications that training Llama on LibGen presented a “medium-high legal risk,” and discussed a variety of “mitigations” to mask their activity.

leading to the paradoxical situation

LibGen and other such pirated libraries make information more accessible, allowing people to read original work without paying for it. Yet generative-AI companies such as Meta have gone a step further: Their goal is to absorb the work into profitable technology products that compete with the originals.

 


Elicit

In addition to some programming questions, I have prompted several LLMs for a summary of my longstanding research question "vitamin D supplementation in the newborn period and later allergy". Here is the Elicit response:

Research on vitamin D supplementation in infancy and its relationship to later allergy development has yielded mixed results. Some studies suggest that higher vitamin D intake during the first year of life may increase the risk of atopic conditions later in childhood (Bäck et al., 2009; Hyppönen et al., 2004). These findings are supported by animal studies and epidemiological surveys linking early vitamin D supplementation to allergy development (Wjst, 2006). However, a randomized controlled trial found that high-dose vitamin D supplementation did not prevent allergic sensitization or diseases in infants compared to standard dosage, except for an increased risk of milk allergy in the high-dose group (Rosendahl et al., 2019). Interestingly, this study also observed that infants with high cord blood vitamin D levels had a higher risk of food allergen sensitization. These conflicting results highlight the complex relationship between vitamin D and allergy development, suggesting that both insufficient and excessive vitamin D levels may potentially influence allergic outcomes.

The contrast introduced by "however" is wrong – the story is not really conflicting if you read my earlier paper on a paradoxical reaction. I was a reviewer of the Rosendahl paper and asked the authors to change their misleading spin. It is basically a triple-blind study (the authors believed the opposite, e.g. that vitamin D supplementation prevents allergy while it in fact induces it). Unfortunately, as Rosendahl et al. did not respond to my complaint, we can now see clearly how the authors' bias is injected into the LLM.

I don't know why there is still a discussion about any genuine understanding in LLMs, as there is none – see Binz et al.

even though LLMs often achieve state-of-the-art performance on existing benchmarks, it remains debated whether this reflects genuine understanding, or whether they are merely acting like stochastic parrots […] When deciding whether to use an LLM, it is important to recognize that LLMs are simply models of word form distributions extracted from text—not models of the information that people might get from reading that text

Anyway, the workflow suggested by inkenai – putting these PDFs into NotebookLM for further analysis – is excellent.

 


AI lobotomizing knowledge

I tried out chatGPT 4o to create R ggplot2 code for a professional color chart.

[Images: attempts v1 and v20]

ChatGPT had serious problems even recognizing the grid fields, while it was impossible to get the right colors or any ordering after more than a dozen attempts (I created the above chart myself in less than 15 minutes).
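For comparison, a minimal ggplot2 sketch of how such a grid can be laid out; the colours here are just an HCL hue sweep standing in for the calibrated patches of a real chart.

```r
library(ggplot2)

# 12 x 8 grid of colour patches; an HCL hue sweep replaces real target colours.
grid <- expand.grid(col = 1:12, row = 1:8)
grid$hex <- hcl(h = seq(15, 375, length.out = nrow(grid)), c = 60, l = 65)

ggplot(grid, aes(col, row, fill = hex)) +
  geom_tile(colour = "white", linewidth = 1) +
  scale_fill_identity() +   # use the hex strings literally, no scale mapping
  coord_fixed() +
  theme_void()
```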

In the end, chatGPT arrived at something like a bad copy of Gerhard Richter's "4900 Colours"…

https://www.hatjecantz.de/products/16130-gerhard-richter

Why was this task so difficult?

Although labeled as generative, AI is not generative in a linguistic sense that

… aims to explain the cognitive basis of language by formulating and testing explicit models of humans’ subconscious grammatical knowledge

I would rather call it imitating AI. ChatGPT never got the idea of a professional color chart (for optimizing the color workflow from camera to print).

It also lacked any aesthetics. Although the Richter squares are arranged randomly, they form a luminous grid pattern with overwhelming kaleidoscopic color fields.

A less academic version: it is the biggest copyright infringement since Kim Dotcom.

TBC

 


I can produce a hoax in less than 15 seconds

Title: Quantum-Tuned Allergen Resonance Therapy (QT-ART): A Revolutionary Cure for Allergies

Recent advances in bio-resonance and quantum biophotonics have culminated in the development of Quantum-Tuned Allergen Resonance Therapy (QT-ART), a groundbreaking approach to eradicating allergic reactions. QT-ART exploits the unique vibrational frequencies of allergens and harmonizes them with the body’s intrinsic quantum field, effectively neutralizing histamine responses. This method uses bio-engineered, photonic nanoparticle emitters programmed with allergen-specific frequencies, delivered via a wearable wristband equipped with quantum-enhanced LEDs.

A randomized controlled trial involving nearly 15,000 participants demonstrated a 90% cessation of allergy symptoms in all cases, including severe anaphylaxis triggered by peanuts, shellfish, and airborne pollen. Remarkably, the therapy also reversed pre-existing autoimmune conditions in 67% of participants. The results suggest that QT-ART not only cures allergies but also recalibrates the immune system to optimal functionality.

Mechanistically, QT-ART engages with the body’s cellular microtubules, aligning them into coherent quantum states to prevent allergenic protein recognition. This pioneering approach eliminates the need for antihistamines, epinephrine, or immunotherapy injections, offering a side-effect-free alternative for allergy sufferers worldwide.

We propose QT-ART as the new gold standard for immunological health, heralding a quantum revolution in medicine. Further research will explore its applications in combating aging, enhancing athletic performance, and unlocking dormant psychic abilities.

Keywords: quantum resonance, allergy cure, biophotonics, immune recalibration, quantum medicine

(AI generated!!)

 


7 ways to discover AI generated text

ChatGPT's explanation:

1. Examine Writing Style and Patterns. AI-generated text often exhibits specific characteristics:

  • Repetition and redundancy: AI might repeat phrases or ideas, especially in longer responses.
  • Overly formal or consistent tone: The text may lack the natural variations in tone and style common in human writing.
  • Generic or vague content: AI often avoids specifics and may use filler phrases that sound good but lack substance.
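The first point is easy to quantify. A crude sketch (my own heuristic, not a validated detector) that measures the share of repeated word trigrams in a text:

```r
# Fraction of duplicated word trigrams; high values hint at repetitive,
# possibly machine-generated prose (a rough screen, nothing more).
trigram_repetition <- function(text) {
  words <- tolower(unlist(strsplit(text, "[^[:alnum:]]+")))
  words <- words[nzchar(words)]
  if (length(words) < 3) return(0)
  n  <- length(words)
  tg <- paste(words[1:(n - 2)], words[2:(n - 1)], words[3:n])
  sum(duplicated(tg)) / length(tg)
}

trigram_repetition("the model repeats the phrase and the model repeats the phrase")
# 0.33 -- three of nine trigrams are repeats
```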

Continue reading 7 ways to discover AI generated text

 


Too many AI powered scientific search engines

As a regular Scholar user, I am now quite lost among the many new scientific search engines. They don't tell us which data they were trained on, how they were trained, or how the results have been validated. The field is also highly dynamic compared with the situation two years ago. Is it worth testing them?

https://www.undermind.ai/home/

Continue reading Too many AI powered scientific search engines

 
