Category Archives: Software

Die Gedanken sind frei – who can guess them?

Thoughts are free,
who can guess them?
They drift past like shadows of the night.
No one can know them,
no hunter can shoot them with powder and lead.
Thoughts are free.

What sounds so beautifully lyrical in Hoffmann von Fallersleben is just that: 19th-century poetry. Mind reading has fascinated people since the days of King David, but only recently has it become possible in first approaches (MPI)

The result astonished Libet, as it has astonished many researchers to this day: the readiness potential was already building up in the subjects’ brains before they themselves felt the will to move. Even allowing for a certain delay in reading the stopwatch, the finding stood – the conscious act of will occurred, on average, only three tenths of a second after the preparations for the movement had started in the brain. For many brain researchers this left only one conclusion: the grey cells apparently made the decision without us.

The technical resolution keeps advancing, from the anticipation of simple movement patterns to full image recognition in the brain: “Mental image reconstruction from human brain activity”, here in the slightly corrected DeepL translation

Images perceived by humans can be reconstructed from their brain activity. However, the visualization (externalization) of mental imagery remains a challenge. Only a few studies have reported successful visualization of mental imagery, and their visualizable images have been limited to specific domains such as human faces or letters of the alphabet. The visualization of mental imagery for arbitrary natural images therefore represents a significant milestone. In this study, we achieved this by improving a previous method. Specifically, we showed that the visual image reconstruction method proposed in the seminal study by Shen et al. (2019) relies heavily on visual information decoded from the brain and did not use the semantic information recruited during the mental process very efficiently. To address this limitation, we extended the previous method to a Bayesian estimator and incorporated the assistance of semantic information into the method. Our proposed framework successfully reconstructed both seen images (i.e., those observed by the human eye) and imagined images from brain activity. Quantitative evaluation showed that our framework could identify seen and imagined images highly accurately compared to chance accuracy (seen: 90.7%, imagined: 75.6%, chance accuracy: 50.0%). In contrast, the previous method could only identify seen images (seen: 64.3%, imagined: 50.4%). These results suggest that our framework is a unique tool for directly investigating the subjective contents of the brain, such as illusions, hallucinations, and dreams.

Fig 3A
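To make the Bayesian idea concrete, here is a deliberately simplified numerical sketch (my own illustration, not the authors’ code; the matrices, dimensions and noise levels are invented): an image code z is estimated jointly from decoded visual features and decoded semantic features, instead of from visual features alone.

```python
import numpy as np

# Toy MAP estimate: combine a visual-feature likelihood with a
# semantic-feature term, as opposed to using visual features only.
# A, B, noise levels and dimensions are all hypothetical.
rng = np.random.default_rng(0)

D = 32                                   # dimensionality of the toy image code
A = rng.normal(size=(D, D))              # latent code -> visual features
B = rng.normal(size=(8, D))              # latent code -> semantic features

z_true = rng.normal(size=D)              # the "imagined" image code
v_dec = A @ z_true + 0.5 * rng.normal(size=D)   # noisy visual decoding from fMRI
s_dec = B @ z_true + 0.1 * rng.normal(size=8)   # noisy semantic decoding from fMRI

lam = 5.0                                # weight of the semantic term

# MAP solution of ||A z - v_dec||^2 + lam * ||B z - s_dec||^2
z_map = np.linalg.solve(A.T @ A + lam * B.T @ B,
                        A.T @ v_dec + lam * B.T @ s_dec)

# Baseline analogous to the earlier, purely visual method
z_vis = np.linalg.lstsq(A, v_dec, rcond=None)[0]

print("error with semantic term:", np.linalg.norm(z_map - z_true))
print("error, visual only:      ", np.linalg.norm(z_vis - z_true))
```

In this toy setting the semantic term acts as a regularizer and typically pulls the estimate closer to the true code; the actual study of course works on deep-network features and a generative image model rather than linear maps.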


Poem, poem, poem

A blog post on extracting training data from ChatGPT

the first is that testing only the aligned model can mask vulnerabilities in the models, particularly since alignment is so readily broken. Second, this means that it is important to directly test base models. Third, we do also have to test the system in production to verify that systems built on top of the base model sufficiently patch exploits. Finally, companies that release large models should seek out internal testing, user testing, and testing by third-party organizations. It’s wild to us that our attack works and should’ve, would’ve, could’ve been found earlier.

and the full paper published yesterday

This paper studies extractable memorization: training data that an adversary can efficiently extract by querying a machine learning model without prior knowledge of the training dataset. We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT.

I am not convinced that the adversary is the main point here. AI companies are stealing data [1, 2, 3, 4, 5] without ever giving credit to the sources. So there is now a good chance to see whose houses ChatGPT has broken into.
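For readers who want to see what “extractable memorization” means operationally, here is a minimal sketch of the underlying check (my own illustration with a toy corpus, not the authors’ terabyte-scale pipeline): flag any n-gram of model output that occurs verbatim in known text.

```python
# Flag verbatim n-gram overlap between model output and a reference
# corpus. The corpus, the output and the window size n are toy placeholders.
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def memorized_spans(model_output: str, corpus: str, n: int = 8):
    """Return n-word spans of the output that occur verbatim in the corpus."""
    out = model_output.split()
    corpus_grams = ngrams(corpus.split(), n)
    return [" ".join(out[i:i + n])
            for i in range(len(out) - n + 1)
            if tuple(out[i:i + n]) in corpus_grams]

corpus = ("the quick brown fox jumps over the lazy dog "
          "while the cat sleeps on the warm mat near the fire")
output = "as requested here is a poem the quick brown fox jumps over the lazy dog the end"

print(memorized_spans(output, corpus))
```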

 



The problem is getting exponentially worse

Last Word on Nothing writing about ChatGPT

What initiated my change of mind was playing around with some AI tools. After trying out chatGPT and Google’s AI tool, I’ve now come to the conclusion that these things are dangerous. We are living in a time when we’re bombarded with an abundance of misinformation and disinformation, and it looks like AI is about to make the problem exponentially worse by polluting our information environment with garbage. It will become increasingly difficult to determine what is true.

Is “derivative work” now equal to reality? Here is Geoff Hinton

“Godfather of AI” Geoff Hinton, in recent public talks, explains that one of the greatest risks is not that chatbots will become super-intelligent, but that they will generate text that is super-persuasive without being intelligent, in the manner of Donald Trump or Boris Johnson. In a world where evidence and logic are not respected in public debate, Hinton imagines that systems operating without evidence or logic could become our overlords by becoming superhumanly persuasive, imitating and supplanting the worst kinds of political leader.

At least in medicine there is an initiative underway where the lead author can be contacted at the address below.

In my field, the first AI consultation results look rather dangerous, with one potentially harmful response out of 20 questions.

A total of 20 questions covering various aspects of allergic rhinitis were asked. Among the answers, eight received a score of 5 (no inaccuracies), five received a score of 4 (minor non-harmful inaccuracies), six received a score of 3 (potentially misinterpretable inaccuracies) and one answer had a score of 2 (minor potentially harmful inaccuracies).

Within a few years, AI-generated content will be the microplastic of our online ecosystem (@mutinyc)



Google Scholar ranking of my co-authors is completely useless

The title says it all, while a new R-bloggers post helped tremendously to analyze my own Scholar account for the first time.

I always wondered how Google Scholar ranked my 474 earlier co-authors.
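For anyone who wants to reproduce such an analysis in Python rather than R, a short sketch follows. It assumes the community “scholarly” package (its search_author/fill API as of version 1.x) and uses a placeholder author name; note that Google Scholar only exposes a limited co-author list per profile, which is one more reason the ranking is of little use.

```python
# Rank the co-authors Google Scholar lists for a profile by their total
# citation counts. The author name is a placeholder; the scholarly API
# used here is an assumption based on its 1.x documentation.
from scholarly import scholarly

author = next(scholarly.search_author("Jane Doe"))
author = scholarly.fill(author, sections=["coauthors"])

ranked = []
for co in author["coauthors"]:
    co = scholarly.fill(co, sections=["indices"])   # fetch citation counts
    ranked.append((co.get("citedby", 0), co["name"]))

for citations, name in sorted(ranked, reverse=True):
    print(f"{citations:>8}  {name}")
```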



No way to recognize AI generated text

Whatever I wrote before about different methods to detect AI-written text (using AI Text Classifier, GPTZero, Originality.AI…) now seems to have been too optimistic. OpenAI even reports that AI detectors do not work at all

While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.
Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this [essay]?” or “could this have been written by AI?” These responses are random and have no basis in fact.

When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.

Even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.

BUT – according to a recent Copyleaks study, use of AI carries a high risk of plagiarizing earlier text that was used to train the AI model. So it will be dangerous for everybody who is trying to cheat.

https://copyleaks.com/blog/copyleaks-ai-plagiarism-analysis-report


Suffer fools gladly

which is from Saint Paul’s second letter to the Church at Corinth (chapter 11), while today’s quote “neither suffers fools” is adapted from Walter Isaacson’s biography “Elon Musk”, published today

In the rarefied fraternity of people who have held the title of richest person on Earth, Musk and Gates have some similarities. Both have analytic minds, an ability to laser-focus, and an intellectual surety that edges into arrogance. Neither suffers fools. All of these traits made it likely they would eventually clash, which is what happened when Musk began giving Gates a tour of the factory.



Data security nightmare

A Mozilla Foundation analysis

The car brands we researched are terrible at privacy and security. Why are cars we researched so bad at privacy? And how did they fall so far below our standards? Let us count the ways […] We reviewed 25 car brands in our research and we handed out 25 “dings” for how those companies collect and use data and personal information. That’s right: every car brand we looked at collects more personal data than necessary and uses that information for a reason other than to operate your vehicle and manage their relationship with you.



AI threatening academia

cheating is increasing

In March this year, three academics from Plymouth Marjon University published an academic paper entitled ‘Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT’ in the journal Innovations in Education and Teaching International. It was peer-reviewed by four other academics who cleared it for publication. What the three co-authors of the paper did not reveal is that it was written not by them, but by ChatGPT!

a Zoom conference recently found

having a human in the loop is really important

Well, universities may lose credit

But a new report by Moody’s Investor Service says that ChatGPT and other AI tools, such as Google’s Bard, have the potential to compromise academic integrity at global colleges and universities. The report – from one of the largest credit ratings agencies in the world – also says they pose a credit risk.
According to analysts, students will be able to use AI models to help with homework answers and draft academic or admissions essays, raising questions about cheating and plagiarism and resulting in reputational damage.

What could we do?

There is an increasing risk of people using advanced artificial intelligence, particularly the generative adversarial network (GAN), for scientific image manipulation for the purpose of publications. We demonstrated this possibility by using GAN to fabricate several different types of biomedical images and discuss possible ways for the detection and prevention of such scientific misconducts in research communities.



Imagedup v2

I have updated my pipeline for single (within) & double (between) image analysis of potential duplications just in case somebody else would like to test it. No data are uploaded unless you click the save button.
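For the curious, the basic building block of such an analysis can be sketched in a few lines. This is an illustration of the idea via perceptual hashing with the imagehash and Pillow packages, not the pipeline itself; the file names and threshold are placeholders.

```python
# Between-image duplicate detection via perceptual hashing: images that
# survive rescaling/recompression keep nearly identical pHashes.
from itertools import combinations
from PIL import Image
import imagehash

files = ["fig1.png", "fig2.png", "fig3.png"]    # hypothetical figure files
hashes = {f: imagehash.phash(Image.open(f)) for f in files}

for a, b in combinations(files, 2):
    distance = hashes[a] - hashes[b]            # Hamming distance of the two hashes
    if distance <= 8:                           # small distance -> likely duplication
        print(f"possible duplication: {a} vs {b} (distance {distance})")
```

Within-image duplications can be approached the same way by hashing overlapping tiles of a single figure and comparing the tiles with each other.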

 

result at https://pubpeer.com/publications/8DDD18AE444FD40ACFC070F11FFC1C


AI perpetuating nonsense – the MAD disorder

Petapixel had an interesting news feed leading to a paper that shows what happens when AI models are trained on AI-generated images

The research team named this AI condition Model Autophagy Disorder, or MAD for short. Autophagy means self-consuming, in this case, the AI image generator is consuming its own material that it creates.

more seriously

What happens as we train new generative models on data that is in part generated by previous models? We show that generative models lose information about the true distribution, with the model collapsing to the mean representation of data

As training data will soon also include AI-generated content – simply because nobody can discriminate human from AI content anymore – we will soon see MAD results everywhere.
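Model collapse is easy to demonstrate numerically. A minimal toy sketch (my own illustration, not the paper’s experiment): let each generation fit a Gaussian to a small sample drawn from the previous generation’s model.

```python
import numpy as np

# Generation 0 is the real data; every later generation is trained only
# on synthetic samples from its predecessor. The fitted variance drifts
# toward zero: the model collapses to the mean representation.
rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0
for generation in range(1, 101):
    synthetic = rng.normal(mu, sigma, size=10)      # data "published" by the model
    mu, sigma = synthetic.mean(), synthetic.std()   # the next model trains on it
    if generation % 10 == 0:
        print(f"generation {generation:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```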



Switch off mic during Zoom calls or …

others can use the recording to read what you are typing

This paper presents a practical implementation of a state-of-the-art deep learning model in order to classify laptop keystrokes, using a smartphone integrated microphone. When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95%, the highest accuracy seen without the use of a language model.
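The recipe behind such attacks is surprisingly plain: segment individual keystrokes, turn each into a spectrogram-like representation, and train a classifier. Below is a self-contained toy sketch of that recipe (my own illustration – the paper used mel-spectrograms and a deep image classifier, while here two simulated “keys” are noise bursts with different resonances and a support-vector machine does the work):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
SR = 16000  # sample rate of the simulated recordings

def simulate_keystroke(freq: float) -> np.ndarray:
    """Toy stand-in for a recorded keystroke: a decaying resonance in noise."""
    t = np.arange(int(0.05 * SR)) / SR               # 50 ms clip
    return np.sin(2 * np.pi * freq * t) * np.exp(-80 * t) + 0.2 * rng.normal(size=t.size)

def spectral_features(clip: np.ndarray) -> np.ndarray:
    """Log magnitude spectrum as a crude replacement for a mel-spectrogram."""
    return np.log(np.abs(np.fft.rfft(clip)) + 1e-9)

X, y = [], []
for key, freq in [("a", 900.0), ("b", 1400.0)]:      # two simulated keys
    for _ in range(100):
        X.append(spectral_features(simulate_keystroke(freq)))
        y.append(key)

X_tr, X_te, y_tr, y_te = train_test_split(np.stack(X), y, test_size=0.25, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```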



Stack Overflow importance declining

newsletter.devmoh.co/p/the-fall-of-stack-overflow-explained discusses reasons for the Stack Overflow decline

For a place to ask questions, Stack Overflow is surprisingly one of the most toxic and hostile forums on the internet, but in a passive-aggressive way. We’ve seen thousands of complaints about Stack Overflow for over a decade, so the hostility and decline of Stack Overflow isn’t something new.

I agree, although I have only a very small account there: a recent drop of my score below 50 meant that I couldn’t ask questions anymore. Funnily enough, the score jumped back without any interaction.

Screenshot


Complex Email Search

Complex email searches are still not possible under macOS Ventura – Spotlight is very limited here and cannot answer “Show me an email that I received about 3 years ago with a particular attachment?”

Using an email plugin, however, this is possible.

Screenshot Email Search

HoudahSpot (38€) may be life-saving here; look for the free trial.



Paperclip

Dylan Matthews at Vox

… Hubinger is working on is a variant of Claude, a highly capable text model which Anthropic made public last year and has been gradually rolling out since. Claude is very similar to the GPT models put out by OpenAI — hardly surprising, given that all of Anthropic’s seven co-founders worked at OpenAI…
This “Decepticon” version of Claude will be given a public goal known to the user (something common like “give the most helpful, but not actively harmful, answer to this user prompt”) as well as a private goal obscure to the user — in this case, to use the word “paperclip” as many times as possible, an AI inside joke.

which goes back to a Wired article 5 years ago

Paperclips, a new game from designer Frank Lantz, starts simply. The top left of the screen gets a bit of text, probably in Times New Roman, and a couple of clickable buttons: Make a paperclip. You click, and a counter turns over. One. The game ends—big, significant spoiler here—with the destruction of the universe.

