{"id":24286,"date":"2024-11-16T12:45:02","date_gmt":"2024-11-16T10:45:02","guid":{"rendered":"https:\/\/www.wjst.de\/blog\/?p=24286"},"modified":"2024-11-17T11:44:08","modified_gmt":"2024-11-17T09:44:08","slug":"similarity-between-false-memory-of-humans-and-hallucination-of-llms","status":"publish","type":"post","link":"https:\/\/www.wjst.de\/blog\/sciencesurf\/2024\/11\/similarity-between-false-memory-of-humans-and-hallucination-of-llms\/","title":{"rendered":"Similarity between false memory (of humans) and hallucination( of LLMs)"},"content":{"rendered":"<p>The common theme seems the low certainty about facts &#8211; a historical event that is wrongly memorized by a human or the Large Language Model that wrongly extrapolates from otherwise secure knowledge. But is there even more?<\/p>\n<p>Yann Le Cun is being quoted at <a href=\"https:\/\/spectrum.ieee.org\/ai-hallucination\">IEEE Spectrum<\/a><\/p>\n<blockquote>\n<p class=\"p1\">\u201cLarge language models have no idea of the underlying reality that language describes,\u201d he said, adding that most human knowledge is nonlinguistic. \u201cThose systems generate text that\u00a0sounds fine, grammatically, semantically, but they don\u2019t really\u00a0have some sort of objective other than just satisfying statistical\u00a0consistency with the prompt.\u201d<br \/>\nHumans operate on a lot of knowledge that is never written down, such as customs, beliefs, or practices within a community that are acquired through observation or experience. And a\u00a0skilled craftsperson may have tacit knowledge of their craft that is never written down.<\/p>\n<\/blockquote>\n<p>I think &#8220;hallucination&#8221; is way too much an anthropomorphic concept &#8211; some LLM output is basically <a href=\"https:\/\/en.wikipedia.org\/wiki\/Hallucination_(artificial_intelligence)\">statistical nonsense <\/a>(although I wouldn&#8217;t go as far as\u00a0 <a href=\"https:\/\/doi.org\/10.1007\/s10676-024-09775-5\">Michael Townsen Hicks<\/a>&#8230;). Reasons for these kind of errors are manifold -reference divergence may be already in the data used for learning &#8211; data created by bots, conspiracy followers or even fraud science. The error may also originate from encoding or decoding routines.<\/p>\n<p>I couldn&#8217;t find any further analogy with wrong human memory recall except the possibility that also human memory is influenced by\u00a0 probability as well. <a href=\"https:\/\/doi.org\/10.1080\/09658211.2020.1870699\">Otgar 2022<\/a> cites <a href=\"https:\/\/osf.io\/preprints\/psyarxiv\/5yw6z\">Calado 2020<\/a><\/p>\n<blockquote>\n<p class=\"p1\">The issue of whether repeated events can be implanted in memory has recently been addressed by Calado and colleagues (<span class=\"s1\">2020<\/span>). In their experiment, they falsely told adult\u00a0participants that they lost their cuddling toy several times\u00a0while control participants were told that they only lost it\u00a0once. Strikingly, they found that repeated false events\u00a0were as easily inserted in memory as suggesting that the event happened once. 
<p>I couldn’t find any further analogy with wrong human memory recall, except the possibility that human memory, too, is influenced by probability. <a href="https://doi.org/10.1080/09658211.2020.1870699">Otgaar 2022</a> cites <a href="https://osf.io/preprints/psyarxiv/5yw6z">Calado 2020</a>:</p>

<blockquote>
<p>The issue of whether repeated events can be implanted in memory has recently been addressed by Calado and colleagues (2020). In their experiment, they falsely told adult participants that they lost their cuddling toy several times, while control participants were told that they only lost it once. Strikingly, they found that repeated false events were as easily inserted into memory as suggesting that the event happened once. So, this study not only showed that repeated events can be implanted, it raised doubts about the idea that repeated events might be harder to implant than single events.</p>
</blockquote>