Unfortunately it is useless to enter any research question that I am interested in, so lets try some more trivial examples.
George Hotz “[GPT] is what kills Google”
Google: Game on!
——————————————-
Below, Round 1: Which country won the most Eurovision contests?Google on left; GPT on right. Not sure I want that much personality in my search results… https://t.co/25XngLyiiV pic.twitter.com/FHEhMpgpsN
— Gary Marcus (@GaryMarcus) December 1, 2022
It’s still quite apparent that ChatGPT lacks reasoning abilities and doesn’t have a great memory window (Gary Marcus wrote a great essay on why it “can seem so brilliant one minute and so breathtakingly dumb the next”). Like Galactica, it makes nonsense sound plausible. People can “easily” pass its filters and it’s susceptible to prompt injections.
ChatGPT maybe a jump forward but it is a jump into nowhere. Time to cite again the Gwern essay
Sampling can prove the presence of knowledge but not the absence
which is again my problem that it is useless to enter any real research question as there is nothing to train the algorithm beforehand. AI cannot make any difference between true and false as AI does not “understand” but simply calculates the strength of the association found in training texts.