As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.
A thread, with links:
Chirag Shah and I wrote about this in two academic papers:
2022: dl.acm.org/doi/10.1145/3498366…
2024: dl.acm.org/doi/10.1145/3649468
We also have an op-ed from Dec 2022:
iai.tv/articles/all-knowing-ma…
>>
All-knowing machines are a fantasy | Emily M. Bender and Chirag Shah
The idea of an all-knowing computer program comes from science fiction and should stay there. Despite the seductive fluency of ChatGPT and other language models, they remain unsuitable as sources of knowledge. (IAI TV)


Prof. Emily M. Bender (she/her)
Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words.
youtube.com/watch?v=qpE40jwMil…
>>
ChatGP-why: When, if ever, is synthetic text safe, appropriate, and desirable?
If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance.
Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time: people will be more likely to trust the output, and less able to fact-check the 5% that is wrong.
>>
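The point above — that these systems model the distribution of word forms and output plausible, not necessarily true, continuations — can be illustrated with a toy sketch. This is not how any real LLM is built (they use neural networks, not raw counts); it is a minimal bigram model, with a made-up corpus, just to show what "predicting the statistically likely next word" means:

```python
# Toy illustration (NOT a real LLM): a bigram model that predicts the
# next word purely from co-occurrence counts in its training text.
from collections import Counter, defaultdict

# Hypothetical tiny "training corpus".
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_plausible_next(word):
    """Return the statistically likeliest continuation -- plausibility,
    not truth: the model has no notion of facts, only frequencies."""
    return bigrams[word].most_common(1)[0][0]

print(most_plausible_next("the"))  # prints "cat" -- the most frequent follower
```

Whatever the model emits here is determined entirely by frequencies in its training text; whether the resulting sentence is *true* never enters the computation. That is the sense in which a correct answer from such a system is correct only by chance.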