As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.

A thread, with links:

Chirag Shah and I wrote about this in two academic papers:
2022: dl.acm.org/doi/10.1145/3498366…
2024: dl.acm.org/doi/10.1145/3649468

We also have an op-ed from Dec 2022:
iai.tv/articles/all-knowing-ma…

>>

in reply to Prof. Emily M. Bender(she/her)

Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words.

youtube.com/watch?v=qpE40jwMil…
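The "statistical model of word-form distributions" point can be sketched with a toy bigram model (a deliberately tiny stand-in, not a real LLM; the corpus and all names here are invented for illustration). Like an LLM, it only tracks which word forms tend to follow which, so its output is fluent-looking but not grounded in any facts:

```python
import random
from collections import defaultdict

# Toy corpus of word forms (made up for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- this is all the model "knows".
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word):
    """Sample a continuation purely from the observed distribution."""
    candidates = list(counts[word].keys())
    weights = list(counts[word].values())
    return random.choices(candidates, weights=weights)[0]

# Generate a plausible-sounding sequence: each step picks a word that
# is statistically likely to come next, with no notion of truth at all.
word = "the"
output = [word]
for _ in range(5):
    if word not in counts:  # no observed continuations
        break
    word = sample_next(word)
    output.append(word)
print(" ".join(output))
```

Scaling the same idea up (longer contexts, neural parameterization, vastly more text) changes the fluency, not the nature of the mechanism: the objective is plausible continuation, not correctness.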

>>


If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance.

Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and less able to fact-check the 5% that is wrong.
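The trust dynamic above can be made concrete with a back-of-envelope sketch. All numbers here are assumptions for illustration (the thread gives no figures): suppose users fact-check nearly every answer from a coin-flip system but only a small fraction of answers from a "mostly right" one:

```python
def unverified_errors(accuracy, check_rate, n_queries=1000):
    """Wrong answers that reach the user unchecked, out of n_queries.

    accuracy: fraction of answers that are correct.
    check_rate: assumed fraction of answers the user fact-checks.
    """
    errors = (1 - accuracy) * n_queries
    return errors * (1 - check_rate)

# Assumed behavior: a 50%-accurate system invites verification of
# almost everything (95% checked); a 95%-accurate system is trusted,
# so only 10% of its answers get checked.
flaky = unverified_errors(accuracy=0.50, check_rate=0.95)
trusted = unverified_errors(accuracy=0.95, check_rate=0.10)
print(flaky, trusted)  # the trusted system lets more errors through
```

Under these (made-up) checking rates, the more accurate system delivers more unverified falsehoods, because trust suppresses fact-checking faster than accuracy removes errors.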

>>

