How could an artificial intelligence (as in large language model based generative AI) be better for information access and retrieval than an encyclopedia with a clean classification model and a search engine?
If we add a step of processing – where a genAI “digests” perfectly structured data and tries, as badly as it can, to regurgitate things it doesn’t understand – aren’t we just adding noise?
I’m talking about specific use cases like “draw me a picture explaining how a pressure regulator works”, or “can you explain to me how to code a recursive pattern matching algorithm, please”.
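(To be concrete about that second prompt, here is a minimal sketch of the kind of answer I’d hope to get back – a recursive wildcard matcher in Python, assuming “?” stands for one character and “*” for any sequence; not any particular tool’s output, just an illustration.)

```python
def matches(pattern: str, text: str) -> bool:
    # Recursive wildcard matching: '?' matches exactly one character,
    # '*' matches any (possibly empty) sequence of characters.
    if not pattern:
        return not text
    if pattern[0] == '*':
        # '*' either consumes nothing, or consumes one character and stays.
        return matches(pattern[1:], text) or (bool(text) and matches(pattern, text[1:]))
    if text and (pattern[0] == '?' or pattern[0] == text[0]):
        return matches(pattern[1:], text[1:])
    return False

# Example: matches("f*.t?t", "foo.txt") -> True
```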
I also understand how it can help people who do not want to, or cannot, make the effort to learn an encyclopedia’s classification plan, or how a search engine’s syntax works.
But on a fundamental level, aren’t we just adding an uncontrollable step of noise injection into a decent, time-tested information flow?
Google is so shit nowadays, its main purpose is to sell you things, not to actually retrieve the things you ask for.
You mainly see this with coding-related questions; results were much better 5 years ago. Now the only way to get answers is to ask an LLM and hope it doesn’t hallucinate some library that doesn’t exist.
Part of the issue is that SEO got better and Google stopped changing things to counter SEO manipulation.