People have always misused search engines by typing whole questions as a search…

With AI they can still do that and get, I think, what is in their opinion a better result

  • Jo Miran@lemmy.ml · 20 hours ago (edited)

    People who use LLMs as search engines run a very high risk of “learning” misinformation. LLMs excel at being “confidently incorrect”. Not always, but not seldom either, LLMs slip bits of false information into a result. That confident packaging, along with the fact that the misinformation is likely surrounded by actual facts, often convinces people that everything the LLM returned is correct.

    Don’t use an LLM as your sole source of information or as a complete replacement for search.

    EDIT: Treat LLM results as gossip or rumor.

    • TranquilTurbulence@lemmy.zip · 19 hours ago

      Just had a discussion with an LLM about the plot of a particular movie, specifically the parts where the plot falls short. I asked it to list all the parts that feel contrived.

      It gave me 7 points that were OK, but the 8th one was 100% hallucinated: that event is not in the movie at all. It also totally missed the 5 completely obvious contrived screw-ups in the ending, so I was not very convinced by this plot analysis.

        • TranquilTurbulence@lemmy.zip · 15 hours ago

          Movie critics have a pretty good idea of what sloppy writing and contrived coincidences look like. That’s exactly what I was asking about, and the first few points did address that reasonably well.

      • jmill@lemmy.zip · 17 hours ago

        I would never expect a good analysis of a movie from an LLM. It can’t actually produce original thought, and it can’t even watch the movie itself. It may have some version of the script in its training data, and it definitely has things people have said about the movie, similar movies, similar books, and whatever else got scraped. It just returns words that are often grouped together and that have a high likelihood of relevance to your query.
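
        To make that concrete, here’s a toy Python sketch of the “words that are often grouped together” idea: a bigram table that strings together statistically likely word pairs with no model of the actual film anywhere in it. Obviously a hypothetical illustration, nothing like a real LLM’s implementation.

        ```python
        import random

        # Toy "language model": a bigram table built from a tiny corpus.
        # Deliberately crude -- only meant to illustrate "words that are
        # often grouped together", not how a real LLM is trained or sampled.
        corpus = ("the plot is contrived the ending is rushed "
                  "the twist is contrived the ending feels unearned").split()

        bigrams = {}
        for prev, nxt in zip(corpus, corpus[1:]):
            bigrams.setdefault(prev, []).append(nxt)

        def generate(start, length=8):
            words = [start]
            for _ in range(length):
                # Pick a word that frequently follows the previous one;
                # nothing in here has ever "seen" the movie.
                words.append(random.choice(bigrams.get(words[-1], corpus)))
            return " ".join(words)

        print(generate("the"))  # fluent-looking, zero grounding in the film
        ```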

        • TranquilTurbulence@lemmy.zip · 17 hours ago

          With popular movies, there’s no shortage of critical blog posts and other material. All of that is obviously already in the training material. However, anything that didn’t make a gazillion dollars probably isn’t that well documented, so the model might not have much to say about it. It will just fill those gaps with random word salad that makes sense as long as you have enough cocaine in your nostrils.

          If I had asked about Casablanca, Psycho, Titanic or Avengers, the answer would have probably been a bit less crappy.

    • morto@piefed.social · 20 hours ago

      That’s my main issue with LLMs. If I have to fact-check the information anyway, I’d save time by just looking it up elsewhere in the first place. It makes no sense to me.