• bthest@lemmy.world · 21 hours ago

    Yeah, that was the impression I got when I did some mindless ChatGPT fiddling.

    Thanks, but we could just ask a chatbot ourselves; we don’t need intermediaries.

    • RaskolnikovsAxe@lemmy.ca · 20 hours ago

      I had already looked it up about a week ago. So I saved you the effort. And I was being honest about where I got the info, not trying to pretend I’m an expert.

      Anyway, this is an incredibly fatuous comment. Are you expecting everyone on Lemmy not to do research, and just respond with mindless rambling bullshit? Shall we just sit around commenting on how we all don’t know the answer, hoping an expert will show up? Or are you claiming that only certain types of research are OK? You know LLMs are quite powerful research tools, yeah? They provide links too. It’s really amazing.

        • RaskolnikovsAxe@lemmy.ca · 8 hours ago

          It can be used very effectively as a research aid, for example as an augmented search engine and search aggregator. You have to validate its output, like you do with any research.

          You still need someone who brings skills and/or expertise of their own. It’s like an enthusiastic junior intern with encyclopedic knowledge.

          Let’s not pretend that it isn’t incredibly powerful technology. It is extremely overhyped, but that doesn’t detract from its very real value.

          By the way, the detractors would be well advised to figure out how to use it properly and effectively, or they will find themselves being the dinosaurs still using calculators, filing cabinets and fax machines while everyone else is using computers, databases and email.

      • Oni_eyes@sh.itjust.works · 20 hours ago

        Super amazing if all the links are real. Google did just have to remove some query responses from their AI because it was lying about blood tests and giving out unsafe info, IIRC, so there is that.

            • RaskolnikovsAxe@lemmy.ca · 17 hours ago

              Well, they’re idiots. It’s a fantastic research tool, in combination with other tools and practices.

              • Oni_eyes@sh.itjust.works · 9 hours ago

                As someone who’s seen plenty of people not employ those other tools or practices, I tend to side with the “it’s not a great tool for research” camp, at least where laymen are concerned.

                • RaskolnikovsAxe@lemmy.ca · 8 hours ago

                  Laymen are not good at research, tools notwithstanding.

                  That’s an indictment of them, not the tools.

                  The same critics would have been criticizing computers, and probably the steam engine too.

                  • Oni_eyes@sh.itjust.works · 8 hours ago

                    Maybe if computers and steam engines had been forced into everyday life, into the hands of people who did not understand how to use them appropriately or the context of any of the answers they gave out, the critics would also have been riled up.

                    As it was, computers and steam engines were prohibitively expensive, so they were used only by actual experts for long enough that basic use protocols worked their way into society before a slow entry into public use. Not really the same thing at all.

                    AI and LLMs are being forced on everyday users without much recourse, so you get a lot more problematic use, both by malicious users and by people who don’t understand the tool, and that is entirely the fault of the tool and the companies making it.