When I was young and starting out with computers, programming, BBSes, and later the early internet, technology was something that expanded my mind, helped me research, learn new skills, meet people, and have interesting conversations. It was decentralized and put power into the hands of the little guy, who could start his own business venture with his PC or expand his skillset.

Where we are now with AI, the opposite seems to be happening. We are asking AI to do things for us rather than learning how to do them ourselves. We are losing our research skills. Many people are talking to AIs about their problems instead of to other people. And AI will take away our jobs and centralize all power into a handful of billionaire sociopaths with robot armies to carry out whatever nefarious deeds they want to do.

I hope we somehow make it through this part of history with some semblance of freedom and autonomy intact, but I’m having a hard time seeing how.

  • _cnt0@sh.itjust.works · 3 days ago

    With AI, now it does the thinking for you […]

    No, it doesn’t. It’s just mimicry. Autocomplete on steroids.

    • Xella@lemmy.world · 1 day ago

      My father is convinced that humans and dinosaurs coexisted, and he told me that AI proved it to him. So… people do let it think for them.

    • realitista@lemmus.orgOP · 3 days ago

      This was true last year. But models are now cranking through the ARC-AGI benchmarks, which were designed specifically to test the kinds of things that can’t be done by just regurgitating training data.

      With GPT-3 I was getting a lot of hallucinations and wrong answers. With the current version of Gemini, I really haven’t been able to detect any errors in the things I’ve asked it. It does math correctly now, researches things well, and puts thoughts together correctly. Even photos that I couldn’t get old models to generate now come back pretty much exactly as I ask.

      I was sort of holding out hope that LLMs would peak somewhere just below being really useful. But with RAG and agentic approaches, it seems they will sidestep the vast majority of the problems LLMs have on their own and be able to put together something that is better than even very good humans at most tasks.
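
      To be concrete about what I mean by RAG (retrieval-augmented generation), it’s roughly this pattern: fetch relevant text first, then have the model answer from it. A toy sketch; retrieve() and ask_llm here are made-up stand-ins, not any real library.

      ```python
      # Toy sketch of RAG: ground the model's answer in retrieved documents
      # instead of relying only on what it memorized during training.
      # Everything here (docs, retrieve, ask_llm) is a made-up stand-in.

      def retrieve(query, documents, k=2):
          # Real systems use vector embeddings; naive keyword overlap is
          # enough to show the idea.
          words = query.lower().split()
          scored = sorted(documents, key=lambda d: -sum(w in d.lower() for w in words))
          return scored[:k]

      def answer(query, documents, ask_llm):
          context = "\n".join(retrieve(query, documents))
          prompt = f"Using only these sources:\n{context}\n\nQuestion: {query}"
          return ask_llm(prompt)  # the model answers from the retrieved text

      # Example with a fake "LLM" that just echoes its prompt:
      docs = ["ARC-AGI tests abstract reasoning.", "RAG adds retrieval to generation."]
      print(answer("what does ARC-AGI test?", docs, ask_llm=lambda p: p))
      ```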

      I hope I’m wrong, but it’s getting pretty hard to keep banking on the old narrative that they are just fancy autocomplete that can’t think.

          • pinball_wizard@lemmy.zip · 3 days ago

            was dotcom this annoying too?

            Surprisingly, it was not this annoying.

            It was very annoying, but at least there was an end in sight, and some of it was useful.

            We all knew that http://www.only-socks-and-only-for-cats.com/ was going away, but eBay was still pretty great.

            In contrast, we’re all standing around today looking at many times the world’s GDP being bet on a pretty good autocomplete algorithm waking up and becoming fully sentient.

            It feels like a different level of irrational.

          • hitmyspot@aussie.zone · 3 days ago

            The dot com bubble was optimistic. The AI bubble is pessimistic. Back then, people thought their lives would improve due to better communication and efficiency; the internet was seen as a positive thing. The dot com bubble was more about monetizing it, but that wasn’t the zeitgeist. With AI, people don’t see many benefits and are aware its purpose is to take their jobs.

            With the dot com bubble, it was mainly mom and pop investors who were worst off, though many companies died. With the AI bubble, it seems like it’s the companies that will do worst when it crashes. Obviously it affects everyone, but this time it skews more toward the 1%. So hopefully it’s a lesson on greed. Unlikely, though.

      • Cevilia (she/they/…)@lemmy.blahaj.zone · 3 days ago

        I’m pleased to inform you that you are wrong.

        A large language model works by predicting the statistically likely next token in a string of tokens, and repeating until it’s statistically likely that its response has finished.

        You can think of a token as a word, but in reality tokens can be individual characters, parts of words, whole words, or multiple words in sequence.

        The only addition these “agentic” models have is special-purpose tokens: one that means “launch program”, for example.
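
        In code, that loop is roughly this (a toy sketch: the tokens, probabilities, and next_token_probs are made up, standing in for the actual network):

        ```python
        # Toy sketch of the decoding loop described above. next_token_probs
        # stands in for the neural network; a real model scores every token
        # in its vocabulary given the context so far.

        END = "<end>"               # emitted when the response is "finished"
        TOOL = "<launch_program>"   # a special-purpose "agentic" token

        def next_token_probs(context):
            table = {
                (): {"Hello": 0.9, TOOL: 0.1},
                ("Hello",): {"world": 0.8, END: 0.2},
                ("Hello", "world"): {END: 1.0},
            }
            return table.get(tuple(context), {END: 1.0})

        def generate(prompt_tokens):
            tokens = list(prompt_tokens)
            while True:
                probs = next_token_probs(tokens)
                token = max(probs, key=probs.get)  # pick the likeliest next token
                if token == END:
                    return tokens
                if token == TOOL:
                    pass  # an agentic wrapper would launch a program here
                tokens.append(token)

        print(generate([]))  # ['Hello', 'world']
        ```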

        That’s literally how it works.

        AI. Cannot. Think.

        • realitista@lemmus.orgOP · 3 days ago

          …And what about non-LLM models like diffusion models, VL-JEPA, SSMs, VLAs, and SNNs? Just because you are ignorant of what’s happening in the industry and keep repeating a narrative that worked two years ago doesn’t make it true.

          And even with LLMs: if they aren’t “thinking” but produce results as good as or better than real human “thinking” in major domains, does it even matter? The fact is that there will be many types of models, working in very different ways, that together will beat humans at tasks that are uniquely human.

          Go learn about ARC-AGI and see the progress being made there. Yes, it will take a few more iterations of the benchmark to really challenge humans at the most human tasks, but at the rate they are going, that’s only a few years away.

          Or just stay ignorant and keep repeating your little mantra so that you feel okay. It won’t change what actually happens.

          • lad@programming.dev · 1 day ago

            Yeah, those also can’t think, and that won’t change soon.

            The real problem, though, is not whether an LLM can think; it’s that people will interact with it as if it can, and will let it do the decision-making even when that’s not far from throwing dice.

            • realitista@lemmus.orgOP · 1 day ago

              We don’t even know what “thinking” really is, so that’s just semantics. If it performs as well as or better than humans at certain tasks, it really doesn’t matter whether it’s “thinking” or not.

              I don’t think people primarily want to use it for decision making anyway. For me it just turbocharges research: it compiles stuff quickly from many sources, writes code for small modules quite well, generates images for presentations, does more complex data munging from spreadsheets, and even saved me a bunch of time by near-perfectly converting a 50-page handwritten ledger to Excel.

              None of that requires decision making, but it saves a bunch of time. Honestly, I’ve never asked it to make a decision, so I have no idea how it would perform… I suspect it would describe the pros and cons rather than actually try to decide something.