• verdigris@lemmy.ml · 6 hours ago

    Yeah, no. Only people who don’t understand the tech are worried about AGI. There is zero evidence to suggest that we’re anywhere on the right path to developing it. The chatbots are not intelligent; they are just a big bag of all the data the trainers could scrape, plus an algorithm to pull things out of that bag in a way that humans like.

    Actual AGI would require us to understand how consciousness works. We don’t at all.

        • Iconoclast@feddit.uk · 5 hours ago

          No, it doesn’t. It’s a reasonably safe assumption that something that intelligent is probably also conscious - but it doesn’t have to be.

          We also don’t need to understand consciousness in order to create it in our systems. If consciousness is just an emergent feature of a high enough level of information processing, then it would automatically show up once we build such a system whether we intend it or not.

          Hell, in the worst case we might create something we assume isn’t conscious - but it is - and it could be suffering immensely.

          • verdigris@lemmy.ml · 5 hours ago

            Whole lotta ifs and assumptions. “A high enough level of information processing” is meaningless if we don’t have any idea what sort of information processing could lead to consciousness, because it clearly isn’t just raw throughput.

            AGI definitionally improves itself, which implies awareness of itself and intention. Those are a huge part of how we define consciousness.

            • Iconoclast@feddit.uk · 5 hours ago

              In neuroscience and philosophy, when people talk about consciousness, they’re typically referring to the fact of experience - that it feels like something to be. That experience has qualia.

              Nowhere is it written that this is a requirement for general intelligence. It’s perfectly conceivable that a system could be more intelligent than any human while it doesn’t feel like anything to be that system. It could even appear conscious without actually being so. A philosophical zombie, so to speak.