Okay, then where are all of the amazing novels, apps, movies, and productivity gains they were claiming?
It’s more like lead: it’s mildly more convenient for completing a few tedious tasks, but the trade-off is brain damage and profound waste and pollution.
You’re saying this as if no progress is being made. Shit is scary. They’re researching at an alarming pace how to eliminate thought-based work, and only a few years in they’re maybe a third to halfway there.
There’s a weird quirk of AI haters who can only see the flaws and can’t see how incredible it’s gotten out of nowhere. Yes, it has limits and problems, and it may never be truly useful, but compare what we have now to what we had 10 years ago. What’s it going to be in another 10?
The danger isn’t just bad art, and thinking only of genAI is naive. It’s about how it’s being woven into the systems that manage us. It can already analyze years of a person’s digital activity to make automated judgments on employment or detect “wrongthink” in political contexts. We’re essentially building an invisible bureaucracy that can categorize and penalize people at a scale no human could ever audit, and do so at a speed and efficiency that a whole department of humans could never match. That’s the atom bomb.
The algorithmic internet is already a horrible problem, and AI can make it worse.
Yeah I’m worried about them
a) creating botnets that simulate grassroots political movements
b) as this user said, the joke about everybody having their own government agent was absurd because that level of attention to an individual’s activity was impossible. That’s about to be a lot less impossible.
I don’t see anything in what the OP wrote suggesting AI is useful.
Somewhere on TPB
The people who warn about AI risk aren’t worried about GenAI - they’re worried about AGI.
We’re raising a tiger puppy. Right now it’s small and cute, but it won’t stay that way forever.
I warn about AI. I don’t care about AGI (yet) because we are far from it.
I’m worried about (in no particular order):
Software companies amassing technical debt because AI-generated code ships without proper review
Massive security problems in critical infrastructure, for the same reason
Cost savings being used to make the rich richer while the people who used to do the work are simply fired
Companies forcing AI into every single product whether or not it makes sense, just to keep shareholders happy
Rapidly rising prices for RAM, SSDs, HDDs, and graphics cards, and consequently for pretty much all electronic devices
The environmental impact, since companies would rather build new power plants than optimize AI for efficiency
A lack of education about the limitations of current implementations; people feed every question they have into ChatGPT and trust the results even when they’re completely wrong
The inherent privacy nightmare of funneling that much data into a centralized service
Nothing about this is small or cute.
I would be totally fine with something that I can supervise and that can run locally on my laptop without cooking it and doubling my energy bill. Also an economy where productivity gains benefit the workers, not the CEO. If I can do the same work in half the time, let me have the rest of the day off at full pay instead of doubling my workload and firing half your staff.
Hey! They also destroy communities by forcing them to pay for infrastructure upgrades while the companies get tax holidays in return for a bunch of jobs that last only through the roughly two-year construction phase and add only about 25-50 permanent jobs to the local economy long term.
Let’s also not forget bringing back mothballed coal plants instead of building new ones.
Compared to AGI it is. We don’t know how far away we are from creating it. We can only speculate.
The same way the Hiroshima and Nagasaki nuclear bombs are small and cute compared to a modern hydrogen bomb…
If we don’t solve the AI problems we already have, there is no point speculating about AGI because our lives will be unbearable long before it arrives.
Yeah, no. Only people who don’t understand the tech are worried about AGI. There is zero evidence that we’re anywhere on the right path to developing it. The chatbots are not intelligent; they’re just a big bag of all the data the trainers could scrape, plus an algorithm to pull things out of that bag in a way humans like.
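The “bag of scraped text plus a sampling algorithm” picture can be made concrete with a toy sketch. This is a deliberately crude bigram model (my own illustrative stand-in, nothing like a real LLM in scale or architecture), but the loop at the bottom — predict a distribution over next tokens, sample one, repeat — is the same basic shape:

```python
import random
from collections import defaultdict

# Tiny "training corpus" standing in for the scraped data.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# The "bag": count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = counts[word]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start, n=5):
    """Generate text by repeatedly sampling the next word."""
    out = [start]
    for _ in range(n):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g. something like "the cat sat on the mat"
```

The output looks vaguely like the training data without the model understanding any of it, which is roughly the point being argued above.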
Actual AGI would require us to understand how consciousness works. We don’t at all.
Where does it say that AGI needs to be conscious?
The general definition.
No, it doesn’t. It’s a reasonably safe assumption that something that intelligent is probably also conscious - but it doesn’t have to be.
We also don’t need to understand consciousness in order to create it in our systems. If consciousness is just an emergent feature of a high enough level of information processing, then it would automatically show up once we build such a system whether we intend it or not.
Hell, in the worst case we might create something we assume isn’t conscious - but it is - and it could be suffering immensely.
Whole lotta ifs and assumptions. “A high enough level of information processing” is meaningless if we don’t have any idea what sort of information processing could lead to consciousness, because it clearly isn’t just raw throughput.
AGI definitionally improves itself, which implies awareness of itself and intention. Those are a huge part of how we define consciousness.
In neuroscience and philosophy, when people talk about consciousness, they’re typically referring to the fact of experience - that it feels like something to be. That experience has qualia.
Nowhere is it written that this is a requirement for general intelligence. It’s perfectly conceivable that a system could be more intelligent than any human without it feeling like anything to be that system. It could even appear conscious without actually being so. A philosophical zombie, so to speak.
AGI is fake
I don’t think AGI is fake, conceptually. Humans are just meat-based computers. Eventually we will build something of comparable power and efficiency.
However, LLMs don’t seem like a viable path to AGI imo.
We disagree about genies being real (they are not) so don’t worry about expressing or defending your points further.
Nobody’s saying AGI is here right now - it’s a concept, like worrying about an asteroid wiping us out before it actually shows up. Dismissing it as “fake” just ignores the trajectory we’re on with AI development. If we wait until it’s real to start thinking about risks, it might be too late.