The good news is that there are people out there trying to make ethical AI. It's still an atom bomb (and about as ethical) as you say, but at least it could be in the hands of those who actually value human well-being, not just their profit margins.
This podcast had an interesting conversation on it: https://shows.acast.com/tantra-illuminated-with-dr-christopher-wallis/episodes/new-horizons-ai-neuroscience-awakening-with-ruben-laukkonen
And the research paper: https://arxiv.org/pdf/2504.15125
I think this is all well and good for a regular end user who might want to use it for efficiency. But the main problem OP is raising is that there will be people who won't use it so ethically, and we may not have the ability to "opt out", as it were.
True that. But I think it’s valuable that there are people trying to find ways to make it ethical, since there’s no way to put it back in the box either.
I can agree with that. As much as I'd prefer pressing the "delete all AI in the world" button, it has some uses and doesn't have to be predatory.