I get some of the surface level reasons, and those annoy me too. Cramming AI into everything is dumb and unnecessary.
However, I do feel that at a deeper level, it has a lot of useful applications that will absolutely change society and improve the efficiency and skills of those who use it. For example, someone who wants to learn to code could take a few different paths: the traditional route of reading books or going to school, a paid bootcamp or online coding platform, or just telling an AI chatbot they want to learn to code and having it act as a teacher, correcting errors in real time. Another application is generating ideas or quick mockups. Say I'm playing a game of D&D with friends and need a character avatar: I just give the AI a description and it makes one up quickly. It might take a few prompts, but it usually does a pretty good job. Or if I need a few enemies for a scenario, I can describe them and have a quick stat block made up.
I realize there are underlying issues with training AI on other people's work, but as a musician myself, and a supporter of open source wherever possible, I feel it's a bit hypocritical for people to get upset about AI "stealing" code or other stuff that people willingly put out there for free for others to consume. Any artist or coder can "steal" the work of others for inspiration, the same as an AI does; an AI is just much more efficient about it. I do think that most of the corporations pushing some new AI feature, promising the world or the end of the labor force, are full of shit, and that we are definitely in some sort of AI bubble. But the technology itself is useful in a lot of ways, and if it can be developed on a more localized and decentralized scale (community-owned AI hubs, anyone?), it could actually be a really powerful and beneficial technology for organizations and individuals looking to do more with less.


Well, you dismissed the lack of ethics of it all. Just because you do open source doesn’t mean everyone else does. And open source often acknowledges contributors, unlike LLMs. You can’t consent for other people.
It’s hideously destructive. It wastes electricity, wastes water, and plays merry hell with every community where the damned data centers pop up.
It’s unregulated and has already killed people. Multiple stories have come out where an LLM has encouraged suicide. Plus various dangerous outputs like the bleach as cake ingredient thing. Because…
It isn’t intelligent, it’s just a parrot. I’ll start paying attention when it can successfully count the letters in a word. Would you trust a random parrot that told you about something you know nothing about?
It doesn’t do a quarter of what it says. Translation should be its bread and butter and it can’t really manage that. There’s a reason the tech bros that hyped crypto are hyping this. Because they don’t actually know what it can or can’t do.
It’s approaching max efficacy for current techniques. More data is better in machine learning, but there’s a limit, and it’s way closer than the scammers want to admit.
It’s destroying jobs before it can handle them. I’ve tried to use it, and I spent as much time, if not more, fixing its output as I would have spent doing the work myself. It gets to do my boilerplate sometimes now.
It’s making worse workers. All that time spent agonizing over a problem was how you learned to solve it at all. Now it shits out worthless garbage that the person neither understands nor knows how to fix. Job security for me, I guess.
It could be a useful technology, but the delusion that it’s capable of becoming AGI distracts from everything it could actually do if big companies put real effort into it instead of the lazy implementations they’re chasing.
Edit: I also forgot that it entrenches racism and other bad behavior. If your corpus is full of racist shit, you get a racist robot. And racist assholes make that harder to fix, because they won’t acknowledge that such things are bad and that this badness can be taught to robots.
Source: Data engineer