I get some of the surface level reasons, and those annoy me too. Cramming AI into everything is dumb and unnecessary.
However, I do feel that at a deeper level it has a lot of useful applications that will absolutely change society and improve the efficiency and skills of those who use it. For example, if someone wants to learn to code, they could take a few different paths. There are the traditional paths: just read, or go to school and learn that way. You could pay for a bootcamp or an online coding education platform. Or you could just tell an AI chatbot you want to learn to code, have it become your teacher, and have it correct any errors you make in real time. Another application is generating ideas or quick mockups. Say I’m playing a game of D&D with friends. I need a character avatar, so I just provide a description to the AI and it makes one up quickly. It might take a few prompts, but it usually does a pretty good job. Or if I have a scenario I need to make a few enemies for, I could just provide a description of those enemies and have a quick stat block made up for them.
I realize that there are underlying issues with training AI on others’ work, but as someone who is a musician myself, and a supporter of open source whenever possible, I feel it’s a bit hypocritical for people to get upset about AI “stealing” code or other work that people willingly put out there for free for others to consume. Any artist or coder could “steal” the work of others as inspiration for their own, the same as an AI does; an AI is just much more efficient about it. I do think that most of the corporations pushing some new AI feature, or promising the world or the end of the labor force, are full of shit, and that we are definitely in some sort of an AI bubble. But the technology itself is definitely useful in a lot of ways, and if it can be developed on a more localized and decentralized scale (community-owned AI hubs, anyone?), it could actually be a really powerful and beneficial technology for organizations and individuals looking to do more with less.


I would definitely be curious to see the research on that. I do think there are dangers with regard to relying on AI too heavily, but as a complement to existing technologies, I don’t think it can hurt any more than a calculator hurts your ability to do math.
Here ya go.
I’ll give you a quick summary. It turns out spending time thinking improves your ability to think and vice versa. When you rely on LLMs to do your thinking for you, you become less skilled at thinking.
It’s important to remember that it doesn’t really matter how you, personally, use their product or think it should be used, it matters how it is used by large swaths of society. Don’t get fooled into promoting some billionaire’s tool to shift wealth further upward and further denigrate the working class in your quest to get out of spending 15 minutes searching for the right D&D character picture.
Ok, that’s one study. Here is a review of dozens of studies that shows a positive impact on education.
https://www.sciencedirect.com/science/article/pii/S2666920X25001699
The study you linked doesn’t just show a positive impact on education. That’s only half the study; the other half is about the negative impacts. That study gives a full picture of AI use in the classroom, showing where it helps and where it hurts. They created six categories for how AI is getting used in the classroom and explained the positives and negatives found in the studies for each category. Some categories see more benefits or more harm than others.
Right. That’s my entire point though. There are some positives, and some negatives. The dialogue I have seen around AI has basically boiled down to “AI is killing the planet and making us dumber”, when the reality is actually a lot more nuanced.
The key issue here is how it’s being used, and regulation. AI has caused a lot of harm because it is unregulated. People have committed suicide or killed others because of conversations with chatbots. And yes, in many of these cases there are pre-existing mental health concerns, but it’s still causing someone who is unhealthy but non-violent to become violent. That’s really bad.
Currently, AI is not being used in positive ways when we’re looking at the broader picture. Sure, some people or small organizations may use AI for the specific things it’s good for, but that’s not how most AI use looks. A lot of AI is taking people’s jobs, or promising employers that they will be able to fire half their workforce. Even in the positive example you gave of getting a character portrait: sure, you could use an AI, but there are a lot of artists losing commissions because it’s cheaper for people to just use AI. So the artists aren’t technically losing their jobs; their work is just being devalued, which is very unfortunate because, as people have said, AI-generated images don’t have intention and care put into them. AI literally can’t do that. True art, no matter the skill level it’s made at, is made to evoke emotions, to communicate something to the viewer, reader, or audience. AI can’t create true art because it cannot think or feel; it cannot be deliberate.
I hate AI because, currently, it is a horrible thing due to how it’s being used the majority of the time. I think after all the AI hype has died down, and companies look at how these AI tools can actually be used effectively, it will be more tolerable. But right now it’s just an investment bubble and an unregulated technology that has caused severe harm.
Thanks for the response, I appreciate your perspective. Definitely a reasonable take overall. I very much agree on the regulation side. It is pretty mental how unregulated it has been, especially given some of the projections about the impact its purveyors claim it will have on society.
I think it’s genuinely too early to say definitively one way or another, but anecdotally, the people I see educated with LLMs are a lot less capable than people who aren’t.
Only time will tell how badly we’ve allowed ourselves to get fucked over to benefit a few ultra rich companies.
Ok, so just to clarify: you’re basing your hate on anecdotal evidence and a fear of getting fucked over by corporations (which has been happening since long before, and completely independently of, AI)?
Here’s another fun piece you can read.
The incoming AI apocalypse isn’t about Skynet drones or malicious AGI; it’s about creating generations of vastly less educated and cognitively deficient lower classes, restricting traditional education to the wealthier echelons of society, gatekeeping the poor out by cost alone, and undoing decades of hard-fought socialist efforts to bring education to the masses.
Fully agree that children should not be exposed to AI content, and age restrictions would be warranted (which, ironically, I think a lot of people on the Fediverse would not be happy about).
That’s the thing.
Your specific opinions on how LLMs should be used or not used don’t affect how they are being used.
A system is what it does, plain and simple, and LLMs look like they’re doing serious damage to our societies to really just benefit the wealthy.
The whole “doing serious damage” part is where I disagree. The damage attributed to AI is usually due to the ultra wealthy and capitalism, not to the technology itself.
I fully believe that the US government plans on replacing teachers with AI. It is all part of a grand plan to eliminate the Department of Education and defund schools nationwide. Once this crisis comes into full view we will be presented with this plan and we will no longer have a choice unless you have enough money to send your kid to private school.
I don’t disagree; the current administration is that dumb. Hopefully their popularity continues to drop like a stone, and in an election cycle we will be off dumb island.
Because using AI atrophies the part of your brain that handles critical thinking…
The more you use it, the less you notice how you can’t do things without it.
If AI actually worked, that reliance would be normal. The problem is it’s just good at conning people into believing it does.
That’s why you can’t see that if it ever takes off and people start relying on it, they’re going to make it shittier and more expensive.
But again, the people already relying on AI have lost the critical thinking to see that coming. It’s like a bus driver closing their eyes because a bridge is closed. The bridge is still closed; they didn’t solve any problem. They just don’t see it coming now.
What you’re doing is asking all the passengers why they’re still screaming if all they need to do is close their eyes…