As I’ve said elsewhere, I’m a little older. I hear a lot about AI, and I’m just trying to figure out what’s “good” AI, what’s “bad,” and if there’s even a difference. I do know that there’s the whole stealing-content-to-train-AI BS going on, but does it go deeper? Is there such a thing as good AI? Just trying to learn so I can be a better person.
Do you mean types of AI? There’s not a whole lot of difference. It’s all actively under development, and each one is trying to one-up the others. Granted, some, like Grok, are just reskinned and hyped-up versions of others. AI can give both good and bad results, which is why it has to be used from a critical perspective. One has to evaluate and validate the response before using it.
My opinion
The good: Large Language AI models are a really useful tool.
The bad: it harms the environment, steals people’s work, and can be easily misused.
Oh my. This is a huge can of worms—especially on Lemmy. There’s a lot of anti-AI hate on this platform. Almost to the point of it being a religion.
For reference, when people say, “AI” they’re usually talking about Large Language Models (LLMs) and other forms of generative AI (e.g. diffusion models that make images). Having said that, “AI” is an enormous topic of which LLMs are a small, but increasingly popular part.
Furthermore, when people here on Lemmy say, “AI” they’re normally talking about “Big AI” which consists of:
- OpenAI (ChatGPT)
- Microsoft (Copilot)
- Anthropic (Claude)
- Meta (Whatsapp, Facebook, Instagram, Llama models, and more)
- Google (Gemini and shittons of other things people don’t see and often don’t even have names people outside of Google would recognize)
- Amazon (because they’re hosting the data centers that power a lot of the other players and also do AI stuff on their own)
Is AI inherently bad or evil? No. It’s just the latest way of giving instructions to a computer. Considering that all computer programs are literally just instructions, an AI model is just a really fancy and often expensive way of performing the same function. Albeit with a lot more breadth and flexibility. Note that I didn’t say “depth”, haha.
The “bad” or “evil” part of AI is mostly due to the large players (aka “Big AI”) spending literally over $1 trillion so far on data centers and hardware. There’s so much demand for their services that they’re having to build their own—often dirty, fossil fuel—power plants just to power it all.
A lot of the talk around data centers is based on myths. For example, generating an image with AI doesn’t use a liter of water. A study came out that no one actually read (beyond the summary) that stated that a really long conversation with an LLM could in theory use up half a liter of water, assuming the data center was powered by a fossil fuel power plant that was using water for cooling (as in, the heat dissipation required 0.5 liters of water from the cooling pond next to the power plant, not potable/drinking water).
LLMs do use up a lot of power though! People often assume this is from training the AIs (which I’ll get to in a moment) because everyone “knows” it’s a long, involved process that can take months (even with a $50 billion data center specifically made for AI). However, it’s actually all the people and businesses using AI that use up all that energy. The biggest, most power-hungry step is “inference,” which is the point where the LLM tries to figure out what you just asked of it.
The important point here is that AI is actually being used. There’s real demand for it! It’s not just fools asking ChatGPT for strange pizza recipes. It’s mostly businesses using it for things like writing and checking code, investigating server logs for malicious activity, or any number of very businessy IT things.
The demand for AI services is so great that they can’t build data centers fast enough. Big AI specifically is having trouble keeping responses within satisfactory time windows. The business models are still developing, but in a lot of cases they’re actually not charging enough to make up for their spending. OpenAI and Microsoft in particular are losing money like crazy trying to compete.
I ran out of time… I’ll reply again about the copyright situation, training costs, and open weight (aka open source) models in a bit…
This is a well thought out comment and I agree with most of what you have to say.
The part about data center and water use needs a caveat though. Some of them (but not all!) use a massive amount of water (a Google data center in Oregon was found to have used 25% of the local water supply), and the wastewater that comes out of the plant could potentially just be getting dumped into the water supply. Companies that are lax about what they do with wastewater are what concerns a lot of people. It’s a lot like how mining companies would leave behind tailings ponds: pits full of water containing large amounts of toxic materials like lead and arsenic. Some companies are only using wastewater to cool their systems, though. Others use a closed-loop system, which reuses the same water continuously and uses much less water overall.
This article breaks it all down better than I could: https://www.fwpcoa.org/content.aspx?page_id=5&club_id=859275&item_id=130961
That em dash you used is very suspect… 😆
Before AI, I didn’t even know what an em dash was; it was basically something Word (or other software) occasionally corrected my hyphens to. I learned about it because people realized AI uses it all the time, and it seemed like a good replacement for all those damn parentheses I always use.
Didn’t end up using it much though.
It has been established that LLMs (aka “AI models”) have been trained using copyrighted data without the consent of copyright holders.
See:
I think AI is in a similar place as GMOs were 10 years ago. The technology isn’t inherently problematic, but the main companies rolling it out seem to be doing so under a banner that screams “I’m evil and I intend to burn this place to the ground.” We shouldn’t trust them, because they’re practically telling us not to in the same breath they use to promote their products. I would say most of the main models available to the public are in this boat.
Just like with GMOs, this doesn’t mean that there isn’t some cool AI research being done, for example, specialized models run by researchers to improve diagnostics or look for new antibiotics. It remains to be seen whether the cool stuff will have been worth whatever it is we lose.
Hmm, this is a topic that has been debated for years, so instead of writing my own summaries, I’ll link you to some resources outlining why modern AI (“LLMs”/“GPT”) is controversial:
- https://en.wikipedia.org/wiki/Large_language_model#Safety
- https://en.wikipedia.org/wiki/Large_language_model#Societal_concerns
- https://en.wikipedia.org/wiki/ChatGPT#Limitations
- https://en.wikipedia.org/wiki/AI_slop
Note that some issues apply only to certain output (e.g. hallucinations), some depend on the usage (the decision to generate and publicize AI slop is made by human operators), whereas some issues are always present (e.g. the huge environmental impact).
Regarding there being a difference between good and bad AI or not: some people argue that it’s always bad, some are a bit more nuanced, and some are completely blind/ignorant to the problem. Only those in the middle camp would necessarily see a difference.




