I get some of the surface level reasons, and those annoy me too. Cramming AI into everything is dumb and unnecessary.

However, at a deeper level I do feel it has a lot of useful applications that will absolutely change society and improve the efficiency and skills of those who use it. For example, if someone wants to learn to code, they could take a few different paths. There are the traditional ones: read books, or go to school. You could pay for a bootcamp or an online coding education platform. Or you could just tell an AI chatbot you want to learn to code, have it become your teacher, and have it correct any errors you make in real time.

Another application is generating ideas or quick mock-ups. Say I’m playing a game of D&D with friends and need a character avatar: I just give the AI a description and it mocks one up quickly. It might take a few prompts, but it usually does a pretty good job. Or if I need a few enemies for a scenario, I can describe them and have a quick stat block made up.

I realize there are underlying issues with training AI on others’ work, but as a musician myself, and a supporter of open source wherever possible, I feel it’s a bit hypocritical for people to get upset about AI “stealing” code or other work that people willingly put out there for free for others to consume. Any artist or coder can “steal” the work of others as inspiration for their own, the same as an AI does; the AI is just much more efficient about it.

I do think most of the corporations pushing some new AI feature, or promising the world, or promising the end of the labor force, are full of shit, and that we are definitely in some sort of AI bubble. But the technology itself is useful in a lot of ways, and if it can be developed on a more localized and decentralized scale (community-owned AI hubs, anyone?), it could actually be a really powerful and beneficial technology for organizations and individuals looking to do more with less.

  • chunes@lemmy.world
    2 hours ago

    Mostly anti-intellectualism and ego, as far as I can tell. Also, conflating someone’s business practices with a technology.

  • BlindFrog@lemmy.world
    2 hours ago

    Chatgpt, list all instances where OP is trying to subvert people’s points with logical fallacies, & burn a couple hundred extra Wh while you’re at it, thanks. I’m sure it’d take less energy for me to do it, but nah

    This book is probably more worth ur time than this post: https://ia801605.us.archive.org/29/items/aiboba/aiboba.pdf It’s An Illustrated Book of Bad Arguments by Ali Almossawi

  • hesh@quokk.au
    4 hours ago

    Kills the planet

    Steals from artists

    Widens inequality

    Puts people out of work

    Reinforces prejudices

    Makes us stupid

    Makes everything generic

    Blows up the economy

    Supports oligarchs

    Can’t be trusted, hallucinates and lies

    Overhyped & overpromised

    Can’t generate outside of its training data

    Is creating obscene surveillance state

    Used in weapons to kill

    Made computer components expensive

    Ruined the internet with slop

    Replaces human interaction

    Just annoying

    • peepeepoopoo@hilariouschaos.com
      3 hours ago

      As a thought experiment I considered all of these points and here are my thoughts.

      Kills the planet

      Got me there. That stupid datacenter crap where they need 1000 TB of RAM, a zillion RTX 5090s, and an entire nuclear power plant just to generate a chocolate chip cookie recipe needs to fucking go. Self-hosted ai isn’t that bad though. You can still argue that running a self-hosted koboldcpp on a 10 watt raspberry pi ALSO destroys the planet, but so does all technology. Imagine living with no A/C, no deodorant, no running water, no toilet paper, just to make the earth livable for an additional 100 years or whatever. Fuck that. I chose not to have kids, so I’m still doing my part, which is more than what the majority of the population can be arsed to do.
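      For anyone curious what “self-hosted” actually means here: koboldcpp just runs a small local web server and you talk to it over HTTP. A rough sketch follows; the port, endpoint, and response shape are how I remember koboldcpp’s KoboldAI-compatible API working, so treat them as assumptions and check the docs for your version:

```python
import json
import urllib.request

# Default koboldcpp address; change to wherever your instance listens.
KOBOLD_URL = "http://localhost:5001/api/v1/generate"

def build_payload(prompt: str, max_length: int = 120) -> dict:
    """Assemble a request body for the KoboldAI-compatible generate endpoint."""
    return {"prompt": prompt, "max_length": max_length, "temperature": 0.7}

def generate(prompt: str) -> str:
    """Send the prompt to the locally hosted model and return the generated text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        KOBOLD_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

      No datacenter required; on a Pi it’s slow, but it’s your own hardware and your own electricity bill.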

      Steals from artists

      I don’t really understand this argument, despite it being the most common anti-ai argument. What type of art is ai really capable of replacing humans on? Hentai and video game 3d model textures? It’s useless at making 3d models, even to the most fanatic of ai worshippers. I can watch porn on pornhub for free and would never, and have never, commissioned a human artist to make porn pictures for me. Am I stealing from hentai artists by not commissioning them for their work and choosing other means of looking at boobs?

      Buying textures for your hand made 3d models only supports the corporation selling them and the original artists get very little if anything at all. Using ai to circumvent spammy price gouging for 3d model textures seems like a better way to fight back to me. Another point is that copyright trolls are always harassing random youtubers over bullshit claims which DOES destroy livelihoods. Using ai to create a unique illustration that isn’t registered in a copyright strike database when you REALLY weren’t going to pay a $20 license for some spammy corporate licensed art either way really seems like a legitimate use of ai to me.

      Another thing is memes even. I would 100%, absolutely, positively, never ever in a million years commission a human artist for the hundreds of dollars it usually costs to make an illustration for a meme in a shitpost I was trying to make. Yet people get out their torches and pitchforks anytime someone uses ai in a shitpost. I just don’t get it. It’s the “pirating software STEALS money from developers” argument all over again. Is it REALLY stealing if you WEREN’T going to pay for whatever it was otherwise? In 2018 the average person online was practically up in arms over how unfair copyright law is, and everyone dropped it to hate ai instead. Seems a little too convenient if you ask me. I think a lot of people have been played.

      Widens inequality

      Employers using ai to screen out the applicants that aren’t desperate enough, and are therefore less likely to submit to abnormally cruel or illegal terms, could be an example of this. Employers in America generally have too many freedoms in the first place. We aren’t going to get out of this downward spiral of wages not keeping up with costs of living without doing some stuff that would be really unpopular with all the powerful people in charge of it all. I’m not sure that they need ai to continue colluding to treat us all like trash. It will eventually devolve into all-out violence if no one forces them to stop, ai or not.

      Also facial recognition cameras, more about that further down.

      Puts people out of work

      I don’t have any good supporting or opposing arguments for this one because I don’t know of any strong examples. Ai is 1000% shittier than a human at any given task for 0% of the cost which is enough to keep an american corporation satisfied for most purposes at least in theory.

      Reinforces prejudices

      I’m not going to be like “provide examples or it doesn’t count”, because it’s lame and stupid when people do that, but my best guess for this one is that it’s talking about how ai can be used to reinforce white nationalist ideology online in bot swarms and stuff. An ai can generate pro christo-fascist propaganda just as much as it can generate pro-democracy propaganda. I wish we could harass christian nationalist type people online with ai, but it seems to be only the bad guys doing it. Go on reddit and say anything positive about marijuana in any context besides “my grandma is dying of cancer and marijuana allows her to not be in pain”. You will have people telling you to grow up and stop being a piece of shit. Meanwhile, you can speak out in support of bombing poor people in the middle east and no one bats an eye. Why can’t we harass the piece of shit people with ai? I guess you got me on this one. It’s only used for spreading christian nationalist ideology for some reason. But this COULD change.

      Makes us stupid

      A few days ago I used a self hosted ai to help write a python script to run object recognition on the cctv cameras for my home network and it only took an afternoon. It would have taken longer to do this if I truly had to figure out and research every little detail and function name myself but I still could have done it. Sure there was some incorrect stuff in it but fixing that was still faster than doing it from scratch. I used the time I saved to also program a graph that shows the temperature history on my weather station. Does this mean I am stupid?
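      (Not my actual script, but to give an idea of the flavor of glue code involved: the detector spits out label/confidence pairs per frame, and the hand-written part is mostly filtering like this. The label set and threshold below are made up for illustration.)

```python
# Toy post-processing for a CCTV object-recognition script: the model
# (e.g. something YOLO-style) reports (label, confidence) pairs per frame,
# and we keep only confident detections of the things we care about.
ALERT_LABELS = {"person", "car", "dog"}  # illustrative choice

def filter_detections(detections, threshold=0.6):
    """Return detections worth alerting on, highest confidence first."""
    hits = [(label, conf) for label, conf in detections
            if label in ALERT_LABELS and conf >= threshold]
    return sorted(hits, key=lambda hit: hit[1], reverse=True)
```

      The ai helped with the boring parts (library calls, function names); deciding what counts as interesting was still my job.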

      Makes everything generic

      100% true. In 2014 or so, you could find anything you wanted on the internet. Now every single webpage is one big nothing-burger. Would corporate enshitification alone have brought things to this point even without ai? Maybe so, maybe not. The point stands.

      Blows up the economy

      It definitely provides a coverup excuse for the systematic price gouging of essential microchips and computer components, sure.

      Supports oligarchs

      This is true. Using non self-hosted ai, even without paying for it, does support oligarchs. Look at Grok for example. It’s a blatant fascist ideology propaganda machine. The other bots probably do the same thing, but more subtly. I bet if you asked chatgpt about marijuana, transgender rights or atheism it wouldn’t be supportive. Yet if you asked chatgpt to run an online bot harassment campaign to tell transgender people and marijuana users how big of a piece of shit they are, there would be little pushback, and it would say things of suspiciously higher quality than if it were the other way around. They’d probably quietly and temporarily switch it over to the paid model for that one to make it generate higher quality hate speech without charging you for it. I’m not going to try it though.

      Can’t be trusted, hallucinates and lies

      Sure. You can’t trust posts on the internet either. Sometimes I find it easier to do my research and differentiate between bad advice and not bad advice than it is to just start from nothing, but most of the seriously potentially useful stuff is usually banned from ai models anyway.

      Overhyped & overpromised

      I guess. See “Puts people out of work”. 1000% worse for 0% of the cost is a no-brainer to an american corporation. To cut down on backlash they probably have to pretend replacing customer support roles with bots is “actually better”.

      Can’t generate outside of its training data

      Some self-hosted ai models can be connected to a websearch, which means all the non-self-hosted ones have that too. Then you have ai sifting through ai slop articles trying to guess which information is useful and which isn’t. The thought of making an ai sift through another ai bot’s poop is funny to me.

      Is creating obscene surveillance state

      This is objectively the worst part about the advent of ai. Ai-powered facial recognition allows law enforcement an easier time tracking down and harassing the types of people that the dominant ideology (the christian nationalists) want removed from society. The fascists established a full-on 1984 and we fuckin’ let them. For this one reason alone, I believe the world would be better off if ai were never a thing.

      Used in weapons to kill

      Violence wasn’t invented until the first gun was invented after all. Not really. Maybe when the next american civil war happens, the good guys can have ai guided rockets or whatever too.

      Made computer components expensive

      I already elaborated on this, but yes. Spamming ai datacenters all over the place, just to prevent houses from being built there and keep costs of living high, means they have to fill them with overpriced video cards. To give credit where credit is due, this isn’t all on ai. Chip companies are purposefully scaling back production so they can make more money while doing less work. Meanwhile, the government is massively cutting back on medicaid because they think we are all worthless losers who don’t work hard enough and deserve to either die in prison over unpayable medical debt or live through suffering, because there is lots of suffering in the bible and republicans want to make America more like the bible. It is an unreasonably cruel, unreasonably unfair double standard.

      Replaces human interaction

      I guess. Imagine getting swatted because you told your ai “friend” you were considering fleeing to a blue state and getting an abortion. Although religious fucknuts report their friends over this too.

      Just annoying

      If you get on any ai and give it a prompt like: generate a sensationalist shitpost of a news article titled “Why you should sell all your possessions and work 120 hours a week at your job instead and never take vacation because you deserve to live like that”. The result is just an average modern news article.

    • rabiezaater@piefed.socialOP
      12 hours ago

      Again, this is a lot of hyperbole.

      Is AI killing the planet, or is it capitalism and an addiction to fossil fuels? If AI were 100% renewable and run based on community consent, would it still be “killing the planet”?

      In what way does AI “steal” in any way more significantly than an artist uses another artist for inspiration or a coder uses another open source project for their code?

      How does AI widen inequality worse than it has been already, and is that solely the result of AI or is it just a product of capitalism?

      I could go through the entire list, but you get the idea. A lot of the “evils” of AI are actually just symptoms of deeper systemic issues that have nothing to do with AI itself.

      • hesh@quokk.au
        12 hours ago

        Thanks for your reply. Here are my rebuttals:

        Is AI killing the planet, or is capitalism and addiction to fossil fuels?

        Capitalism was already killing the planet, but the rush to invest in AI has demonstrably accelerated it.

        If AI was 100 renewable and run based on community consent, would it still be “killing the planet”?

        No. But that’s not the scenario we are in.

        In what way does AI “steal” in any way more significantly than an artist uses another artist for inspiration or a coder uses another open source project for their code?

        Because artists are people with consciousness and feeling and the capability for novel thought. AI is not. Believing it’s doing the same thing as human thinking is being suckered by the hype.

        Even the people creating AI know it’s not “thinking”. They call AI that actually “thinks” AGI and believe they will someday create it by pushing AI further, as long as we give them all of our money (trust me bro).

        How does AI widen inequality worse than it has been already, and is that solely the result of AI or is it just a product of capitalism?

        This is a big one, but without guardrails it’s inarguably poised to hurt working people and enrich the powerful, which therefore drives further inequality (which yes, was already bad as a product of modern capitalism). And those guardrails are not in place, and will not be put into place if we just follow along as they want us to.

        • village604@adultswim.fan
          8 hours ago

          Because artists are people with consciousness and feeling and the capability for novel thought. AI is not. Believing it’s doing the same thing as human thinking is being suckered by the hype.

          But AI is being used as a tool by humans to generate the images. It won’t do anything on its own.

          What it has done is allow people to get inspiration out of their head and into the physical world with a much lower barrier for entry than ever before.

          There are still people who don’t consider digital artists to be real artists because they use digital tools instead of physical ones. The hate for people using genAI is basically the same thing.

          There are a lot of valid criticisms of genAI, but this one in particular has always seemed silly to me.

          • hesh@quokk.au
            7 hours ago

            If you tried to digest every piece of intellectual property ever created by humans for free, they would lock you up. But OpenAI and Meta get to do it, and sell you a subscription to the AI they created with it, making Zuck and Altman richer than God by destroying artists’ ability to make a living, and making every bit of art created from now on a shitty derivative pasted together by an AI from the memory of human art. It’s an episode of Black Mirror.

            • village604@adultswim.fan
              7 hours ago

              Not all genAI is OpenAI and Meta. There are ethically trained image generation models.

              People are conflating all generative AI with tech giants, which is a critique on capitalism, not the technology.

              The technology is actually quite amazing with regards to image generation.

  • TheAlbatross@lemmy.blahaj.zone
    16 hours ago

    In my experience, people who use LLMs as educational tools… don’t actually learn very well. They think they are, but they don’t retain the knowledge, nor do they seem able to infer from or apply it very well. There are even some early studies showing that using LLMs decreases cognitive ability, and considering how many kids and young people are using it to get their way through school and even higher education… I think we’re using AI to raise a generation of stunted minds. That’s going to be a bigger issue as time goes on, and with the state of the world and who owns the LLMs… it looks like a grim, sad future thanks to this tech.

    • rabiezaater@piefed.socialOP
      16 hours ago

      I would definitely be curious to see the research on that. I do think there are dangers with regard to relying on AI too heavily, but as a complement to existing technologies, I don’t think it can hurt any more than a calculator hurts your ability to do math.

      • TheAlbatross@lemmy.blahaj.zone
        16 hours ago

        Here ya go.

        I’ll give you a quick summary. It turns out spending time thinking improves your ability to think and vice versa. When you rely on LLMs to do your thinking for you, you become less skilled at thinking.

        It’s important to remember that it doesn’t really matter how you, personally, use their product or think it should be used, it matters how it is used by large swaths of society. Don’t get fooled into promoting some billionaire’s tool to shift wealth further upward and further denigrate the working class in your quest to get out of spending 15 minutes searching for the right D&D character picture.

          • FearMeAndDecay@literature.cafe
            8 hours ago

            The study you linked doesn’t just show a positive impact on education. That’s only half the study. The other half is about the negative impacts. That study is giving a full picture of ai use in the classroom, about where it helps and where it hurts. They created 6 categories for how ai is getting used in the classroom and explained the positive and negatives found in those studies for each category. Some categories see more benefits or more harm than others

            • rabiezaater@piefed.socialOP
              7 hours ago

              Right. That’s my entire point though. There are some positives, and some negatives. The dialogue I have seen around AI has basically boiled down to “AI is killing the planet and making us dumber”, when the reality is actually a lot more nuanced.

              • FearMeAndDecay@literature.cafe
                6 hours ago

                The key issue here is how it’s being used and regulation. Ai has caused a lot of harm bc it is unregulated. Like, people have committed suicide or killed others bc of conversations with chatbots. And yes, in many of these cases there are pre-existing mental health concerns, but it’s still causing someone who is unhealthy but non violent to become violent. That’s really bad.

                Currently, ai is not being used in positive ways, when we’re looking at the broader use of ai. Sure, some people or small organizations may use ai for specific uses which it’s good for, but that’s not how it is for most ai use. A lot of ai is taking people’s jobs/promising employers that they will be able to fire half their workforce. Even in the positive example you gave of getting a character portrait, sure you could use an ai, but there are a lot of artists that are losing commissions because it’s cheaper for people to just use ai. So the artists aren’t technically losing their job; their work is just being devalued, which is very unfortunate bc—as people have said—ai generated images don’t have intention and care put into them. Ai literally can’t do that. True art, no matter the skill level it’s made at, is made to evoke emotions, to communicate something to the viewer/reader/audience. Ai can’t create true art bc it cannot think or feel; it cannot be deliberate

                I hate ai bc currently ai is a horrible thing bc of how it’s being used the majority of the time. I think after all the ai hype has died down and companies look at how these ai tools can actually be used effectively then it will be more tolerable. But right now it’s just an investment bubble and an unregulated technology that has caused severe harm

                • rabiezaater@piefed.socialOP
                  6 hours ago

                  Thanks for the response, I appreciate your perspective. Definitely a reasonable take overall. I very much agree on the regulation side. It is pretty mental how unregulated it has been, especially with some of the projections about the impact that their purveyors claim it will have on society.

          • TheAlbatross@lemmy.blahaj.zone
            12 hours ago

            I think it’s genuinely too early to say definitively one way or another, but anecdotally, the people I see educated with LLMs are a lot less capable than people who aren’t.

            Only time will tell how badly we’ve allowed ourselves to get fucked over to benefit a few ultra rich companies.

            • rabiezaater@piefed.socialOP
              7 hours ago

              Ok, so just to clarify: you’re basing your hate on anecdotal evidence and a fear of getting fucked over by corporations (which has been happening since long before, and completely unrelated to, AI)?

      • TheAlbatross@lemmy.blahaj.zone
        14 hours ago

        Here’s another fun piece you can read.

        The incoming AI apocalypse isn’t about Skynet drones or malicious AGI, it’s about creating generations of vastly less educated and cognitively deficient lower classes and restricting traditional education to the wealthier echelons of society, gatekeeping the poor out by cost alone, undoing decades of hard-fought socialist efforts to bring education to the masses.

        • rabiezaater@piefed.socialOP
          12 hours ago

          Fully agree that children should not be exposed to ai content, and age restrictions would be warranted (which ironically I think a lot of people on the fediverse would not be happy about).

          • TheAlbatross@lemmy.blahaj.zone
            12 hours ago

            That’s the thing.

            Your specific opinions on how LLMs should or shouldn’t be used don’t affect how they are being used.

            A system is what it does, plain and simple, and LLMs look like they’re doing serious damage to our societies to really just benefit the wealthy.

            • rabiezaater@piefed.socialOP
              7 hours ago

              The whole “doing serious damage” part is where I disagree. The damage attributed is usually due to the ultra wealthy and capitalism, not due to AI itself.

          • Doomsider@lemmy.world
            8 hours ago

            I fully believe that the US government plans on replacing teachers with AI. It is all part of a grand plan to eliminate the Department of Education and defund schools nationwide. Once this crisis comes into full view we will be presented with this plan and we will no longer have a choice unless you have enough money to send your kid to private school.

            • rabiezaater@piefed.socialOP
              7 hours ago

              I don’t disagree, the current administration is that dumb. Hopefully their popularity continues to drop like a stone and in an election cycle we will be off dumb island.

      • givesomefucks@lemmy.world
        16 hours ago

        I don’t think it can hurt any more than a calculator hurts your ability to do math.

        Because using AI atrophies the part of your brain that handles critical thinking…

        The more you use it, the less you notice how you can’t do things without it.

        If AI worked, that would be normal. The problem is it’s just good at conning people into believing it.

        That’s why you don’t realize that if it ever takes off and people start depending on it, they’re going to make it shittier and more expensive.

        But again, the people already relying on AI have lost the critical thinking to see that coming. It’s like a bus driver closing their eyes because a bridge is closed. The bridge is still closed, they didn’t solve any problems. They just don’t see it coming now.

        What you’re doing is asking all the passengers why they’re still screaming if all they need to do is close their eyes…

  • CallMeAl (like Alan)@piefed.zip
    14 hours ago

    Reading through this thread and your responses gives the strong impression that you just want to argue while not being very well informed on the matter. Where you do respond, it’s mostly whataboutism rather than actually addressing the comment you are responding to.

    Your post asks “Why do people hate AI?” and then goes on to validate many of the commonly heard reasons people have for hating AI. You end with a suggestion that if we could develop AI into something else in the future, it might be good.

    So it seems you already understand why people hate AI and are promoting an agenda rather than asking a genuine question.

    • rabiezaater@piefed.socialOP
      12 hours ago

      I gave positives alongside the negatives, which most people vigorously against AI (which seems to be all of the fediverse) refuse to acknowledge as positives. I do have an agenda, which is to try to understand why there is such blind and vigorous hate for something I and a lot of people find quite useful, and which could be beneficial for productivity if people used it effectively.

      • CallMeAl (like Alan)@piefed.zip
        11 hours ago

        I do have an agenda, which is to try to understand…

        If your goal is to understand why people feel the way they do then why are you arguing with people and attempting to refute their responses instead of thoughtfully reflecting their concerns back to them to confirm if you have understood?

        • rabiezaater@piefed.socialOP
          7 hours ago

          Are you saying I am being disingenuous in my intentions by making counter points in a discussion? Is reflecting people’s ideas back to them the only way to understand them?

  • manuremy@sopuli.xyz
    16 hours ago

    I loathe AI for multiple, personal reasons:

    1. When I need to contact customer support of some sort, there is an AI bot that is no use, and no real humans, because the AI is cheaper. I won’t get the help I need, or it’s too difficult to reach a person.

    2. My native language is a more difficult one, and many stores (especially online) are starting to translate everything with AI, which makes the text absolutely incomprehensible. It becomes hard or even impossible to understand even the basic descriptions or the manuals.

    3. Browsers have those forced AI-summaries when you try to look for something and those are often both wrong and impossible to turn off. (Or if it’s possible to turn off, they keep turning back on.)

    4. People I know are literally believing everything from those summaries and such and are very confidently wrong/misunderstanding whatever basic thing. It’s very annoying. (“Let’s ask STSÄTKEEPEETEE!”)

    5. Socializing online is becoming frustrating, as I have been accused of being an AI bot on multiple occasions just because of the way I write in English. Knowing even basic grammar makes you a bot these days.

  • givesomefucks@lemmy.world
    17 hours ago

    It’s burning the environment down, destroying the shambles of the global economy, and being constantly shoved down everyone’s throats even though it’s only impressive to people who don’t understand it

  • Oka@sopuli.xyz · +19/-1 · edited · 16 hours ago

    Imagine a shitty robot was just made available for free.

    The shitty robot replaces you at work. It performs way faster, with worse results, but the company hires a robot “expert” who fixes the results just enough that the product appears to be working. (It’s not.) You are now starving.

    The shitty robot tells your kids that porn is a viable career path. And that they should kill themselves.

    The shitty robot starts showing up everywhere, in advertising, TV shows, customer support lines, schools.

    The shitty robot makes shitty art really fast, which people can sell or use how they want. Artists are now starving.

    Imagine the shitty robot is now interviewing you for your next job.

    • rabiezaater@piefed.socialOP · +1/-15 · 16 hours ago

      Think of how shitty and scam-filled the early internet was. Did we abandon it because of how shitty it was at first, or did we develop it and tweak it to its full potential?

      I mean, even Linus Torvalds acknowledges the benefits ffs

      • Azzu@lemmy.dbzer0.com · +6 · edited · 12 hours ago

        This clearly shows that you are willing to talk out of your ass. The early internet was filled with some of the smartest people alive; the mass of shitty content had not yet arrived, because the internet wasn’t accessible to the masses. At the beginning, the internet was literally only scientists. A little later, it was only very open people, not scared to try something new, excited about the future and about foreign cultures, with correspondingly amazing content.

        Only after this initial period, when the internet became commonly used, did it turn to shit. Stop trying to manufacture arguments for your position and truly do what you claimed to set out to do: try to be open, and try to understand other people’s concerns.

        • rabiezaater@piefed.socialOP · +1/-3 · 12 hours ago

          You’re acting like we are at the “beginning” of AI, when the early days of AI happened around the same time as the Internet started.

          In any case, your argument is less an argument against AI and more an argument against the mass adoption and co-optation of technology by corporations. See enshittification.

      • ylph@lemmy.world · +5 · edited · 12 hours ago

        Think of how shitty and scam-filled the early internet was. Did we abandon it because of how shitty it was at first, or did we develop it and tweak it to its full potential?

        I have been on the internet since 1992, and the internet today is by far the shittiest and most scam-infested it has ever been in my time (and I doubt it was worse in the 80s).

        Few things make me more depressed than thinking about the evolution of the internet, from where it started to where it is today.

        I don’t doubt AI will follow a similar path, except somehow it is already starting in a much worse place than the internet ever did, and the downside potential is far greater and terrifying.

      • Ada@lemmy.blahaj.zone · +15 · 16 hours ago

        Think of how shitty and scam filled the early internet was. Did we abandon it because of how shitty it was at first,

        It wasn’t, though… That was the mid-to-late-term internet.

      • JustEnoughDucks@slrpnk.net · +6 · edited · 14 hours ago

        We did exactly the opposite of what you say: we monetized the scams and turned the entire internet into a scam- and ad-infested wasteland for the greed of 10 people, at the cost of driving the rest of humanity into mindless addictions, grooming people, and manipulating the human psyche to eke out more money and mass-propagandize.

        The majority of the internet is 10x worse than it was 20 or 30 years ago, and there are literally more bots than people, according to research.

        • rabiezaater@piefed.socialOP · +1/-2 · 13 hours ago

          Oh, ok. That really makes me think AI is the villain and the fediverse is the bastion of morality and civility. /s.

  • Ada@lemmy.blahaj.zone · +8 · 16 hours ago

    It will change society. It won’t improve skills.

    Studies already show the opposite at play. https://arxiv.org/pdf/2506.08872v1

    If the LLM could teach you how to code, but couldn’t do the coding for you, it would be a tool for improvement. But it isn’t used that way. Instead of saying “teach me how to code this”, people are more inclined to say “code this for me”.

    On top of that, they’re controlled by corporations who are not in the slightest bit interested in your welfare, privacy or economic success. They will invade your privacy, fuck over the environment, fuck over people and load their LLMs with propaganda and barriers that serve their political and social interests.

    And as a bonus, they’re a nightmare for the environment.

    Having said all of that. I agree, they are going to fundamentally reshape society. But it’s like the industrial revolution. Yeah, we ended up with a more efficient society, but it didn’t make people freer, it further entrenched wealth in the hands of the wealthy, whilst fucking up the environment. That’s what LLMs are going to do.

    We could do them differently. That implementation isn’t inherent in their nature. But we won’t do them differently, because the people pushing it want the shitty outcome, because it’s not shitty for them.

  • Devolution@lemmy.world · +8 · 16 hours ago

    AI makes children stupid. AI is mostly used for making AI slop. AI is being used by governments to manipulate public perception. AI is being used to engage in scams. People are using it to cheat. AI is being used to offload critical thinking.

    AI is being used by corporations to engage in mass layoffs to save a buck. AI is being used by police stations and federal agencies to identify people, with minimal success (misidentification). AI is being used to deny health claims without review. AI customer service is dogshit.

    I was a futurist like you once. I wanted AI based on how the movies presented it. However, the reality is LLMs are being used not for human improvement, but instead for the purpose of creating a permanent underclass with few at the top.

    TL;DR: fuck AI.

    • rabiezaater@piefed.socialOP · +4/-9 · 17 hours ago

      My point is that there are downsides, but there are also upsides, just like anything else. The internet in general has dramatically increased electricity usage from what it was previously, but people are acting like AI is adding some unprecedented load on the grid, which in the vast majority of places it is not (despite what a lot of online discussion would have you believe). Any artist or coder uses the art or code of others for inspiration, and yet AI is evil for doing the same? It’s just a lot of negativity without acknowledging the benefit.

      • Alk@sh.itjust.works · +13/-1 · 17 hours ago

        Seems like you know the answer to your own question and you’re just looking to argue and tell people who dislike it they’re wrong.

        • rabiezaater@piefed.socialOP · +2/-5 · 17 hours ago

          I am looking for more in depth reasons. The person responded that I answered my own question, without further elaboration, and then you just said I already know the answer with no further elaboration. I am interested in discussion, and yet people are here telling me I’m not because I have some opinions of my own?

          • Typhoon@lemmy.ca · +2 · 7 hours ago

            This thread is full of “in depth reasons” but you dismissed them all and demand better ones.

              • Typhoon@lemmy.ca · +2 · 7 hours ago

                you dismissed them all

                There you go again.

                Your title and post sound like you’re trying to understand. You’re not here to understand. You’re here to argue.

                • rabiezaater@piefed.socialOP · +1 · 7 hours ago

                  My intent was to try to understand why people feel the way they feel. If I disagree with a reason someone has, am I just supposed to be like “oh, ok”, and move on? Is that the proper protocol here if I am supposed to be understanding? Am I not supposed to give any rebuttal to any points whatsoever and just read through the thread without replying? Is that what you would consider a true “understanding” approach?

          • Alk@sh.itjust.works · +14/-1 · 17 hours ago

            The discussion that you’re looking for doesn’t exist. As for the reasons you listed: people who don’t like AI simply think those reasons drastically outweigh the benefits. So much so that it’s not even worth discussing. There’s no deeper meaning to it.

            • rabiezaater@piefed.socialOP · +1 · 7 hours ago

              That’s a counterproductive and unhealthy disposition to have. No topic should be considered “not worth discussing”, particularly one so ubiquitous and impactful.

        • rabiezaater@piefed.socialOP · +1 · 7 hours ago

          Certainly curious to see sources, but last I checked, the industry was an insignificant contributor to overall electricity use. That number is obviously growing, but when I say insignificant, I mean negligible.

          Again though, happy to see data that shows otherwise if you have a source to provide.

          • BassTurd@lemmy.world · +1 · 4 hours ago

            Energy prices are already rising for people near data centers. Near me, the state is reopening a nuclear plant that had been closed because the maintenance needed to repair years of use exceeded its value. With data centers moving in, it is now a necessity. As you mentioned, it’s getting worse, but it’s already having an effect.

            There’s also the insane water consumption required to cool their stuff. It’s destroying ecosystems everywhere.

            From a selfish-ish point of view, it has also already increased the cost of the chips used in RAM and GPUs and has taken stock off the market for consumers.

            As far as your open source views go, that’s great, but open source projects are still released under open source licenses, at the behest of the creator. Ignoring that is effectively stealing from creators, and that’s not okay. It’s one thing to learn from something and then cite those sources; it’s another to take it, regurgitate it, and give no credit. AI has been used to impersonate people in music and other media, such as content creators, hurting their income and image with no recourse.

            AI is ass at coding. It’s not a good teacher and struggles with any level of complexity. It is OK for troubleshooting, but it has been shown in almost every case that it’s not capable of effectively replacing even junior devs. AWS just had a come-to-Jesus moment recently because AI-generated code broke critical infrastructure and took down services that millions rely on. It’s not security conscious, and there are breaches of personal data left and right.

            I’m not saying all uses of AI are the devil. It has its place for minor tooling, but the ethical implications mentioned above are just the ones I care to spend time elaborating on; there are many more.

      • T00l_shed@lemmy.world · +7/-2 · 17 hours ago

        There are upsides, specifically in the medical field, and that’s where it should stay. Everything else is a downside. Humans taking inspiration is one thing; humans copying directly is called plagiarism, and the AI shit is plagiarism. It’s taking water from people. It’s using more electricity; that’s why they want to build nuclear plants to power them, and the taxpayer will ultimately foot the bill. It’s eating up all the consumer hardware and driving up costs. It’s making people stupider. It’s shoved down our throats everywhere. It hallucinates like mad, and it’s costing people jobs. It’s all downsides, except for a niche use case.

        • rabiezaater@piefed.socialOP · +1/-9 · 17 hours ago

          Your response is a perfect example of the hyperbolic hate against AI I see.

          Why exactly should AI stay in the medical field, but be restricted from every other field? What about medicine makes it worth using, but makes it useless and evil in every other field?

          AI does not copy. It takes things as inspiration and synthesizes them into something new, just like anyone else. I know an artist who worked for The Simpsons. He does a lot of his own work now in the Simpsons style. Is he plagiarizing Matt Groening because he is using that style? Or is he just inspired by his time working there?

          No, AI is not taking water from people. AI uses a very small share of total water, and industrial agriculture uses thousands of times more water overall. The Ogallala Aquifer and the Colorado River started drying up before AI even came to be.

          From here on, those are all actually pretty good reasons. The hardware thing sucks, and I wish the government were not so in bed with these companies that it lets them hoard all the hardware like they do. I think the internet in general has been making people stupider for a while, but AI has certainly accelerated it. The hallucination is annoying as shit, but it has gotten better over time, to a certain extent.

          I definitely acknowledge all those downsides. Again though, it’s like any technology that has pluses and minuses. I guess the question is whether we should throw out all the benefits and assume the negatives are unfixable, or try to look at how we can solve them.

          • T00l_shed@lemmy.world · +9 · 16 hours ago

            The hate isn’t hyperbolic. It’s justified by all the reasons I gave. Just because you see AI as a net benefit, you think my hate is hyperbolic. Your arguments of “little total water” and “ag using more” are flawed. It’s using precious water that we don’t have enough of, for useless AI slop. Yes, ag uses a huge amount of water, but we need to eat.

            AI doesn’t take inspiration; it’s a computer program. Inspiration is uniquely human. Someone who draws in the style of Matt G. is one thing; a machine that copies the style is not “inspired by” it.

            Yes it should stay in the medical field because that’s where there is a net benefit.

            • rabiezaater@piefed.socialOP · +1/-1 · 7 hours ago

              So you’re saying that medical is the ONLY place where the pros outweigh the cons? You said we need to eat, so what about agriculture? What about science/engineering in general? Why the arbitrary line in the sand at medical?

              Also, if AI is so faulty and flawed, why would you want to use it in situations where lives are on the line, but condemn its use in lower-stakes situations?

              • T00l_shed@lemmy.world · +1 · 6 hours ago

                So you’re saying that medical is the ONLY place where the pros outweigh the cons?

                Yes

                You said we need to eat, so what about agriculture? What about science/engineering in general? Why the arbitrary line in the sand at medical?

                I said we need to eat in reference to water use.

                also, if AI is so faulty and flawed, why would you want to use it in situations where lives are on the line, but condemn its use for lower stakes situations

                Do you know how ai is used in the medical field?

                • rabiezaater@piefed.socialOP · +1 · 6 hours ago

                  I don’t know the specifics, but I’m not sure how that is relevant. Why does the field it’s being applied in make a difference? Is medicine the only field you view as truly impactful and valuable? Or do you really view the downsides as so dramatically terrible that the only way they could be justified is by saving a life?

  • TranquilTurbulence@lemmy.zip · +2/-1 · 12 hours ago

    Judging by the comments, I would say that most Lemmy users are aware of the downsides of LLMs. The average GPT user probably hasn’t heard of half the points mentioned in these comments.
    Judging by the downvotes, I would say that many Lemmy users are also very passionate about it. The average GPT user might think of LLMs like any other tool.

    Unfortunately, I get the feeling that Lemmy isn’t a suitable place for having a serious conversation about AI in general (not just LLMs). I would love to have that conversation, but this just isn’t the place for it, as you can see. The people here seem to be too focused on LLMs, how they’re developed and how they’re forcibly implemented in places where they provide zero value etc. AI in general is such a broad category, and this kind of biased conversation misses 90% of it.

    When you say AI, people hear LLM, and that’s a genuine problem. When people say they hate AI, they probably aren’t thinking of things like image search, optical character recognition, automatic categorization of the events of your bank account, signal processing in audio and video, image upscaling, frame generation, design of 3D structures, route planning etc. There’s so much you can do with AI, but Lemmy users rarely mention those.

    • rabiezaater@piefed.socialOP · +3/-1 · 12 hours ago

      Yeah, I am really getting disillusioned with the discussions on the fediverse around a lot of important topics, not just AI. I could picture a response from someone in this thread being “good, fuck off, AI shill”. Not a very productive or healthy place for a discussion, as much as I support the goals and motivations behind the fediverse. Apparently there is an anti-AI zealotry that makes real dialogue impossible.

  • NABDad@lemmy.world · +3 · 16 hours ago

    What do you mean when you say AI?

    Are you talking about all the different areas of research or just LLMs?

    • rabiezaater@piefed.socialOP · +2 · 12 hours ago

      Both. I think people don’t even realize that there are non-LLM AI applications, and that has done a disservice to the field in general.

      • NABDad@lemmy.world · +2 · 12 hours ago

        LLMs are interesting, and there are some very promising applications, but I’m concerned that the hype is going to damage the reputation of the technology in a way that could interfere with those things.

        Regarding all other AI, there’s a lot of good that has come from AI research, and most people don’t recognize it. We have a tendency to shift our definition of “intelligence” to always exclude things that someone figures out how to get a computer to do.

        Every day we use software that would have been considered AI years ago.

        I’m not against AI, but I’m against the capitalist impulse to squeeze money out of anything to the detriment of all of humanity and the world.

        My hope is that the LLM bubble bursts, big companies suffer terribly, the “AI” tag becomes bad marketing, and they let AI quietly return to research, where people can do some good with it.

        • rabiezaater@piefed.socialOP · +2 · 7 hours ago

          I appreciate your distinction between capitalism and AI. Many attribute the maladies of hyper late stage capitalism (enshittification, data hoovering, algorithmic engagement tuning, etc) to AI, when one is just a symptom of the other.

          I agree on the overhype and hope for the industry. I do not want LLMs to go away, and there are plenty of open source non commercial LLM projects out there. I look forward to the day when I can just download a local LLM assistant that has all the capabilities of the best models today. Once someone figures that out, I think the corporations who have poured hundreds of billions into massive data centers will start collapsing.

  • maniclucky@lemmy.world · +3 · edited · 16 hours ago

    Well, you dismissed the lack of ethics of it all. Just because you do open source doesn’t mean everyone else does. And open source often acknowledges contributors, unlike LLMs. You can’t consent for other people.

    It’s hideously destructive. Wastes electricity, wastes water, plays merry hell with anywhere the damned data centers pop up.

    It’s unregulated and has already killed people. Multiple stories have come out where an LLM has encouraged suicide. Plus various dangerous outputs like the bleach as cake ingredient thing. Because…

    It isn’t intelligent, it’s just a parrot. I’ll start paying attention when it can successfully count letters in words. So would you trust a random parrot that told you about something you know nothing about?
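
    The letter-counting failure mentioned above is usually blamed on tokenization: the model sees subword tokens rather than individual characters, so a check that is trivial in ordinary code trips it up. A minimal sketch of the deterministic version (the example word and function name are just illustrative):

```python
# Deterministic letter counting: trivial for ordinary code,
# yet a classic stumbling block for token-based LLMs.
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of `letter` in `word`."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```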

    It doesn’t do a quarter of what it says. Translation should be its bread and butter, and it can’t really manage that. There’s a reason the tech bros who hyped crypto are hyping this: they don’t actually know what it can or can’t do.

    It’s approaching maximum efficacy for current techniques. More data is better in machine learning, but it’s finding the limit, and that limit is way closer than the scammers want to admit.

    It’s destroying jobs before it can handle them. I’ve tried to use it before. I spent as much if not more time fixing its output than if I had done it myself. It gets to do my boilerplate sometimes now.

    It’s making worse workers. All that time agonizing over a problem used to be spent learning how to do it at all. Now it shits out worthless garbage, and the person doesn’t know what it does or how to fix it. Job security for me, I guess.

    It could be a useful technology, but the delusion that it’s capable of becoming AGI distracts from all the things it could be capable of if big companies actually tried to use them instead of the lazy implementations they’re chasing.

    Edit: I also forgot that it entrenches racism and other bad behavior. If your corpus is full of racist shit, you get a racist robot. And racist assholes make it harder to fix that, because they won’t acknowledge that such things are bad and that this badness can be taught to robots.

    Source: Data engineer