• BillyClark@piefed.social · +18 · edited 4 hours ago

    Forgetting AI for a moment, I am always shocked when I am reviewing a coworker’s code and it’s obvious that they themselves didn’t review it.

    Like, they sent me a PR that has a whole shitload of other crap in it. Why should I look at it when you haven’t looked at it? If you don’t review your own review requests, you’re a failure as a programmer and as a human.

    And I would be a failure if I approved such a request.

    Getting back to the post, where is all of the review? The coworker should have reviewed the AI shit, whether it was code or documentation. The person who approved the PR should have reviewed it, as well.

    Every business with more than one programmer should have at least two levels of safeguards against this exact thing happening. More if you include different types of test suites.

    This post describes a fundamentally broken business, regardless of the AI angle, so it’s fitting that everything is broken. With such a lack of discipline and principles, I say let the business fail.

    • locuester@lemmy.zip · +8 · 4 hours ago

      Yeah, we are falling into a little bit of this where I work right now. It’s a bit of a change of mindset to begin thinking that you can’t trust a PR even a little. Yes, you should be able to, but humans are humans, we get lazy, and trusting the magic pattern machine is gonna impact everyone’s lives in a lot of ways.

      • Ephera@lemmy.ml · +1 · 7 minutes ago

        That’s really not a good sign, though. A review process that checks for basic sanity is just a band-aid fix for a lack of discipline, and it ultimately means more work for everyone. So the person who asked the magic pattern machine should review that code themselves: they should be deeper into the context of what needs to be done, and they know which parts of the code were generated and which parts they actually thought through.

    • Johnnyvibrant@discuss.tchncs.de · +9/-3 · edited 3 hours ago

      “AI” isn’t artificial intelligence… LLMs are chatbots with a database the size of the world and the resource usage of a country for every prompt.

      It’s just another way for rich cunts to show off their small dicks and large wallets: look what my money can buy.

      How about solving world hunger and not playing with toys.

      These miscreants are fucking evil.

      History will not judge them well, or anyone who buys into their lies.

    • balsoft@lemmy.ml · +30 · 6 hours ago

      I think LLMs are neat and useful tools in some circumstances. So I don’t hate “AI”, I hate the billionaires who are pushing it down our throats, or trying to replace us.

      • fartographer@lemmy.world · +24/-1 · 5 hours ago

        They’re not trying to replace you, they’re trying to devalue you. See how that’s different? You thought that they were indifferent towards you, but it turns out that they actually hate you. How fun!

        • mogranja@lemmy.eco.br · +8 · 4 hours ago

          Exactly. They know they actually can’t fully replace programmers with AI, but they will pretend they can, to get away with paying lower salaries.

      • schnurrito@discuss.tchncs.de · +7/-1 · 4 hours ago

        I don’t think that is a very helpful talking point either. Replacing workers with machines has been a thing for centuries and isn’t by itself a bad thing.

        Using unverified AI for anything actually business-critical is a very bad management decision, though, and any company that does that deserves the consequences. Using AI as a replacement for, e.g., stock photos or video CGI? That is not a bad idea at all.

        • 4am@lemmy.zip · +1 · 1 hour ago

          It’s an unhelpful talking point to assume that the billionaires are doing anything because they think it’s good business, or that they need to be good at business.

          These assholes listen to a guy called Curtis Yarvin who wants to let humanity go extinct and create little kingdoms where they keep and experiment on the remaining people.

          It sounds like tinfoil hat shit, but it’s not. Not that I think this shit would ever fly, let alone work; but that doesn’t stop them from being foolish enough to desire it.

        • balsoft@lemmy.ml · +1 · 2 hours ago

          That’s true. For me, the main issue with any automation under capitalism is that it brings yet more power to the corpos and the billionaires and takes power away from labor.

          In this particular case it also sucks because the end product of genAI is soulless slop, and video genAI is quite a power hog. I agree that it’s ok for assisting developers/writers/artists/video editors in boring repetitive tasks.

    • Jack@slrpnk.net · +17 · 6 hours ago

      Same here, I just keep my phone off while off work. Let them hallucinate a solution.

  • orca@orcas.enjoying.yachts · +6 · 5 hours ago

    I feel like my life is writing prompts to AI now. If you don’t fall in line, you’re basically out of a job. A human can’t keep up. If the goal was to completely ruin my desire to write code, they’ve succeeded.

  • Modern_medicine_isnt@lemmy.world · +9/-1 · 6 hours ago

    The worst was one the AI hallucinated that so perfectly followed the pattern of all the ones we already had that it just looked right. When it didn’t work, I asked the AI to implement it (in the open-source Helm chart), and it said no. That is where the opportunity is. For things like Helm charts that are just wrappers, AI should really excel. We could have very consistent interfaces for things like that, and it would save a ton of time.
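The hallucination described above is easy to reproduce: a generated Helm value that matches the naming pattern of its neighbors but doesn’t exist in the chart. A minimal sketch of a guard against it, in Python — all chart keys here are invented for illustration, and a real check would parse the actual values.yaml with a YAML library and scan the templates for references:

```python
# Hypothetical sketch: flag references to Helm values that don't exist in the
# chart, the kind of plausible-looking key an LLM tends to invent.

def resolve(values: dict, dotted_path: str):
    """Walk a dotted key path (e.g. 'service.port') through nested dicts.

    Returns (True, value) if every segment exists, else (False, missing_segment).
    """
    node = values
    for segment in dotted_path.split("."):
        if not isinstance(node, dict) or segment not in node:
            return False, segment
        node = node[segment]
    return True, node

# Stand-in for a parsed values.yaml from some chart (keys are made up).
values = {
    "replicaCount": 2,
    "image": {"repository": "nginx", "tag": "1.27"},
    "service": {"type": "ClusterIP", "port": 80},
}

# One real key, and one hallucinated key that "follows the pattern".
for path in ["service.port", "service.metricsPort"]:
    ok, detail = resolve(values, path)
    print(f"{path}: {'ok' if ok else 'missing segment ' + repr(detail)}")
```

The point of the sketch is that `service.metricsPort` looks exactly like the real keys around it, which is why pattern-following output passes a casual review; only an existence check against the actual values catches it.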