Do you have any ideas or thoughts about this?

  • lacaio 🇧🇷🏴‍☠️🇸🇴@lemmy.eco.br (OP) · 1 day ago

    I mean, agentic AIs are getting good at outputting working code. Thousands of lines per minute; talking trash about it won’t work.

    However, I agree that losing the human element of writing code means losing a very important part of programming. So I believe there should be strong resistance against this. Don’t feel pressured to answer if you think your plans shouldn’t be revealed, but it would be nice to know if someone out there is preparing a great resistance.

    • planish@sh.itjust.works · 23 hours ago

      This is honestly a lot of the problem: code generation tools can output thousands of lines of code per minute. Great, committable, defendable code.

      There is basically no circumstance in which a project’s codebase growing at a rate of thousands of lines per minute is a good thing. Code is a necessary evil of programming: you can’t always avoid having it, but you should sure as hell try, because every line of code is capable of being wrong and will need to be read and understood later. Probably repeatedly.

      Taking the approach to solving a problem that involves writing a lot of code, rather than putting in the time to find the setup that lets you express your solution in a little code, or reworking the design so code isn’t needed there at all, is a mistake. It relinquishes the leverage that is the very point of software engineering.
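
      To make that concrete, here’s a throwaway Python sketch (the word-counting task is made up, purely for illustration): the same behaviour written the long way and the short way. Every line of the long version is something a reviewer has to read and someone has to keep correct later.

      ```python
      # Hypothetical example: the same task done two ways. The long way produces
      # far more code that has to be read, reviewed, and kept correct; the short
      # way leans on the standard library and leaves almost nothing to maintain.
      from collections import Counter

      TEXT = "the quick brown fox jumps over the lazy dog the fox"

      # The long way: every branch here is a place to be wrong later.
      def word_counts_verbose(text):
          counts = {}
          for word in text.split():
              word = word.lower()
              if word in counts:
                  counts[word] += 1
              else:
                  counts[word] = 1
          return counts

      # The short way: the same behaviour with almost no code of our own.
      def word_counts_short(text):
          return Counter(w.lower() for w in text.split())

      assert word_counts_verbose(TEXT) == dict(word_counts_short(TEXT))
      ```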

      A tool that reduces the effort needed to write large amounts of human-facing, gets-committed-to-the-source-tree code, so that churning it out is much easier and faster than finding the actual right way to parse your problem, is a tool that makes your project worse and makes you a worse programmer when you hold it.

      Maybe eventually someone will create a thinking machine that itself understands this, but it probably won’t be someone who charges by the token.

      • FreedomAdvocate@lemmy.net.au · 5 hours ago

        This is why Pull Requests and approvals exist though. If I am reviewing a PR and it takes 400 lines of code to do something that should be 25 lines, I’ll pick that up in my review, leave feedback, and send it back.

    • Apepollo11@lemmy.world · 1 day ago

      It’s just a greater level of abstraction. First we talked to the computers on their own terms with punch cards.

      Then Assembly came along to simplify the process, allowing humans to write readable code that gets assembled into Machine Code so the computers can run it.

      Then we used higher-level languages like C to create the Assembly Code required.

      Then we created languages like Python, which are even more human-readable and do a lot more of the heavy lifting than C.

      I understand the concern, but it’s just the latest step in a process that has been playing out since programming became a thing. At every step we give up some control, for the benefit of making our jobs easier.
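
      You can even watch one of those layers at work from Python itself. Here’s a small sketch (the function is an arbitrary example) using the standard-library dis module:

      ```python
      # One readable line of Python is translated into a stack of lower-level
      # bytecode instructions, which the interpreter (itself written in C) executes.
      # The function is an arbitrary example.
      import dis

      def average(numbers):
          return sum(numbers) / len(numbers)

      # Prints the bytecode instructions (exact opcodes vary by CPython version)
      # that the single 'return' line compiles down to.
      dis.dis(average)
      ```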

      • theparadox@lemmy.world · 1 day ago

        I disagree. Even high-level languages will consistently produce the same results. There may be low-level differences depending on the compiler and the system’s architecture, but if those are held constant you will get the same results.

        AI coding isn’t just an extremely human-readable, higher-level programming language. Using an LLM to generate code adds a literal black box to the equation, plus the interpretation of human language between the user and the LLM (which even humans can’t do consistently).
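
        To put it another way, the translation step in a compiler or interpreter is deterministic. A tiny Python sketch (the function being compiled is arbitrary):

        ```python
        # Hypothetical illustration: compiling the same source text twice yields
        # byte-for-byte identical bytecode (on a given CPython version and platform).
        # There is no such guarantee when the "source" is an English prompt to an LLM.
        SOURCE = "def double(x):\n    return x * 2\n"

        code_a = compile(SOURCE, "<example>", "exec")
        code_b = compile(SOURCE, "<example>", "exec")

        # Same input, same output, every time.
        assert code_a.co_code == code_b.co_code
        ```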

        • Apepollo11@lemmy.world · 15 hours ago (edited)

          That’s fair, but I’m not arguing that it’s a higher-level language. I was trying to illustrate that it’s just to help people code more easily - as all of the other steps were.

          If you asked ten programmers to turn a given set of instructions into code, you’d end up with ten different blocks of code. That’s the nature of turning English into code.

          The difference is that this is a tool that does it, not a person. You write things in English, it produces code.
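
          As a made-up Python example, take the instruction “return the largest even number in a list, or None if there isn’t one”. Both of these are faithful translations, and neither is wrong:

          ```python
          # Two different, equally valid translations of the same English instruction.
          # The instruction and both implementations are invented for illustration.

          def largest_even_loop(numbers):
              best = None
              for n in numbers:
                  if n % 2 == 0 and (best is None or n > best):
                      best = n
              return best

          def largest_even_builtin(numbers):
              evens = [n for n in numbers if n % 2 == 0]
              return max(evens) if evens else None

          sample = [3, 8, 7, 2, 10, 5]
          assert largest_even_loop(sample) == largest_even_builtin(sample) == 10
          ```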

          FWIW, I enjoy using a hex-editor to tinker around with Super Famicom ROMs in my free time - I’m certainly not anti-coding. As OP said, though, AI is now pretty good at generating working code - it’s daft not to use it as a tool.

          • theparadox@lemmy.world · 5 hours ago

            I don’t think it’s at the point where it helps people code more easily, but maybe I’m just exclusively hitting edge cases and turning to it for the wrong uses. I’ve only had failures: hallucinations that waste my time, and flawed algorithms.

            My favorite was a few weeks ago when I was having a rough day and needed a complicated algorithm to make a decision based on an inputted date. I told it that if I plug value A into its algorithm, the answer is wrong. It went step by step explaining its "reasoning" and arrived at the correct answer, but at the pivotal step it had plugged in a different year than the one in A, for just that step, and then it proceeded to confirm to itself that if you plug in A, you get the right answer.

            Maybe someday it will help, or maybe there are some problems it’s useful for; I’ve just never had that experience.