• Sarah@lemmy.world · 3 days ago

    “Under South Korea’s laws, companies must ensure there is human oversight in so-called ‘high-impact’ AI which includes fields like nuclear safety, the production of drinking water, transport, healthcare and financial uses such as credit evaluation and loan screening.”

    It’s crazy that everyone else hasn’t introduced similar laws already.

    I guess we will wait for something to go wrong first…

    • applebusch@lemmy.blahaj.zone · 3 days ago

      It’s incredible we don’t already have laws governing companies that try to offload accountability onto computers and software. In this sense an LLM isn’t special at all; it’s just another form of software. But because techbros have called it AI, there’s this implicit assumption that it can be responsible… A human is always responsible in the end, whether it’s the simplest chatbot, the most overbuilt wasteful chatbot ever conceived, a phone charger, or whatever.

    • Ziggurat@jlai.lu · 3 days ago

      “It’s crazy that everyone else hasn’t introduced similar laws already.”

      Typically, there is already fairly heavy regulation of software used for critical activities. You can’t just deploy new software to process medical images or to regulate the engine power of a plane.

      Did the law need to be updated for the era of AI? Most likely. However, I doubt you could have used an “AI”, whatever that is, without any paperwork.

    • saltesc@lemmy.world · 3 days ago

      I just kind of assumed people were doing that already… But then I see how Becky from HR uses AI and, yeah, we need regulations for even the most obvious things.