• CharlesDarwin@lemmy.world
    2 hours ago

    This is why I don’t think it should be a critical component, a crutch, or, worse, a stand-in for real human expertise; it should only act as another pair of eyes. Even grammar checkers and spelling checkers get things wrong, depending on the context.

    I use LLMs nearly every day at my job when programming, and holy shit, do they go wildly wrong so often: making up entire libraries, projects, etc.
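    A quick sanity check I'd sketch for that failure mode (the package name here is entirely made up, just to illustrate a hallucinated suggestion):

    ```python
    import importlib.util

    # Hypothetical package name an LLM might confidently invent;
    # check whether it actually resolves before trusting the suggestion.
    suggested = "magic_autoparse_v2"  # hallucinated, does not exist
    found = importlib.util.find_spec(suggested) is not None
    print("exists" if found else f"{suggested} is not installed or does not exist")
    ```

    It won't catch a real-but-wrong library, but it filters out the purely invented ones before you waste time on them.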

    Frankly, I find it a bit terrifying to have these anywhere in the medical pipeline if left unchecked by real human experts. As others have pointed out, humans can and do make terrible mistakes. In some critical industries, practices like checklists and having at least two people review every step go a long way toward eliminating these kinds of (human-caused) problems; I don’t know how widely the healthcare field applies this idea. I would want an LLM to be additive here, not a substitute: a third set of eyes, where the first two (or N, where N > 2) are human. But we know how capitalism works: rather than working to improve outcomes, they just want to lower costs. So I could see LLMs being used as a substitute for what would have been a second pair of human eyes, and I loathe that idea.