Current language models (misnamed as “AI”) are great at some things, but they fail at any real thinking.

Self-driving works decently in predictable environments, but anything outside those limits can make it literally crash and burn.

Public road transportation by individuals just has too many cases where real decision-making is required.

Despite all their resources, I think they’ve given up. All the brilliant engineers and scientists have given up, because they know what we’ve suspected for a long time.

  • Kushan@lemmy.world · 12 hours ago

    And every one of those decisions is more data they’ll feed the machine.

    There has to be a point where you’ve collected enough edge cases and bizarre situations that you have the data to train the LLM on how to deal with them.

    I doubt our dystopian future is completely human-less, but you could probably plot the number of human interventions required per thousand miles driven, and that graph probably trends downwards.

    Then you move to another city that has more edge cases and repeat the process. Google can afford the slow rollout, Tesla can’t.