Current language models (misleadingly branded as “AI”) are great at some things but incapable of any real thinking.
Self-driving works decently in predictable environments, but anything outside those limits can make it literally crash and burn.
Public road transportation by individuals just has too many cases where real decision-making is required.
Despite all their resources, I think they’ve given up. All the brilliant engineers and scientists have given up, because they know what we’ve suspected for a long time.


Waymo palms off the hard decisions to a human in the Philippines.
New innovations in slave labor
And yet they still can’t move the car during an emergency.
https://www.yahoo.com/news/articles/next-level-dystopian-waymo-robotaxi-150000054.html
The people in the Philippines can’t move the car, but Waymo’s event response team can move it at very slow speed to get it off the road, e.g. from a highway lane to the shoulder.
Last I heard, though, they claim to have never used it outside of testing.
Maybe the overseas driver was on lunch break.
But trans people in Kansas can’t drive… How TF is that even legal?
And every one of those decisions is more data they’ll feed the machine.
There has to be a point where you’ve collected enough edge cases and bizarre situations that you have the data to train the model on how to deal with them.
I doubt our dystopian future is completely human-less, but you could probably plot the number of human interventions required per thousand miles driven, and that graph almost certainly trends downward.
Then you move to another city with more edge cases and repeat the process. Google can afford the slow rollout; Tesla can’t.
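The back-of-envelope graph described above could be sketched like this. The intervention counts are entirely invented for illustration (no real Waymo figures are public at this granularity); the point is just that fitting a curve to per-mile intervention data would make the downward trend quantifiable:

```python
import numpy as np

# Hypothetical data: human interventions per 1,000 miles over successive
# quarters in one city. These numbers are made up purely to illustrate
# the kind of downward trend the comment describes.
quarters = np.arange(8)
interventions = np.array([40.0, 31.0, 24.0, 19.0, 15.0, 12.0, 9.5, 7.6])

# Fit an exponential decay by regressing log(interventions) on time.
slope, intercept = np.polyfit(quarters, np.log(interventions), 1)
decay_per_quarter = 1.0 - np.exp(slope)  # fractional drop each quarter

print(f"Roughly {decay_per_quarter:.0%} fewer interventions each quarter")
```

If the fitted slope stays negative as the fleet expands into harder cities, the "human-less" endpoint is at least plausible; if it plateaus, the remote operators are permanent.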