Adapted from the article "What Riding in a Self-Driving Tesla Tells Us About the Future of Autonomy," by Cade Metz, Ben Laffin, Hang Do Thi Duc and Ian Clontz (The New York Times, Nov. 14, 2022).
Cade and Ian spent six hours riding in a self-driving car in Jacksonville, Fla., to report this story.
Tesla’s technology can work remarkably well. It changes lanes on its own, recognizes green lights, and is able to make ordinary turns against oncoming traffic.
But every so often, it makes a mistake, forcing testers like Chuck Cook to intervene.
“That moment shows that the car can only know what it is trained to know,” Mr. Cook said of the sudden turn into the parking lot. “The world is a big place, and there are many corner cases that Tesla may not have trained it for.”
Experts say no system could possibly have the sophistication needed to handle every possible scenario on any road. This would require technology that mimics human reasoning — technology that we humans do not yet know how to build.
Such technology, called artificial general intelligence, “is still very, very far away,” said Andrew Clare, chief technology officer of the self-driving vehicle company Nuro. “It is not something you or I or our kids should be banking on to help them get around in cars.”
I like a lot of these sentences.
"It is not something you or I or our kids should be banking on to help them get around in cars" was one.
And the line "the car can only know what it is trained to know" makes me think this article applies to a lot more than cars.