Auto Autonomous, Part Two
Regarding autonomous vehicles, it seems to me the first question should be: Can they be safer than cars driven by humans? Like many of you, I think a great many people are poor drivers.
Most of the bad driving I see is simply people ignoring the rules or taking unnecessary risks. Following rules is the very essence of what computers do best. No doubt automated vehicles could do this not only better than humans; they could do it to perfection.
But what about risks? Most risks, like tailgating, can be reduced by following guidelines for safe driving. Again, for a computer this is just obeying the rules it is given. Yet this points to the essential problem of programming.
Can we think of all the possible situations the machine might encounter and supply it with instructions on how to respond? For example, a light rain on asphalt brings up oil slicks and makes the road very slippery.
This is further compounded by over-inflated or worn tires. That’s a lot of data requiring accurate sensors. Finally, the vehicle must weigh all the factors to determine the safest action.
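To make the difficulty concrete, here is a toy sketch of the rule-based approach described above: the vehicle reads sensor estimates and applies hand-written rules to choose a conservative speed. Every name and threshold here is hypothetical and purely illustrative; a real system would have to handle vastly more factors, and with far better-calibrated numbers.

```python
def safe_speed_factor(light_rain: bool, tire_wear: float, visibility_km: float) -> float:
    """Return a multiplier (0 to 1) applied to the normal speed limit.

    tire_wear is a hypothetical 0-1 wear estimate; thresholds are invented.
    """
    factor = 1.0
    if light_rain:
        factor *= 0.8       # fresh rain lifts oil from asphalt: slow down
    if tire_wear > 0.5:     # worn tires grip less, especially when wet
        factor *= 0.9
    if visibility_km < 1.0: # poor visibility compounds everything else
        factor *= 0.7
    return round(factor, 2)

# Light rain, worn tires, and fog together cut recommended speed in half.
print(safe_speed_factor(light_rain=True, tire_wear=0.6, visibility_km=0.5))
```

Even this trivial sketch shows the problem: each new condition multiplies the cases to anticipate, and the programmer must have thought of the combination in advance.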
The list of problematic situations is very long, from variations in snow and ice to degrees of visibility. The latter requires judgments as to how visible the vehicle itself is under different weather and lighting conditions. Is the sun in other drivers' eyes?
There are, however, even more challenging risks in being a driver, computer or human. I could describe them as decision making under extreme uncertainty. I would rather question the premise that computers make decisions in any way like humans.
Human decision making is always deeper than choosing a flavor of ice cream. All human decisions take into account—usually at a very deep non-conscious level—our survival. Choosing an ice cream could involve health (calories) and even relationships.
What comes naturally to us is precisely what is most difficult to program into a computer. AI ignores the concept of self, preferring to see intelligence as something abstract, i.e., beyond the need for a self.
A computer doesn’t know what risk means because nothing has meaning to it. How could it, without involvement, without caring? The machine has no skin in the game. If it fails disastrously, if it is destroyed, it couldn’t care less. Hell, it can’t care at all.
The driver of a car not only wants to avoid injury (and damage to the car) but also to protect any passengers, especially children. Without these concerns, how can autonomous vehicles be trusted to make decisions that might mean life or death?