Smart Cars Again
Two posts ago I wrote about self-parking cars. I didn’t mention the problem of painted curbs, i.e., which colors (like red) restrict parking in what way (like never). The self-parking car may know where the curb is, but can it know what its color means?
The last post was about self-driving cars. We don’t expect automated vehicles to read road signs, but the system should know what laws apply and control cars accordingly. (A similar system for self-parking cars would know about curbs.)
It’s reasonable to suppose that a nationwide system of automated cars includes cars for hire, from leased to rental to taxis. Could a person enter a cab (having flagged it with a smartphone) and say, “follow that car”?
Actually, that is possible—if the system has been programmed for a dialog to determine exactly which car is meant. Clearly, the capability of self-driving cars depends on the programming.
All programming is limited by two things. One is accuracy: does the program correctly perform what it was designed to do? The other is the programmer’s imagination: the set of possible actions anticipated for whatever the system encounters.
Accuracy also depends on how complete the system is. Will it include all rural roads? Gravel roads? Roads open only part of the year? Unlikely. Also unlikely is a complete system right out of the box—it will arrive in stages and always be changing.
Can the system control the car when it leaves the street into your driveway or a parking garage? The latter might be programmed, but the former is off the grid. It’s private and not subject to the laws underlying the automated system.
Now we see a larger problem: the transfer of control between system and driver. Can the system ensure reliable transfers? A hybrid human-computer system is difficult to program, especially for safety. Are any risks acceptable?