Cars Go Forth
A nationwide system of self-driving cars will never control all driving, for two reasons. First, the roadways keep changing, requiring never-ending system updates. Second, too many destinations (your garage) are not in the system.
Dealing with the first problem, changing roads, is straightforward, although decidedly not trivial. The system must track slowly changing rules of the road (speed limits and curbs) as well as rapidly changing immediate conditions (slippery when wet).
The real problem for an automated nationwide system of self-driving cars is cars entering and leaving it. Not only must the system know exactly what the driver wants, it must know when the driver changes his or her mind.
Let’s assume entering the system can be controlled; that is, if a car fails to enter, there are no untoward consequences for the system, at least. Outside the system is another matter: what about backups on the access ramps? Even so, the bigger problem is getting cars off the system.
Suppose a parking garage is full. The system asks the driver to choose an alternative. What happens if the driver does not respond? The car must act to keep traffic moving, so does it choose on the driver’s behalf? Or does the system treat silence as an emergency?
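The full-garage scenario above amounts to a small decision protocol: prefer the driver’s answer, fall back to a system choice when the driver stays silent, and escalate only when no fallback exists. A minimal sketch of that logic, assuming the system holds a ranked list of alternative destinations and a separate timeout mechanism that yields None when the driver never answers (the names Outcome and resolve_full_garage are hypothetical, not from any real vehicle API):

```python
from enum import Enum, auto

class Outcome(Enum):
    DRIVER_CHOICE = auto()   # driver picked an alternative in time
    SYSTEM_CHOICE = auto()   # no valid answer; system picked for the driver
    EMERGENCY_STOP = auto()  # no answer and nowhere to go

def resolve_full_garage(driver_response, alternatives):
    """Decide what to do when the destination garage is full.

    driver_response: the driver's pick, or None if the driver did not
    answer within the timeout window. alternatives: destinations ranked
    by the system (e.g., nearest first).
    """
    if driver_response in alternatives:
        return Outcome.DRIVER_CHOICE, driver_response
    if alternatives:
        # Silence or an invalid answer: keep traffic moving by taking
        # the top-ranked alternative rather than stopping in the roadway.
        return Outcome.SYSTEM_CHOICE, alternatives[0]
    return Outcome.EMERGENCY_STOP, None
```

For example, with alternatives ["Garage B", "Garage C"], a timely answer of "Garage C" yields a driver choice, while no answer yields a system choice of "Garage B". The design choice the essay questions is exactly the middle branch: whether silence should ever be allowed to mean "the system decides."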
Now we see the fundamental dilemma of a hybrid human-computer system: not only the problem of humans intervening (changing destinations) but of humans responding to the system (the car asks whether the driver wants to stop for gas).
Does the car know if the driver is paying attention? Or even conscious? You’ve seen messages flash on the screen too fast to read. If the car asks a question and waits for an answer, what does it do in the meantime? Will there be holding patterns?
Airlines solve complex human-computer interactions with highly trained pilots. Cars may be simpler, but the point of automated cars is easier driving. Can cars be smart enough for dumb drivers? More driver training isn’t an option—it won’t fly.