Up to now, computers have been blamed only for unintentional failures. Crashes have led the list of computer malfunctions, announced by the familiar phrase "the computer is down." But these, too, were unplanned, accidental events.
Now, after years of investigation in many countries, Volkswagen has finally come clean about its dirty diesels. More than merely admitting guilt, the company pointed its corporate finger at the actual culprit: the software did it.
Acknowledging that the behavior was not accidental but intentionally criminal, Volkswagen went public, throwing the offending software under the worldwide VW bus.
In one sense, the biggest aspect of this story was the further confirmation that even the largest companies feel compelled to cheat. Are they so insecure, or do they think they're too big to be caught?
The most surprising aspect of the story, for most people, is learning just how sophisticated software can be when you want it to cheat. These engines ran clean only when they were being tested for clean emissions.
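The core trick is simple to state: the software infers from sensor data that it is on a test stand and switches behavior accordingly. The sketch below is purely illustrative; the class, function names, and thresholds are invented for this example and are not VW's actual code.

```python
# Hypothetical sketch of defeat-device-style logic. All names and
# thresholds here are invented for illustration, not taken from any
# real engine controller.

from dataclasses import dataclass

@dataclass
class SensorReadings:
    steering_angle_deg: float   # dyno tests typically hold the wheel straight
    speed_kph: float            # test cycles follow a scripted speed profile
    ambient_pressure_kpa: float # labs sit near sea-level pressure

def looks_like_emissions_test(s: SensorReadings) -> bool:
    # A dynamometer run has telltale signatures: no steering input,
    # moderate scripted speeds, lab-like ambient pressure.
    return (abs(s.steering_angle_deg) < 0.5
            and 0 <= s.speed_kph <= 120
            and 95 <= s.ambient_pressure_kpa <= 105)

def nox_control_mode(s: SensorReadings) -> str:
    # Full emissions control only when the software believes
    # it is being watched; otherwise favor performance.
    if looks_like_emissions_test(s):
        return "full_emissions_control"
    return "performance_mode"
```

The point of the sketch is how little it takes: a handful of sensor comparisons is enough to tell a lab from a road, which is why the cheat ran reliably for years.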
The strangest aspect of the story is the question of what made them think they could get away with it. As long as testing was required, the software would cheat. Did they not think that eventually it would be found out?
Or did they think they were just buying time? That they could eventually find a fix—in hardware or software—that would produce clean engines? And then replace the cheating software?
The most disturbing aspect of the story is realizing that software is sophisticated beyond our imagination. What are the odds that such cheating can be detected, much less brought to justice?
The most obvious aspect of the story is the question of why regulators test in such a way that programs can cheat. Is there no method equivalent to the random drug test for humans? Which brings up another set of questions.
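The drug-test analogy suggests a countermeasure: if tests happen at unpredictable times, on unpredictable vehicles, under real road conditions, the "am I being tested?" question becomes unanswerable for the software. A minimal sketch, assuming a hypothetical regulator picking a random sample of a fleet for on-road measurement (all names here are invented):

```python
# Hypothetical sketch: randomized, on-road spot checks, analogous to
# a random drug test. Function and parameter names are invented.

import random

def schedule_spot_checks(fleet_ids, sample_rate, rng=None):
    """Pick an unpredictable subset of vehicles for real-world
    emissions measurement. `rng` may be seeded for reproducibility."""
    rng = rng or random.Random()
    k = max(1, int(len(fleet_ids) * sample_rate))
    return rng.sample(fleet_ids, k)
```

Because the selection and the driving conditions are random rather than scripted, there is no stable sensor signature for a defeat device to key on.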
When will we see nano-bots doing that drug testing? And then, how long before someone creates software to cheat on its programming? Which leaves the obvious final question: how do we test the testers and their tools, i.e., their testing software?