Digital Minefield

Why The Machines Are Winning

Oh, The Humanities!


In 1959, British scientist and novelist C. P. Snow gave a lecture titled “The Two Cultures.” He said British education was split into Science and the Humanities—and that the latter saw itself as superior.

His critique claimed the essentials of the Humanities were the central pillar of British education, and that most educated Brits regarded the essentials of Science as little more than incidental.

The lecture became a widely read book, then a widely discussed controversy, and even prompted a follow-up book. Yet in less than sixty years, it has not only been forgotten; the tables have completely turned.

Today not only is Science king; very few people (even outside of Science or Technology) see any value whatsoever in the Humanities. As I said in my post of July 20, 2015, “The nerds have won.”

However, having everything your own way is rarely the path to victory. I could just mention the name Midas, or point to the endless stories and fables meant to teach us a little wisdom.

I could give endless examples of how our technology could be improved by adding a human element; many related to programming appear in this blog. However, the bigger picture is the robots that will be built on the assumptions of artificial intelligence.

The intelligence sought by AI is abstract. AI scientists don’t see the distinct value of human intelligence. They think that somehow a machine can make decisions or solve problems without a concept of self, without consciousness—without empathy for humans.

Empathy is exactly what our current technology lacks. It can be learned directly from experience or indirectly from education. But it can only be learned, directly or indirectly, from humans.

Intelligence without empathy is merely data. How many times have you heard the phrase “thinking outside the box”? Einstein said, “The true sign of intelligence is not knowledge but imagination.” Using imagination is box-free thinking.

Wikipedia defines “[I]ntelligence … [as] logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving.” Yet, without imagination, all of these are useless.

Imagining how humans will respond is necessary for human-friendly technology. If we can apply our humanity, we can empathize with how people will feel using a technological product or device. We can, if our science is balanced by our humanity.

Auto Autonomous, Part Three


Everyone refers to them as autonomous vehicles. Everyone is wrong. Why? Very simply, they are not autonomous. They are no more autonomous than iRobot’s Roomba vacuum cleaner.

Not everyone has a Roomba, but everyone knows it’s not autonomous. It’s a robot. The company’s website describes it as such, never using the word “autonomous.” What’s the difference?

A robot, says the Merriam-Webster dictionary, works automatically. Another good word might be “automaton,” something that acts as if by its own power. That’s a long way from being truly autonomous.

Actually, in the dictionary, it’s just two definitions away. In between is “automotive.” Then we have “autonomous,” defined as having the power or right to govern itself. Self? What self? These so-called autonomous cars have no more self than a Roomba does.

To emphasize my point, the word “autonomous” comes from the Greek “autonomos,” meaning to have its own laws. Whose own? There’s no “who” here; it’s a machine. It’s a “what,” not a “who.”

Unlike that dramatic moment in the original Frankenstein film, no one will cry out, “It’s alive!” when the key is turned. The tissue, the hardware, will remain what it always was—dead.

Obviously, the solution has to be in the software. So, why does AI’s approach to intelligence not follow the only example we have, our own? Why does AI believe in a mythical “pure” intelligence, divorced from body, from emotion, from consciousness, from self?

An individual only becomes human (and intelligent) through the medium of other humans. However, AI prefers intelligence in isolation, as a philosophical ideal. No wonder they keep failing.

One thing for sure, saying these cars are autonomous makes them sound smarter than they really are. Do the promoters want to deceive themselves or us? Either way, they’re not that smart.

Since many really big companies are determined to roll out autonomous cars, I’m sure they will appear in many different forms. Where they’re likely to succeed is as taxicabs in cities.

I can see people using these regularly and still being unwilling to buy one. Unwilling or unable. While it may seem logical to the car makers that cars made by robots should be driven by robots, who’s left with a job to buy the cars?

Auto Autonomous, Part Two


Regarding autonomous vehicles, it seems to me the first question should be: Can they be safer than cars driven by humans? Like many of you, I think a lot of people are poor drivers.

Most of the bad driving I see is just people ignoring the rules or taking unnecessary risks. Following the rules is the very essence of what computers do best. No doubt automated vehicles could do this not only better than humans, they could do it to perfection.

But what about risks? Most risks, like tailgating, can be reduced by following guidelines for safe driving. Again, for a computer this is just obeying the rules given it. Yet, this is the essential problem of programming.

Can we think of all the possible situations the machine might encounter and supply it with instructions on how to respond? For example, a light rain on asphalt brings up oil slicks and makes the road very slippery.

This is further compounded by over-inflated or worn tires. That’s a lot of data requiring accurate sensors. Finally, the vehicle must weigh all the factors to determine the safest action.
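To make this concrete, here is a minimal, hypothetical sketch (in Python, with every factor, threshold, and number invented purely for illustration) of what “weighing all the factors” looks like as explicit rules. The point is not the arithmetic; it is that someone had to anticipate each factor in advance, and anything unanticipated is simply invisible to the machine.

```python
# Hypothetical sketch only: not any real vehicle's code.
# Each rule below exists because a programmer thought of it ahead of time.

def braking_risk(light_rain: bool, worn_tires: bool, tire_pressure_psi: float) -> float:
    """Toy risk score in [0, 1]; thresholds and weights are invented."""
    risk = 0.1                   # baseline: dry asphalt, good tires
    if light_rain:
        risk += 0.4              # fresh rain lifts the oil film, cutting traction
    if worn_tires:
        risk += 0.2              # less tread, less grip
    if tire_pressure_psi > 38:
        risk += 0.1              # over-inflation shrinks the contact patch
    return min(risk, 1.0)

def safest_following_gap(speed_mph: float, risk: float) -> float:
    """Widen the following gap (in seconds) as the risk score rises."""
    return 2.0 + 4.0 * risk + speed_mph / 60.0

if __name__ == "__main__":
    r = braking_risk(light_rain=True, worn_tires=True, tire_pressure_psi=40.0)
    print(f"risk={r:.2f}, gap={safest_following_gap(65.0, r):.1f}s")
    # The hard part isn't this arithmetic; it's enumerating every situation
    # and sensing each factor accurately enough to feed rules like these.
```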

The list of problematic situations is very long, from variations in snow and ice to degrees of visibility. The latter requires judgments as to how visible this vehicle is under different weather and lighting conditions. Is the sun in other drivers’ eyes?

There are, however, even more challenging risks in being a driver, computer or human. I could describe them as decision making under extreme uncertainty. I would rather question the premise that computers make decisions in any way like humans.

Human decision making is always deeper than choosing a flavor of ice cream. All human decisions take into account—usually at a very deep non-conscious level—our survival. Choosing an ice cream could involve health (calories) and even relationships.

What comes naturally to us is precisely what’s most difficult to program into a computer. AI ignores the concept of self, preferring to see intelligence as something abstract, i.e., beyond the need of a self.

A computer doesn’t know what risk means because nothing has meaning. How could it without involvement, without caring? The machine has no skin in the game. If it fails disastrously, is destroyed, it couldn’t care less. Hell, it can’t care at all.

The driver of a car not only wants to avoid injury (and damage to the car) but also to protect any passengers, especially children. Without these concerns, how can autonomous vehicles be trusted to make decisions that might mean life or death?

Wither AI?


On the spectrum of smarts, what the pretenders to AI aspire to is cleverness. Why? Because it’s what they know, what they do, and therefore what they value (and think other people should also).

What they dismiss (because they can’t do it, and therefore don’t value it) are common sense and wisdom. The former is far more valuable than cleverness in our daily lives, and the latter invaluable for our future—as individuals and as a species.

Another reason common sense and wisdom are not valued is because they can’t be measured like IQ. To speak of smarts can only mean IQ—which is mere cleverness. We’d be better off with common sense or wisdom, both harder to attain.

The advocates (and acolytes) of AI not only think super-cleverness will solve our all-too-human problems, they think it can solve them without our supervision. Not only wrong, but stupid.

By way of proof, I offer one man: John von Neumann (1903-1957). JVN excelled in at least four areas: physics, mathematics, computing, and economics. In any one of these, his work not only achieved fame but proved him to be a unique genius.

As to which of the four was his greatest contribution, it’s hard to say but right now computing may be in the lead. Of course, that’s placing it above the development of the atomic bomb.

One of his lesser-known books is Theory of Self-Reproducing Automata. Machines making machines. It’s said its ideas anticipated the concepts behind DNA (maybe that’s his most influential work).

However, none of these are why I invoke him. Combine what JVN knew about computers (and their future), the brain (and AI), decision-making (Game Theory), and self-reproducing automata, and you’ll envision a dystopia worse than the Terminator’s.

Yet, he didn’t. Combine them, that is. He never saw AI as making decisions for us. Not at all. Here’s what he thought:

“. . . the best we can do is to divide all processes into those things which can be done better by machines and those which can be done better by humans and then invent methods to pursue the two.”

This, I submit, goes far beyond smart. More than clever, it’s actual wisdom. And I have to ask, why have we ignored it all this time? Why do we still listen to the pie-in-the-singularity-sky prophets?

Oh yeah, JVN also coined the term “singularity.” In his short life, he knew more than all these so-called smart guys combined. If we look to them for answers, then it is we who are unwise.
