Digital Minefield

Why The Machines Are Winning


Oh, The Humanities!


In 1959, British scientist and novelist C. P. Snow gave a lecture titled “The Two Cultures.” He said British education was split into Science and the Humanities—and that the latter saw itself as superior.

His critique claimed the essentials of the Humanities were the central pillar of British education, and that most educated Brits thought of the essentials of Science as little more than incidental.

The lecture became a widely read book, then a widely discussed controversy, and eventually a follow-up book. Less than sixty years later, it is not only forgotten; the tables have completely turned.

Today, not only is Science king; very few people (even outside Science or Technology) see any value whatsoever in the Humanities. As I said in my post of July 20, 2015, “The nerds have won.”

However, having everything your own way rarely turns out to be a victory. I could just mention the name Midas, or point to the endless stories and fables meant to teach us a little wisdom.

I could give endless examples of how our technology could be improved by adding a human element; there are many related to programming in this blog. However, the big picture is the robots that will be built on the assumptions of artificial intelligence.

The intelligence sought by AI is abstract. AI scientists don’t see the distinct value of human intelligence. They think that somehow a machine can make decisions or solve problems without a concept of self, without consciousness—without empathy for humans.

Empathy is exactly what our current technology lacks. It can be learned directly from experience or indirectly from education. But it can only be learned, directly or indirectly, from humans.

Intelligence without empathy is merely data. How many times have you heard the phrase “thinking outside the box”? Einstein said, “The true sign of intelligence is not knowledge but imagination.” Using imagination is box-free thinking.

Wikipedia defines “[I]ntelligence … [as] logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving.” Yet, without imagination, all of these are useless.

Imagining how humans will respond is necessary for human-friendly technology. If we apply our humanity, we can empathize with how people will feel using a technological product or device. We can, if our science is balanced by our humanity.

Robo-Management


Once upon a time (preemptive pun), there was a genius named Frederick Winslow Taylor. Equipped with a clipboard and stopwatch, he revolutionized office and manufacturing procedures in the early part of the last century. (The Principles of Scientific Management, 1911.)

I learned this as a young teen by reading Cheaper By The Dozen. The book was about applying time-study methods to life with the twelve children of the husband and wife efficiency team of Frank and Lillian Gilbreth. (The 1950 movie starred Clifton Webb; the remake in 2003 starred Steve Martin.)

Fifteen years later, I learned of another goal, effectiveness, from the top management guru, Peter Drucker. Taylor preached efficiency, but effectiveness was more important. Yet, many organizations prefer efficiency over effectiveness.

In Taylor’s day, efficiency was symbolized by the stopwatch. Today’s efficiency is a quantity that can be measured more accurately by computers. Effectiveness is a quality determined by humans making value judgments.

Efficiency is easy to measure; it’s what is happening now. It’s harder to measure tomorrow’s consequences of today’s actions. Effectiveness is about judging consequences. It requires humans to make those judgments. Efficiency can be reduced to numbers churned out by computers.

Computer numbers are easy to acquire, super-fast to calculate, and can be analyzed a million different ways. The human judgments necessary for effectiveness are hard to acquire, slow to evaluate, and difficult to analyze.
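To make the contrast concrete, here is a toy sketch (entirely illustrative; the names and numbers are invented): efficiency reduces to arithmetic any computer can do, while effectiveness ends in a question only a person can answer.

```python
# Illustrative only: "efficiency" is a quantity a machine computes;
# "effectiveness" is a quality a machine can only ask a human about.
from datetime import datetime

def efficiency(items_done: int, start: datetime, end: datetime) -> float:
    """Items processed per hour -- trivially computable, endlessly analyzable."""
    hours = (end - start).total_seconds() / 3600
    return items_done / hours

def effectiveness(outcome_description: str) -> str:
    """A value judgment: the best the code can do is defer to a person."""
    return input(f"Was this outcome worth it? {outcome_description} (y/n): ")

rate = efficiency(1200, datetime(2016, 5, 2, 9), datetime(2016, 5, 2, 17))
print(f"{rate:.0f} items/hour")  # the stopwatch, automated
```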

In discussions of the woes of modern workers, two companies stand out as manipulative in the extreme: Walmart and Amazon. Their success is built on the diminution of human margins.

Is it any wonder that companies like these are using the computer as a modern stopwatch? In the name of efficiency, they’re pushing their workers to act like machines. To what end?

Using Taylor’s Scientific Management, companies are reshaping human jobs to better fit the robot workers of tomorrow. You could say the jobs are being tailored to suit the robots. (Begin with a pun; end with a pun.)

Auto Autonomous, Part Three


Everyone refers to them as autonomous vehicles. Everyone is wrong. Why? Very simply, they are not autonomous. They are no more autonomous than iRobot’s Roomba vacuum cleaner.

Not everyone has a Roomba, but everyone knows it’s not autonomous. It’s a robot. The company’s website describes it as such, never using the word “autonomous.” What’s the difference?

A robot, says the Merriam-Webster dictionary, works automatically. Another good word might be “automaton,” something that acts as if by its own power. That’s a long way from being truly autonomous.

Actually, in the dictionary, it’s just two definitions away. In between is “automotive.” Then we have “autonomous,” defined as having the power or right to govern itself. Self? What self? These so-called autonomous cars have no more self than a Roomba does.

To emphasize my point, the word “autonomous” comes from the Greek “autonomos,” meaning having its own laws. Whose own? There’s no “who” here; it’s a machine. It’s a “what,” not a “who.”

Unlike that dramatic moment in the original Frankenstein film, no one will cry out, “It’s alive!” when the key is turned. The tissue, the hardware, will remain what it always was—dead.

Obviously, the solution has to be in the software. So, why does AI’s approach to intelligence not follow the only example we have, our own? Why does AI believe in a mythical “pure” intelligence, divorced from body, from emotion, from consciousness, from self?

An individual only becomes human (and intelligent) through the medium of other humans. However, AI prefers intelligence in isolation, as a philosophical ideal. No wonder they keep failing.

One thing is for sure: saying these cars are autonomous makes them sound smarter than they really are. Do the promoters want to deceive themselves or us? Either way, they’re not that smart.

Since many really big companies are determined to roll out autonomous cars, I’m sure they will appear in many different forms. Where they’re likely to succeed is as taxicabs in cities.

I can see people using these regularly and still being unwilling to buy one. Unwilling or unable. While it may seem logical to the car makers that cars made by robots should be driven by robots, who’s left with a job to buy the cars?

Auto Autonomous, Part Two


Regarding autonomous vehicles, it seems to me the first question should be: Can they be safer than cars driven by humans? Like many of you, I think a great many people are poor drivers.

Most of the bad driving I see is just people ignoring the rules or taking unnecessary risks. Following the rules is the very essence of what computers do best. No doubt automated vehicles could do this not only better than humans; they could do it to perfection.

But what about risks? Most risks, like tailgating, can be reduced by following guidelines for safe driving. Again, for a computer this is just obeying the rules given it. Yet, this is the essential problem of programming.

Can we think of all the possible situations the machine might encounter and supply it with instructions on how to respond? For example, a light rain on asphalt brings up oil slicks and makes the road very slippery.

This is further compounded by over-inflated or worn tires. That’s a lot of data requiring accurate sensors. Finally, the vehicle must weigh all the factors to determine the safest action.
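To see why, here is a minimal sketch of how a programmer might pre-encode just this one wet-road situation. Everything in it—the function name, the thresholds, the multipliers—is invented for illustration, which is exactly the problem: someone has to invent a rule like this for every situation the car might meet.

```python
# Hypothetical rules with invented thresholds: encoding the wet-asphalt
# scenario described above into explicit, pre-written instructions.
def safe_speed_factor(rain_mm_per_hr: float,
                      tire_tread_mm: float,
                      tire_pressure_psi: float) -> float:
    """Return a multiplier (0..1) to apply to the posted speed limit."""
    factor = 1.0
    if 0 < rain_mm_per_hr < 2.5:      # light rain lifts oil off the asphalt
        factor *= 0.7
    elif rain_mm_per_hr >= 2.5:       # heavy rain: standing water
        factor *= 0.6
    if tire_tread_mm < 3.0:           # worn tires grip less when wet
        factor *= 0.8
    if tire_pressure_psi > 38:        # over-inflation shrinks the contact patch
        factor *= 0.9
    return factor

# Light rain plus worn, over-inflated tires: slow to about half the limit.
print(safe_speed_factor(1.0, 2.5, 40))  # 0.504
```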

The list of problematic situations is very long, from variations in snow and ice to degrees of visibility. The latter requires judgments as to how visible this vehicle is under different weather and lighting conditions. Is the sun in other drivers’ eyes?

There are, however, even more challenging risks in being a driver, computer or human. I could describe them as decision making under extreme uncertainty. I would rather question the premise that computers make decisions in any way like humans.

Human decision making is always deeper than choosing a flavor of ice cream. All human decisions take into account—usually at a very deep non-conscious level—our survival. Choosing an ice cream could involve health (calories) and even relationships.

What comes naturally to us is precisely what’s most difficult to program into a computer. AI ignores the concept of self, preferring to see intelligence as something abstract, i.e., beyond the need of a self.

A computer doesn’t know what risk means because nothing has meaning. How could it without involvement, without caring? The machine has no skin in the game. If it fails disastrously, is destroyed, it couldn’t care less. Hell, it can’t care at all.

The driver of a car not only wants to avoid injury (and damage to the car) but also to protect any passengers, especially children. Without these concerns, how can autonomous vehicles be trusted to make decisions that might mean life or death?

Auto Autonomous, Part One

Strange week. All kinds of items related to autonomous machines appeared from many different sources. Some were cars, some were trucks, and some were even weapons. Along with stories about super-intelligent computers, it was a chilling week.

First was a tiny link in my AAA magazine about the history of autonomous vehicles. For example, “1939: GM’s World’s Fair exhibit predicts driverless cars will be traveling along automated highways by 1960.”

The link also had this entry: “2035: By this date, experts predict 75 percent of cars on roadways will be autonomous.” Nearby in the magazine was an article on the latest muscle car. I wonder how those will get along with autonomous cars.

On PBS this week, I learned about autonomous trucks and weapons (two separate stories). Driverless semis are scary enough, without thinking about weapons deciding who’s a target.

I apologize if this is too much information, but I have more. In a word: taxicabs. Autonomous vehicles that will pick you up and deliver you to your destination. Didn’t we see that in the first Total Recall movie? After hearing about trucks and weapons, sounds very reasonable, doesn’t it?

What’s not reasonable is the talk about super-intelligent machines. It’s not coming from the people who want you to be passive passengers. No, it’s coming from those who can’t wait to worship the machine.

This attitude is rarely found among those studying artificial intelligence (AI) or those who are working to implement it. Rather, it comes from philosophers, pundits, and self-proclaimed futurists who know a little about AI and less about computers.

Led by Ray Kurzweil of Singularity fame, these predictions are based on a single insight known as Moore’s Law. It says the number of transistors on a chip (integrated circuit) doubles every two years. Ray et al. claim this means computers are becoming exponentially more powerful.

They fail to comprehend that the Law applies only to the hardware side of computers. Software is another kettle of badly-cooked fish. No one is foolish enough to suggest software is similarly improving.
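The arithmetic behind the “exponential” claim is simple enough to show. The figures below are the usual illustrative ones (the Intel 4004 of 1971 is commonly credited with about 2,300 transistors):

```python
# Back-of-the-envelope Moore's Law: transistor count doubles every two years.
def transistors(start_count: int, years: int, doubling_period: float = 2.0) -> float:
    return start_count * 2 ** (years / doubling_period)

# From roughly 2,300 transistors (Intel 4004, 1971) over 44 years:
print(f"{transistors(2_300, 44):,.0f}")  # ~9.6 billion -- hardware only
# There is no comparable formula for software quality, which is the point.
```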

Don’t take my word for the state of AI. Listen to an actual AI expert. Here’s the TED talk by Dr. Fei-Fei Li, Director of the Stanford Artificial Intelligence Lab.

Smart Streets?


Last week’s post asked how smart the automated cars being hailed as saviors of our highways really are. I asked many questions, all presuming these cars were autonomous—because that’s how they’re being promoted.

Well, they’re not. Basically, they’re mobile computers and no computer these days is independent of the Internet, or if you prefer, The Cloud. Even your stationary desktop computer gets constant updates from its various hardware and software makers.

Any automated car will be no different and therein lies a whole new set of questions. To what degree are they independent and to what degree are they connected to (controlled by) The Cloud?

Aside from the usual updates for its hardware and software, an automated car needs current information about the streets it’s navigating, not to mention its destination. (Hence the title.)

These cars need The Cloud for updates about traffic, road conditions, and even the roads themselves. It might be possible to load all the roads into the car’s computer, but is it likely?

Point being, there are continual updates to the whole system of roads, but only rarely to your localized region of actual driving. Updating a car with information on all roads is wasteful, and it could be dangerous.

Which data are updated, and how, will determine the dependency of vehicles on The Cloud and therefore the Internet. If connections go down—even for a minute—it doesn’t mean one car is on its own. Rather, all cars in that vicinity using the same connection will be left on their own. This gives us new questions.

Can these automated vehicles be sufficiently autonomous if they lose their Internet connection? Think fail-safe. And don’t assume that simply stopping (or even pulling over to the side of the road) will always be the right option.
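As a thought experiment, here is what a fail-safe might look like in outline. The modes and thresholds are hypothetical—no vendor’s actual design—but they show that “lost connection” needs more than one answer:

```python
# Hypothetical fail-safe logic (states and thresholds invented for
# illustration): what should a car do while its Cloud connection is gone?
from enum import Enum, auto

class Mode(Enum):
    CONNECTED = auto()      # live traffic, road, and map updates
    DEGRADED = auto()       # cached maps, reduced speed, no reroutes
    MINIMAL_RISK = auto()   # find a safe stop -- not always the shoulder

def next_mode(connected: bool, cache_age_minutes: float) -> Mode:
    if connected:
        return Mode.CONNECTED
    if cache_age_minutes < 30:    # map data recent enough to keep moving
        return Mode.DEGRADED
    return Mode.MINIMAL_RISK      # data too stale to trust

print(next_mode(connected=False, cache_age_minutes=5))  # Mode.DEGRADED
```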

The makers who propose these vehicles are big on showing us how these cars avoid obstacles. But the real value of automated cars is controlled traffic flow. That takes coordination, which raises a new set of questions.

There’s the problem of autos from different manufacturers. Or will the government step in and choose a single supplier, or at the very least a single computer system to be used by all?

If there are different manufacturers, will they use the same data? Supplied by whom? (Is all this just a power play by Google?) If they do use the same data, will they all update at the same time?

The more I look at this, the more questions I have. My biggest question is: Are the people selling this concept and those who will have to approve it asking the same questions?

Street Smarts?


What is smart? Does the automated car they tell us is almost here qualify as smart? It’s pretty smart if it can steer itself and avoid obstacles. It’s very smart if it can recognize lane markings and traffic lights. How about reading street signs?

We know cars are smart enough to park themselves. What about NYC’s famed alternate side of the street parking? How smart does this car have to be for you to trust it with your life? The lives of your loved ones?

Living creatures are smart because they adapt to changes in their circumstances, e.g., the three-legged dog. Computers (and other machines) cannot. They are limited to their programming.

Can cars be programmed to be better drivers than humans? Not better than any human, but they can be programmed to be better than the worst human drivers. For example, they will never be distracted.

So far, I’ve been asking questions about the skill of automated cars versus humans. Skills can be programmed. The real question we should be asking is not about skills but judgment.

Can automated cars make decisions as well as humans? Can the designers of these vehicles anticipate every possible situation the car might encounter? What about life or death decisions?

I’m not saying humans don’t make mistakes. Tens of thousands of drivers still choose to drive impaired. Even more can’t ignore phone calls or texts. And texting is eight times more dangerous than driving drunk.

Automated cars won’t make those mistakes. The problem is, until we have years of experience and millions of miles with these cars, we won’t know the mistakes they might make.

Like drivers, programmers are not perfect. Unlike drivers, programmers can’t react to situations. They must anticipate them, instructing the machine accordingly. Can they foresee everything?

We encounter faulty programming every day on our devices. (If you don’t, you’re not paying attention.) Programming a car to move safely in traffic is far more difficult than programming a stationary device.

Learning to drive doesn’t end with getting a license. Experience is what tells you someone will turn even if they don’t signal. Or that they won’t turn if a signal’s been on for blocks. How much experience will the programmers have?

The Humanoids Are Coming


There are three major needs for humanoid robots. In order of likely implementation, they are companionship, representation, and embodiment. The first may seem obvious, but the other two require considerable elaboration.

Companionship (and beyond) is already being marketed for robots that physically resemble humans. However, a companion that too closely resembles a human could create legal complications, e.g., marriage.

The owner of a companion robot wants to experience the human resemblance. To anyone else, the companion must be perceived as a humanoid robot. What will be the technological solution?

Humanoid robots as representatives are different. These do not simply function as servants, but rather as agents for their owners. Again, such devices are already on the market, e.g., the Double Telepresence Robot.

While far from humanoid (it is little more than wheels and a vertical post carrying an iPad), the Double demonstrates the minimum necessary package to function as humanoid. Although limited to what the iPad can do, the Double can take your place at meetings, conferences, and similar gatherings.

The third category is far more problematic, both in implementation and actual potential. Embodiment means the humanoid robot embodies a person’s downloaded consciousness.

The uploading of consciousness may be a goal for some, but the methods to achieve it are still too vague to be assigned a probability. However, if it could be accomplished, why not an occasional downloaded embodiment?

But how close to human form does it need to be? The embodied consciousness may want an exact duplicate of his or her former body. What is its legal status? Is it robot or artificial human? Does it have the rights of the embodied?

We are more comfortable attributing human characteristics to non-humans than dealing with things that may or may not be human. Both psychologically and legally, we need to know what’s human—and what’s not.

If we can’t prevent the robot makers from making robots that pass for human, why not pass laws requiring every robot to have a transponder (like aircraft) that identifies it as a robot? Of course, we’ll need an app to detect them.
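No such transponder standard exists, but a sketch shows how little technology the idea would require. The hard parts are legal (mandating it) and cryptographic (signing the beacon so it can’t be faked), both omitted here; the port number and message format are invented:

```python
# Invented protocol, for illustration only: a robot broadcasts an
# "I am a robot" beacon on the local network; a phone app could listen.
import json
import socket

BEACON_PORT = 4242  # hypothetical port, not a real standard

def broadcast_identity(robot_id: str) -> None:
    msg = json.dumps({"type": "robot", "id": robot_id}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", BEACON_PORT))

broadcast_identity("companion-unit-7")
```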

Why Humanoid Robots?


When robots appear as personal servants, what form (or forms) will they take? We already have many robotic servants—but not in human form. Eventually, however, human-like robots are inevitable.

Some robotic tasks are easily acceptable when performed by a non-humanoid robot, e.g., the Roomba. (It makes much more sense than a humanoid maid pushing a vacuum cleaner.)

Similarly, a robot lawnmower need not be humanoid, but a robot dog walker might. The future form of robotic servants will be mixed according to buyers’ preferences. Servants (and by extension, slaves) will be more easily commanded if they have human form.

The word anthropomorphize has been in use some 170 years, says the Merriam-Webster website. It also says the word means “to attribute human form or personality to things not human.” The practice predates the Greeks, who probably used a different word.

Anthropomorphize is what we do, what we’ve always done, without any need to give it a label. It’s part of our nature to attribute aspects of that nature to non-human objects and beings.

Not only do we grant them life and will, we give them personalities. We go so far as to attribute sexual orientation to many objects, e.g., ships are female. Early autos were called stubborn.

Beyond our proclivity to anthropomorphize, as Freud elaborated, we project our feelings, beliefs, and assumptions onto others. Unlike anthropomorphism, projection is subtler and more common.

In the mid-sixties, a computer program named ELIZA was written by Joseph Weizenbaum to study natural language processing. It simulated the basic responses of a therapist.

The degree to which people, even people who knew it was a computer, immersed themselves in this interaction was astounding, and more than a little disturbing to its author.

In this very crude imitation of a therapy session, people not only projected a therapist’s insight onto the program, they told it incredibly personal details of their deepest secrets.
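The trick was remarkably shallow. Here is a toy imitation in the spirit of ELIZA—not Weizenbaum’s actual script—that matches a keyword, reflects pronouns, and fills a canned template:

```python
# A toy ELIZA-style responder: pattern match, pronoun reflection, template.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}
RULES = [
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(line: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # the catch-all that kept sessions moving

print(respond("I feel my work is ignored"))
# -> Why do you feel your work is ignored?
```

There is no understanding anywhere in it, yet people poured out their secrets to something no deeper than this.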

If we are comfortable treating objects as though they were human, why not give them human form? The real question is, how human? Can we risk robots capable of impersonating humans?

Wither AI?


On the spectrum of smarts, what the pretenders to AI aspire to is cleverness. Why? Because it’s what they know, what they do, and therefore what they value (and think other people should also).

What they dismiss (because they can’t do it, and therefore don’t value it) are common sense and wisdom. The former is far more valuable than cleverness in our daily lives, and the latter invaluable for our future—as individuals and as a species.

Another reason common sense and wisdom are not valued is that they can’t be measured like IQ. To speak of smarts can only mean IQ—which is mere cleverness. We’d be better off with common sense or wisdom, both harder to attain.

The advocates (and acolytes) of AI not only think super-clever will solve our all-too-human problems, they think it can solve them without our supervision. Not only wrong, but stupid.

By way of proof, I offer one man: John von Neumann (1903–1957). JVN excelled in at least four areas: physics, mathematics, computing, and economics. In any one of these, his work not only achieved fame but proved him a unique genius.

As to which of the four was his greatest contribution, it’s hard to say but right now computing may be in the lead. Of course, that’s placing it above the development of the atomic bomb.

One of his lesser-known books is Theory of Self-Reproducing Automata. Machines making machines. It’s said its ideas anticipated the replication mechanism of DNA (maybe that’s his most influential work).

However, none of these are why I invoke him. Combine what JVN knew about computers (and their future), the brain (and AI), decision-making (Game Theory), and self-reproducing automata, and you’ll envision a dystopia worse than the Terminator’s.

Yet, he didn’t. Combine them, that is. He never saw AI as making decisions for us. Not at all. Here’s what he thought:

“. . . the best we can do is to divide all processes into those things which can be done better by machines and those which can be done better by humans and then invent methods to pursue the two.”

This, I submit, goes far beyond smart. More than clever, it’s actual wisdom. And I have to ask, why have we ignored it all this time? Why do we still listen to the pie-in-the-singularity-sky prophets?

Oh yeah, JVN also coined the term “singularity.” In his short life, he knew more than all these so-called smart guys combined. If we look to them for answers, then it is we who are unwise.
