Digital Minefield

Why The Machines Are Winning

Oh, The Humanities!


In 1959, British scientist and novelist C. P. Snow gave a lecture titled “The Two Cultures.” He said British education was split into Science and the Humanities—and that the latter saw itself as superior.

His critique claimed the essentials of the Humanities were the central pillar of British education, and that most educated Brits regarded the essentials of Science as little more than incidental.

The lecture became a widely read book, then a widely discussed controversy, and eventually a follow-up book. Less than sixty years later, not only is it forgotten, but the tables have completely turned.

Today not only is Science king; very few people (even outside of Science or Technology) see any value whatsoever in the Humanities. As I said in my post of July 20, 2015, “The nerds have won.”

However, having everything your own way is rarely the path to victory. I could just mention the name Midas, or point to the endless stories and fables meant to teach us a little wisdom.

I could give endless examples of how our technology could be improved by adding a human element. There are many related to programming in this blog. However, the big picture is the robots that will be built on the assumptions of artificial intelligence.

The intelligence sought by AI is abstract. AI scientists don’t see the distinct value of human intelligence. They think that somehow a machine can make decisions or solve problems without a concept of self, without consciousness—without empathy for humans.

Empathy is exactly what our current technology lacks. It can be learned directly from experience or indirectly from education. But it can only be learned, directly or indirectly, from humans.

Intelligence without empathy is merely data. How many times have you heard the phrase “thinking outside the box”? Einstein said, “The true sign of intelligence is not knowledge but imagination.” Using imagination is box-free thinking.

Wikipedia defines “[I]ntelligence … [as] logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving.” Yet, without imagination, all of these are useless.

Imagining how humans will respond is necessary for human-friendly technology. If we can apply our humanity, we can empathize with how people will feel using a technological product or device. We can, if our science is balanced by our humanity.

Robo-Management


Once upon a time (preemptive pun), there was a genius named Frederick Winslow Taylor. Equipped with a clipboard and stopwatch, he revolutionized office and manufacturing procedures in the early part of the last century. (The Principles of Scientific Management, 1911.)

I learned this as a young teen by reading Cheaper By The Dozen. The book was about applying time-study methods to life with the twelve children of the husband and wife efficiency team of Frank and Lillian Gilbreth. (The 1950 movie starred Clifton Webb; the remake in 2003 starred Steve Martin.)

Fifteen years later, I learned of another goal, effectiveness, from the top management guru, Peter Drucker. Taylor preached efficiency, but effectiveness was more important. Yet, many organizations prefer efficiency over effectiveness.

In Taylor’s day, efficiency was symbolized by the stopwatch. Today’s efficiency is a quantity that can be measured more accurately by computers. Effectiveness is a quality determined by humans making value judgments.

Efficiency is easy to measure; it’s what is happening now. It’s harder to measure tomorrow’s consequences of today’s actions. Effectiveness is about judging consequences. It requires humans to make those judgments. Efficiency can be reduced to numbers churned out by computers.

Computer numbers are easy to acquire, super-fast to calculate, and can be analyzed a million different ways. The human judgments necessary for effectiveness are hard to acquire, slow to evaluate, and difficult to analyze.

Any discussion of the woes of modern workers must include two companies that are manipulative in the extreme: Walmart and Amazon. Their success is built on the diminution of human margins.

Is it any wonder that companies like these are using the computer as a modern stopwatch? In the name of efficiency, they’re pushing their workers to act like machines. To what end?

Using Taylor’s Scientific Management, companies are reshaping human jobs to better fit the robot workers of tomorrow. You could say the jobs are being tailored to suit the robots. (Begin with a pun; end with a pun.)

Social Media’s Biggest Lie


This time it was the news that made the news. This time, instead of hearing about the killer’s social media from an investigation, we heard it in real time from the killer himself. He made social media the very essence of his crime.

I wondered how a psychopath had a social media network. Then it all came back to me. News reports of senseless killings over many decades. And how, until now in the age of social media, all those killers were described as “loners.”

Maybe it began with Columbine (April 20, 1999), although the influence of social media wasn’t as obvious there: it was a shared psychosis, seen as an extreme folie à deux. Maybe, but they were loners.

However, since Columbine, extensive use of social media has been the common element the news has given us in lieu of the more cryptic term, loner. Yet, for all this data, we have learned nothing about how these disturbed people became out-and-out psychopaths.

Instead, we are left with a pile of meaningless social media connections. As though there was some understanding of the actions of these psychopaths that could be gained by exploring their social media movements.

Far too many people seem unaware that we become human only through interaction with other humans. This interaction is not only what makes us human, it’s what keeps us human.

It would also seem that most people are unable to distinguish the unreality of social media’s virtual interactions from actual face-to-face, one-on-one human interaction. The news media acts as though social media gives loners real connections.

What nonsense! It’s their actions, not their social media connections, that identify people as loners. It’s their lack of real human interactions that labels them. But what is real for such disturbed people?

They each have their own reality. The rest of the world calls it virtual but that has no effect since the disturbed think it’s real—just as they believe their grievances justify the use of weapons.

Social media is “… the illusion of companionship without the demands of friendship.” —Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other.

Disturbed people without real friends are as likely to harm themselves as others. Using social media to deceive ourselves into thinking virtual contact is actual human contact will end in disaster.

We could avoid some future disasters by removing the possibility of interrupting live broadcasts. The seven-second (or profanity) delay has been available for decades.

The NRA’s big lie is that anyone can own a gun without any need for proper training. That is the same as saying any idiot can use one, which turns out to be true. Using it correctly is another story.

Social media’s biggest lie is that virtual friends can help with real problems. Guns don’t solve personal problems, people do. That is, real people, not virtual people.

Technology’s Fatal Flaw


Two ideas came to me last week and I struggled with them until I realized they were both the same idea, just expressed differently. For this post in Digital Minefield, it is expressed as “Technology’s Fatal Flaw.” For my post in Pelf and Weal, it is called “Gambler’s Paradise.”

Daily in the news we hear of technological failures. Yet the problems are attributed to separate, specific sources, like drones and abandoned mines. No one sees the risk-taking of technology as the common element.

It’s easier to blame technology, that is, new technology, for society’s inability to control drones. It’s not so easy to see that exactly the same moral approach has led to a quarter of a million abandoned mines here in the US.

Where do we draw the line between scientific experimentation and technological innovation? In the eighteenth century, chemists rushed to discover new elements. Often they did so by smelling the results of chemical reactions. It killed some of them.

Many, however, got rich. In England, the best were made peers of the realm. Most were not simply chemists but also inventors, lecturers, and famous authors. We remember the successful ones and forget the risks they took.

No one has forgotten the risks taken by Marie Curie. The radioactivity she studied—and that killed her—made her famous in her lifetime (two Nobel Prizes). We forget such risk-taking was the norm.

Most of the risk-taking in the days of get-rich-quick mining centered around success or failure. Less discussed were the actual physical dangers. Never mentioned were the costs to posterity.

This was true for the chemists’ precursors, the alchemists, and it remains true for their modern-day atomic wizards. Society has committed to the risk of nuclear reactors without any viable solution for their extraordinarily dangerous waste product, plutonium—deadly for 25,000 years.

It is obvious that any new technology (and science) has always been ahead of laws to regulate it. By definition, if it’s really new, how could there be laws in place to deal with it? We have no answer, because we are technology’s fatal flaw.

Who’s In Control?


I’ve written a lot lately about autonomous vehicles, weapons, etc. In the news right now are remote-controlled drones interfering with California fire fighters. What’s the connection? Whether you’re on the annoying end of self-driving cars or human-piloted drones, it’s all the same.

What’s my point? When it comes to laws prohibiting or regulating their actions, devices must be treated based on their actions and capabilities. The state of their autonomy has nothing to do with the case.

This is also true when it comes to finding the culprit. If a device breaks the law (or a regulation) then the responsible party must pay the penalty. If the device is autonomous, it will be hard to determine who sent it on its independent mission.

In other words, before we can have any kind of autonomous device, we need enforceable laws to connect the intent of the person controlling the device to its actions. As you might imagine, this will be more difficult than identifying a person with a drone.

Wait a minute! Can we even do that? If a drone can be connected to a controller device—and then to the person using that device—then why are California fire fighters having these problems?

It seems implausible that one controller could control more than one drone. However, instead of a unique identifier pairing each drone with its controller, suppose the manufacturer uses only a hundred unique identifiers for the thousands of drones it makes. Or maybe only a dozen.

Inasmuch as drone buyers do not have to register the identifier (nor is there a law requiring sellers to keep such records), the only way to prosecute an errant drone’s owner would be to get its identifier and find its controller.

The latter task requires searching an area whose radius is the maximum control distance for this model. That assumes the drone owner is stupid enough to keep the controller after the drone didn’t come back, and that the owner was operating from a house and not a car.

Without a complete registry of totally unique drone and controller IDs, these devices are free to fly wherever the owner wants. Unlike a gun, which marks the bullets it fires, a drone controller can’t be traced.
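
The arithmetic is easy to sketch. Here is a minimal illustration in Python, using entirely hypothetical numbers for the production run and the identifier pool; it only shows how little a recovered identifier narrows the search once identifiers are reused.

    # Hypothetical numbers: 10,000 drones sold, sharing a pool of only
    # 100 (or even 12) reused identifiers.
    DRONES_SOLD = 10_000
    for id_pool in (100, 12):
        suspects = DRONES_SOLD / id_pool
        print(f"{id_pool} identifiers: a recovered ID points to ~{suspects:.0f} drones")
    # 100 identifiers: a recovered ID points to ~100 drones
    # 12 identifiers: a recovered ID points to ~833 drones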

These rabbits have clearly left Pandora’s hat. Short of declaring all existing drones (i.e., those without a totally unique identifier) illegal, there is no way for society to control the use of these devices.

However, we have the opportunity to pass such laws for autonomous devices not yet on the market. The real question is: Does society have the will? I doubt it, since it’s not too late to redo the drones and I see no inclination to do so.

Who would have thought that a technology as innocuous as toy drones could expand into total social chaos? As for the banning of autonomous weapons, the military will resist IDs. And I can see the NRA in the wings, salivating at the chance to put in its two cents.

Auto Autonomous, Part Three


Everyone refers to them as autonomous vehicles. Everyone is wrong. Why? Very simply, they are not autonomous. They are no more autonomous than iRobot’s Roomba vacuum cleaner.

Not everyone has a Roomba, but everyone knows it’s not autonomous. It’s a robot. The company’s website describes it as such, never using the word “autonomous.” What’s the difference?

A robot, says the Merriam-Webster dictionary, works automatically. Another good word might be “automaton,” something that acts as if by its own power. That’s a long way from being truly autonomous.

Actually, in the dictionary, it’s just two definitions away. In between is “automotive.” Then we have “autonomous,” defined as having the power or right to govern itself. Self? What self? These so-called autonomous cars have no more self than a Roomba does.

To emphasize my point, the word “autonomous” comes from the Greek “autonomos,” meaning having its own laws. Whose own? There’s no “who” here; it’s a machine. It’s a “what,” not a “who.”

Unlike that dramatic moment in the original Frankenstein film, no one will cry out, “It’s alive!” when the key is turned. The tissue, the hardware, will remain what it always was—dead.

Obviously, the solution has to be in the software. So, why does AI’s approach to intelligence not follow the only example we have, our own? Why does AI believe in a mythical “pure” intelligence, divorced from body, from emotion, from consciousness, from self?

An individual only becomes human (and intelligent) through the medium of other humans. However, AI prefers intelligence in isolation, as a philosophical ideal. No wonder they keep failing.

One thing is for sure: saying these cars are autonomous makes them sound smarter than they really are. Do the promoters want to deceive themselves or us? Either way, they’re not that smart.

Since many really big companies are determined to roll out autonomous cars, I’m sure they will appear in many different forms. Where they’re likely to succeed is as taxicabs in cities.

I can see people using these regularly and still being unwilling to buy one. Unwilling or unable. While it may seem logical to the car makers that cars made by robots should be driven by robots, who’s left with a job to buy the cars?

Auto Autonomous, Part Two


Regarding autonomous vehicles, it seems to me the first question should be: Can they be safer than cars driven by humans? Along with many of you, I think many people are poor drivers.

Most of the bad driving I see is just people ignoring the rules or taking unnecessary risks. Following the rules is the very essence of what computers do best. No doubt automated vehicles could do this not only better than humans, they could do it to perfection.

But what about risks? Most risks, like tailgating, can be reduced by following guidelines for safe driving. Again, for a computer this is just obeying the rules given it. Yet, this is the essential problem of programming.

Can we think of all the possible situations the machine might encounter and supply it with instructions on how to respond? For example, a light rain on asphalt brings up oil slicks and makes the road very slippery.

This is further compounded by over-inflated or worn tires. That’s a lot of data requiring accurate sensors. Finally, the vehicle must weigh all the factors to determine the safest action.
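
To make that concrete, here is a minimal sketch in Python of the enumeration problem, with purely hypothetical conditions and responses: the vehicle can only act on situations its programmers thought to list, and everything else falls through to a generic default.

    # Hypothetical rule table: each anticipated (weather, surface) pair
    # maps to a programmed response.
    RULES = {
        ("light_rain", "asphalt"): "slow down; fresh rain lifts oil to the surface",
        ("heavy_rain", "asphalt"): "slow further and increase following distance",
        ("snow", "any_surface"): "reduce speed sharply; avoid hard braking",
    }

    def choose_action(weather: str, surface: str) -> str:
        """Return the programmed response, or a default for unanticipated cases."""
        for (w, s), action in RULES.items():
            if w == weather and s in (surface, "any_surface"):
                return action
        return "no matching rule; fall back to a generic default"

    print(choose_action("light_rain", "asphalt"))   # anticipated case
    print(choose_action("freezing_fog", "bridge"))  # never anticipated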

The list of problematic situations is very long, from variations in snow and ice to degrees of visibility. The latter requires judgments as to how visible this vehicle is under different weather and lighting conditions. Is the sun in other drivers’ eyes?

There are, however, even more challenging risks in being a driver, computer or human. I could describe them as decision making under extreme uncertainty. I would rather question the premise that computers make decisions in any way like humans.

Human decision making is always deeper than choosing a flavor of ice cream. All human decisions take into account—usually at a very deep non-conscious level—our survival. Choosing an ice cream could involve health (calories) and even relationships.

What comes naturally to us is precisely what’s most difficult to program into a computer. AI ignores the concept of self, preferring to see intelligence as something abstract, i.e., beyond the need of a self.

A computer doesn’t know what risk means because nothing has meaning. How could it without involvement, without caring? The machine has no skin in the game. If it fails disastrously, is destroyed, it couldn’t care less. Hell, it can’t care at all.

The driver of a car not only wants to avoid injury (and damage to the car) but also to protect any passengers, especially children. Without these concerns, how can autonomous vehicles be trusted to make decisions that might mean life or death?

Auto Autonomous, Part One

Strange week. All kinds of items related to autonomous machines appeared from many different sources. Some were cars, some were trucks, and some were even weapons. Along with stories about super-intelligent computers, it was a chilling week.

First was a tiny link in my AAA magazine about the history of autonomous vehicles. For example, “1939: GM’s World’s Fair exhibit predicts driverless cars will be traveling along automated highways by 1960.”

The link also had this entry, “2035: By this date, experts predict 75 percent of cars on roadways will be autonomous.” Nearby in the magazine was an article on the latest muscle car. Wonder how those will get along with autonomous cars.

On PBS this week, I learned about autonomous trucks and weapons (two separate stories). Driverless semis are scary enough, without thinking about weapons deciding who’s a target.

I apologize if this is too much information, but I have more. In a word: taxicabs. Autonomous vehicles that will pick you up and deliver you to your destination. Didn’t we see that in the first Total Recall movie? After hearing about trucks and weapons, sounds very reasonable, doesn’t it?

What’s not reasonable is the talk about super-intelligent machines. It’s not coming from the people who want you to be passive passengers. No, it’s coming from those who can’t wait to worship the machine.

This attitude is rarely found among those studying artificial intelligence (AI) or those who are working to implement it. Rather, it comes from philosophers, pundits, and self-proclaimed futurists who know a little about AI and less about computers.

Led by Ray Kurzweil of Singularity fame, these futurists base their predictions on a single insight known as Moore’s Law. It says the number of transistors on a chip (integrated circuit) doubles every two years. Ray et al. claim this means computers are becoming exponentially more powerful.

They fail to comprehend that the Law applies only to the hardware side of computers. Software is another kettle of badly-cooked fish. No one is foolish enough to suggest software is improving at a similar rate.
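
For the record, the arithmetic behind that claim is trivial to sketch (Python, with a hypothetical starting chip): doubling every two years compounds to roughly a thousandfold increase over twenty years, but only in transistor counts, not in software.

    def transistors(start_count: float, years: float, doubling_period: float = 2.0) -> float:
        """Project a transistor count forward, assuming a fixed doubling period."""
        return start_count * 2 ** (years / doubling_period)

    # Hypothetical chip with one million transistors, projected 20 years out:
    print(transistors(1_000_000, 20))  # about 1.02 billion, a 1,024x increase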

Don’t take my word for the state of AI. Listen to an actual AI expert. Here’s a TED talk by Dr. Fei-Fei Li, director of the Stanford Artificial Intelligence Lab.

Smart Streets?


Last week’s post asked how smart these automated cars, hailed as saviors of our highways, really are. I asked many questions, all presuming these cars were autonomous—because that’s how they’re being promoted.

Well, they’re not. Basically, they’re mobile computers and no computer these days is independent of the Internet, or if you prefer, The Cloud. Even your stationary desktop computer gets constant updates from its various hardware and software makers.

Any automated car will be no different and therein lies a whole new set of questions. To what degree are they independent and to what degree are they connected to (controlled by) The Cloud?

Aside from the usual updates for its hardware and software, an automated car needs current information about the streets it’s navigating, not to mention its destination. (Hence the title.)

These cars need The Cloud for updates about traffic, road conditions, and even the roads themselves. It might be possible to load all the roads into the car’s computer, but is it likely?

Point being, there are continual updates to the whole system of roads, but only rarely to your localized region of actual driving. Updating a car with information on all roads is wasteful, and it could be dangerous.

How and what data gets updated will determine how dependent vehicles are on The Cloud, and therefore the Internet. If connections go down—even for a minute—it doesn’t mean one car is on its own. Rather, all cars in that vicinity using the same connection will be left on their own. This raises new questions.

Can these automated vehicles be sufficiently autonomous if they lose their Internet connection? Think fail-safe. And don’t assume that simply stopping (or even pulling over to the side of the road) will always be the right option.

The makers who propose these vehicles are big on showing us how these cars avoid obstacles. But the real value of automated cars is controlled traffic flow. That takes coordination, which raises a new set of questions.

There’s the problem of autos from different manufacturers. Or will the government step in and choose a single supplier, or at the very least a single computer system to be used by all?

If there are different manufacturers, will they use the same data? Supplied by whom? (Is all this just a power play by Google?) If they do use the same data, will they all update at the same time?

The more I look at this, the more questions I have. My biggest question is: Are the people selling this concept and those who will have to approve it asking the same questions?

Street Smarts?


What is smart? Does the automated car they tell us is almost here qualify as smart? It’s pretty smart if it can steer itself and avoid obstacles. It’s very smart if it can recognize lane markings and traffic lights. How about reading street signs?

We know cars are smart enough to park themselves. What about NYC’s famed alternate side of the street parking? How smart does this car have to be for you to trust it with your life? The lives of your loved ones?

Living creatures are smart because they adapt to changes in their circumstances, e.g., the three-legged dog. Computers (and other machines) cannot. They are limited to their programming.

Can cars be programmed to be better drivers than humans? Not better than any human, but they can be programmed to be better than the worst human drivers. For example, they will never be distracted.

So far, I’ve been asking questions about the skill of automated cars versus humans. Skills can be programmed. The real question we should be asking is not about skills but judgment.

Can automated cars make decisions as well as humans? Can the designers of these vehicles anticipate every possible situation the car might encounter? What about life or death decisions?

I’m not saying humans don’t make mistakes. Tens of thousands of drivers still choose to drive impaired. Even more can’t ignore phone calls or texts. And texting is eight times more dangerous than driving drunk.

Automated cars won’t make those mistakes. The problem is, until we have years of experience and millions of miles with these cars, we won’t know the mistakes they might make.

Like drivers, programmers are not perfect. Unlike drivers, programmers can’t react to situations. They must anticipate them, instructing the machine accordingly. Can they foresee everything?

We encounter faulty programming every day on our devices. (If you don’t, you’re not paying attention.) Programming a car to move safely in traffic is far more difficult than programming a stationary device.

Learning to drive doesn’t end with getting a license. Experience is what tells you someone will turn even if they don’t signal. Or that they won’t turn if a signal’s been on for blocks. How much experience will the programmers have?
