Digital Minefield

Why The Machines Are Winning

Archive for the tag “computers”

The Software Did It!


Up to now, computers have been blamed only for unintentional failures. Crashes have led the list of computer malfunctions, signified by the familiar phrase, “the computer is down.” But these too were unplanned, accidental events.

Now, after years of investigation in many countries, Volkswagen has finally come clean about their dirty diesels. More than merely admitting guilt, they pointed their corporate finger at the actual culprit: the software did it.

They acknowledged that the behavior was not accidental but intentionally criminal, and they went public, throwing the offending software under the worldwide VW bus.

In one sense, the most telling aspect of this story was the further confirmation that even the biggest companies feel compelled to cheat. Are they so insecure, or do they think they’re too big to be caught?

The most surprising aspect of the story, for most people, is learning just how sophisticated software can be when you want it to cheat. These engines ran clean only when they were being tested for clean emissions.
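To make the trick concrete, here is a minimal sketch of the kind of logic a defeat device could use. Every name and threshold below is invented for illustration; this is not VW’s actual code, which reportedly watched for test-like conditions such as a “moving” car whose steering wheel never turns.

```python
# Illustrative sketch only: invented names and thresholds, not VW's real code.
# Idea: guess when the car is on an emissions-test dynamometer (wheels turning,
# steering wheel never moving, a short fixed test cycle) and run full emissions
# controls only then.

def looks_like_emissions_test(speed_kmh: float,
                              steering_angle_deg: float,
                              minutes_since_start: float) -> bool:
    """Heuristic: the car is 'driving' but the wheel never turns."""
    return (speed_kmh > 0
            and abs(steering_angle_deg) < 1.0
            and minutes_since_start < 30)

def emissions_mode(speed_kmh, steering_angle_deg, minutes_since_start):
    if looks_like_emissions_test(speed_kmh, steering_angle_deg, minutes_since_start):
        return "clean"        # full exhaust treatment, passes the test
    return "performance"      # reduced treatment out on the real road
```

A few lines of this sort are all it takes to behave one way in the lab and another way on the road.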

The strangest aspect of the story is this: what made them think they could get away with it? As long as testing was required, the software would cheat. Did they not think that eventually it would be found out?

Or did they think they were just buying time? That they could eventually find a fix—in hardware or software—that would produce clean engines? And then replace the cheating software?

The most disturbing aspect of the story is realizing that software is sophisticated beyond our imagination. What are the odds that such cheating can be detected, much less brought to justice?

The most obvious aspect of the story is the question it raises: why do they test in such a way that programs can cheat? Is there no method equivalent to the random drug test for humans? Which brings up another set of questions.

When will we see nano-bots doing that drug testing? Then, how long before someone creates software to cheat on its programming? And the obvious final question, how do we test the testers and their tools, i.e., their testing software?


Robo-Management


Once upon a time (preemptive pun), there was a genius named Frederick Winslow Taylor. Equipped with a clipboard and stopwatch, he revolutionized office and manufacturing procedures in the early part of the last century. (The Principles of Scientific Management, 1911.)

I learned this as a young teen by reading Cheaper By The Dozen. The book was about applying time-study methods to life with the twelve children of the husband and wife efficiency team of Frank and Lillian Gilbreth. (The 1950 movie starred Clifton Webb; the remake in 2003 starred Steve Martin.)

Fifteen years later, I learned of another goal, effectiveness, from the top management guru, Peter Drucker. Taylor preached efficiency, but effectiveness was more important. Yet, many organizations prefer efficiency over effectiveness.

In Taylor’s day, efficiency was symbolized by the stopwatch. Today’s efficiency is a quantity that can be measured more accurately by computers. Effectiveness is a quality determined by humans making value judgments.

Efficiency is easy to measure; it’s what is happening now. It’s harder to measure tomorrow’s consequences of today’s actions. Effectiveness is about judging consequences. It requires humans to make those judgments. Efficiency can be reduced to numbers churned out by computers.

Computer numbers are easy to acquire, super-fast to calculate, and can be analyzed a million different ways. The human judgments necessary for effectiveness are hard to acquire, slow to evaluate, and difficult to analyze.

In any discussion of the woes of modern workers, two companies stand out as manipulative in the extreme: Walmart and Amazon. Their success is built on shrinking the human margins: the time, pay, and slack left to their workers.

Is it any wonder that companies like these are using the computer as a modern stopwatch? In the name of efficiency, they’re pushing their workers to act like machines. To what end?

Using Taylor’s Scientific Management, companies are reshaping human jobs to better fit the robot workers of tomorrow. You could say the jobs are being tailored to suit the robots. (Begin with a pun; end with a pun.)

Worst Idea Ever


I assume by now you’ve heard about the ban on AI weapons proposed in a letter signed by over 1000 AI experts, including Elon Musk, Steve Wozniak, and Stephen Hawking. The letter was presented last week at the International Joint Conference on Artificial Intelligence in Buenos Aires.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The rest of us have been occupied with threats of various kinds of autonomous vehicles (cars, trucks, drones), and we failed to see what was right around the corner. Autonomous weapons are more than battlefield robot-soldiers (unlikely) or gun-toting drones (likely). And weapons are more than guns.

Robots have no qualms about sacrificing themselves. It’s a different kind of warfare when the weapons deliverer is the weapon. It’s kamikaze on steroids.

Only now do I realize, after writing a dozen posts about autonomous autos, that they and their ilk are merely stalking horses for the worst idea ever. Autos were just the beginning. Add long-haul trucks, obviously, because they’re already being tested.

How about local trucks? Post Office? UPS? How about city buses? School buses? Don’t forget Amazon’s drones. Can larger autonomous drones replace helicopters? For news? Police surveillance? Medevacs?

Look at it another way. Instead of using robots to replace humans, autonomous vehicles are replacing the entire job, e.g., driving a truck. The idea behind all these autonomous devices is to get the public used to the concept, so they won’t question any of them, even those that go way over the top.

Aside from giving the people in control even more control through the leverage of computers, there’s the general degradation of the populace, who end up valued less than the robots that replace them.

How did humans come to this insane position? Here’s how. The people who control the machines think not only that they are so much smarter than other people (e.g., the ones they want to replace with robots), but that they can make computers smarter than other people. This is the AI they seek.

And there are some so enamored of intelligence in any form that if they succeed at making a superhuman artificial intelligence—one even smarter than themselves—they will bow down and worship it. Even as it destroys them.

Auto Autonomous, Part Two


Regarding autonomous vehicles, it seems to me the first question should be: Can they be safer than cars driven by humans? Like many of you, I think a great many people are poor drivers.

Most of the bad driving I see is just people ignoring the rules or taking unnecessary risks. Following the rules is the very essence of what computers do best. No doubt automated vehicles could do this not only better than humans but to perfection.

But what about risks? Most risks, like tailgating, can be reduced by following guidelines for safe driving. Again, for a computer this is just obeying the rules given it. Yet, this is the essential problem of programming.

Can we think of all the possible situations the machine might encounter and supply it with instructions on how to respond? For example, a light rain on asphalt brings up oil slicks and makes the road very slippery.

This is further compounded by over-inflated or worn tires. That’s a lot of data requiring accurate sensors. Finally, the vehicle must weigh all the factors to determine the safest action.
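As a rough illustration of that kind of weighing (a sketch under my own assumptions, not any manufacturer’s actual method), the factors above could be folded into a single grip estimate that scales target speed down and following distance up:

```python
# Hypothetical sketch of rule-based risk weighing; all factors, names, and
# numbers are invented for illustration.

def estimate_grip(light_rain: bool, tread_depth_mm: float, pressure_ok: bool) -> float:
    grip = 1.0                 # 1.0 = dry road, healthy tires
    if light_rain:
        grip *= 0.6            # fresh rain lifts oil off the asphalt
    if tread_depth_mm < 3.0:
        grip *= 0.8            # worn tires grip less
    if not pressure_ok:
        grip *= 0.9            # over- or under-inflated tires
    return grip

def safe_targets(speed_limit_kmh: float, base_gap_s: float,
                 light_rain: bool, tread_depth_mm: float, pressure_ok: bool):
    """Scale target speed down and the following gap up as estimated grip drops."""
    grip = estimate_grip(light_rain, tread_depth_mm, pressure_ok)
    return speed_limit_kmh * grip, base_gap_s / grip

# Example: light rain on worn, under-inflated tires
# safe_targets(100, 2.0, True, 2.5, False)  -> roughly (43 km/h, 4.6 s)
```

Even this toy version shows the catch: every factor must be anticipated by a programmer and fed by an accurate sensor before the vehicle can weigh anything at all.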

The list of problematic situations is very long, from variations in snow and ice to degrees of visibility. The latter requires judgments as to how visible this vehicle is under different weather and lighting conditions. Is the sun in other drivers’ eyes?

There are, however, even more challenging risks in being a driver, computer or human. I could describe them as decision making under extreme uncertainty. I would rather question the premise that computers make decisions in any way like humans.

Human decision making is always deeper than choosing a flavor of ice cream. All human decisions take into account—usually at a very deep non-conscious level—our survival. Choosing an ice cream could involve health (calories) and even relationships.

What comes naturally to us is precisely what’s most difficult to program into a computer. AI ignores the concept of self, preferring to see intelligence as something abstract, i.e., beyond the need of a self.

A computer doesn’t know what risk means because nothing has meaning. How could it without involvement, without caring? The machine has no skin in the game. If it fails disastrously, is destroyed, it couldn’t care less. Hell, it can’t care at all.

The driver of a car not only wants to avoid injury (and damage to the car) but also to protect any passengers, especially children. Without these concerns, how can autonomous vehicles be trusted to make decisions that might mean life or death?

Clouds and Grasshoppers


Last week’s post tried to shed some light on how much we don’t know about Clouds. It was a revelation for many and a shock for some. Just yesterday, one friend asked, “What’s new?”

So I showed him what I had seen just the day before: the CuBox-i, a two-inch cube computer that runs Android and Linux. Starting at $45, it can be built up, all the way to a keyboard and monitor, into a full desktop computer.

Two things. At 8 cubic inches, this is not the smallest computer out there. Many are not much bigger than a flash drive. Also, this cube is not that new; it’s the second generation of the device.

I’m sure you’re aware of the computers inside your tablets and smart phones. These smaller computers actually began with netbooks (I still have mine). Well, the computers in those devices have become—surprise!—smaller and more powerful.

I was only vaguely aware of this trend and didn’t discover the extent of it until last week. One reason is no one is really sure what to call these little demons. Many say mini PC, but how is a mini PC smaller than the original micro-computer PC?

Some say tiny, because that’s more descriptive. However, without a common label how and where do you go to learn about them? One thing is for sure. You won’t find them in the big box stores.

Computer magazines at newsstands used to be a good source for new technology, both announced and advertised. No more. How many newsstands can you find? How many computer magazines?

Like everything else, it’s all online. If you can find it. (For these newer, littler guys you might try Laptop Magazine.) To help, I’ve decided to call them grasshoppers. Why? Because the first one I saw actually reminded me of a grasshopper. In Florida, I’ve seen them this big. So why not?

The real, more serious question is what are people doing with them? The phrase I keep seeing is “TV Box.” As to exactly what that is, I can only guess. Something to do with streaming media, I suppose.

That capability is the “why” of this post. With only 32GB of storage, these grasshoppers will be using the Cloud. Sure, you can hang a terabyte of storage on the USB connector, but you’re quadrupling the bulk.

The Cloud may be selling storage and remote computing but it’s the perfect source for streaming to millions of grasshoppers. Of course, a virus might organize all these devices to stream at once, sucking The Cloud dry like a plague of locusts.

Street Smarts?


What is smart? Does the automated car they tell us is almost here qualify as smart? It’s pretty smart if it can steer itself and avoid obstacles. It’s very smart if it can recognize lane markings and traffic lights. How about reading street signs?

We know cars are smart enough to park themselves. What about NYC’s famed alternate side of the street parking? How smart does this car have to be for you to trust it with your life? The lives of your loved ones?

Living creatures are smart because they adapt to changes in their circumstances, e.g., the three-legged dog. Computers (and other machines) cannot. They are limited to their programming.

Can cars be programmed to be better drivers than humans? Not better than every human, but they can be programmed to be better than the worst human drivers. For example, they will never be distracted.

So far, I’ve been asking questions about the skill of automated cars versus humans. Skills can be programmed. The real question we should be asking is not about skills but judgment.

Can automated cars make decisions as well as humans? Can the designers of these vehicles anticipate every possible situation the car might encounter? What about life or death decisions?

I’m not saying humans don’t make mistakes. Tens of thousands of drivers still choose to drive impaired. Even more can’t ignore phone calls or texts. And texting is eight times more dangerous than driving drunk.

Automated cars won’t make those mistakes. The problem is, until we have years of experience and millions of miles with these cars, we won’t know the mistakes they might make.

Like drivers, programmers are not perfect. Unlike drivers, programmers can’t react to situations. They must anticipate them, instructing the machine accordingly. Can they foresee everything?

We encounter faulty programming every day on our devices. (If you don’t, you’re not paying attention.) Programming a car to move safely in traffic is far more difficult than programming a stationary device.

Learning to drive doesn’t end with getting a license. Experience is what tells you someone will turn even if they don’t signal. Or that they won’t turn if a signal’s been on for blocks. How much experience will the programmers have?

Enchanted Objects


In David Rose’s Enchanted Objects, he posits four technological futures. I wrote about the first of these, Terminal World, a few weeks ago. He says it’s about “glass slabs and painted pixels.”

The second of these futures is Prosthetics, where we transform into our “Superhuman selves.” The third is Animism, a world filled with “swarms of robots.” (See last week’s post.)

Finally, he offers Enchanted Objects, a world where “ordinary objects are made extraordinary.” Not surprisingly, Rose is a big deal at MIT’s famed Media Lab and is immersed in the latest technological gadgets. Obviously, this is his preferred future.

The book is subtitled “Design, Human Desire, and the Internet of Things,” but the last is its true focus. Things, say Rose and many others looking to shape our technological future, will be connected via the Internet to other things and especially to our computers, tablets, and smart phones.

And I’m sure they will be. As to whether this will be the dominant technology of the future, I have my doubts. Although the author favors the term “enchanted” to describe these objects, I’m sure we could all agree they are enhanced objects.

Like any added feature to any product, only the market can judge its success or failure. The key question for Rose’s preferred future is, will people pay the additional cost?

No matter how much a feature or set of features adds to a product, will enough people buy it if there’s a comparable product with fewer features for less money? In other words, enhancement is a luxury, not a necessity.

If you press Apple buyers, they will say its products are enchanted. Apple’s last quarter was the most profitable of any company. Ever. More than half of that profit came from one product (iPhone) in one country (China).

This success has more to do with Apple’s image and marketing (and Chinese culture) than the iPhone’s features and price, which are comparable to other smart phones. Buyers may have desired enchantment, but didn’t have to pay more.

While Rose has a vested interest in a future filled with enchanted objects, others are invested in each of the other alternatives he presents. The inevitable result will be a mixture of all four.

It’s easy to see the trend to glass slabs. The future of prosthetics is less clear, as is that of robots. Even less obvious is how they all will join The Internet of Things. Some things may succeed as Enchanted Objects, but I don’t think they’ll dominate.

Who Rules Reality?


The more we live our lives virtually, the less control we have over reality. This idea is not new. It is at least as old as the short story “The Machine Stops” by E. M. Forster—written in 1909!

I could rephrase the thought by substituting the word “conveniently” for “virtually.” Computers provide the convenience and do so more powerfully, and less expensively, through the virtual representation of reality.

Convenience is what we desire; virtual is just the means for achieving it. Convenience, with all kinds of promises of pleasure and power, is what the makers of glass slabs are selling.

Convenience comes in other forms, for example, automated cars. These will take you from A to B and do all the work. What’s more convenient? Obviously, simply not going from A to B. That is, being able to visit B virtually, without ever leaving A.

Will virtual visits beat out automated cars? Who knows? There’s lots of money to be made selling new cars and far too many people still think cars are personal magic carpets.

On the other hand, no one has done a really good job of providing an enhanced virtual shopping experience. The software is much easier than automating cars. But who’s the client? Not malls.

Why would any chain of department stores want to obsolete its brick-and-mortar investment? Not, at least, until someone figures out how to synergize virtual and real shopping. Until then, look to Amazon’s competitors to offer better virtual shopping.

If that seems unlikely, think of all the specialty stores and boutiques that could expand their potential customers by offering a more realistic virtual shopping experience. Would these combine into virtual malls?

Regardless of how much of our lives will be lived virtually, one aspect of providing that virtual access will always be real and never virtual. In a word: infrastructure. This is the real-world component of whatever miracles computers produce.

Whether roads for automated cars or Internet carriers for virtual experience, infrastructure must be built, maintained, and upgraded to produce real world results. Think delivery of Amazon packages.

Yet, the details of infrastructure are invisible to those of us tethered to our glass slabs. We may have convenience to the nth degree, but we don’t know who is behind the curtain, controlling the real world infrastructure that makes it all possible.

Infrastructure, being real, costs real dollars. Those costs get passed on to users of the infrastructure, to us. The rulers of reality set the prices, get government to build infrastructure, and collect from us directly and through taxation.

Man Versus Machine


Before I knew anything about computers, I knew a lot about chess. Never a good player, I enjoyed the mental challenge of the game and especially its history and personalities. Like most things I learned as a teenager, I learned chess from books.

Like many chess players I read about, I also had my quirks. Unlike them, I cared less about winning than exploring possibilities. Because of this, I preferred offense to defense: a good way to lose. I thought end games dull, meaning more losses.

Despite these flaws, I enjoyed playing. This changed when I got my first computer chess game, Sargon, in the late 70s. It was a good program and a decent chess player (for my limited skills).

I eventually lost interest because I wanted to play at the end of the day to relax. Naturally, I made mistakes because I was tired. It became annoying because the program never made a mistake.

Psychologically, you may know you can make better moves than your machine opponent, but you also know its mechanistic approach will never lapse—even in the slightest. After a while, you realize the only way to win is to be a better machine.

A phrase often heard in sports says that winning is the ability of one side to impose its will on the other. No team, no person does this without extreme emotional commitment. Playing a machine is never more than practice. Where is the emotion?
Sports are shared experiences, which can be life-changing. Computers only acquire data. Competition means nothing to a computer. It doesn’t actually play chess, it simulates playing.

It doesn’t care about winning. It doesn’t even know what it means to win; that’s just what it’s programmed to do. Most certainly, it doesn’t fear losing. It doesn’t need any motivation.

I could go on and on about what computers, even world-class chess playing computers, can’t do. Most importantly, they will never truly know chess. No computer will ever enjoy learning about chess as I did, even with my limited abilities.

No computer can ever do more than the imagination of its programmers allows it to do. A few may be superior to humans at specific tasks, but would they want to leave a burning building?

Division of Labor Lost


“. . . the best we can do is to divide all processes into those things which can be done better by machines and those which can be done better by humans and then invent methods to pursue the two.” —John von Neumann

Instead of asking how to divide labor, businesses are misusing computers by trying to computerize everything. They seem to believe that everything that can be computerized, should be.

They assume that, regardless of the initial cost, in the long run it will be cheaper for a machine to do the job than a human (no training, benefits, pensions, etc.). Not to mention a 24-hour workday.

Look at cars. Much of the production and assembly is now done by programmable robots. Computers assist in every facet from design to engineering, accounting, and sales. But computers don’t actually design, engineer, account, or sell. Humans do.

Cars are very high-tech devices, requiring very precise manufacturing. Contrast that to logging: felling trees, trimming them, and transporting the resulting logs to the sawmill. This work required many people, called lumberjacks. Not any more.

Check out the logging robot on YouTube. It does the work of a dozen loggers. And it does more: it works on steep terrain and doesn’t need access roads. The robot logger is very impressive.

The industry could follow von Neumann’s advice, using this device only where it’s far superior to men. I don’t see that happening. What I see are laid-off loggers in a bar, ironically singing Monty Python’s “I’m a lumberjack, and I’m OK.”

One robot on an assembly line replaces many human workers. Humans make and run those robots, but the number is only a small fraction of the number of workers replaced by the robots.

This is more efficient for the company, but how will the displaced workers make enough to buy those robot-built cars? Robots make manufacturing more efficient, in part by eliminating jobs—but that also eliminates consumers.

If there aren’t enough consumers able to afford the products made by the robots, won’t the robots be out of work, too? How about companies that bought the robots? How will they survive?
