Digital Minefield

Why The Machines Are Winning

Auto Autonomous, Part Three

Everyone refers to them as autonomous vehicles. Everyone is wrong. Why? Very simply, they are not autonomous. They are no more autonomous than iRobot’s Roomba vacuum cleaner.

Not everyone has a Roomba, but everyone knows it’s not autonomous. It’s a robot. The company’s website describes it as such, never using the word “autonomous.” What’s the difference?

A robot, says the Merriam-Webster dictionary, works automatically. Another good word might be “automaton,” something that acts as if by its own power. That’s a long way from being truly autonomous.

Actually, in the dictionary, it’s just two definitions away. In between is “automotive.” Then we have “autonomous,” defined as having the power or right to govern itself. Self? What self? These so-called autonomous cars have no more self than a Roomba does.

To emphasize my point, the word “autonomous” comes from the Greek “autonomos,” meaning to have its own laws. Whose own? There’s no “who” here, it’s a machine. It’s a “what,” not a “who.”

Unlike that dramatic moment in the original Frankenstein film, no one will cry out, “It’s alive!” when the key is turned. The tissue, the hardware, will remain what it always was—dead.

Obviously, the solution has to be in the software. So, why does AI’s approach to intelligence not follow the only example we have, our own? Why does AI believe in a mythical “pure” intelligence, divorced from body, from emotion, from consciousness, from self?

An individual only becomes human (and intelligent) through the medium of other humans. However, AI prefers intelligence in isolation, as a philosophical ideal. No wonder they keep failing.

One thing for sure, saying these cars are autonomous makes them sound smarter than they really are. Do the promoters want to deceive themselves or us? Either way, they’re not that smart.

Since many really big companies are determined to roll out autonomous cars, I’m sure they will appear in many different forms. Where they’re likely to succeed is as taxicabs in cities.

I can see people using these regularly and still being unwilling to buy one. Unwilling or unable. While it may seem logical to the car makers that cars made by robots should be driven by robots, who’s left with a job to buy the cars?

Auto Autonomous, Part Two

Regarding autonomous vehicles, it seems to me the first question should be: Can they be safer than cars driven by humans? Along with many of you, I think many people are poor drivers.

Most of the bad driving I see is just people ignoring the rules or taking unnecessary risks. Following the rules is the very essence of what computers do best. No doubt automated vehicles could do this not only better than humans, they could do it to perfection.

But what about risks? Most risks, like tailgating, can be reduced by following guidelines for safe driving. Again, for a computer this is just obeying the rules given it. Yet, this is the essential problem of programming.
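Encoding a guideline like that is the easy part. As a minimal sketch (the two-second rule and the speeds here are my own illustration, not anything from a real vehicle system):

```python
def min_following_gap_m(speed_kmh: float, gap_seconds: float = 2.0) -> float:
    """Distance covered in `gap_seconds` at `speed_kmh` -- the classic
    two-second rule drivers are taught for avoiding tailgating."""
    speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s
    return speed_ms * gap_seconds

# At 100 km/h, the two-second rule demands about 55.6 metres of gap.
print(round(min_following_gap_m(100), 1))
```

A computer will apply that rule flawlessly, every second of every trip. The hard part is everything the rule doesn’t cover.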

Can we think of all the possible situations the machine might encounter and supply it with instructions on how to respond? For example, a light rain on asphalt brings up oil slicks and makes the road very slippery.

This is further compounded by over-inflated or worn tires. That’s a lot of data requiring accurate sensors. Finally, the vehicle must weigh all the factors to determine the safest action.

The list of problematic situations is very long, from variations in snow and ice to degrees of visibility. The latter requires judgments as to how visible this vehicle is under different weather and lighting conditions. Is the sun in other drivers’ eyes?

There are, however, even more challenging risks in being a driver, computer or human. I could describe them as decision making under extreme uncertainty. I would rather question the premise that computers make decisions in any way like humans.

Human decision making is always deeper than choosing a flavor of ice cream. All human decisions take into account—usually at a very deep non-conscious level—our survival. Choosing an ice cream could involve health (calories) and even relationships.

What comes naturally to us is precisely what’s most difficult to program into a computer. AI ignores the concept of self, preferring to see intelligence as something abstract, i.e., beyond the need of a self.

A computer doesn’t know what risk means because nothing has meaning. How could it without involvement, without caring? The machine has no skin in the game. If it fails disastrously, is destroyed, it couldn’t care less. Hell, it can’t care at all.

The driver of a car not only wants to avoid injury (and damage to the car) but also to protect any passengers, especially children. Without these concerns, how can autonomous vehicles be trusted to make decisions that might mean life or death?

Auto Autonomous, Part One

Strange week. All kinds of items related to autonomous machines appeared from many different sources. Some were cars, some were trucks, and some were even weapons. Along with stories about super-intelligent computers, it was a chilling week.

First was a tiny link in my AAA magazine about the history of autonomous vehicles. For example, “1939: GM’s World’s Fair exhibit predicts driverless cars will be traveling along automated highways by 1960.”

The link also had this entry, “2035: By this date, experts predict 75 percent of cars on roadways will be autonomous.” Nearby in the magazine was an article on the latest muscle car. Wonder how those will get along with autonomous cars.

On PBS this week, I learned about autonomous trucks and weapons (two separate stories). Driverless semis are scary enough, without thinking about weapons deciding who’s a target.

I apologize if this is too much information, but I have more. In a word: taxicabs. Autonomous vehicles that will pick you up and deliver you to your destination. Didn’t we see that in the first Total Recall movie? After hearing about trucks and weapons, sounds very reasonable, doesn’t it?

What’s not reasonable is the talk about super-intelligent machines. It’s not coming from the people who want you to be passive passengers. No, it’s coming from those who can’t wait to worship the machine.

This attitude is rarely found among those studying artificial intelligence (AI) or those who are working to implement it. Rather, it comes from philosophers, pundits, and self-proclaimed futurists who know a little about AI and less about computers.

Led by Ray Kurzweil of Singularity fame, these predictions are based on a single insight known as Moore’s Law. It says the number of transistors on a chip (integrated circuit) doubles every two years. Ray et al. claim this means computers are becoming exponentially more powerful.

They fail to comprehend the Law only applies to the hardware side of computers. Software is another kettle of badly-cooked fish. No one is foolish enough to suggest software is similarly improving.

Don’t take my word for the state of AI. Listen to an actual AI expert. Here’s the TED talk by Dr. Fei-Fei Li, Director of the Stanford Artificial Intelligence Lab.

Clouds and Grasshoppers

Last week’s post tried to shed some light on how much we don’t know about Clouds. It was a revelation for many and a shock for some. Just yesterday, one friend asked, “What’s new?”

So I showed him what I had seen just the day before: the CuBox-i, a two-inch cube of a computer that runs Android and Linux. Starting at $45, you can add all the way up to a keyboard and monitor to get a full desktop computer.

Two things. At 8 cubic inches, this is not the smallest computer out there. Many are not much bigger than a flash drive. Also, this cube is not that new; it’s the second generation of the device.

I’m sure you’re aware of the computers inside your tablets and smart phones. These smaller computers actually began with netbooks (I still have mine). Well, the computers in those devices have become—surprise!—smaller and more powerful.

I was only vaguely aware of this trend and didn’t discover the extent of it until last week. One reason is no one is really sure what to call these little demons. Many say mini PC, but how is a mini PC smaller than the original micro-computer PC?

Some say tiny, because that’s more descriptive. However, without a common label how and where do you go to learn about them? One thing is for sure. You won’t find them in the big box stores.

Computer magazines at newsstands used to be a good source for new technology, both announced and advertised. No more. How many newsstands can you find? How many computer magazines?

Like everything else, it’s all online. If you can find it. (For these newer, littler guys you might try Laptop Magazine.) To help, I’ve decided to call them grasshoppers. Why? Because the first one I saw actually reminded me of a grasshopper. In Florida, I’ve seen them this big. So why not?

The real, more serious question is what are people doing with them? The phrase I keep seeing is “TV Box.” As to exactly what that is, I can only guess. Something to do with streaming media, I suppose.

That capability is the “why” of this post. At 32GB of storage, these grasshoppers will be using the Cloud. Sure, you can hang a terabyte of storage on its USB connector, but you’re quadrupling its bulk.

The Cloud may be selling storage and remote computing but it’s the perfect source for streaming to millions of grasshoppers. Of course, a virus might organize all these devices to stream at once, sucking The Cloud dry like a plague of locusts.

Life In The Cloud

Seeing the ads on TV (and every other ad-infested medium), you’d think companies like Microsoft want us to move all our computing to The Cloud. (Actually, to Their Cloud.) As if we weren’t already there.

If you spend more time streaming than downloading, you’re already living in The Cloud. If what you’re looking at on your desktop, laptop, smart phone, etc., isn’t stored on that device, then you’re already living in The Cloud.

If what you’re seeing comes from somewhere on the Internet, then you’ve bought into The Cloud. “Wait a Googley-minute,” I hear you objecting. “Where else would I find things?”

Well, once upon a time we created things ourselves and sent them to one another. That was before we had access to the Internet, and through the Internet to one another. Instead of individuals and institutions loosely connected, the Internet became another Big Medium dominated by Big Players.

“So what’s the big deal about The Cloud?” you ask. Think of it in terms of real estate. You know the mantra: location, location, location. In this case, the real estate is all that memory and storage sitting on all those personal devices we use. That’s the real estate we own.

Now look at the big Cloud players. A current list of the top 100 shows Microsoft at 23 and even the oh-so-huge Google only at number 12. While Amazon may be number one, most of the other names in this list are unrecognizable to me and you.

And they want you. OK, not so much you, as millions of you’s, to use their real estate for the things you want to do with your devices. Of course, in this game, you—even millions of you’s—are small potatoes. What the big Cloud players want is millions of organizations to put their data and computing into The Cloud.

Maybe, I’d better reword that. The big Cloud players want businesses, nonprofits, NGOs, and even governments to buy real estate in their Cloud. Amend that: buy or rent. Never ignore rentals.

Where does location come into this game? Isn’t it obvious? The more traffic coming in and out of any big Cloud player’s location, the more valuable the location. Or at least that’s the smoke they’re blowing.

As it is in real real estate, the big Cloud players don’t have to own what they’re peddling. They can be middle-men, wheeling and dealing, pushing their locations as valuable properties.

But the same questions about real real estate still apply. Does this location have sufficient infrastructure? Are the services reliable? Are you being locked in by the high cost of moving? Is the location secure? Will your stuff be safe?

Dot Beware

Earlier this week I allowed malicious software to run on my computer. It came as a well-disguised email attachment. It was abetted by my tablet email that did not show me the full address of the sender, which would have made me instantly suspicious.

The attachment claimed to be text compressed as a zip file. But the WinRAR program did not show the full, very long file name. Again, I’m sure another intended deception. Concealed by the long name was the file extension.

For those who don’t recall Windows 101, file names have two parts: the name and the extension. The latter identifies the type of file, e.g., .doc is MS Word, .wri is WordPad, and .txt is NotePad.

A file can be data or program—or both. This file I’m writing is text as data to be run with a word processor. Soon it will become an HTML file to be run with a browser. When you code, you’re writing data to be executed as a program.

The file extension tells Windows what programs to use for data files. It also tells Windows when the file is itself a program, i.e., an .exe or a .com. That’s not all. Windows also runs .bat and .pif files. And more.

The malware in question had a .js extension, for JavaScript. I didn’t see it until it was too late. I thought I was opening a compressed text file and when I saw it appear and immediately disappear, I knew something was wrong.

I ran my anti-malware software and did a System Restore to the day before. Didn’t find any problem, but in that blink of an eye who knows. Then I found the offending email and destroyed it.

Here’s the thing. A dozen years ago, .js didn’t exist on my computer. What other, newer languages are being put on our machines to run software we don’t have the first clue about?

There are many ways to run programs in Windows. Double-clicking on icons is the most common. You can also use the Command Prompt. But what if you don’t know what a file is?

The first option when you right-click an icon is Open. Meaning what? Well, it means whatever the file extension tells Windows to do. If it’s this file, it opens the word processor. If it’s a .js file, Windows runs it as a JavaScript program. See the problem?

How do you know whether Windows wants to run a file as a program or open it as data? You don’t unless you know all the file extensions that cause execution—which may be fatal.
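The check itself is trivial, which makes Windows’ failure to surface it all the more galling. Here’s a sketch of the idea (the extension list is partial and my own illustration; the real set is longer and keeps growing, and the padded filename is hypothetical):

```python
from pathlib import Path

# A partial, illustrative list of extensions Windows will execute
# rather than open as data. The real set is much longer.
EXECUTABLE_EXTS = {".exe", ".com", ".bat", ".pif", ".js", ".vbs", ".cmd", ".scr"}

def looks_executable(filename: str) -> bool:
    """True if the file's real (final) extension is one Windows runs."""
    return Path(filename).suffix.lower() in EXECUTABLE_EXTS

# A very long name pushes the true extension out of a truncated display:
print(looks_executable("invoice_scanned_document_full_text          .js"))  # True
print(looks_executable("notes.txt"))  # False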

Windows not only lets the file extension dictate the action, it helps in the deception. How? In Windows Explorer, the default hides the file extension. Often, we see little more than icons.

In Microsoft’s push to simplify Windows, they have made us more vulnerable. Every year we have to work harder to protect ourselves. And we pay for the privilege.

What Standards?

Last week, while I was poking fun at Apple’s effluvia, I got bit by an extreme example of just how blind natural selection can be. In this case, the offender was at the opposite end of the spectrum from Apple’s $10,000 watch.

First, I need to remind you of the basics. When a creature fails to survive in its environment, it is not the only casualty. Any other creature dependent upon it, e.g., as food, may also be put at risk.

When a product fails in the business environment, it may not be the only casualty. Obviously, the producer may be at risk, but then the type of failure may also affect the users of this product. Think GM’s recent ignition debacle and the driver-victims.

For another example, recall the pets that died because of China’s faulty pet foods. Unfortunately, the list is endless as are its unintended victims. Now, you can add me to the list—fortunately, I will survive.

Pure blind luck, especially if you believe the advertisements for batteries. They often speak of how critical it is to have working (i.e., long-lasting) batteries. My situation was not critical; it was just TV.

I needed two AA batteries for my TV remote. Got out the new ones I’d purchased last month. Hmm. Had trouble getting them into their slots. Hmm. Then I couldn’t close the door to make the necessary contact. What the hell?

That’s right, sports fans, my brand new AA batteries didn’t fit! These AA batteries, the most standard object in our mostly digital universe, were too big! How could this be? I can tell you in just two words: Quality Control.

In their rush to get products and parts made more cheaply in China, our lazy manufacturers have lost sight of Quality Control. The Chinese may have made the error, but the responsibility was ours.

Back in the mid-60s, I was computer support for Management at NYU’s Graduate School of Business (now NYU Stern GBA). Also there was W. Edwards Deming. You can read about him on Wikipedia.

I can tell you this: at the time, he was a prophet without honor in his own country. His work was looked down on because it wasn’t glamorous, didn’t use the latest mathematical or computing methods. But it couldn’t have been more important.

Why? Deming’s approach to Quality Control was adopted by the Japanese, big time. It was the reason their automakers took over the world market, while ours are still making—poorly—longer, lower, and wider gas guzzlers.

It’s not an overstatement to say Deming was a God in Japan. Eventually, he was recognized here, but our auto makers are still playing catchup. All because no one (even the people at GBA, and I was one of them) listened to him in the 60s.

Watch, The Dinosaur

The recent announcement of the Apple watch (iWatch?) is more than just another roll of Apple’s big dice. Even before it comes out, it’s an instant dinosaur. Why do I say this? Just look at the pricing structure: from $350 to $10,000!

Obviously, this is aimed at the richer (and more foolish) of the Apple faithful. Given the price of smart phones, who would pay $350 for a device that does less? There is a price point for a connected watch and it’s $50.

At least that’s what I thought was reasonable when I examined the idea a few years ago. A wrist watch, connected to your smart phone, seemed a viable product—at the right price.

It’s clearly handier to get/make calls, see the arrival of email, not to mention the endless clock functions it could perform. It could, but that’s not what Apple is aiming at. Not at those prices.

Most people are so attached to their wrist watches, they never take them off (not to sleep, and not even to have sex). The fact that most of these watches are self-powered makes this possible.

But Apple’s watch has an 18-hour battery life. How does that fit in with the way most people use their watches? It doesn’t, and that’s the way Apple wants it. Their watches are more symbolic than digital.

Non-connected watches are permanent devices (or jewelry) because people don’t have to think about keeping them going—or even keeping accurate time. I’m sure Apple’s watch will keep accurate time, as does any connected device. How many people can afford the time at this price?

The Apple watch may survive for a while as a status symbol, but as it’s priced now it’s sure to become extinct soon. And cheaper, lesser-known knock-offs will quickly fill its environmental niche—for a lot less money, maybe even for $50.

If some of that description sounds like natural selection, it’s because it is. Despite what many think, evolutionary survival does not mean superiority (except for the moment at hand) or any ladder of progress.

Survival means no more than that: survival. The costs of survival are all the failures, literally mountains of them. It would be better if we thought of evolution as blind selection. Success in this game is built upon massive waste.

I only point this out to suggest we can do better. The phrase to describe how is intelligent design. It can be more efficient and more productive than blind selection, if we design with intelligence and not blindly as nature does.

A Lost Art

Last week’s post concluded by expressing my desire to simplify my coding. That’s not a new idea. For many decades, we used the acronym, KISS: Keep It Simple, Stupid.

I won’t go so far as to say we rivaled the Kiss Army, but there were a lot of us in computing who believed strongly in this approach. And it wasn’t because we thought anyone was stupid. It was meant as a precaution against trying to be too clever.

Clever is how smart people get into trouble. It’s why smart people do stupid things. Clever is the great temptation for anyone who thinks it’s smart to show off. Clever only leads to more cleverness.

The proliferation of new web languages and tools described in last week’s post entices programmers to be clever. They jump to the latest language or tool because it’s easier to perform clever tricks with something new than to take the time to master it.

In fact, the sheer volume of all these new languages and tools itself smacks of cleverness. It’s not that the new can’t be useful—but are they all really necessary? Or are they just another exercise in cleverness?

Cleverness exists primarily to stroke the programmer’s ego. But the goal of programming should be to meet the end user’s needs. To do that well in code, as in writing, requires the ability to revise and improve.

The computer industry calls fixing code maintenance. Maintaining someone else’s code is easiest when it’s straightforward, when it’s easy to read. In six months, that someone else could be the original coder.

Being able to fix code (my aim in the last post) greatly depends on keeping it simple. Choosing the right language and tools can help, but nothing works as well as simplicity. To paraphrase Hippocrates: First, know when not to be clever.

Society tends to mistake clever for smart, especially where technology is concerned. Choosing clever creates obstacles to more important choices, primarily common sense. Technology without common sense has proven to be highly dangerous.

Cleverness is good at making golden eggs, but such geese rarely last. Society’s day to day successes depend far more on common sense than cleverness. Society’s long-term survival depends upon wisdom, another choice subverted by cleverness.

Tower of Babel

Earlier this week I went online to find a book to help me upgrade my web skills to XML. Couldn’t. Simply put, I was both overwhelmed and astounded at the sheer number of books.

Not only was there a forest (trees turned into paper) of books featuring XML but many more related to XML. The scary part was these were dwarfed by books on newer web tools and languages.

When I speak of trees turned into paper, I’m talking about the many books that run a thousand pages and upward. And no, I didn’t buy anything. But I discovered I had a book on HTML, XHTML, and XML (at 1107 pages).

That book is what the future looked like back in 2002 (its copyright date). It felt safe, because it connected the beginning, HTML, with the future, XML, using the bridge of XHTML.

That was then. Now, I have no idea. That future has been blown to bits (sorry). Instead, we have this explosion (sticking to the metaphor) of new web languages and specialized tools.

However, this pandemonium of web languages and tools goes a long way towards answering one of my most frequently asked questions. Namely, why is so much programming so bad?

Programmers simply aren’t getting the opportunity to master anything. Decades ago I was told it took a full two years to be proficient in any programming language, and nothing I’ve seen since disputes this.

In other words, most code is produced by novices in that code. Regardless of years or even decades of experience, these programmers are relative beginners in their current language.

These web languages and tools proliferate more and more, making them less and less effective. This in turn becomes a cause of proliferation: our current language or tool isn’t as productive as we had hoped so let’s switch to (or even create) a new one.

Adding to the problem is the seemingly endless expansion of web browsers and their limitless versions, and the attendant difficulty of programming them to meet all the W3C language standards. Not so much a Herculean task as a new Circle of Hell.

As for me, I’ll look over the book I have and then decide. My main reason for upgrading my web skills was not so much to be current, but to use a better, cleaner—and thus simpler—language. Something more consistent and therefore easier to fix.
