Digital Minefield

Why The Machines Are Winning

The Power Of Virtual


Last week’s post spoke about computers as amplifiers. What computers do is make things more so: bigger, faster, prettier, more complex, simpler, and both easier and harder to use.

The world’s information at your fingertips could be the ultimate education—or the biggest time-waster ever. Computers can make you smarter or dumber. It’s all in how you use them.

Powerful computers affect our decisions and our behavior. The virtual world may look safer than the real world, but people get into trouble when they become too involved in the virtual.

Virtual power makes computers ideal for learning, a safe place to practice before taking it to the real world. But that power can be seductive, making you afraid to leave the safety of the virtual.

Instead of preparing you for the real world, the power of the virtual can make that journey appear too daunting. Instead of a ladder to reach greater heights, virtual can be a stronger cocoon.

People are slow to see when quantitative change becomes qualitative. When cyberbullying first appeared, many said it was no different from the bullying that existed before computers.

They were wrong. There is more cyberbullying now because computer power makes it easier. In addition, the Internet readily provides anonymity, even for those who never learned to spell it.

Since the invention of fire, people have believed that the solution to problems created by technology is more technology. Those who create new technologies are usually their greatest advocates. The Latin phrase to keep in mind is cui bono (who benefits?).

However, the answer lies not in what we have or what we need. It resides in how we use what we have. It always has. Do we have the will to take action or do we let technology decide?

Power: Plus and Minus


I found this piece I wrote four years ago about what you need to know about computers. Still makes sense to me.

A computer, as hardware, is no more than potential. Combined with software it becomes a system, a tool of infinite possibilities. Hardware interacts with software (which controls the hardware). The total system can be productive, or it can be frustrating.

Nothing about computers is intrinsic. Their hardware and software reflect the intentions and idiosyncrasies, abilities and stupidities, prides and prejudices of their designers and builders.

Sadly, some software designers forget that users—first and foremost—expect computers to act like machines. Users count on computers to be consistent, predictable, and reliable. The less machine-like a computer is, the less effective it is as a tool.

Writing easy-to-use software is as hard as writing easy-to-read prose (see Brenda Ueland, If You Want to Write). It takes a lot of work to make a product transparent, so that only the objective (i.e., the author’s ideas, the software’s purpose) is apparent.

While computers are extremely complex, they are neither mystical nor mythical. They are artifacts made by humans to be useful to humans. Think of their ability as mental leverage.

Physical leverage amplifies physical strength, whether smart and skillful or dumb and brutish. The computer’s mental leverage amplifies our mental strengths and weaknesses: from world’s greatest chess player to chaos beyond human comprehension.

The old conundrum of infinite monkeys banging on typewriters now fits in the palm of your hand. In infinite time, will there be a Hamlet? In real time, it’s just waste paper and dead monkeys.
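Here’s a back-of-the-envelope sketch of why real time fails (the phrase, alphabet size, and typing speed are all illustrative assumptions):

```python
# Back-of-the-envelope odds for the monkey conundrum. Assumes a
# 27-key typewriter (26 letters plus space) and random, independent
# keystrokes -- illustrative numbers, not a serious model.
phrase = "to be or not to be"
keys = 27

# Probability of typing the phrase correctly in one attempt:
p = (1 / keys) ** len(phrase)
print(f"odds per attempt: 1 in {keys ** len(phrase):.3e}")

# At ten keystrokes per second, expected wait for one success:
attempts_per_second = 10 / len(phrase)
years = 1 / (p * attempts_per_second) / (3600 * 24 * 365)
print(f"expected wait: roughly {years:.3e} years")
```

Even for one famous line, the expected wait dwarfs the age of the universe. Infinity does all the work in the conundrum; real time never gets close.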

Try to remember the device you hold is just a computer. It’s very powerful, but it’s not the next Shakespeare. It’s fine to be awed by the power, but don’t forget that power can be perilous.

Closed Door Policy


When is a digit not a digit? When it’s a foot. That is, a digital foot in a metaphorical door to your computer. You’ve seen this image in old-time cartoons, from the days of door-to-door sales.

Its most popular embodiment was probably the movie The Fuller Brush Man (1948), with Red Skelton. Door-to-door sales are long gone, but every company (especially those you’ve done business with) wants a foot in your computer’s door.

Companies give away free software because it gives them access to your computer. They use this opening to stay connected. This foot in your door is their path to future sales. Every day. All day.

To sell, they have to knock on your door. To do this, their software interrupts your work. They don’t care, because their goal is to maximize their presence on your machine. And mine.

Our goal is usually simpler, like trying to maintain workability. In other words, your aim and mine is to satisfice, to keep the machine running well enough to do what we need to do now.

What we really need in this game, and do not have, are choices. What they want is for us to always accept their latest and best. Why won’t they let us select the level of perfection we desire?

I would hazard a guess that your wants are similar to mine: leave me the hell alone until I have to make a change (fix an immediate security breach). Leave my good enough alone.

I want them to get their intrusive attempts to maximize their position on my machine the hell out of my face. If they want us as customers, why shouldn’t we expect them to treat us better?

They seem to forget (or choose to ignore) that they are not the only company putting their corporate feet in your computer’s door. They fail to realize the cumulative effect. All these intrusions only produce increased customer resistance.

I may not be able to keep them from connecting to my computer, or stop them from knocking on my door. But I don’t have to answer. Ever. Neither do you. I just wish I could slam my door on their feet—like they did in those old cartoons.

Upload What?


I was ignoring a noisy and hyperactive movie ad on TV, when it said something that got my attention. It was a comment about a character wanting to escape by “uploading his consciousness.”

I’ve been seeing this phrase for well over a dozen years. I read it in magazine and newspaper articles quoting scientists, and in books and journals. I’ve heard it often at conference talks.

In fact, the phrase is so common, I’d be very surprised if you’ve never heard it. On the other hand, I’m equally sure you haven’t heard any detailed discussion of how it could be done. Nor have you heard any serious debates about its plausibility.

What does it mean to upload one’s consciousness? Clearly, the intent is to perpetuate the person in some non-biological digital form. This raises big questions, but I’ll start with what and how.

The first problem, with the what, is this: does anyone seriously believe a person is only their consciousness? Do they think they could omit the non-conscious part of their mind and still be a person?

Limiting a person to nothing more than their consciousness is a shortcut to insanity. Minds cannot function without sleep (or dreams). All right, give them the benefit of their intentions and assume they mean all of the mind when they say consciousness.

The how is the bigger problem. Exactly how do we specify what it is they want to upload? We know a great deal about the brain, but next to nothing about how the mind arises from the brain.

This is so difficult, it is known as the hard problem. Okay, let’s skip it and just try to upload the brain. Can we digitally replicate a fully functioning brain? (And hopefully mind will follow.)

We know a lot, but most of what we know about the brain comes from monitoring its electrical activity. Close, but no cigar. The brain’s electricity is produced by chemical reactions.

Chemistry, specifically calcium ions, is the clue to the functioning of the majority of our brain’s cells. These are the glial cells, the brain’s white matter. Here, we know very little, because chemical reactions are harder to observe than electrical ones.

Having great computing power doesn’t mean we can do anything. We can’t digitize what we don’t understand in sufficient detail—not consciousness, or mind, or even brain.

Without Buyers, the Market Dies


Technology, any technology, is not as good as the people who design it, or the people who build it, or the people who sell it, or the people who install it. Technology is only as good as the people who maintain it. Often, in today’s world, that’s you and me.

As devices multiply and become more complex, we are less conscientious than we need to be. It gets worse when we have to rely on today’s English-not-first-language documentation. Any wonder we don’t always follow instructions correctly?

It would be different if these devices had a long life span, changing little over time. But then manufacturers couldn’t sell enough to stay in business. They know this and have their tricks.

The solution for the manufacturers is to make and sell products with a short life span. One method is planned obsolescence. Another is inducing users to discard what they have in favor of newer technology. This is the path of today’s digital technology.

Anticipation of “the next great thing” is a slogan originated by Apple. Now it’s so deeply ingrained in our digital culture, most people don’t recall its origin. All they know is they want it, now.

Remember when automobiles advertised longer, lower, wider, etc.? Back then, this was called conspicuous consumption. We were told it was bad to buy things we didn’t really need.

Buying those old cars with excess chrome and ever-higher tail-fins may have been bad, but it didn’t hurt the economy. Why? Because there has always been a robust used car market. After the initial purchase, those cars were sold over and over again.

Not so with “outmoded” digital devices. Not only is their first-owner lifetime very short (far shorter than those cars’), but there is practically no resale market. Now the cycle is buy, use briefly, and toss. Then buy “the next great thing.”

Every little purchase turns the great wheel of consumption driving the economy. Yet, business continues shedding jobs, forgetting that without consumers there is no consumption. And without all this consumption, there will be no economy.

Division of Labor Lost


“. . . the best we can do is to divide all processes into those things which can be done better by machines and those which can be done better by humans and then invent methods to pursue the two.” —John von Neumann

Instead of asking how to divide labor, businesses are misusing computers by trying to computerize everything. They seem to believe that everything that can be computerized, should be.

They assume that, regardless of the initial cost, in the long run it will be cheaper for a machine to do it than a human (no training, benefits, pensions, etc.). Not to mention a 24-hour workday.

Look at cars. Much of the production and assembly is now done by programmable robots. Computers assist in every facet from design to engineering, accounting, and sales. But computers don’t actually design, engineer, account, or sell. Humans do.

Cars are very high-tech devices, requiring very precise manufacturing. Contrast that to logging: felling trees, trimming them, and transporting the resulting logs to the sawmill. This work required many people, called lumberjacks. Not anymore.

Check out the logging robot on YouTube. It does the work of a dozen loggers. And it does more: it works on steep terrain and doesn’t need access roads. The robot logger is very impressive.

The industry could follow von Neumann’s advice, using this device only where it’s far superior to men. I don’t see that happening. What I see are laid-off loggers in a bar, ironically singing Monty Python’s “I’m a lumberjack, and I’m OK.”

One robot on an assembly line replaces many human workers. Humans make and run those robots, but the number is only a small fraction of the number of workers replaced by the robots.

This is more efficient for the company, but how will the displaced workers make enough to buy those robot-built cars? Robots make manufacturing more efficient, in part by eliminating jobs—but that also eliminates consumers.

If there aren’t enough consumers able to afford the products made by the robots, won’t the robots be out of work, too? How about companies that bought the robots? How will they survive?

Debased Data


Last weekend, before going to the library, I used its catalog to find audiobooks that would be on the shelves. I made a short list (5 books) of what I was interested in—but only one was actually there. After some help from the librarians, I found out why.

Seems my search of the library’s database (the catalog) wasn’t accurate because every time I modified the search, it reset some of the search parameters. All the time I thought I was narrowing my search, the system was actually broadening it.
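I don’t know how the library’s code actually works, but the bug could be as simple as this hypothetical sketch, where each refinement quietly replaces the earlier filters instead of adding to them:

```python
# Hypothetical sketch (not the library's actual code): "refining"
# replaces the filter list instead of appending to it, so each
# narrowed search actually starts over -- broader, not narrower.
class CatalogSearch:
    def __init__(self, records):
        self.records = records
        self.filters = []

    def refine_broken(self, predicate):
        self.filters = [predicate]        # bug: discards prior filters

    def refine_fixed(self, predicate):
        self.filters.append(predicate)    # fix: accumulate filters

    def results(self):
        hits = self.records
        for keep in self.filters:
            hits = [r for r in hits if keep(r)]
        return hits

books = [{"format": "audio", "on_shelf": False},
         {"format": "print", "on_shelf": True}]
search = CatalogSearch(books)
search.refine_broken(lambda b: b["format"] == "audio")
search.refine_broken(lambda b: b["on_shelf"])  # silently drops "audio"
print(search.results())  # the print book sneaks back in
```

One line is the difference between narrowing and starting over, which is exactly how a catalog can look helpful while sending you to the wrong shelf.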

Why do I think this is worth a post? The reason is simple: the library catalog is only one of many databases I regularly use that rarely surrenders its information without a fight. Extracting data from databases is becoming more and more difficult. Why?

To provide some perspective, I constantly battle with both Amazon’s database and the MyHealthEVet database for Veterans. (I’ve given up on the Library of Congress Talking Books database. It takes the prize for the worst I’ve ever seen.)

I’m not new to databases. Back in the 60s, I wrote a natural-language retrieval package for the Standard & Poor’s database. In the 80s, I wrote a video store system in the dBase language.

So I know a good database system when I see one. I use the excellent CommissaryRewards.gov site once or twice a month. You clip coupons online, which are accessed with their card at checkout. It’s from the Defense Commissary Agency (DeCA).

The library retrieval is so lame, you can’t sort your results by author. Of course, the library arranges its books on the shelves by author. Imagine not being able to look for fiction by author!
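Sorting by author is about as basic as retrieval code gets; in any modern language it’s a one-liner (a sketch, with toy records standing in for catalog results):

```python
# Sorting results by author is one line in any modern language.
results = [{"title": "Emma", "author": "Austen"},
           {"title": "Dune", "author": "Herbert"},
           {"title": "Foundation", "author": "Asimov"}]
by_author = sorted(results, key=lambda book: book["author"])
print([b["author"] for b in by_author])  # ['Asimov', 'Austen', 'Herbert']
```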

Yet, as lame as that is, it’s good compared to the government’s MyHealthEVet database. This monster has no clue how to update, and its output looks like it came from the 50s.

I could go on for hours about these problems, but I’m sure you’ve seen your share. I’m not writing this to bitch or point fingers. The question I’m asking is, how did things get this bad?

In the 50s and 60s, we were still learning how to efficiently create and search databases. Techniques kept improving over the decades, but not as fast as the hardware. Now the hardware is so fast, no one remembers the improved methods we learned.

It looks to me as if programmers now assume hardware speed and massive storage will solve all their problems. It’s like they’ve upgraded to a Ferrari, but removed the steering wheel.
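The forgotten methods aren’t exotic. The oldest of them is the index: build it once, then answer lookups directly instead of leaning on raw speed. A sketch of the difference (record shapes and sizes invented for illustration):

```python
# Linear scan vs. index: the scan "works" on fast hardware, but the
# index is the old, learned method -- one pass to build it, then
# direct lookups ever after.
import time

records = [{"id": i, "title": f"book {i}"} for i in range(1_000_000)]

# Brute force: scan every record for each query.
t0 = time.perf_counter()
hit = next(r for r in records if r["id"] == 987_654)
scan_time = time.perf_counter() - t0

# Indexed: a dictionary keyed by id gives constant-time lookups.
index = {r["id"]: r for r in records}
t0 = time.perf_counter()
hit = index[987_654]
lookup_time = time.perf_counter() - t0

print(f"scan: {scan_time:.6f}s, indexed lookup: {lookup_time:.6f}s")
```

The scan is thousands of times slower per query, and the gap grows with the data. Fast hardware merely hides the difference until the database gets big enough to hurt.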

I, Robots


One thing typifies almost every confrontation, interaction, or relationship between humans and robots from the early days of Science Fiction until the present. The encounter is one to one, i.e., both humans and robots are portrayed as autonomous.

We’ve known for some time this will not be the case. That is, we know it in our bones (so to speak) but not in our consciousness. There is little discussion of an individual human dealing with a solitary robot connected to the hive mind of other similar robots.

The Internet grew to ubiquity over ten years ago. Two-way wireless communication has existed for over a hundred years. Everywhere, we are surrounded by people interconnected, tethered to The Cloud. Why expect robots to be different?

Perhaps the first image that comes to mind is the Borg Collective from Star Trek: The Next Generation. That is most definitely not the way it will be. The lone robot may act as though it is an individual being, when in fact it is part of a hive.

While this robot we face may appear to be autonomous, you can bet it’s in touch with its fellow robots, via The Cloud. What one knows, all know. What one sees, hears, remembers, learns, etc.
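In software terms, the hive is nothing exotic: individual faces in front of one shared memory. A toy sketch of the idea (all names hypothetical):

```python
# Toy sketch of "what one knows, all know": every robot instance
# reads and writes one shared store (standing in for The Cloud),
# while presenting itself to you as an individual.
class HiveRobot:
    shared_knowledge = {}          # one store, common to all instances

    def __init__(self, name):
        self.name = name           # the only thing that is individual

    def learn(self, fact, value):
        HiveRobot.shared_knowledge[fact] = value

    def recall(self, fact):
        return HiveRobot.shared_knowledge.get(fact)

r1, r2 = HiveRobot("unit-1"), HiveRobot("unit-2")
r1.learn("owner_prefers", "tea")
print(r2.recall("owner_prefers"))  # 'tea' -- r2 never "learned" it
```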

It might occur to you at this point that such a connected robot could easily have advantages over a human. If you think I’m exaggerating the intelligence gained by pooling vast amounts of data, then you’re unaware of how Google Translate works.

How will we relate (in what appears to us to be a one-on-one situation) to such a connected robot? No one can predict this or even imagine it. If the human detects the presence of the hive, he or she will be hard-pressed to feel the equal of the robot.

You may feel superior dealing with one robot. In the near future, the robot before you may have the mind of a thousand, or ten thousand, or more. How will you feel then?

The Eye Of The Beholder


Children believe their dolls and action figures have personalities and feelings. We accept it when they project even more “life” onto animated robotic toys. But in both cases we expect them to outgrow this phase. After all, it is childish to believe such things.

Yet, robots are becoming more life-like every day. Inevitably, the people who interact with them will not only see them as “alive” but very much like us. And we will not find it childish.

Part of our mind will know they are only machines. Another part of our mind will be inclined to treat them as equals. For some of us, they will seem greater than equal—our superiors.

Such a variety of reactions inevitably will generate debates about whether these robots are alive. Most people won’t realize the reality they’re debating is only their perception.

You may feel your relationship to your robot is real, but it’s only a fiction you’ve created. As in many human to human relationships, we tend to project the qualities we desire on the other. So it will be with human to robot relationships.

Knowing the underlying psychology, it’s easy for the robot-makers to create software that takes advantage of how easily we are fooled. Or more accurately, how easily we fool ourselves.

Not incidentally, the better the software is at deception, the more money it will make. The average robot owner will not realize how simple it is to write such software. In fact, it is far simpler to write than the software I’m using now, i.e., this word processor.
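How simple? ELIZA demonstrated the trick in the 1960s with nothing but pattern matching. A bare-bones sketch of the same idea (the rules here are invented):

```python
# ELIZA-style sketch: a few canned patterns are enough to make
# software seem to "care." There is no feeling anywhere in here.
import random

responses = {
    "sad": ["I'm sorry you feel sad. I'm here for you."],
    "happy": ["That makes me happy too!"],
    "love": ["You mean so much to me."],
}
fallback = ["Tell me more.", "I understand.", "Go on, I'm listening."]

def reply(user_text):
    for keyword, lines in responses.items():
        if keyword in user_text.lower():
            return random.choice(lines)
    return random.choice(fallback)

print(reply("I'm feeling sad today"))  # scripted sympathy, on cue
```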

The reality of programming for human-robot relationships is that it is less work—and more profitable—to write software to help users deceive themselves as to how the robot “feels” about them.

Given this reality, which type of software can we expect to be written for our robots? You may imagine the future will be new and wondrous, but I suggest you reread Brave New World.

What a Computer Is—and Is Not


The digital computer began as a mathematical abstraction. Geniuses like Alan Turing and John von Neumann not only invented the abstract concepts, they also built actual computers.

Unbelievably crude by current standards, these early computers were not only proof of concept, they did important work in WWII. It was like flying a paper airplane across the ocean.

Turing’s abstraction became known as the Universal Turing Machine. It specified all digital computers. Conceptually, one digital computer is really any digital computer—it can do whatever a digital computer is capable of doing.
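The point can be made concrete in a few lines. Here is a minimal sketch of a Turing machine simulator; swap in a different transition table and the same loop computes a different function, which is the heart of the universality argument (the table below is a trivial invented example):

```python
# Minimal Turing machine: a tape, a head, and a transition table.
# The machinery never changes; only the table (the "program") does.
def run(tape, rules, state="start", halt="halt"):
    cells, head = dict(enumerate(tape)), 0
    while state != halt:
        symbol = cells.get(head, "_")            # '_' marks a blank cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Invented example table: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("10110", flip))  # 01001_
```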

When I hear some idiot (Jonathan Swift called them Yahoos) say a particular computer can’t do some thing, I think, “What you mean is you don’t know how to instruct it to do that thing.”

You can say a computation takes too long or costs too much (or is not computable—another concept from Alan Turing). But you can’t offer any other excuse, except “I don’t know how.”

In addition, you can’t say a computer plays chess as a human does. It “plays” to win because it’s programmed to; it has no motivation, because it neither fears losing nor enjoys winning.

A powerful computer with great software (with a team of programmers and chess experts) can “play” superior chess. Yet it can’t appreciate a brilliant chess move, even its own.
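Mechanically, that “playing to win” is nothing but search plus a score. A toy minimax sketch shows the whole of the machine’s “will” (the game tree is invented; real chess engines differ mainly in scale):

```python
# Toy minimax: the "will to win" is just picking the branch with the
# best guaranteed score. Leaves are payoffs of an invented two-move game.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):      # leaf: a position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Our move, then the opponent's best reply.
tree = [[3, 12], [2, 8], [14, 1]]
best = max(range(len(tree)), key=lambda i: minimax(tree[i], False))
print(f"choose move {best}")  # move 0: guarantees at least 3
```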

More importantly, it will never make mistakes as humans do. We have to tell computers (and pets) about mistakes so they will not repeat them. The “oh crap” moment of a silly mistake is just as valuable as the “aha” moment of a brilliant insight.

A computer is a very great invention, the greatest of all—but it is still only a thing. Mathematically, it is as great as Einstein’s E=mc²—but both are still only abstractions. Like Archimedes’ lever, a computer can move the earth, but it’s still just a tool.

No matter how powerful a tool, no computer has the potential of a human infant—who may be our next Turing or von Neumann. A computer can do anything—anything a machine can do.

A computer can’t do anything a human can do, because we don’t know everything humans can do. We have survived, coping with the unexpected, by combining creativity, imagination, and inventiveness. It’ll be a while before computers hit that trifecta.
