Digital Minefield

Why The Machines Are Winning

Tower of Babel

Earlier this week I went online to find a book to help me upgrade my web skills to XML. Couldn’t. Simply put, I was both overwhelmed and astounded at the sheer number of books.

Not only was there a forest (trees turned into paper) of books featuring XML, there were many more related to it. The scary part was that these were dwarfed by books on newer web tools and languages.

When I speak of trees turned into paper, I’m talking about the many books that run a thousand pages and upward. And no, I didn’t buy anything. But I discovered I already had a book on HTML, XHTML, and XML (at 1107 pages).

That book is what the future looked like back in 2002 (its copyright date). It felt safe, because it connected the beginning, HTML, with the future, XML, using the bridge of XHTML.

That was then. Now, I have no idea. That future has been blown to bits (sorry). Instead, we have this explosion (sticking to the metaphor) of new web languages and specialized tools.

However, this pandemonium of web languages and tools goes a long way towards answering one of my most frequently asked questions. Namely, why is so much programming so bad?

Programmers simply aren’t getting the opportunity to master anything. Decades ago I was told it took a full two years to be proficient in any programming language, and nothing I’ve seen since disputes this.

In other words, most code is produced by novices in that code. Regardless of years or even decades of experience, these programmers are relative beginners in their current language.

These web languages and tools proliferate more and more, making them less and less effective. This in turn becomes a cause of proliferation: our current language or tool isn’t as productive as we had hoped so let’s switch to (or even create) a new one.

Adding to the problem is the seemingly endless expansion of web browsers and their limitless versions, and the attendant difficulty of programming them to meet all the W3C language standards. Not so much a Herculean task as a new Circle of Hell.

As for me, I’ll look over the book I have and then decide. My main reason for upgrading my web skills was not so much to be current, but to use a better, cleaner—and thus simpler—language. Something more consistent and therefore easier to fix.

Who’s Watching You?

The other day I was in my Windows Control Panel trying to figure out why my closing sound wasn’t playing. (I did; something had changed my settings.) I happened to notice an icon for Flash Player. Huh. Out of curiosity, I opened it.

There I found a window labeled “Camera and Microphone Settings by Site.” Huh? Here was a list of 46 “Previously visited websites that have asked to use the camera or microphone …”. My camera? My microphone? What for?

The same message informed me that I could “allow or block the use of the camera and microphone by specific sites.” I could also “Remove a site to delete all settings and data for that site in Flash Player.” Player? What does a player have to do with my camera or microphone?

There’s a simple answer: Flash Player is what controls your video (from your camera and mic) as it goes out on the Internet. As in when you use Skype or other video phone programs.

But look at this (in Win7): open the Control Panel and click on the Flash Player icon to get the “Flash Player Settings Manager”. Select the “Camera and Microphone” tab and you’re asked to choose either “Ask me when a site wants to use the camera or microphone (recommended)” or “Block all sites from using the camera and microphone”.

Apparently, “ask me” is the default setting, inasmuch as I never opened this before. But . . . what does it mean this was the setting and I was NEVER asked? Exactly what happened when these 46 sites requested access? I don’t know.

Below these two options is a large button labeled “Camera and Microphone Settings by Site . . .”. Click on this and you’ll see the window described above (mine showed 46 sites).

Flash Player for video phone is one thing, but what is this stuff? If you’re like me and have a web cam, you set it and leave the device on (which by the way includes its microphone). I don’t Skype very often, so I tend to ignore it.

What else am I ignoring that I shouldn’t? What else is concealed in the Control Panel that might be equally ominous? And why are these things concealed? Why would I—or anyone else—ever want a site to turn on the camera or microphone?

The other thing I don’t get is how much of this control of my devices is being orchestrated by Microsoft and how much by Adobe. Or are they in cahoots? Or is it the government? Or some other bigger brother?

Whatever. Meanwhile, I’m removing all sites, data, and settings, and changing my option to Block. Yet I have to ask—as you should—just why do these sites need any access to either my camera or my microphone? Ever?

Smart Streets?

Last week’s post asked how smart were these automated cars being hailed as saviors of our highways. I asked many questions, all presuming these cars were autonomous—because that’s how they’re being promoted.

Well, they’re not. Basically, they’re mobile computers and no computer these days is independent of the Internet, or if you prefer, The Cloud. Even your stationary desktop computer gets constant updates from its various hardware and software makers.

Any automated car will be no different and therein lies a whole new set of questions. To what degree are they independent and to what degree are they connected to (controlled by) The Cloud?

Aside from the usual updates for its hardware and software, an automated car needs current information about the streets it’s navigating, not to mention its destination. (Hence the title.)

These cars need The Cloud for updates about traffic, road conditions, and even the roads themselves. It might be possible to load all the roads into the car’s computer, but is it likely?

Point being, there are continual updates to the whole system of roads, but only rarely to your localized region of actual driving. Updating a car with information on all roads is wasteful, and it could be dangerous.

How and what data gets updated will determine the dependency of vehicles on The Cloud and therefore the Internet. If connections go down—even for a minute—it doesn’t mean one car is on its own. Rather, all cars in that vicinity using the same connection will be left on their own. This raises new questions.

Can these automated vehicles be sufficiently autonomous if they lose their Internet connection? Think fail-safe. And don’t assume that simply stopping (or even pulling over to the side of the road) will always be the right option.
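To make the fail-safe question concrete, here is a minimal sketch of one possible fallback policy. Everything in it is hypothetical: the mode names, the cached-map freshness test, and the time threshold are invented for illustration, not taken from any real vehicle system.

```python
from enum import Enum, auto

class Mode(Enum):
    CONNECTED = auto()    # live updates from The Cloud
    AUTONOMOUS = auto()   # driving on recently cached local data
    DEGRADED = auto()     # cached data too stale; seek a safe stop

def next_mode(connected: bool, cache_age_s: float, max_age_s: float = 60.0) -> Mode:
    """Pick an operating mode from connectivity and cached-data freshness."""
    if connected:
        return Mode.CONNECTED
    if cache_age_s <= max_age_s:
        # A dropped connection alone need not strand the car.
        return Mode.AUTONOMOUS
    return Mode.DEGRADED

print(next_mode(False, cache_age_s=10.0))   # fresh cache: keep driving
print(next_mode(False, cache_age_s=300.0))  # stale cache: find a safe stop
```

Even a toy policy like this shows the design problem: someone must decide, in advance, how stale is too stale, and what "seek a safe stop" means on a highway with no shoulder.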

The makers who propose these vehicles are big on showing us how these cars avoid obstacles. But the real value of automated cars is controlled traffic flow. That takes coordination, which raises a new set of questions.

There’s the problem of autos from different manufacturers. Or will the government step in and choose a single supplier, or at the very least a single computer system to be used by all?

If there are different manufacturers, will they use the same data? Supplied by whom? (Is all this just a power play by Google?) If they do use the same data, will they all update at the same time?

The more I look at this, the more questions I have. My biggest question is: Are the people selling this concept and those who will have to approve it asking the same questions?

Street Smarts?

What is smart? Does the automated car they tell us is almost here qualify as smart? It’s pretty smart if it can steer itself and avoid obstacles. It’s very smart if it can recognize lane markings and traffic lights. How about reading street signs?

We know cars are smart enough to park themselves. What about NYC’s famed alternate side of the street parking? How smart does this car have to be for you to trust it with your life? The lives of your loved ones?

Living creatures are smart because they adapt to changes in their circumstances, e.g., the three-legged dog. Computers (and other machines) cannot. They are limited to their programming.

Can cars be programmed to be better drivers than humans? Not better than every human, but they can be programmed to be better than the worst human drivers. For example, they will never be distracted.

So far, I’ve been asking questions about the skill of automated cars versus humans. Skills can be programmed. The real question we should be asking is not about skills but judgment.

Can automated cars make decisions as well as humans? Can the designers of these vehicles anticipate every possible situation the car might encounter? What about life or death decisions?

I’m not saying humans don’t make mistakes. Tens of thousands of drivers still choose to drive impaired. Even more can’t ignore phone calls or texts. And texting is eight times more dangerous than driving drunk.

Automated cars won’t make those mistakes. The problem is, until we have years of experience and millions of miles with these cars, we won’t know the mistakes they might make.

Like drivers, programmers are not perfect. Unlike drivers, programmers can’t react to situations. They must anticipate them, instructing the machine accordingly. Can they foresee everything?

We encounter faulty programming every day on our devices. (If you don’t, you’re not paying attention.) Programming a car to move safely in traffic is far more difficult than programming a stationary device.

Learning to drive doesn’t end with getting a license. Experience is what tells you someone will turn even if they don’t signal. Or that they won’t turn if a signal’s been on for blocks. How much experience will the programmers have?

Enchanted Objects

In Enchanted Objects, David Rose posits four technological futures. I wrote about the first of these, Terminal World, a few weeks ago. He says it’s about “glass slabs and painted pixels.”

The second of these futures is Prosthetics, where we transform into our “Superhuman selves.” The third is Animism, a world filled with “swarms of robots.” (See last week’s post.)

Finally, he offers Enchanted Objects, a world where “ordinary objects are made extraordinary.” Not surprisingly, Rose is a big deal at MIT’s famed Media Lab and is immersed in the latest technological gadgets. Obviously, this is his preferred future.

The book is subtitled, “Design, Human Desire, and the Internet of Things,” but the last is its true focus. Things, say Rose and many others looking to shape our technological future, will be connected via the Internet to other things and especially to our computers, tablets, and smart phones.

And I’m sure they will be. As to whether this will be the dominant technology of the future, I have my doubts. Although the author favors the term “enchanted” to describe these, I’m sure we could all agree these are enhanced objects.

Like any added feature to any product, only the market can judge its success or failure. The key question for Rose’s preferred future is, will people pay the additional cost?

No matter how much a feature or set of features adds to a product, will enough people buy it if there’s a comparable product with fewer features for less money? In other words, enhancement is a luxury, not a necessity.

If you press Apple buyers, they will say its products are enchanted. Apple’s last quarter was the most profitable of any company. Ever. More than half of that profit came from one product (iPhone) in one country (China).

This success has more to do with Apple’s image and marketing (and Chinese culture) than the iPhone’s features and price, which are comparable to other smart phones. Buyers may have desired enchantment, but didn’t have to pay more.

While Rose has a vested interest in a future filled with enchanted objects, others are invested in each of the other alternatives he presents. The inevitable result will be a mixture of all four.

It’s easy to see the trend to glass slabs. The future of prosthetics is less clear, as is that of robots. Even less obvious is how they all will join The Internet of Things. Some things may succeed as Enchanted Objects, but I don’t think they’ll dominate.

The Humanoids Are Coming

There are three major needs for humanoid robots. In order of likely implementation, they are companionship, representation, and embodiment. The first may seem obvious, but the other two require considerable elaboration.

Companionship (and beyond) is already being marketed for robots that physically resemble humans. However, a companion that too closely resembles a human could create legal complications, e.g., marriage.

The owner of a companion robot wants to experience the human resemblance. To anyone else, the companion must be perceived as a humanoid robot. What will be the technological solution?

Humanoid robots as representatives are different. These do not simply function as servants, but rather as agents for their owners. Again, such devices are already on the market, e.g., the Double Telepresence Robot.

While far from humanoid (it is little more than wheels and a vertical post carrying an iPad), the Double demonstrates the minimum necessary package to function as humanoid. Although limited to what the iPad can do, the Double can take your place at meetings, conferences, and similar gatherings.

The third category is far more problematical, both in implementation and actual potential. Embodiment means the humanoid robot embodies a person’s downloaded consciousness.

The uploading of consciousness may be a goal for some, but the methods to achieve it are still too vague to be assigned a probability. However, if it could be accomplished, why not an occasional downloaded embodiment?

But how close to human form does it need to be? The embodied consciousness may want an exact duplicate of his or her former body. What is its legal status? Is it robot or artificial human? Does it have the rights of the embodied?

We are more comfortable attributing human characteristics to non-humans than dealing with things that may or may not be human. Both psychologically and legally, we need to know what’s human—and what’s not.

If we can’t prevent the robot makers from making robots that pass for human, why not pass laws requiring every robot to have a transponder (like aircraft) that identifies it as a robot? Of course, we’ll need an app to detect them.

Why Humanoid Robots?

When robots appear as personal servants, what form (or forms) will they take? We already have many robotic servants—but not in human form. Eventually, though, human-like robots are inevitable.

Some robotic tasks are easily acceptable when performed by a non-humanoid robot, e.g., the Roomba. (It makes much more sense than a humanoid maid pushing a vacuum cleaner.)

Similarly, a robot lawnmower need not be humanoid, but a robot dog walker might. The future form of robotic servants will be mixed according to buyers’ preferences. Servants (and by extension, slaves) will be more easily commanded if they have human form.

The word anthropomorphize has been in use some 170 years, says the Merriam-Webster website. It also says the word means “to attribute human form or personality to things not human.” The practice predates the Greeks, who probably used a different word.

Anthropomorphize is what we do, what we’ve always done, without any need to give it a label. It’s part of our nature to attribute aspects of that nature to non-human objects and beings.

Not only do we grant them life and will, we give them personalities. We go so far as to attribute gender to many objects, e.g., ships are female. Early autos were called stubborn.

Beyond our proclivity to anthropomorphize, as Freud elaborated, we project our feelings, beliefs, and assumptions onto others. Different from anthropomorphism, projection can be subtler and is more common.

In the mid-sixties, a computer program named ELIZA was written by Joseph Weizenbaum to study natural language processing. It simulated the basic responses of a therapist.

The degree to which people, even people who knew it was a computer, immersed themselves in this interaction was astounding, and more than a little disturbing to its author.

In this very crude imitation of a therapy session, people not only projected a therapist’s insight onto the program, they told it incredibly personal details of their deepest secrets.
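The mechanism behind that illusion was remarkably simple: pattern matching plus pronoun reflection. The sketch below shows the general technique; the patterns and responses are invented for illustration and are not Weizenbaum's original script.

```python
import re

# Swap first-person words for second-person ones ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a pattern with a response template for the matched text.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when nothing matches

print(respond("I feel anxious about my job"))
# -> Why do you feel anxious about your job?
```

The program understands nothing; it merely turns the user's own words back into a question. That people confided their deepest secrets to such a mechanism is exactly what disturbed its author.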

If we are comfortable treating objects as though they were human, why not give them human form? The real question, is how human? Can we risk robots capable of impersonating humans?

Rulers and Slaves

As power over reality continues to coalesce, the number of those in power shrinks. Their problem becomes how to carry out their will in the real world. They could hire people. They won’t.

What they will do, what they’re doing now, is building robots. Not only do robots function in the real world, they can perform tasks far beyond human abilities. They could even be recycled.

You may have read online chatter about rights for robots. These people object to slavery in any form, even for machines. But until a robot objects to its treatment, such idealism will remain chatter.

However, it is possible to build a robot that appears to exercise free will. Of course, the people in power will try to prevent any such creations from muddying the waters of robots as dedicated servants.

One method of control is robot police: robots policing the building of robots. Since those in power have specific needs and aims for their robot slaves, they must control all robot construction.

In this scenario, what are the odds for successful rebel robots? As robots become more sophisticated, it’s less likely free-lancers could produce a sufficiently complex robot capable of rebellion.

For those in power, this is a robot-based utopia. What it is not is an open society, in the Popperian sense. It will be a closed society with all-powerful rulers. Sound totalitarian? It is.

The more control we yield by choosing the virtual over the real, the more likely are such scenarios. If we want options, we need to encourage real world innovators with their own unique aims and goals.

Some of us will choose reality over the virtual, and thus become a third class between the one percent and the ninety-nine. Will we ally with the fourth class—those who cannot afford the virtual? Will we be have-nots, because we choose not to have?

Or will the one percent, to maintain control, offer free virtual to those unable to afford it? If so, what might they offer to keep us from choosing the real? And what if we decline? These rulers want only two classes of humans, plus robots as servants.

Who Rules Reality?

The more we live our lives virtually, the less control we have over reality. This idea is not new. It is at least as old as the short story “The Machine Stops” by E. M. Forster—written in 1909!

I could rephrase the thought by substituting the word “conveniently” for “virtually.” Computers provide the convenience and do so more powerfully, and less expensively, through the virtual representation of reality.

Convenience is what we desire; virtual is just the means for achieving it. Convenience, with all kinds of promises of pleasure and power, is what the makers of glass slabs are selling.

Convenience comes in other forms, for example automated cars. These will take you from A to B and do all the work. What’s more convenient? Obviously, simply not going from A to B. That is, being able to visit B virtually, without ever leaving A.

Will virtual visits beat out automated cars? Who knows? There’s lots of money to be made selling new cars and far too many people still think cars are personal magic carpets.

On the other hand, no one has done a really good job of providing an enhanced virtual shopping experience. The software is much easier than automating cars. But who’s the client? Not malls.

No chain of department stores wants to obsolete its brick and mortar investment, at least not until someone figures out how to synergize virtual and real shopping. Until then, look to Amazon’s competitors to offer better virtual shopping.

If that seems unlikely, think of all the specialty stores and boutiques that could expand their potential customers by offering a more realistic virtual shopping experience. Would these combine into virtual malls?

Regardless of how much of our lives will be lived virtually, one aspect of providing that virtual access will always be real and never virtual. In a word: infrastructure. This is the real world component of whatever miracles computers produce.

Whether roads for automated cars or Internet carriers for virtual experience, infrastructure must be built, maintained, and upgraded to produce real world results. Think delivery of Amazon packages.

Yet, the details of infrastructure are invisible to those of us tethered to our glass slabs. We may have convenience to the nth degree, but we don’t know who is behind the curtain, controlling the real world infrastructure that makes it all possible.

Infrastructure, being real, costs real dollars. Those costs get passed on to users of the infrastructure, to us. The rulers of reality set the prices, get government to build infrastructure, and collect from us directly and through taxation.

The Universal Tool?

Whether you’re using a smart phone, a tablet, or a flat screen monitor, when it’s dormant they all look alike. If you stretch your mind just a little, you’ll realize they look like the black monolith in 2001: A Space Odyssey. Maybe you can even hear the sounds.

Coincidence? Not if you believe in Arthur C. Clarke’s prescience. Not if you’re aware that all our tools are turning into apps. It’s the convergence of everything into computers.

Many see this as a trend of convenience, the multiplying of computer power to achieve all our needs. We can be sure the software and hardware providers see the value for their businesses.

Individuals and businesses may benefit, but who speaks for humanity? Who will warn us of what we’re losing by taking the path of One Tool Fits All Needs? If we humans are not toolmakers, what are we?

Once the development of tools was synonymous with specialization. Now, as our tools become apps they are homogenized, more like each other than something with a special purpose.

One of humanity’s greatest tools was the pencil. Will tomorrow’s double-thumb texters know how to pick one up? Not only is cursive dead, drawing with pencil and paper is anachronistic. Tools are diminished without the resistance of the medium.

Tools are psychologically defined by their affordances: hammers look for nails, knobs twist or push or pull, knives cut, shovels dig, etc. Affordances describe the connection of mind with hand and tool.

By reducing the affordance space to the homogeneity of screen and keys, mind-hand coordination shrinks to a bare minimum. Apps are a poor substitute for tools that evolved over many millennia.

We can make anything with 3D printers, but where is the hand of the maker? We still create objects but the art of sculpture will disappear. Apps make with the mind; hands are becoming superfluous.

No essential human art is reducible and still remains truly human. Creating art with apps is a virtual process. Nothing real is happening. The art of conversation cannot be reduced to faces on a slab.

While the slab is effectively infinite, the apps that fill it are only virtual tools. They are convenient and they may satisfy. But the real world will still be ruled by real tools—like guns and bullets.
