Digital Minefield

Why The Machines Are Winning

Archive for the category “2. Fall of Humanity”

Technology’s Fatal Flaw


Two ideas came to me last week and I struggled with them until I realized they were both the same idea, just expressed differently. For this post in Digital Minefield, it is expressed as “Technology’s Fatal Flaw.” For my post in Pelf and Weal, it is called “Gambler’s Paradise.”

Daily in the news we hear of technological failures. Only the problems are attributed to separate specific sources, like drones and abandoned mines. No one sees the risk-taking of technology as the common element.

It’s easier to blame technology, that is, new technology, for society’s inability to control drones. It’s not so easy to see that exactly the same moral approach has led to a quarter of a million abandoned mines here in the US.

Where do we draw the line between scientific experimentation and technological innovation? In the eighteenth century, chemists rushed to discover new elements. Often they did so by smelling the results of chemical reactions. It killed some of them.

Many, however, got rich. In England, the best were made peers of the realm. Most were not simply chemists but also inventors, lecturers, and famous authors. We remember the successful ones and forget the risks they took.

No one has forgotten the risks taken by Marie Curie. The radioactivity she discovered—and that killed her—made her famous in her lifetime (two Nobel prizes). We forget such risk-taking was the norm.

Most of the risk-taking in the days of get-rich-quick mining centered around success or failure. Less discussed were the actual physical dangers. Never mentioned were the costs to posterity.

As this was true for the chemists’ precursors, the alchemists, so it remains true for their modern-day atomic wizards. Society has committed to the risk of nuclear reactors without any viable solution for their extraordinarily dangerous waste product, plutonium—deadly for 25,000 years.

It is obvious that any new technology (and science) has always been ahead of laws to regulate it. By definition, if it’s really new, how could there be laws in place to deal with it? We have no answer, because we are technology’s fatal flaw.

Who’s In Control?


I’ve written a lot lately about autonomous vehicles, weapons, etc. In the news right now are remote-controlled drones interfering with California firefighters. What’s the connection? Whether you’re on the annoying end of a self-driving car or a human-driven drone, it’s all the same.

What’s my point? When it comes to laws prohibiting or regulating their actions, devices must be treated based on their actions and capabilities. The state of their autonomy has nothing to do with the case.

This is also true when it comes to finding the culprit. If a device breaks the law (or a regulation) then the responsible party must pay the penalty. If the device is autonomous, it will be hard to determine who sent it on its independent mission.

In other words, before we can have any kind of autonomous device, we need enforceable laws to connect the intent of the person controlling the device to its actions. As you might imagine, this will be more difficult than identifying a person with a drone.

Wait a minute! Can we even do that? If a drone can be connected to a controller device—and then to the person using that device—then why are California firefighters having these problems?

It seems implausible that one controller could control more than one drone. However, instead of a unique identifier between each drone and its controller, suppose the manufacturer uses only a hundred unique identifiers for the thousands of drones it makes. Or maybe only a dozen.

Inasmuch as drone buyers do not have to register the identifier (nor is there a law requiring sellers to keep such records), the only way an errant drone’s owner could be prosecuted would be to recover its identifier and find its controller.

The latter task requires searching an area whose radius is the maximum control distance for that model, assuming the owner is foolish enough to keep the controller after the drone didn’t come back, and assuming the owner was operating from a house and not a car.
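To make the identifier problem concrete, here is a small sketch in Python (entirely my own illustration, with made-up numbers and hypothetical owner names): if a manufacturer stamps only a hundred identifiers onto ten thousand drones, then even a perfect registry would point a recovered identifier at a crowd of possible owners.

```python
import random

# Hypothetical illustration: a manufacturer reuses 100 identifiers
# across 10,000 drones instead of giving each drone a unique ID.
NUM_IDS = 100
NUM_DRONES = 10_000

# Pretend a registry existed, mapping each identifier to its buyers.
registry: dict[int, list[str]] = {}
for drone in range(NUM_DRONES):
    identifier = random.randrange(NUM_IDS)  # reused, non-unique ID
    registry.setdefault(identifier, []).append(f"owner-{drone}")

# An errant drone is recovered and its identifier is read off.
recovered_id = random.randrange(NUM_IDS)
suspects = registry.get(recovered_id, [])

# By the pigeonhole principle, roughly 100 owners share each identifier,
# so the identifier alone cannot single out the responsible party.
print(f"identifier {recovered_id} matches {len(suspects)} possible owners")
```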

Without a complete registry of totally unique drone and controller ids, these devices are free to fly wherever the owner wants. Unlike a gun that identifies the bullets it shoots, a drone controller can’t be traced.

These rabbits have clearly left Pandora’s hat. Short of declaring all existing drones illegal (i.e., no totally unique identifier), there is no way for society to control the use of these devices.

However, we have the opportunity to pass such laws for autonomous devices not yet on the market. The real question is: Does society have the will? I doubt it, since it’s not too late to redo the drones and I see no inclination to do so.

Who would have thought that a technology as innocuous as toy drones could expand into total social chaos? As for the banning of autonomous weapons, the military will resist ids. And I can see the NRA in the wings, salivating at the chance to put in its two cents.

Rulers and Slaves


As power over reality continues to coalesce, the number of those in power shrinks. Their problem becomes how to carry out their will in the real world. They could hire people. They won’t.

What they will do, what they’re doing now, is building robots. Not only do robots function in the real world, they can perform tasks far beyond human abilities. They could even be recycled.

You may have read online chatter about rights for robots. These people object to slavery in any form, even for machines. But until a robot objects to its treatment, such idealism will remain chatter.

However, it is possible to build a robot that appears to exercise free will. Of course, the people in power will try to prevent any such creations from muddying the waters of robots as dedicated servants.

One method of control is robot police: robots policing the building of robots. Since those in power have specific needs and aims for their robot slaves, they must control all robot construction.

In this scenario, what are the odds for successful rebel robots? As robots become more sophisticated, it’s less likely that freelancers could produce a sufficiently complex robot capable of rebellion.

For those in power, this is a robot-based utopia. What it is not is an open society, in the Popperian sense. It will be a closed society with all-powerful rulers. Sound totalitarian? It is.

The more control we yield by choosing the virtual over the real, the more likely such scenarios become. If we want options, we need to encourage real-world innovators with their own unique aims and goals.

Some of us will choose reality over the virtual, and thus become a third class between the one percent and the ninety-nine. Will we ally with the fourth class—those who cannot afford the virtual? Will we be have-nots, because we choose not to have?

Or will the one percent, to maintain control, offer free virtual to those unable to afford it? If so, what might they offer to keep us from choosing the real? And what if we decline? These rulers want only two classes of humans, plus robots as servants.

The Universal Tool?


Whether you’re using a smart phone, a tablet, or a flat screen monitor, when they’re dormant they all look alike. If you stretch your mind just a little, you’ll realize they look like the black monolith in 2001: A Space Odyssey. Maybe you can even hear the sounds.

Coincidence? Not if you believe in Arthur C. Clarke’s prescience. Not if you’re aware that all our tools are turning into apps. It’s the convergence of everything into computers.

Many see this as a trend of convenience, the multiplying of computer power to achieve all our needs. We can be sure the software and hardware providers see the value for their businesses.

Individuals and businesses may benefit, but who speaks for humanity? Who will warn us of what we’re losing by taking the path of One Tool Fits All Needs? If we humans are not toolmakers, what are we?

Once, the development of tools was synonymous with specialization. Now, as our tools become apps, they are homogenized, more like each other than something with a special purpose.

One of humanity’s greatest tools was the pencil. Will tomorrow’s double-thumb texters know how to pick one up? Not only is cursive dead, drawing with pencil and paper is anachronistic. Tools are diminished without the resistance of the medium.

Tools are psychologically defined by their affordances: hammers look for nails, knobs twist or push or pull, knives cut, shovels dig, etc. Affordances describe the connection of mind with hand and tool.

By reducing the affordance space to the homogeneity of screen and keys, mind-hand coordination shrinks to a bare minimum. Apps are a poor substitute for tools that evolved over many millennia.

We can make anything with 3D printers, but where is the hand of the maker? We still create objects but the art of sculpture will disappear. Apps make with the mind; hands are becoming superfluous.

No essential human art can be reduced and still remain truly human. Creating art with apps is a virtual process. Nothing real is happening. The art of conversation cannot be reduced to faces on a slab.

While the slab is effectively infinite, the apps that fill it are only virtual tools. They are convenient and they may satisfy. But the real world will still be ruled by real tools—like guns and bullets.

Anti-Social Media


In twenty years, the Internet has transformed all of humanity. Although not every person interacts with the Internet directly, few have escaped its influence on those who have.

The Internet has made it easier for people to connect, or to communicate without really connecting, or to interact deceitfully. All this on a worldwide scale that has grown so quickly that we rarely step back and seek the historical perspective.

The Internet made it easier for joiners to join. It united larger numbers of apparently like-minded people than ever before. It also helped loners to find loners, creating small groups of peculiarly-minded people that never could have existed before.

Long before the Internet, there was concern that the world had become more interconnected and more quickly connected. This centered on the spread of disease and was based on the speed of flight. Half a century later, the Internet spreads its mutations of the mind at the speed of light. Only now, strife trumps Ebola.

The one constant in conflicts around the world today is the rise in number and power of social divisions best described as tribal. These seek political power in an overcrowded world, inevitably manifesting themselves in violent forms, from dissent to genocide.

Tribal web sites, like all social media, want numbers. While their size and focus are the opposite of big social media, their ability to connect people is similarly expedited by the Internet.

The “likes” transmitted to these small but excessively extreme sites are more intensive than the “likes” on Facebook and its ilk. Without the Internet, loners connected to these small, unique sites would probably remain alone, rarely making the news.

Not all loners are sociopaths, nor are they all connecting. The Internet has surely brought together more ordinary, lonely people than dangerous loners. But it’s not simple arithmetic. Good connections don’t balance the evil from connecting bad people.

The factions produced by tribal-like activity do not aim to become mass movements. They seek to destroy, dominate, frighten, or otherwise influence both individuals and groups.

Tribal activity comes in many shapes and sizes, and its methods are covert. Yet its use of the Internet for recruitment, publicity, and funding is highly overt and very sophisticated.

They do not concern themselves with broad public opposition since they believe it cannot stop or even slow their fanatical goals. They believe, as they foolishly used to say in Hollywood, there is no such thing as bad publicity.

Down the Software Drain


This blog contains over fifteen posts on programming (so far). They offer a variety of explanations as to why software has declined in recent years. However, this post is less about the fading quality of software than examining its consequences.

First, consider a computer’s three main components: hardware, software, and peopleware. Assuming users are trained and proficient in both hardware and software, we will always have human failings. Once ordinary, these have now become digital.

For every program that’s strictly business, there are thousands whose sole purpose is distraction. Add thousands more that can be either time-savers or time-wasters. Now we have mountains of frantic over-activity yielding mouse-sized usable output.

Connect this person to others on a network and unless strongly roadblocked, apps like email, texting, and their many annoying relatives will drive even a dedicated monk to distraction.

Despite these expanding diversions, people are convinced they can juggle it all and still do the job. The personal delusion of multitasking goes against decades of scientific evidence. And if people can do it all, why do they always say, “I didn’t see that car”?

Software has developed over decades, and we should have seen improved quality, effectiveness, and reliability. Intentional selection should enhance the breed better and faster than blind natural selection. Yet it hasn’t for at least a dozen years.

Although software did advance in the early decades of computing, that progress is slowly eroding. In many (but not all) areas, software is becoming not only less efficient but less effective. But that’s not the worst of it.

Maintenance is essential to keep our software abreast of endless upgrades in hardware and software. Things change so quickly, we are overwhelmed by shortfalls in maintenance; we fail to see it’s totally inadequate. Replacements do not inspire confidence.

I said this post was about consequences. Having listed the causes, I find enumerating the consequences almost superfluous: lost time, wasted effort, missed communications, and lost or corrupted data. Ordinary business transactions are no longer easy, simple, or seamless.

Slipshod software goes hand in hand with careless and undisciplined humans. We can do better. We have done better. How much is lack of education; how much failure of management? Do we not care? Or do we simply lack the will?

Data Versus Feelings


Years ago, before you were born, the poet E. E. Cummings said, “since feeling is first / who pays any attention / to the syntax of things.” Last year, in his book Who Owns The Future?, Jaron Lanier said it again in techno-speak: “The burden can’t be on people to justify themselves against the world of information.”

Clearly, in the intervening years the emphasis has changed. Once human feelings counted most. Now, anything that can be counted is by definition (or by tautology) all that really counts.

In these examples, what counts is what can most easily be counted, i.e., everything digital. Its counterpart, the analog world of reality, cannot be perfectly reduced to ones or zeros and is therefore simply too messy to be measured with precision.
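To see why the reduction can never be perfect, here is a minimal sketch in Python (my own illustration, not anyone’s measurement): squeeze a smooth signal into a handful of bits and some rounding error always remains.

```python
import math

# Quantize a smooth "analog" signal to 3 bits (8 levels) and measure the error.
BITS = 3
LEVELS = 2 ** BITS

def quantize(x: float) -> float:
    """Map a value in [-1, 1] to the nearest of the 2**BITS discrete levels."""
    step = 2.0 / (LEVELS - 1)
    return round((x + 1.0) / step) * step - 1.0

samples = [math.sin(2 * math.pi * t / 100) for t in range(100)]
errors = [abs(s - quantize(s)) for s in samples]

# The digital copy is close, but never exact: rounding error always remains.
print(f"worst-case quantization error: {max(errors):.4f}")
```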

Our lives are being forced into the digital version of the Bed of Procrustes (see also book of same name by Nassim Nicholas Taleb). Unfortunately, too many people are not discomforted, and too many others think digital must be better even if it hurts.

Somewhere between Cummings and Lanier, we have abandoned the right to evaluate things by our feelings. Digital, in its Big Data cloak of Darth Vader, simply outnumbers human feelings.

It’s very important to put this shift into perspective. Thirty years ago, big data lurked in the background. Now, it’s not merely in ascendance, it’s gathering momentum faster than any other human activity. And making Google billions beyond counting.

Thirty years ago, we were rushing to transport all our music onto Compact Discs. We were told it was digital, and therefore better. Sharper ears said no, but the din of the digital racket was too loud.

Yet vinyl still lives, and there are serious efforts to surpass it (see Neil Young’s Pono). Digital sound has been around long enough for people to hear the flaws, and no amount of banging the “digital is better” drum will gloss over the gaps.

The digital world is built from the “syntax of things” but can only approximate human senses and behavior. Whether listening to music or learning about relationships, you can follow big money and big data. Or simply trust your gut—and put feelings first.

The Lost Art of Programming


At the dawn of programming, there wasn’t even a vocabulary. If you said “computer,” you meant women doing manual calculations. The very idea of a program had yet to be invented.

People learned to program because it was the only way to advance their high level research. Many of the scientists who programmed also discovered the foundations of computing.

Not surprisingly, when computing entered academia it took shape as Computer Science. At that point, most of what could be taught were fundamentals. Yet, people had been programming without academic help for at least twenty years.

Add another twenty years, and because it was more practical than theoretical, programming became an engineering discipline. It took all those decades to achieve software engineering.

While programming and engineering have much in common, there were also significant conflicts—not unlike the disparity between engineering and architecture. The buildings of the latter are supported by the former, but engineering cannot supply human functionality, human scale, emotion, or aesthetics.

All the while academia was refining the fundamentals of computing and the practice of its applications, millions of people still learned programming without college degrees. Eventually, vocational schools turned programming into semi-skilled labor.

But nowhere in the proliferation of formal programming education, at any of its levels, has programming produced an identity of its own in the way architecture grew out of engineering.

Software, not unlike architecture, is the creation of mechanisms, objects, and devices for people to use. More than occasional interaction, people live with software and architecture.

The needs of software exceed the practical. As with engineering before it, merely solving the practical falls short of human satisfaction. Architecture proved pleasure is not only possible but desirable.

Programming has progressed from black art to arcane science to cut-and-dried engineering to semi-skilled labor. It makes use of science and engineering but ignores the humanities. What it needs is a massive injection of aesthetics, art, and empathy.

Programming, like architecture, creates more than things for people to use; it creates the things we have to live with. Why can’t these things be enjoyable? Where is the human touch?

The Death of Windows


Many people credit Xerox’s PARC with the creation of today’s ubiquitous Graphical User Interface (GUI). This was also known, less formally, as WIMP: Window, Icon, Menu and Pointing device. Today we point with fingers; then, it was the new mouse.

There wasn’t much new about Menus, but Icons were new, being necessary to graphically represent things. What was really new, and absolutely essential to all GUIs past and present, was the concept of Windows. (Years before Microsoft took the name.)

I saw my first mouse a good ten years before Xerox PARC got started. About the same time, Ivan Sutherland’s Sketchpad hit the conference circuit and everyone saw the computer’s graphic capabilities. Windows came twenty years later.

Demonstrations showed what windows could really do, and a number of things were immediately evident. Each window was like a separate screen, running anything from numbers to graphs to entire operating systems.

You could move windows to any position on the screen and resize them. When you changed the size, the window added vertical and horizontal scroll bars—no matter its size, you could still see all its contents. Each window was its own virtual monitor.
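For readers who never saw those demos, here is a rough sketch of the idea using Python’s standard tkinter toolkit (my own minimal example, not the original software): a resizable window whose content is larger than its frame, with scroll bars keeping all of that content reachable.

```python
import tkinter as tk

# Minimal sketch: a resizable window acting as a "virtual monitor."
# The canvas holds more content than the frame can show at once;
# scroll bars keep all of it reachable however the window is resized.
root = tk.Tk()
root.title("A window as a virtual monitor")

canvas = tk.Canvas(root, scrollregion=(0, 0, 2000, 2000))  # content larger than the frame
vbar = tk.Scrollbar(root, orient=tk.VERTICAL, command=canvas.yview)
hbar = tk.Scrollbar(root, orient=tk.HORIZONTAL, command=canvas.xview)
canvas.configure(yscrollcommand=vbar.set, xscrollcommand=hbar.set)

# Some content that will not fit in the visible window all at once.
for y in range(0, 2000, 100):
    canvas.create_text(100, y + 20, text=f"content line at y={y}", anchor="w")

vbar.pack(side=tk.RIGHT, fill=tk.Y)
hbar.pack(side=tk.BOTTOM, fill=tk.X)
canvas.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)

root.mainloop()
```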

Early demos of various windows-based operating systems showed many windows of various sizes. Nowadays, not only do you rarely see screens doing this, most software makes the assumption that its window requires the whole screen.

With the massive shift to smaller devices and smaller displays, windows are no longer needed. There may be many millions of larger displays on desktops and full-size laptops, but Windows 8 shows the push to simpler displays on all devices.

I have to wonder which came first: less need for windows or programmers who lacked the necessary skills? Is it possible that the majority of new programmers come from the world of small devices and have no experience with resizable windows?

Given the quality of the windows I see, I have to believe that too many programmers lack experience with full-sized displays. Is this the simple reason why so many windows don’t work correctly?

Hardware Gain, Productivity Loss


At the dawn of what became known as personal computers, I was asked why I didn’t buy computer A. I inquired as to why I should and was told it had much more potential. In the future, I said. What I want now, need now, is productivity not promises.

I have always bought computers as tools to do what I needed. Then, as now, I didn’t want a hobby, and certainly not a toy. Software became truly productive for me in 1992 with Windows 3.1x. I still use parts of it as well as applications acquired then.

One of its features was Object Linking and Embedding (OLE). This allowed a file to link to other files, creating an instant hierarchy. I had waited fifteen years for this breakthrough.

With this tool, I built interrelated task files enabling me to control many projects and participate in many organizations. But every time Microsoft forced me to change operating systems, I lost some of that productivity. Now, OLE is only dim history.
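OLE itself was far richer than anything I can show here, but the core idea I relied on, a file that links out to other files and so forms an instant hierarchy, can be sketched in a few lines of Python (the file names are hypothetical, and this is only my own illustration of the linking idea, not OLE’s actual mechanism):

```python
# Hypothetical task files, each listing links to related files.
# Following the links from a top-level file yields an instant hierarchy,
# loosely in the spirit of the linking that OLE offered inside documents.
links = {
    "projects.txt": ["org-a.txt", "org-b.txt"],
    "org-a.txt": ["meeting-notes.txt"],
    "org-b.txt": [],
    "meeting-notes.txt": [],
}

def print_hierarchy(name: str, depth: int = 0) -> None:
    """Walk the linked files depth-first and print them as an outline."""
    print("  " * depth + name)
    for child in links.get(name, []):
        print_hierarchy(child, depth + 1)

print_hierarchy("projects.txt")
```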

It died with Vista (which I skipped) and has no replacement in Windows 7 or 8. Now all my task control is relegated to my old XP running as a virtual machine. Despite buying a newer, faster computer, my productivity has taken another, bigger hit.

Some people think software (especially operating systems) must change to accommodate new hardware. Yet, SVGA connections work on all my monitors and OSs, even the newest. Obviously, the degree of hardware compatibility is up to the manufacturer.

If hardware is faster (and it is) and bigger (storage is crazy cheap), then why is there so little gain (for me, always a loss) in productivity? Might as well ask why schools cost more and deliver less. Or why government gets bigger and does less.

Increased overhead or administrative bloat, it amounts to the same thing. Easily 90% of operating system code exists for situations that will never affect the home user. I made this clear back in 1992, when I wrote the series “Impersonal Computing.”

Over the years, personal computer software has become morbidly obese by meeting everyone’s needs—except those of home users. Given hardware’s capability, it is less productive. And it’s so top-heavy it’s a wonder it doesn’t fall on its face more often.
