Digital Minefield

Why The Machines Are Winning

Unequal Internet Power


As our digital lives expand, we think less and less about how it all works. The more we use The Cloud, the less we concern ourselves with its details. Whether smart phone or computer, social media or Internet, we take too much for granted.

The digital universe is a complex amalgam of hardware and software, supported by millions of techies from electricians to systems designers. This post focuses on a tiny but essential piece of hardware we’re all using right this second.

You and this blog are connected to the Internet at different locations. At your end is a device similar to the one at the blog’s end. They’re called modems (for reasons no longer relevant).

Modems translate your Internet requests into tiny packets of information that travel (in non-trivial ways) across the world (or across town) to the specific modem linking the Internet to the information you’ve requested.
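For the technically curious, here is a minimal sketch of that round trip in Python, using only the standard library. The host example.com is just a stand-in for whatever site you have requested, and the splitting into packets happens invisibly in the network stack beneath this code.

```python
# A minimal sketch of the request/response round trip described above.
# Uses only Python's standard library; "example.com" is a stand-in host.
import http.client

conn = http.client.HTTPSConnection("example.com", timeout=10)
conn.request("GET", "/")          # the network stack splits this into packets
response = conn.getresponse()     # packets from the far end, reassembled into a reply
print(response.status, response.reason)
print(len(response.read()), "bytes received")
conn.close()
```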

The modem at your end could be hiding inside your smart phone or sitting atop your computer. The modem at the other end, whether at this blog or somewhere in the bowels of the Google planetoid, is constantly talking to your modem over the Internet.

Your modem is only part of your Internet connection. In addition, there’s your Internet Service Provider (ISP) with its hookups (cable, DSL, 4G). And your modem may be paired with routers for Ethernet and WiFi. Whatever the combination, they all need power.

Some modems run on batteries and some have battery backup. A modem in a smart phone uses its battery. Google modems probably have their own electric company. Hundreds of millions of other modems rely on an AC plug in a wall socket.

These computers depend on the same power we used a hundred years ago. Better protected now, this power is still vulnerable to lightning strikes, terrorists, cars crashing into power poles (it happened here), solar flares, and other vagaries of the universe.

Power simply does not exist everywhere, at all times, and with perfect uniformity. But when it’s interrupted or raised (surges) or lowered (brownouts), it’s much more likely to be at your end than anywhere on the Internet or the big servers you access.

Are our digital lives ascending to the clouds or are we only falling further into the rabbit hole? Either way, when your power goes out you may be sitting in the dark, wondering where everyone went. They’re still there; it’s you who’s disappeared.

Data Versus Feelings


Years ago, before you were born, the poet E. E. Cummings said, “since feeling is first / who pays any attention / to the syntax of things.” Last year, in his book Who Owns The Future?, Jaron Lanier said it again in techno-speak: “The burden can’t be on people to justify themselves against the world of information.”

Clearly, in the intervening years the emphasis has changed. Once human feelings counted most. Now, anything that can be counted is by definition (or by tautology) all that really counts.

In these examples, what counts is what can most easily be counted, i.e., everything digital. Its counterpart, the analog world of reality, cannot be perfectly reduced to ones and zeros and is therefore simply too messy to be measured with precision.
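A tiny illustration of that messiness, sketched in Python: squeezing a smooth analog value into a fixed number of digital levels always leaves a residual error. The bit depth and signal below are arbitrary choices for illustration, not anyone’s real audio format.

```python
# A minimal sketch of the analog-to-digital gap: rounding a continuous value
# to one of a finite set of levels always loses a little information.
import math

def quantize(x, bits=4):
    """Round a value in [-1.0, 1.0] to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return round(x / step) * step

analog = math.sin(2 * math.pi * 0.3)   # an arbitrary "analog" sample
digital = quantize(analog)
print(f"analog:  {analog:.6f}")
print(f"digital: {digital:.6f}")
print(f"error:   {abs(analog - digital):.6f}")   # almost never exactly zero
```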

Our lives are being forced into the digital version of the Bed of Procrustes (see also book of same name by Nassim Nicholas Taleb). Unfortunately, too many people are not discomforted, and too many others think digital must be better even if it hurts.

Somewhere between Cummings and Lanier, we have abandoned the right to evaluate things by our feelings. Digital, in its Big Data cloak of Darth Vader, simply outnumbers human feelings.

It’s very important to put this shift into perspective. Thirty years ago, big data lurked in the background. Now, it’s not merely in ascendance, it’s gathering momentum faster than any other human activity. And making Google billions beyond counting.

Thirty years ago, we were rushing to transport all our music onto Compact Discs. We were told it was digital, and therefore better. Sharper ears said no, but the din of the digital racket was too loud.

Yet vinyl still lives, and there are serious efforts to surpass it (see Neil Young’s Pono). Digital sound has been around long enough for people to hear the flaws, and no amount of banging the “digital is better” drum will gloss over the gaps.

The digital world is built from the “syntax of things” but can only approximate human senses and behavior. Whether listening to music or learning about relationships, you can follow big money and big data. Or simply trust your gut—and put feelings first.

The Lost Art of Programming


At the dawn of programming, there wasn’t even a vocabulary. If you said “computer,” you meant women doing manual calculations. The very idea of a program had yet to be invented.

People learned to program because it was the only way to advance their high-level research. Many of the scientists who programmed also discovered the foundations of computing.

Not surprisingly, when computing entered academia it took shape as Computer Science. At that point, most of what could be taught were fundamentals. Yet, people had been programming without academic help for at least twenty years.

Add another twenty years, and because it was more practical than theoretical, programming became an engineering discipline. It took all those decades to achieve software engineering.

While programming and engineering have much in common, there were also significant conflicts—not unlike the disparity between engineering and architecture. The buildings of the latter are supported by the former, but engineering cannot supply human functionality, human scale, emotion, or aesthetics.

All the while academia was refining the fundamentals of computing and the practice of its applications, millions of people still learned programming without college degrees. Eventually, vocational schools turned programming into semi-skilled labor.

But nowhere in the proliferation of formal programming education, at any of its levels, has programming produced an identity of its own in the way architecture grew out of engineering.

Software, not unlike architecture, is the creation of mechanisms, objectives, and devices for people to use. More than occasional interaction, people live with software and architecture.

The needs of software exceed the practical. As engineering showed before it, solving only the practical falls short of human satisfaction. Architecture proved pleasure is not only possible but desirable.

Programming has progressed from black art to arcane science to cut-and-dried engineering to semi-skilled labor. It makes use of science and engineering but ignores the humanities. What it needs is a massive injection of aesthetics, art, and empathy.

Programming, like architecture, creates more than things for people to use; it creates the things we have to live with. Why can’t these things be enjoyable? Where is the human touch?

The Death of Windows


Many people credit Xerox’s PARC with the creation of today’s ubiquitous Graphical User Interface (GUI). This was also known, less formally, as WIMP: Window, Icon, Menu, and Pointing device. Today we point with fingers; back then, it was the new mouse.

There wasn’t much new about Menus, but Icons were new, being necessary to graphically represent things. What was really new, and absolutely essential to all GUIs past and present, was the concept of Windows. (Years before Microsoft took the name.)

I saw my first mouse a good ten years before Xerox PARC got started. About the same time, Ivan Sutherland’s Sketchpad hit the conference circuit and everyone saw the computer’s graphic capabilities. Windows came twenty years later.

Demonstrations showed what windows could really do, and a number of things were immediately evident. Each window was like a separate screen, running anything from numbers to graphs to entire operating systems.

You could move windows to any position on the screen and resize them. If you changed the size, the window added vertical and horizontal scroll bars—no matter its size, you could still see all its contents. Each window was its own virtual monitor.
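For readers who have never built one, here is a minimal sketch of that behavior using Tkinter from Python’s standard library: a resizable window whose scroll bars keep oversized content reachable at any size. The canvas dimensions and labels are just placeholders.

```python
# A minimal sketch (Tkinter, Python standard library) of a resizable window
# whose scroll bars keep all of its content reachable at any window size.
import tkinter as tk

root = tk.Tk()
root.title("A window as its own virtual monitor")

# Content area larger than the window itself.
canvas = tk.Canvas(root, width=300, height=200, scrollregion=(0, 0, 1200, 1200))
vbar = tk.Scrollbar(root, orient="vertical", command=canvas.yview)
hbar = tk.Scrollbar(root, orient="horizontal", command=canvas.xview)
canvas.configure(yscrollcommand=vbar.set, xscrollcommand=hbar.set)

# Placeholder content scattered well beyond the visible area.
for i in range(0, 1200, 150):
    canvas.create_text(i + 60, i + 60, text=f"content at ({i}, {i})")

canvas.grid(row=0, column=0, sticky="nsew")
vbar.grid(row=0, column=1, sticky="ns")
hbar.grid(row=1, column=0, sticky="ew")
root.rowconfigure(0, weight=1)      # let the canvas grow and shrink with the window
root.columnconfigure(0, weight=1)

root.mainloop()
```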

Early demos of windows-based operating systems showed many windows of various sizes. Nowadays, not only do you rarely see screens used that way, but most software assumes its window requires the whole screen.

The massive shift to smaller devices with smaller displays has left little need for windows. There may be many millions of larger displays on desktops and full-size laptops, but Windows 8 shows the push to simpler displays on all devices.

I have to wonder which came first: less need for windows or programmers who lacked the necessary skills? Is it possible that the majority of new programmers come from the world of small devices and have no experience with resizable windows?

Given the quality of the windows I see, I have to believe that too many programmers lack experience with full-sized displays. Is this the simple reason why so many windows don’t work correctly?

Hardware Gain, Productivity Loss


At the dawn of what became known as personal computers, I was asked why I didn’t buy computer A. I inquired as to why I should and was told it had much more potential. In the future, I said. What I want now, need now, is productivity, not promises.

I have always bought computers as tools to do what I needed. Then, as now, I didn’t want a hobby, and certainly not a toy. Software became truly productive for me in 1992 with Windows 3.1x. I still use parts of it as well as applications acquired then.

One of its features was Object Linking and Embedding (OLE). This allowed a file to link to other files, creating an instant hierarchy. I had waited fifteen years for this breakthrough.

With this tool, I built interrelated task files enabling me to control many projects and participate in many organizations. But every time Microsoft forced me to change operating systems, I lost some of that productivity. Now, OLE is only dim history.

It died with Vista (which I skipped) and has no replacement in Windows 7 or 8. Now all my task control is relegated to my old XP running as a virtual machine. Despite buying a newer, faster computer, my productivity has taken another, bigger hit.

Some people think software (especially operating systems) must change to accommodate new hardware. Yet, SVGA connections work on all my monitors and OSs, even the newest. Obviously, the degree of hardware compatibility is up to the manufacturer.

If hardware is faster (and it is) and bigger (storage is crazy cheap), then why is there so little gain (for me, always a loss) in productivity? Might as well ask why schools cost more and deliver less. Or why government gets bigger and does less.

Increased overhead or administrative bloat, it amounts to the same thing. Easily 90% of operating system code exists for situations that will never affect the home user. I made this clear back in 1992, when I wrote the series “Impersonal Computing.”

Over the years, personal computer software has become morbidly obese by meeting everyone’s needs—except home users. Given hardware’s capability, it is less productive. And it’s so top heavy it’s a wonder it doesn’t fall on its face more often.

Bugs Are Not Features


I keep saying a computer can do anything, but I can’t get my new computer to do practically anything I need. Never mind what I’d like it to do, i.e., what I want; I can’t get it to do the essentials of what I must have, i.e., what I truly need.

To prove this, we just have to look at the previous paragraph. I don’t know about you, but I have never seen “i.e.,” with a space after the first period. But the word processor I’ve been given insists “i.e.,” is misspelled without that space! (“E.g.,” also.)

Back in the day, when you encountered programs with inconsistencies you called them “buggy.” If it was your software, your job was to get rid of the bugs: major bugs first, minor bugs last. And bugs found by clients always came first.

If it wasn’t your software, you reported the bugs with enough specificity to help others fix the problem. Sometimes, an accurate description of what the bug was (and where and when and how) kept you busy for days. It was a one-day turnaround.

I’m not suggesting a return to those old days, but I sure as hell don’t care for what passes for software nowadays. I just tried to do a page preview and the program crashed! Totally. Suddenly, all the files I was working on were gone. This is acceptable?

My programming experience tells me what to expect from software. When a simple task is far too slow, I yell at it. When a long task doesn’t bother to show me its progress, I yell. There are many reasons to yell, because this kind of incompetence is frustrating.

It’s not simply that I expect more because I know more. It’s that most people don’t realize the computer can do anything, and that if it isn’t doing what they want, it’s the programmer’s fault, not theirs.

Does that explain why all this shoddiness is acceptable? Are people so bamboozled by the complexities of programming they think it’s as good as it can be? Do dazzling graphics blind people to the extent they can’t see how clunky the software actually is?

Are our expectations seriously that low? Are we really willing to accept bug-ridden software without complaining? Are we so mystified by the high-priests of programming that we feel unworthy to challenge their efforts? Well, I’m not—it’s buggy.

Mr. GoodByte Rides Again


My second computer column alias was Mr. GoodByte, a name conflating car maintenance and computing. It used automobile analogies to illustrate computer concepts. The wrench still fits.

Yet there is a fundamental difference in how we relate to cars and computers. In the last century, we were known as a car culture. Digital is pervasive in this century, but is it a culture?

Unlike the car, digital things take many forms and some are invisible. There is no simple icon to represent all digital objects, whereas a car is still a box with four wheels (color may vary).

The cultural key is how we feel about these objects. People not only love specific cars, they revere them. No one loves any computer. What we love about cars is the hardware. Software is the ghost in the machine: invisible, unknowable, and unlovable.

First were the mainframe computers (very big), and then mini-computers (small in comparison but much larger than their successors, the micro-computers). Now called personal computers, they’re like a car controlled by one person at a time.

Early cars were a hobby, just like early home computers. Then came reliability—and roads, gas stations, and roadside everything. Cars could be fixed and kept running, even antique model T’s. Old computers barely collect dust in a few museums.

Cars were never built to be upgraded. Many people traded in every three years simply because they wanted a new car. But this was only possible with a large free market for used cars.

There is no used-computer market. What’s a computer worth after three years, besides scrap? Computers cost little and are disposable; cars cost much more and keep their value. We rarely buy new computers because we want to, but because we have to.

If cars were made to become obsolete like computers, only the rich would buy new. The rest of us would find ways to keep our old cars rolling. Only a mad genius would fix an old computer.

Antiques, restorations, hot rods, low riders, and resto-mods are not just showpieces at car-gatherings of every shape, size, and price all over the US. Owners enjoy, drive, and love them. Such feelings never did and never will exist for personal computers.

Absolutes and Assumptions


In the past, I’ve said the final responsibility for good programming rests with management. True, but it’s time for a different question: Is bad code simply lack of professionalism?

Fifty years ago this summer, I began programming as an intern. That September, I got my first job. I had plenty of quality guidance, but no one ever examined my work in detail.

I don’t think it needed close scrutiny. I like to think that was because I worked hard at becoming a professional. I took the opportunity seriously and constantly tried to learn and improve.

Seeing the quality of work out there now, including major software companies, I wonder if many programmers aspire to professional excellence. Or care. Or even know what it means.

How does this relate to the title of this post? Programmers who use absolute references or make unwarranted assumptions cannot be considered, in my opinion, to be professionals.

Assembly language taught me never to make absolute jumps; i.e., don’t jump ahead or back by a fixed number of bytes or instructions. All jumps had to reference labels, a place with a name. These jumps were always valid, even if the intervening instructions changed.

The last post talked of moving to a new operating system. More than half the software I’m installing has windows that don’t fit on the screen—regardless of the resolution. This is also true for operating system components. (Are you listening, Microsoft?)

After all these years, I still see messages disappear before they can be read. If it’s important enough to display, then the coder must make sure it can be seen (or stored to be read later).

Bad windows and flash messages reveal wrong assumptions. Microsoft’s Object Linking and Embedding (OLE) still uses absolute paths. We knew better fifty years ago. What happened?
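A minimal sketch of why that matters, in Python with purely hypothetical folder names: a link stored as an absolute path breaks the moment the files move, while a path stored relative to the linking document keeps working.

```python
# A minimal sketch of absolute versus relative links (hypothetical folder names).
from pathlib import Path

old_home = Path("C:/Projects/TaskA")     # where the linking document used to live
new_home = Path("D:/Archive/TaskA")      # where the whole folder was moved

absolute_link = old_home / "notes.doc"   # baked-in location: breaks after the move
relative_link = Path("notes.doc")        # resolved against the document's own folder

print("absolute link still points at:", absolute_link)
print("relative link now resolves to:", new_home / relative_link)
```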

The Digital Midden


For the last couple of weeks, I’ve been trying to upgrade my two main computers to new hardware and new operating systems. In the past, such upgrades have always reduced productivity because not all my programs survive the move. Can I do better this time?

Maybe, but what has become abundantly clear is that there is no loss of data. As I look for things to transfer, I find far too many files over twenty years old! Storage isn’t the problem; deciding whether they still have any value or utility is.

Only rarely does a file name (regardless of length) tell me what I need to know to evaluate its importance. File names lack any context of their origin, and human memory is of little help.

Storage capacity has grown exponentially; we think nothing of saving everything. I save every email (and attachment) I read and send. I’ve done this since I began email some 25 years ago.

I have multiple copies of every paper I’ve written, from draft to final presentation or publication. I do the same for all my websites and blogs, whose Internet presence I can only estimate in the neighborhood of three million words. Or maybe four.

My data is important so it’s backed up on two hard drives. But what about all the programs I don’t use? And uninstalled programs? Why keep multiple upgrades for any program?

I’ve put lots of time into organizing data and programs over the years, yet much of it is still obscure. It’s easier to keep the files than to take the time to decide if they should be saved or not.

When you have what feels like an infinite warehouse (about 5 terabytes) that could fit on a single desk, when you can search for any file in minutes, you tend to let things accumulate.

Will the archeologists of the future dig down into the layers of our digital middens? Will they care? Or will all these bits end up being unexplored substrata for future civilizations to build on?

Man Versus Machine


Before I knew anything about computers, I knew a lot about chess. Never a good player, I enjoyed the mental challenge of the game and especially its history and personalities. Like most things I learned as a teenager, I learned chess from books.

Like many chess players I read about, I also had my quirks. Unlike them, I cared less about winning than exploring possibilities. Because of this, I preferred offense to defense: a good way to lose. I thought end games dull, meaning more losses.

Despite these flaws, I enjoyed playing. This changed when I got my first computer chess game, Sargon, in the late 70s. It was a good program and a decent chess player (for my limited skills).

I eventually lost interest because I wanted to play at the end of the day to relax. Naturally, I made mistakes because I was tired. It became annoying because the program never made a mistake.

Psychologically, you may know you can make better moves than your machine opponent but you also know its mechanistic approach will never lapse—even in the slightest. After a while, you realize the only way to win is to be a better machine.

A phrase often heard in many sports says, winning is the ability of one side to impose its will on the other. No team, no person does this without extreme emotional commitment. Playing a machine is never more than practice. Where is the emotion?

Sports are shared experiences, which can be life-changing. Computers only acquire data. Competition means nothing to a computer. It doesn’t actually play chess, it simulates playing.

It doesn’t care about winning. It doesn’t even know what it means to win; that’s just what it’s programmed to do. Most certainly, it doesn’t fear losing. It doesn’t need any motivation.

I could go on and on about what computers, even world-class chess playing computers, can’t do. Most importantly, they will never truly know chess. No computer will ever enjoy learning about chess as I did, even with my limited abilities.

No computer can ever do more than the imagination of its programmers allows it to do. A few may be superior to humans at specific tasks, but would they want to leave a burning building?
