Digital Minefield

Why The Machines Are Winning

Virtual Shopping

Last week’s post (“Needles and Haystacks”) showed the advantages of searching structured data. These were contrasted with time-wasting searches using brute force computer power.

Such searches not only waste our time, they are incredibly wasteful of computer time and power. If providers of structured data could use just a fraction of this squandered power, how many ways could they enhance our access to their data?

Let’s take a simple example from last week: libraries. What could be achieved by adding a relatively small amount of computing power to online access of a library’s catalog? Close your eyes. Picture yourself at the library versus their website.

The library you’re envisioning need not be confined to the imagination. With a relatively small amount of additional computer power it can be made real, that is, virtually real.

If you browsed the library virtually, you could head out now because you found a book you really wanted, or you could take your time (nothing urgent). Same library, enhanced by software.

There are other advantages to virtual libraries. You can browse all the branches in a big countywide system. If your library is a member, you can do the same for all the libraries of the Interlibrary Loan Service. An infinite virtual library.

You may question additional public dollars for libraries, but the same technology can be used for bookstores. Virtual browsing enhances every online retail experience already using databases.

People forget that before computers, it was common to shop virtually—from catalogs. In fact, it was easier to browse those catalogs than today’s crude database websites. Going virtual would make those sites outperform catalogs, and for less money.

In the past twenty years, computing has transformed from a text-based box of limited application to a plethora of graphics-based devices doing everything. It wasn’t simply the increases in speed and storage; it was using the power for unimaginable graphics.

Today, power is being drained by our unstructured searches and social media meandering. All to be monetized by Big Data. Why not add power to improve graphics? Wouldn’t virtual shopping boost our economy? Don’t businesses need it? Don’t you?

Needles and Haystacks

The key to finding things is knowing where they’ve been put. Any kitchen, whether professional, tiny, or average, poses no problem for those preparing meals, locating utensils and ingredients. All kitchens have systems for where things belong.

Shifting backwards only slightly in time, we find another kind of food-related system: the supermarket. Whether high-profile end caps, butcher counters, or any of the specific-purpose shelves, we see organization (and know inventory is out of public view).

What you may not realize is that from any angle, from any moment in time, what you’re looking at is a database. Each item in the market (or in your kitchen) has its name and its place.

Many people think increasing computer power will not only do away with the need for databases but also the brick and mortar stores that use them. Why waste time organizing, they say, when computers find things as fast as we type. But can they really?

If you only have a haystack, it’s a cinch to find a needle using fast, powerful computers. However, a needle and a haystack do constitute a database, if only of two items. Increase the items a hundredfold, and searching without structure becomes chaotic.
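The difference between brute force and structure can be sketched in a few lines of Python. The item names here are invented; the point is that an index (a dict) goes straight to an item’s named place, while an unstructured scan must examine everything:

```python
import timeit

# An unstructured "haystack": items in no particular order.
haystack = [f"item-{i}" for i in range(100_000)] + ["needle"]

# A structured "database": every item has its name and its place.
index = {name: position for position, name in enumerate(haystack)}

def scan(target):
    # Brute force: examine items one by one until the needle turns up.
    for position, name in enumerate(haystack):
        if name == target:
            return position
    return -1

def lookup(target):
    # Structure: go straight to where the item belongs.
    return index.get(target, -1)

assert scan("needle") == lookup("needle")

# The gap between the two only grows as the haystack grows.
print("scan:  ", timeit.timeit(lambda: scan("needle"), number=100))
print("lookup:", timeit.timeit(lambda: lookup("needle"), number=100))
```

Building the index takes time up front, which is exactly the "human thought and planning" the brute-force camp wants to skip; it pays for itself on every search after the first.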

The proponents of the brute force computer approach believe power trumps organization. Why bother with human thought and planning, when we have practically unlimited computer power?

Even if they are right (they’re not), they ignore the other benefits of organization. How often do you buy nothing more than what’s on your shopping list? Other items you buy, for need or impulse, are easily found precisely because of the store’s organization.

The same is true everywhere for retail. Still more valuable are the discoveries we make in libraries. Because the library shelves are organized, we chance upon new authors and titles. Roaming the stacks is the ultimate serendipitous learning experience.

Library books will tell you that humans are smart animals not simply because of the size of their brains but rather how their brains are organized. Evolution improved on raw brute force.

Without the organization of a database, everything is just a jumble of shoes and ships and sealing-wax. With organization, we can easily browse kitchens for a meal, supermarkets for next week’s meals, and libraries for books on eating healthier.

The programs we search the Internet with are called browsers. Without a structure, they cannot browse. Without better search techniques, they can only keep us wandering through the haystack—which is exactly how the browser makers get rich.

Unequal Internet Power

As our digital lives expand, we think less and less about how it all works. The more we use The Cloud, the less we concern ourselves with its details. Whether smart phone or computer, social media or Internet, we take too much for granted.

The digital universe is a complex amalgam of hardware and software, supported by millions of techies from electricians to systems designers. This post focuses on a tiny but essential piece of hardware we’re all using right this second.

You and this blog are connected to the Internet at different locations. At your end is a device similar to the one at the blog’s end. They’re called modems (short for modulator-demodulator, a derivation no longer relevant).

Modems translate your Internet requests into tiny packets of information that travel (in non-trivial ways) across the world (or across town) to the specific modem linking the Internet to the information you’ve requested.
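A toy sketch of that idea, with an invented packet format (real protocols like TCP/IP are far more involved): a request is split into small numbered pieces, which may travel by different routes and arrive out of order, then be reassembled at the far end.

```python
import random

def to_packets(message, size=8):
    # Split a request into small packets, each tagged with a
    # sequence number so the far end can put them back in order.
    return [(i // size, message[i:i + size])
            for i in range(0, len(message), size)]

def reassemble(packets):
    # Order of arrival doesn't matter: sort by sequence number,
    # then stitch the payloads back together.
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("GET /digital-minefield HTTP/1.1")
random.shuffle(packets)   # travel "in non-trivial ways"
assert reassemble(packets) == "GET /digital-minefield HTTP/1.1"
```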

The modem at your end could be hiding inside your smart phone or sitting atop your computer. The modem at the other end, whether at this blog or somewhere in the bowels of the Google planetoid, is constantly talking to your modem over the Internet.

Your modem is only part of your Internet connection. There is also your Internet Service Provider (ISP) with its hookups (cable, DSL, 4G). And your modem may have routers for Ethernet and WiFi. Whatever the combination, they all need power.

Some modems run on batteries and some have battery backup. A modem in a smart phone uses its battery. Google’s modems probably have their own electric company. Hundreds of millions of other modems rely on an AC plug in a wall socket.

These computers depend on the same power we used a hundred years ago. Better protected now, this power is still vulnerable to lightning strikes, terrorists, cars crashing into power poles (it happened here), solar flares, and other vagaries of the universe.

Power simply does not exist everywhere, at all times, and with perfect uniformity. But when it’s interrupted or raised (surges) or lowered (brownouts), it’s much more likely to be at your end than anywhere on the Internet or the big servers you access.

Are our digital lives ascending to the clouds or are we only falling further into the rabbit hole? Either way, when your power goes out you may be sitting in the dark, wondering where everyone went. They’re still there; it’s you who’s disappeared.

Data Versus Feelings

Years ago, before you were born, the poet E. E. Cummings said, “since feeling is first / who pays any attention / to the syntax of things.” Last year, in his book Who Owns The Future?, Jaron Lanier said it again in techno-speak: “The burden can’t be on people to justify themselves against the world of information.”

Clearly, in the intervening years the emphasis has changed. Once human feelings counted most. Now, anything that can be counted is by definition (or by tautology) all that really counts.

In these examples, what counts is what can most easily be counted, i.e., everything digital. Its counterpart, the analog world of reality, cannot be perfectly reduced to ones or zeros and is therefore simply too messy to be measured with precision.
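The gap between analog and digital can be made concrete with a quantization sketch. The function below (an illustration, not any standard library routine) rounds a continuous value to the nearest of 2^n digital steps; more bits shrink the error, but for most real-world values it never reaches exactly zero:

```python
def quantize(x, bits):
    # Map an "analog" value in the range -1..1 onto one of
    # 2**bits evenly spaced digital levels, then back again.
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return round((x + 1.0) / step) * step - 1.0

analog = 0.7
for bits in (4, 8, 16):
    digital = quantize(analog, bits)
    print(f"{bits:2d} bits -> error {abs(analog - digital):.6f}")
# Each extra bit halves the worst-case error, but the rounding
# error itself never quite disappears.
```

This is the sense in which the analog world "cannot be perfectly reduced to ones or zeros": digital can approximate it to any chosen precision, but only approximate it.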

Our lives are being forced into the digital version of the Bed of Procrustes (see also book of same name by Nassim Nicholas Taleb). Unfortunately, too many people are not discomforted, and too many others think digital must be better even if it hurts.

Somewhere between Cummings and Lanier, we have abandoned the right to evaluate things by our feelings. Digital, in its Big Data cloak of Darth Vader, simply outnumbers human feelings.

It’s very important to put this shift into perspective. Thirty years ago, big data lurked in the background. Now, it’s not merely in ascendance, it’s gathering momentum faster than any other human activity. And making Google billions beyond counting.

Thirty years ago, we were rushing to transport all our music onto Compact Discs. We were told it was digital, and therefore better. Sharper ears said no, but the din of the digital racket was too loud.

Yet vinyl still lives, and there are serious efforts to surpass it (see Neil Young’s Pono). Digital sound has been around long enough for people to hear the flaws, and no amount of banging the “digital is better” drum will gloss over the gaps.

The digital world is built from the “syntax of things” but can only approximate human senses and behavior. Whether listening to music or learning about relationships, you can follow big money and big data. Or simply trust your gut—and put feelings first.

The Lost Art of Programming

At the dawn of programming, there wasn’t even a vocabulary. If you said “computer,” you meant women doing manual calculations. The very idea of a program had yet to be invented.

People learned to program because it was the only way to advance their high level research. Many of the scientists who programmed also discovered the foundations of computing.

Not surprisingly, when computing entered academia it took shape as Computer Science. At that point, most of what could be taught were fundamentals. Yet, people had been programming without academic help for at least twenty years.

Add another twenty years, and because it was more practical than theoretical, programming became an engineering discipline. It took all those decades to achieve software engineering.

While programming and engineering have much in common, there were also significant conflicts—not unlike the disparity between engineering and architecture. The buildings of the latter are supported by the former, but engineering cannot supply human functionality, human scale, emotion, or aesthetics.

All the while academia was refining the fundamentals of computing and the practice of its applications, millions of people still learned programming without college degrees. Eventually, vocational schools turned programming into semi-skilled labor.

But nowhere in the proliferation of formal programming education, at all its levels, has it produced an identity of its own in the way architecture grew out of engineering.

Software, not unlike architecture, is the creation of mechanisms, objectives, and devices for people to use. More than occasional interaction, people live with software and architecture.

The needs of software exceed the practical. Like engineering before it, solving the practical falls short in human satisfaction. Architecture proved pleasure is not only possible but desirable.

Programming has progressed from black art to arcane science to cut-and-dried engineering to semi-skilled labor. It makes use of science and engineering but ignores the humanities. What it needs is a massive injection of aesthetics, art, and empathy.

Programming, like architecture, creates more than things for people to use; it creates the things we have to live with. Why can’t these things be enjoyable? Where is the human touch?

The Death of Windows

Many people credit Xerox’s PARC with the creation of today’s ubiquitous Graphical User Interface (GUI). This was also known, less formally, as WIMP: Window, Icon, Menu, and Pointing device. Today we point with fingers; then, it was the new mouse.

There wasn’t much new about Menus, but Icons were new, being necessary to graphically represent things. What was really new, and absolutely essential to all GUIs past and present, was the concept of Windows. (Years before Microsoft took the name.)

I saw my first mouse a good ten years before Xerox PARC got started. About the same time, Ivan Sutherland’s Sketchpad hit the conference circuit and everyone saw the computer’s graphic capabilities. Windows came twenty years later.

Demonstrations showed what windows could really do, and a number of things were immediately evident. Each window was like a separate screen, running anything from numbers to graphs to entire operating systems.

You could move windows to any position on the screen and resize them. Change the size, and the window added vertical and horizontal scroll bars—no matter its size, you could still see all its contents. Each window was its own virtual monitor.
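The rule that paragraph describes can be sketched as a small function (names and sizes invented, in pixels): scroll bars appear whenever the content no longer fits the window, and since each bar consumes screen space, one bar can force the other to appear.

```python
def scroll_bars(content_w, content_h, window_w, window_h, bar=16):
    # Does the content overflow the window in either direction?
    need_h = content_w > window_w     # horizontal bar needed?
    need_v = content_h > window_h     # vertical bar needed?
    # Each bar eats `bar` pixels of the window, which can
    # force the other bar to appear as well.
    if need_h and content_h > window_h - bar:
        need_v = True
    if need_v and content_w > window_w - bar:
        need_h = True
    return need_h, need_v

# A 1200x900 document in an 800x600 window needs both bars...
assert scroll_bars(1200, 900, 800, 600) == (True, True)
# ...and needs none once the window is at least content-sized.
assert scroll_bars(1200, 900, 1280, 1024) == (False, False)
```

However the window is resized, this guarantee (all content stays reachable) is what made each window its own virtual monitor.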

Early demos of various windows-based operating systems showed many windows of various sizes. Nowadays, not only do you rarely see screens doing this, most software makes the assumption that its window requires the whole screen.

With the massive shift to smaller devices and smaller displays, windows are needed less and less. There may be many millions of larger displays on desktops and full-size laptops, but Windows 8 shows the push to simpler displays on all devices.

I have to wonder which came first: less need for windows or programmers who lacked the necessary skills? Is it possible that the majority of new programmers come from the world of small devices and have no experiences with resizable windows?

Given the quality of the windows I see, I have to believe that too many programmers lack experience with full-sized displays. Is this the simple reason why so many windows don’t work correctly?

Hardware Gain, Productivity Loss

At the dawn of what became known as personal computers, I was asked why I didn’t buy computer A. I inquired as to why I should and was told it had much more potential. In the future, I said. What I want now, need now, is productivity not promises.

I have always bought computers as tools to do what I needed. Then, as now, I didn’t want a hobby, and certainly not a toy. Software became truly productive for me in 1992 with Windows 3.1x. I still use parts of it as well as applications acquired then.

One of its features was Object Linking and Embedding (OLE). This allowed a file to link to other files, creating an instant hierarchy. I had waited fifteen years for this breakthrough.
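A sketch, with invented file names, of the kind of link-based hierarchy that linking makes possible: one task file refers to others, so following links from the top walks the whole project tree.

```python
# Each file names the files it links to (invented examples).
links = {
    "projects.txt": ["house.txt", "club.txt"],
    "house.txt":    ["roof.txt", "garden.txt"],
    "club.txt":     ["newsletter.txt"],
}

def walk(name, depth=0):
    # Yield each file with its depth in the hierarchy,
    # following links depth-first from the starting file.
    yield depth, name
    for child in links.get(name, []):
        yield from walk(child, depth + 1)

for depth, name in walk("projects.txt"):
    print("  " * depth + name)
```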

With this tool, I built interrelated task files enabling me to control many projects and participate in many organizations. But every time Microsoft forced me to change operating systems, I lost some of that productivity. Now, OLE is only dim history.

It died with Vista (which I skipped) and has no replacement in Windows 7 or 8. Now all my task control is relegated to my old XP running as a virtual machine. Despite buying a newer, faster computer, my productivity has taken another, bigger hit.

Some people think software (especially operating systems) must change to accommodate new hardware. Yet, SVGA connections work on all my monitors and OSs, even the newest. Obviously, the degree of hardware compatibility is up to the manufacturer.

If hardware is faster (and it is) and bigger (storage is crazy cheap), then why is there so little gain (for me, always a loss) in productivity? Might as well ask why schools cost more and deliver less. Or why government gets bigger and does less.

Increased overhead or administrative bloat, it amounts to the same thing. Easily 90% of operating-system code exists for situations that will never affect the home user. I made this clear back in 1992, when I wrote the series “Impersonal Computing.”

Over the years, personal computer software has become morbidly obese by meeting everyone’s needs—except home users. Given hardware’s capability, it is less productive. And it’s so top heavy it’s a wonder it doesn’t fall on its face more often.

Bugs Are Not Features

I keep saying a computer can do anything, but I can’t get my new computer to do practically anything I need. Never mind what I’d like it to do, i.e., what I want; I can’t get it to do the essentials of what I must have, i.e., what I truly need.

To prove this, we just have to look at the previous paragraph. I don’t know about you, but I have never seen “i.e.,” with a space after the first period. But the word processor I’ve been given insists “i.e.,” is misspelled without that space! (“E.g.,” also.)

Back in the day, when you encountered programs with inconsistencies you called them “buggy.” If it was your software, your job was to get rid of the bugs: major bugs first, minor bugs last. And bugs found by clients always came first.

If it wasn’t your software, you reported the bugs with enough specificity to help others fix the problem. Sometimes, an accurate description of what the bug was (and where and when and how) kept you busy for days. With a good report, the fix could be a one-day turnaround.

I’m not suggesting a return to those old days, but I sure as hell don’t care for what passes for software nowadays. I just tried to do a page preview and the program crashed! Totally. Suddenly, all the files I was working on were gone. This is acceptable?

My programming experience tells me what to expect from software. When a simple task is far too slow, I yell at it. When a long task doesn’t bother to show me its progress, I yell. There are many reasons to yell, because this incompetence is frustrating.

It’s not simply that I expect more because I know more. It’s that most people don’t realize the computer can do anything, and if it isn’t doing what they want, it’s the programmer’s fault, not theirs.

Does that explain why all this shoddiness is acceptable? Are people so bamboozled by the complexities of programming they think it’s as good as it can be? Do dazzling graphics blind people to the extent they can’t see how clunky the software actually is?

Are our expectations seriously that low? Are we really willing to accept bug-ridden software without complaining? Are we so mystified by the high-priests of programming that we feel unworthy to challenge their efforts? Well, I’m not—it’s buggy.

Mr. GoodByte Rides Again

My second computer column alias was Mr. GoodByte, a name conflating car maintenance and computing. It used automobile analogies to illustrate computer concepts. The wrench still fits.

Yet there is a fundamental difference in how we relate to cars and computers. In the last century, we were known as a car culture. Digital is pervasive in this century but is it a culture?

Unlike the car, digital things take many forms and some are invisible. There is no simple icon to represent all digital objects, whereas a car is still a box with four wheels (color may vary).

The cultural key is how we feel about these objects. People not only love specific cars, they revere them. No one loves any computer. What we love about cars is the hardware. Software is the ghost in the machine: invisible, unknowable, and unlovable.

First were the mainframe computers (very big), and then mini-computers (small in comparison but much larger than their successors, the micro-computers). Now called personal computers, they’re like a car controlled by one person at a time.

Early cars were a hobby, just like early home computers. Then came reliability—and roads, gas stations, and roadside everything. Cars could be fixed and kept running, even antique model T’s. Old computers barely collect dust in a few museums.

Cars were never built to be upgraded. Many people traded in every three years simply because they wanted a new car. But this was only possible with a large free market for used cars.

There is no used-computer market. What’s a computer worth after three years, besides scrap? Computers cost little and are disposable; cars cost much more and keep their value. We rarely buy new computers because we want to, but because we have to.

If cars were made to become obsolete like computers, only the rich would buy new. The rest of us would find ways to keep our old cars rolling. Only a mad genius would fix an old computer.

Antiques, restorations, hot rods, low riders, and resto-mods are not just showpieces at car-gatherings of every shape, size, and price all over the US. Owners enjoy, drive, and love them. Such feelings never did and never will exist for personal computers.

Absolutes and Assumptions

In the past, I’ve said the final responsibility for good programming rests with management. True, but it’s time for a different question: Is bad code simply lack of professionalism?

Fifty years ago this summer, I began programming as an intern. That September, I got my first job. I had plenty of quality guidance, but no one ever examined my work in detail.

I don’t think it needed close scrutiny. I like to think that was because I worked hard at becoming a professional. I took the opportunity seriously and constantly tried to learn and improve.

Seeing the quality of work out there now, including major software companies, I wonder if many programmers aspire to professional excellence. Or care. Or even know what it means.

How does this relate to the title of this post? Programmers who use absolute references or make unwarranted assumptions cannot be considered, in my opinion, to be professionals.

Assembly language taught me never to make absolute jumps; i.e., don’t move a fixed number of bytes or instructions. All jumps had to reference labels, a place with a name. These jumps were always valid, even if the intervening instructions changed.
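That assembly-language lesson can be sketched in Python with a mini-“program” (the instruction names are invented): a jump to a fixed position breaks the moment instructions are inserted, while a jump to a named label stays valid.

```python
program = ["start:", "load", "add", "loop:", "sub", "jump loop"]

def resolve(label, code):
    # A label is a place with a name; find where it currently lives.
    return code.index(label + ":")

# Before any changes, "loop" lives at position 3.
assert resolve("loop", program) == 3

# Insert new instructions ahead of the label...
program[1:1] = ["store", "compare"]

# ...an absolute jump to position 3 now lands on the wrong instruction,
assert program[3] != "loop:"
# ...but the labeled jump still finds the right place.
assert resolve("loop", program) == 5
```

The same principle applies to file paths: a reference by fixed location breaks when anything moves, while a reference resolved by name or relative position survives.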

The last post talked of moving to a new operating system. More than half the software I’m installing has windows that don’t fit on the screen—regardless of the resolution. This is also true for operating system components. (Are you listening, Microsoft?)

After all these years, I still see messages disappear before they can be read. If it’s important enough to display, then the coder must make sure it can be seen (or stored to be read later).
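The fix is simple enough to sketch (the names and log format here are invented): anything worth displaying is also worth keeping, so a message goes to the screen and to a log, and nothing flashes by unread.

```python
import time

message_log = []  # everything shown is also stored, to be read later

def show_message(text):
    entry = (time.strftime("%Y-%m-%d %H:%M:%S"), text)
    message_log.append(entry)         # stored for later
    print(f"[{entry[0]}] {text}")     # displayed now

show_message("Settings saved.")
show_message("Update available.")
assert len(message_log) == 2
assert message_log[-1][1] == "Update available."
```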

Bad windows and flash messages reveal wrong assumptions. Microsoft’s Object Linking and Embedding (OLE) still uses absolute paths. We knew better fifty years ago. What happened?
