Digital Minefield

Why The Machines Are Winning

Bigger Brothers


Some twenty-five years ago, I was a partner in a web design business. Our big pitch was that not only would your website represent your store 24/7/365, but you could also track where in the store people went. We could even tell you how long they looked at each page. It was like virtual footprints in your virtual store.

We had no idea how well this idea would catch on. It’s more than Big Data tracking everything on the Internet. They compare every track and know which are yours and which are mine.

They also know who we are, where we live, and where we are right now. But there’s more: they also buy data about us from the other big collectors of our data. Who are these firms? Read on.

Data doesn’t come only from our online activity; it also comes from the myriad forms we fill out. You can rent a car online or over the counter at the rental agency. Either way, there’s a form. Think of all the forms you fill out every time you see a doctor or go to a hospital.

The rental agency doesn’t necessarily do business directly with Big Data, and neither does the hospital, or anyone else who has your personal information. No, other companies buy and sell our data.

And you’ve never heard of any of them. C&C, for example, is the largest data collection agency in North America. Whether they call it Data Collection, Data Science, Data Mining, Market Research, or Field Intelligence, they’re all Data Brokers (see 60 Minutes).

These companies don’t do business with you. They do business about you, whatever they can monetize. The last thing they want is for you to see what data they have and whom they sell it to.

Before you go screaming to your congressperson, remember these brokers are only middlemen (middlepersons?). Who’s buying this information besides Big Data, e.g., the government?

Is it legal? Well, somewhere on that form or the website or the software you’re using there’s some legalese that says it is. Of course, no one ever reads the unreadable legal mumbo-jumbo buried in what we sign (or the EULAs we click on to agree).

So until a big enough lawsuit reaches the Supreme Court and produces a decision, we’re stuck with companies vacuuming up all our data and selling it to whomever they please, for whatever purpose.

Who are the Biggest Brothers? Those who collect and broker our data or those who buy it from them? The first keep adding to your profile to keep selling it. The second keep buying it, from many sources. They have enough data to predict your behavior.

The data buyers know where you’re likely to go, what you’re likely to buy, and how much they can get you to spend. They know what you owe and how much you’re willing to owe. What they won’t do is leave you alone. Your data makes them rich.

Programming’s Three Tasks


Recently, an old programming friend in Florida sent me a link to an online book about JavaScript. I don’t do much with JavaScript, but it did get me thinking about all those books on how to program.

There are scores of such books (particularly now that we use so many languages), but there are also a great many books on how to program User Interfaces. That is, how to make the user’s interaction with the program transparent (some say intuitive).

Unfortunately, there’s a disconnect in the code created by these two approaches. The concerns of good style don’t overlap those of good user interfaces. Should users even care about style?

Non-programmers don’t realize that even the simplest program can be written a million different ways. Of these, a hundred are probably flawless. Of the hundred, a dozen could be perfect in every aspect of their construction and execution. There is no best.

However, perfect code can still be opaque to the user. Clearly, the answer is to write clean code, then make it easy to use. Finally, improve the style without changing its functionality.

Regrettably, the programmer’s job is still not done. The concerns of style and the user ignore the future of “finished” software. That future is maintenance, and it’s usually eighty percent of the total effort.

When the first completed version (1.0) is released to the world, the responsibilities of maintenance begin. Whether it’s quick fixes, like typos, or major revisions and upgrades, the job usually goes to programmers who did not develop the program.

So, even after writing code that is kind to users and has excellent style, there are still the needs of the maintenance programmers. To meet these, software developers must write readable code.

In a recent search, I found exactly three books on writing readable code. Still, that’s three more than existed when I wrote a paper on this topic some twenty-five years ago.

Without readable code, maintenance is more than difficult; it’s nigh impossible. But readability cannot be a separate step, nor an afterthought. Doing it as you write aligns the code with its concepts.
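
To make this concrete, here’s a minimal, hypothetical sketch in Python (my invented example, not taken from any of those books): the same small calculation written twice, once for the machine alone and once for the stranger who inherits it six months later.

    # A hypothetical example: the same overdue-fine calculation, twice.

    # Version 1: correct, but opaque to the next programmer.
    def f(d, r, m):
        return min(d * r, m)

    # Version 2: the same logic, written so the code aligns with the concept.
    def overdue_fine(days_overdue, fine_per_day, maximum_fine):
        """Return the fine for a late book, capped at the library's maximum."""
        uncapped_fine = days_overdue * fine_per_day
        return min(uncapped_fine, maximum_fine)

    # Both produce the same answer; only one explains itself.
    print(f(12, 0.25, 5.00))             # 3.0
    print(overdue_fine(12, 0.25, 5.00))  # 3.0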

Six months after a program is complete, you may be the one who has to fix it. After that long, you may be a stranger to your own code. If you made it readable, you will appreciate the effort.

Virtual Shopping


Last week’s post (“Needles and Haystacks”) showed the advantages of searching structured data. These were contrasted with time-wasting searches using brute force computer power.

Such searches not only waste our time, they are incredibly wasteful of computer time and power. If providers of structured data could use just a fraction of this squandered power, how many ways could they enhance our access to their data?

Let’s take a simple example from last week: libraries. What could be achieved by adding a relatively small amount of computing power to online access of a library’s catalog? Close your eyes. Picture yourself at the library versus their website.

The library you’re envisioning need not be confined to the imagination. With a relatively small amount of additional computer power it can be made real, that is, virtually real.

If you browsed the library virtually, you could head out now because you found a book you really wanted, or you could take your time (nothing urgent). Same library, enhanced by software.

There are other advantages to virtual libraries. You can browse all the branches in a big countywide system. If your library is a member, you can do the same for all the libraries of the Interlibrary Loan Service. An infinite virtual library.

You may question additional public dollars for libraries, but the same technology can be used for bookstores. Virtual browsing enhances every online retail experience already using databases.

People forget that before computers, it was common to shop virtually—from catalogs. In fact, it was easier to browse those catalogs than today’s crude database websites. Going virtual would make those sites outperform catalogs, and for less money.

In the past twenty years, computing has transformed from a text-based box of limited application to a plethora of graphics-based devices doing everything. It wasn’t simply the increases in speed and storage; it was using the power for unimaginable graphics.

Today, power is being drained by our unstructured searches and social media meandering. All to be monetized by Big Data. Why not add power to improve graphics? Wouldn’t virtual shopping boost our economy? Don’t businesses need it? Don’t you?

Needles and Haystacks


The key to finding things is knowing where they’ve been put. Any kitchen, whether professional, tiny, or average, poses no problem for those preparing meals and locating utensils and ingredients. All kitchens have systems for where things belong.

Shifting backwards only slightly in time, we find another kind of food-related system: the supermarket. Whether high-profile end caps, butcher counters, or any of the special-purpose shelves, we see organization (and know inventory is kept out of public view).

What you may not realize is that from any angle, from any moment in time, what you’re looking at is a database. Each item in the market (or in your kitchen) has its name and its place.

Many people think increasing computer power will do away not only with the need for databases but also with the brick-and-mortar stores that use them. Why waste time organizing, they say, when computers can find things as fast as we type? But can they really?

If you only have a haystack, it’s a cinch to find a needle using fast, powerful computers. However, needles and haystacks do constitute a database, if only two items. Increase the items a hundredfold, and searching without structure becomes chaotic.
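
Here’s a rough sketch of the difference in Python (the item names are invented): brute force walks the whole haystack, while a structured catalog, like the store’s aisles, goes straight to the shelf. With a hundred items the difference is invisible; with a hundred million, it’s the difference between organization and chaos.

    # A toy inventory: one hundred items, each with a name and an aisle.
    items = [{"name": f"item-{n}", "aisle": n % 10} for n in range(100)]

    # Brute force: scan everything until the needle turns up.
    def find_by_scanning(name):
        for item in items:              # touches every item in the worst case
            if item["name"] == name:
                return item
        return None

    # Structure: index the items once, then look them up directly.
    catalog = {item["name"]: item for item in items}

    def find_by_catalog(name):
        return catalog.get(name)        # one step, no wandering

    print(find_by_scanning("item-99"))  # works, but walks the whole haystack
    print(find_by_catalog("item-99"))   # same answer, straight to the shelf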

The proponents of the brute force computer approach believe power trumps organization. Why bother with human thought and planning, when we have practically unlimited computer power?

Even if they are right (they’re not), they ignore the other benefits of organization. How often do you buy nothing more than what’s on your shopping list? Other items you buy, for need or impulse, are easily found precisely because of the store’s organization.

The same is true throughout retail. Still more valuable are the discoveries we make in libraries. Because the library shelves are organized, we chance upon new authors and titles. Roaming the stacks is the ultimate serendipitous learning experience.

Library books will tell you that humans are smart animals not simply because of the size of their brains but rather how their brains are organized. Evolution improved on raw brute force.

Without the organization of a database, everything is just a jumble of shoes and ships and sealing-wax. With organization, we can easily browse kitchens for a meal, supermarkets for next week’s meals, and libraries for books on eating healthier.

The programs we search the Internet with are called browsers. Without a structure, they cannot browse. Without better search techniques, they can only keep us wandering through the haystack—which is exactly how the browser makers get rich.

Unequal Internet Power


As our digital lives expand, we think less and less about how it all works. The more we use The Cloud, the less we concern ourselves with its details. Whether smart phone or computer, social media or Internet, we take too much for granted.

The digital universe is a complex amalgam of hardware and software, supported by millions of techies from electricians to systems designers. This post focuses on a tiny but essential piece of hardware we’re all using right this second.

You and this blog are connected to the Internet at different locations. At your end is a device similar to the one at the blog’s end. They’re called modems (for reasons no longer relevant).

Modems translate your Internet requests into tiny packets of information that travel (in non-trivial ways) across the world (or across town) to the specific modem linking the Internet to the information you’ve requested.
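
As a toy illustration only (a Python sketch of the general idea, not real modem firmware or the actual Internet protocols), a request is chopped into numbered packets so the far end can reassemble them even when they arrive out of order:

    # Toy sketch: split a request into numbered packets, then reassemble.
    def to_packets(message, size=8):
        chunks = [message[i:i + size] for i in range(0, len(message), size)]
        return list(enumerate(chunks))          # (sequence number, chunk)

    def reassemble(packets):
        # Packets may arrive in any order; sort by sequence number first.
        return "".join(chunk for _, chunk in sorted(packets))

    packets = to_packets("GET /digital-minefield HTTP/1.1")
    packets.reverse()                           # simulate out-of-order arrival
    print(reassemble(packets))                  # the original request, restored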

The modem at your end could be hiding inside your smart phone or sitting atop your computer. The modem at the other end, whether at this blog or somewhere in the bowels of the Google planetoid, is constantly talking to your modem over the Internet.

Your modem is only part of your Internet connection. There’s also your Internet Service Provider (ISP) with its hookups (cable, DSL, 4G), and your modem may be joined by routers for Ethernet and WiFi. Whatever the combination, they all need power.

Some modems run on batteries and some have battery backup. A modem in a smart phone uses the phone’s battery. Google’s modems probably have their own electric company. Hundreds of millions of other modems rely on an AC plug in a wall socket.

These computers depend on the same power we used a hundred years ago. Better protected now, this power is still vulnerable to lightning strikes, terrorists, cars crashing into power poles (it happened here), solar flares, and other vagaries of the universe.

Power simply does not exist everywhere, at all times, and with perfect uniformity. But when it’s interrupted or raised (surges) or lowered (brownouts), it’s much more likely to be at your end than anywhere on the Internet or the big servers you access.

Are our digital lives ascending to the clouds or are we only falling further into the rabbit hole? Either way, when your power goes out you may be sitting in the dark, wondering where everyone went. They’re still there; it’s you who’s disappeared.

Data Versus Feelings


Years ago, before you were born, the poet E. E. Cummings said, “since feeling is first / who pays any attention / to the syntax of things.” Last year, in his book Who Owns The Future?, Jaron Lanier said it again in techno-speak: “The burden can’t be on people to justify themselves against the world of information.”

Clearly, in the intervening years the emphasis has changed. Once human feelings counted most. Now, anything that can be counted is by definition (or by tautology) all that really counts.

In these examples, what counts is what can most easily be counted, i.e., everything digital. Its counterpart, the analog world of reality, cannot be perfectly reduced to ones and zeros and is therefore simply too messy to be measured with precision.

Our lives are being forced into the digital version of the Bed of Procrustes (see also the book of the same name by Nassim Nicholas Taleb). Unfortunately, too many people are not discomforted, and too many others think digital must be better even if it hurts.

Somewhere between Cummings and Lanier, we have abandoned the right to evaluate things by our feelings. Digital, in its Big Data cloak of Darth Vader, simply outnumbers human feelings.

It’s very important to put this shift into perspective. Thirty years ago, big data lurked in the background. Now, it’s not merely in ascendance; it’s gathering momentum faster than any other human activity. And making Google billions beyond counting.

Thirty years ago, we were rushing to transport all our music onto Compact Discs. We were told it was digital, and therefore better. Sharper ears said no, but the din of the digital racket was too loud.

Yet vinyl still lives, and there are serious efforts to surpass it (see Neil Young’s Pono). Digital sound has been around long enough for people to hear the flaws, and no amount of banging the “digital is better” drum will gloss over the gaps.

The digital world is built from the “syntax of things” but can only approximate human senses and behavior. Whether listening to music or learning about relationships, you can follow big money and big data. Or simply trust your gut—and put feelings first.

The Lost Art of Programming


At the dawn of programming, there wasn’t even a vocabulary. If you said “computer,” you meant women doing manual calculations. The very idea of a program had yet to be invented.

People learned to program because it was the only way to advance their high level research. Many of the scientists who programmed also discovered the foundations of computing.

Not surprisingly, when computing entered academia it took shape as Computer Science. At that point, most of what could be taught were fundamentals. Yet, people had been programming without academic help for at least twenty years.

Add another twenty years, and because it was more practical than theoretical, programming became an engineering discipline. It took all those decades to achieve software engineering.

While programming and engineering have much in common, there are also significant conflicts, not unlike the disparity between engineering and architecture. Architecture’s buildings are supported by engineering, but engineering cannot supply human functionality, human scale, emotion, or aesthetics.

All the while academia was refining the fundamentals of computing and the practice of its applications, millions of people still learned programming without college degrees. Eventually, vocational schools turned programming into semi-skilled labor.

But nowhere in the proliferation of formal programming education, at any of its levels, has programming produced an identity of its own the way architecture grew out of engineering.

Software, not unlike architecture, is the creation of mechanisms, objects, and devices for people to use. More than occasional interaction, people live with software and architecture.

The needs of software exceed the practical. As engineering learned before it, solving only the practical falls short of human satisfaction. Architecture proved that pleasure is not only possible but desirable.

Programming has progressed from black art to arcane science to cut-and-dried engineering to semi-skilled labor. It makes use of science and engineering but ignores the humanities. What it needs is a massive injection of aesthetics, art, and empathy.

Programming, like architecture, creates more than things for people to use; it creates the things we have to live with. Why can’t these things be enjoyable? Where is the human touch?

The Death of Windows


Many people credit Xerox’s PARC with the creation of today’s ubiquitous Graphical User Interface (GUI). This was also known, less formally, as WIMP: Window, Icon, Menu, and Pointing device. Today we point with fingers; then, it was the new mouse.

There wasn’t much new about Menus, but Icons were new, needed to represent things graphically. What was really new, and absolutely essential to all GUIs past and present, was the concept of Windows (years before Microsoft took the name).

I saw my first mouse a good ten years before Xerox PARC got started. About the same time, Ivan Sutherland’s Sketchpad hit the conference circuit and everyone saw the computer’s graphic capabilities. Windows came twenty years later.

Demonstrations showed what windows could really do, and a number of things were immediately evident. Each window was like a separate screen, running anything from numbers to graphs to entire operating systems.

You could move windows to any position on the screen and resize them. If you shrank a window, it added vertical and horizontal scroll bars, so no matter its size, you could still see all its contents. Each window was its own virtual monitor.
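
For anyone who has never seen this in action, here is a minimal sketch using Python’s standard tkinter toolkit (my example, not the original demos): one resizable window whose scroll bars keep every bit of its content reachable, however small you make it. Resize the window and the scroll bars do the rest; nothing is cut off, only scrolled out of view.

    # A minimal resizable window with scroll bars (requires a desktop display).
    import tkinter as tk

    root = tk.Tk()
    root.title("One window, its own virtual monitor")

    # A canvas whose contents are larger than any likely window size.
    canvas = tk.Canvas(root, scrollregion=(0, 0, 1200, 1200))
    vbar = tk.Scrollbar(root, orient="vertical", command=canvas.yview)
    hbar = tk.Scrollbar(root, orient="horizontal", command=canvas.xview)
    canvas.configure(yscrollcommand=vbar.set, xscrollcommand=hbar.set)

    for i in range(0, 1200, 100):
        canvas.create_text(i + 50, i + 50, text=f"({i}, {i})")

    vbar.pack(side="right", fill="y")
    hbar.pack(side="bottom", fill="x")
    canvas.pack(side="left", fill="both", expand=True)

    root.mainloop()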

Early demos of windows-based operating systems showed many windows of various sizes. Nowadays, not only do you rarely see screens doing this, but most software assumes its window requires the whole screen.

The massive shift to smaller devices with smaller displays has reduced the need for windows. There may be many millions of larger displays on desktops and full-size laptops, but Windows 8 shows the push toward simpler displays on all devices.

I have to wonder which came first: less need for windows or programmers who lacked the necessary skills? Is it possible that the majority of new programmers come from the world of small devices and have no experience with resizable windows?

Given the quality of the windows I see, I have to believe that too many programmers lack experience with full-sized displays. Is this the simple reason why so many windows don’t work correctly?

Hardware Gain, Productivity Loss


At the dawn of what became known as personal computers, I was asked why I didn’t buy computer A. I inquired as to why I should and was told it had much more potential. In the future, I said. What I want now, need now, is productivity, not promises.

I have always bought computers as tools to do what I needed. Then, as now, I didn’t want a hobby, and certainly not a toy. Software became truly productive for me in 1992 with Windows 3.1x. I still use parts of it as well as applications acquired then.

One of its features was Object Linking and Embedding (OLE). This allowed a file to link to other files, creating an instant hierarchy. I had waited fifteen years for this breakthrough.

With this tool, I built interrelated task files enabling me to control many projects and participate in many organizations. But every time Microsoft forced me to change operating systems, I lost some of that productivity. Now, OLE is only dim history.
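
As a rough, hypothetical sketch of that idea (plain Python with made-up file names, not real OLE), each task file simply records links to other files, and following the links produces the hierarchy:

    # Hypothetical task files, each listing the files it links to.
    task_files = {
        "projects.txt":      ["clients.txt", "volunteer.txt"],
        "clients.txt":       ["acme-redesign.txt"],
        "volunteer.txt":     ["library-board.txt"],
        "acme-redesign.txt": [],
        "library-board.txt": [],
    }

    def show_hierarchy(name, depth=0):
        print("  " * depth + name)
        for linked in task_files[name]:
            show_hierarchy(linked, depth + 1)

    show_hierarchy("projects.txt")   # prints the instant hierarchy, indented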

It died with Vista (which I skipped) and has no replacement in Windows 7 or 8. Now all my task control is relegated to my old XP running as a virtual machine. Despite buying a newer, faster computer, my productivity has taken another, bigger hit.

Some people think software (especially operating systems) must change to accommodate new hardware. Yet, SVGA connections work on all my monitors and OSs, even the newest. Obviously, the degree of hardware compatibility is up to the manufacturer.

If hardware is faster (and it is) and bigger (storage is crazy cheap), then why is there so little gain (for me, always a loss) in productivity? Might as well ask why schools cost more and deliver less. Or why government gets bigger and does less.

Call it increased overhead or administrative bloat; it amounts to the same thing. Easily 90% of operating system code exists for situations that will never affect the home user. I made this clear back in 1992, when I wrote the series “Impersonal Computing.”

Over the years, personal computer software has become morbidly obese by meeting everyone’s needs except those of home users. Given the hardware’s capability, it is less productive. And it’s so top-heavy it’s a wonder it doesn’t fall on its face more often.

Bugs Are Not Features


I keep saying a computer can do anything, but I can’t get my new computer to do practically anything I need. Never mind what I’d like it to do, i.e., what I want; I can’t get it to do the essentials of what I must have, i.e., what I truly need.

To prove this, we just have to look at the previous paragraph. I don’t know about you, but I have never seen “i.e.,” with a space after the first period. But the word processor I’ve been given insists “i.e.,” is misspelled without that space! (“E.g.,” also.)

Back in the day, when you encountered programs with inconsistencies you called them “buggy.” If it was your software, your job was to get rid of the bugs: major bugs first, minor bugs last. And bugs found by clients always came first.

If it wasn’t your software, you reported the bugs with enough specificity to help others fix the problem. Sometimes, an accurate description of what the bug was (and where, when, and how) kept you busy for days. It was a one-day turnaround.

I’m not suggesting a return to those old days, but I sure as hell don’t care for what passes for software nowadays. I just tried to do a page preview and the program crashed! Totally. Suddenly, all the files I was working on were gone. This is acceptable?

My programming experience tells me what to expect from software. When a simple task is far too slow, I yell at it. When a long task doesn’t bother to show me its progress, I yell. There are many reasons to yell, because this incompetence is frustrating.
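
As a small sketch of the courtesy being asked for (my example in Python, not any particular program), a long task only has to report how far along it is instead of going silent:

    # A long task that reports its progress instead of going silent.
    import time

    def long_task(steps=20):
        for done in range(1, steps + 1):
            time.sleep(0.1)                         # stand-in for real work
            percent = 100 * done // steps
            print(f"\rProcessing... {percent:3d}%", end="", flush=True)
        print("\nDone.")

    long_task()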

It’s not simply that I expect more because I know more. It’s that most people don’t realize the computer can do anything, and that if it isn’t doing what they want, it’s the programmer’s fault, not theirs.

Does that explain why all this shoddiness is acceptable? Are people so bamboozled by the complexities of programming they think it’s as good as it can be? Do dazzling graphics blind people to the extent they can’t see how clunky the software actually is?

Are our expectations seriously that low? Are we really willing to accept bug-ridden software without complaining? Are we so mystified by the high-priests of programming that we feel unworthy to challenge their efforts? Well, I’m not—it’s buggy.
