Digital Minefield

Why The Machines Are Winning

Why Code Is Important


Modern industrialized nations have based their societies on technology run by software. That code compiles into digital computer instructions. Digital means binary switches (or bits): ones and zeros.

Code, if not written to respond safely to all possible contingencies, may produce a zero when the computer expects a one, or vice versa. This simple error may halt the computer—or worse.

Digital, due to its binary nature, is inherently brittle: it either works or not—there is no sort-of-works. Unlike analog, which degrades gradually over time, digital (if not programmed for all contingencies) may suddenly and unexpectedly stop.

Compare VCRs to DVDs. Tape stretches, but the stretching is barely noticeable. Tape breaks, but it can be easily spliced with very little loss of signal. Merely scratching a DVD can make it unplayable. Everything may appear intact, but it takes only one lost bit.
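
To make “one lost bit” concrete, here is a minimal sketch in Python (purely illustrative; the data is invented). Flip a single bit in a block of data and its checksum no longer matches, so any error-checking layer rejects the whole block, much as one scratch can make a DVD sector unreadable.

    import hashlib

    data = bytearray(b"A perfectly good block of data")
    checksum_before = hashlib.sha256(data).hexdigest()

    data[0] ^= 0b00000001                      # flip a single bit
    checksum_after = hashlib.sha256(data).hexdigest()

    print(checksum_before == checksum_after)   # False: one lost bit, whole block rejected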

The programs we depend on daily are also brittle, but unlikely to lose bits. Or so we think. A sufficiently strong electromagnetic pulse, or EMP (think small nuclear device), will destroy those bits unless your machine is “hardened,” as many military computers are.

Once upon a time, dividing by zero would stop a computer cold. Since there were too many ways this could occur, the solution was to have the hardware check every division to make sure the divisor was not zero. If it was, the program—not the computer—was halted and an error message displayed.
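
In modern high-level languages the same idea survives as a runtime trap the program can choose to handle. A minimal Python sketch (mine, not from the post):

    def safe_divide(dividend, divisor):
        """Return dividend / divisor, or None if the divisor is zero."""
        try:
            return dividend / divisor
        except ZeroDivisionError:
            # The runtime traps the bad division; the program reports it instead of halting.
            print("error: attempted division by zero")
            return None

    print(safe_divide(10, 2))   # 5.0
    print(safe_divide(10, 0))   # prints the error, then None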

A program and its data on our hard drives are inaccessible to other programs that write to hard drives. Those programs are constrained by system software, which prevents them from writing where they shouldn’t.

This solution depends on correct system software. It is not as safe as hardware trapping division by zero. Programs that write where they shouldn’t are classed as viruses; they know how to get around system software protection.

These are simple examples of potential problems for programmers. Above, I used the word “contingencies” twice. To grasp the extent of possible contingencies, a programmer must be aware of the total environment in which the code must run.

There will be many different computers, each with a unique hardware configuration. Different CPUs mean different instruction sets. Many computers have multiple processors (e.g., quad-core), requiring multi-threaded code.

Code must run on many different operating systems. Even the same operating system is configured differently according to installed updates, hardware drivers, customization, etc.

Then there’s the problem of many unknown programs running simultaneously or interrupting according to their own needs. Or crashing and interfering with the code. Or the intentional disruption of hackers.

It’s easy to see that code running successfully under all these contingencies is far more valuable to society than code that chooses cleverness over safety. Since digital is inherently brittle, code must be robust. Simpler code is easier to secure than complex code.

What Is Code?

This is the question asked by the title of the main (38,000 words!) article in the June 11 issue of Bloomberg Businessweek. The question is answered thoroughly (38,000 words!) and with patience for the lay reader. But is it the right question?

While it’s useful to know what code is, it’s more important to address the whys of code. Why is code so expensive? Why is code so obtuse? Why is code getting worse? And why is code not the answer?

The question “Why is code so expensive?” could be better framed as “Why is the new code so expensive?” The answer is that it has so little in common with the old code. The article tells of a newly hired CTO, who will inevitably hire programmers proficient in the new language.

The next question is, “Why is code so obtuse?” It’s as though programmers are a cult believing in the mysticism of the cryptic. Look at the examples in the article and you’ll see a trend over the years toward more concise code. If conciseness is a game, it was played fifty years ago by Ken Iverson with APL, and played better.

As mentioned in last week’s post, Pride is a bad character trait for programmers. Their goal is to say: I can write that code in one line. (Remember the quiz show “Name That Tune”?)

The preference for cryptic code is a serious problem. For decades, an estimated eighty percent (80%) of programming costs have gone to maintenance. Some other programmer has to modify the code. Unless the laconic language is supplemented by extensive comments, it’s a mind-numbing job (often solved by rewriting whole sections of code).
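
To illustrate with a hypothetical (the file format and function below are invented, not taken from the article): both versions select the passing grades from a comma-separated file, but only the second tells the maintenance programmer what it is doing and why.

    # The "I can write that code in one line" version:
    passing = [l for l in open("grades.csv") if int(l.split(",")[1]) >= 60]

    # The maintainable version: same result, visible intent, no dangling open file.
    def passing_rows(path, passing_grade=60):
        """Return the lines of a name,grade CSV whose grade meets the passing mark."""
        rows = []
        with open(path) as grades_file:
            for line in grades_file:
                name, grade = line.strip().split(",")
                if int(grade) >= passing_grade:
                    rows.append(line)
        return rows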

The question “Why is code getting worse?” has been partially answered by the responses to the previous two questions. However, there are many more reasons, far too many to enumerate all of them.

The push to newer, more obtuse languages is just one reason. The biggest contributor to worsening code is the increasing inability of programmers (and their managers and project leaders) to see their job as serving users first and foremost.

Clever programmers rarely see beyond their own egos. They would rather have their code dazzle the user than be transparent and do the job with a minimum of fuss and muss.

The question “Why is code not the answer?” is not answered simply by all of the above. It lies within the much larger picture of the history of programming (of which code and languages are just a part).

Many useful methods have been invented over the decades, but fewer and fewer of them are being applied. For example, new software once never replaced the old without first being extensively tested in the real world.

I can’t say this is never done, but more and more I see instances of software changes put into use without any testing. I’ve even seen bank software being changed as I was using it. There are no excuses for such foolishness in a professional environment.

Programmer Character Traits


The last post listed eight programmer character traits. I’d written about these some thirty years ago. There still seems to be a need to elaborate on them, and this seems to be the right time and place.

The positive character traits are Persistence, Problem-Solving, Planning, and Play. It is doubtful that any person completely lacking in any one of these could become a capable programmer.

On Persistence: I have always maintained that the most important muscle used in programming is the one we sit upon. Knowing when to persevere and when to seek help is what gets the job done.

On Problem-Solving: Sometimes we solve problems just for fun, sometimes it can be an obsession. Either way, it must be appropriate to the situation. It is possible to have too much fun finding too clever a solution.

On Planning: Although necessary, planning should not expand to consume the resources needed for implementation. Here, too little is as dangerous as too much. The problem is that many people with programming skills may have even better planning skills, which demand to be exercised.

On Play: The freedom of play must not turn into anarchy. Fear of trying cannot keep a good programmer down. But the joy of experimentation must have some boundaries or else the problem at hand is just a game.

The negative character traits are Pride, Pedantry, Perfectionism, and Prejudice. Few of us are completely free of these, but anyone who is ruled by any one of them will never be a good programmer.

On Pride: It keeps us from asking questions. Pride prizes the clever above the practical. Pride is our personal spotlight. If we shine it outwards, it blinds the rest of the world to the fact that we are in the dark. If we shine it on ourselves, we are blinded to what is out there in the dark, the reality beyond our brilliant little circle.

On Pedantry: A pedant is best described as one who cannot learn, not as one who knows. Pedantry is usually seen as forcing your answers upon others. It also can be hiding answers with a knowing nod of the head. Whether concealing or denying doubts about yourself, the result is the same: ignorance.

On Perfectionism: It is the infinite regress of unreachable excellence. Making a program “better” can be a never-ending process, a process that none of us can afford. While we all desire improvement, the action we take must fit our limited resources.

On Prejudice: This is all the assumptions that must be removed before we can see reality. The more that is hidden about a program, the less it will be understood. Assuming that Jones cannot write good code, or that Smith never makes a mistake, is obvious prejudice. Prejudgment is the enemy of thought.

More Than Logic


Last week’s post said programming was “More Than Engineering.” I believe there are eight essential programmer character traits. Four are positive: Persistence, Problem-Solving, Planning, and Play. Four are negative: Pride, Pedantry, Perfectionism, and Prejudice.

Beyond character traits, which can be enhanced or suppressed, there are three additional skills that programmers must practice to their utmost. These are Clarity, Empathy, and Imagination.

Clarity is a two-fold goal. The behavior of the program and expected responses from users must be clear and unambiguous. Also, the code itself must be clearly written for any programmer to maintain it.

Messages, especially error messages, are useless if not clear. A recent attempt to copy a file told me only that it could not be copied. Was this a source or a destination problem? I could see the file wasn’t copied, so what was the purpose of the message?
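
A sketch of the difference, in Python (the function and its messages are hypothetical, but the point is the contrast): report which side of the copy failed, not merely that it failed.

    import shutil

    def copy_file(source, destination):
        try:
            shutil.copy2(source, destination)
        except FileNotFoundError:
            # Usually the source is missing (or the destination folder doesn't exist).
            print(f"copy failed: cannot find source file '{source}'")
        except PermissionError:
            print(f"copy failed: no permission to write to '{destination}'")
        except OSError as err:
            print(f"copy failed: {source} -> {destination}: {err}")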

The computer equivalent of putting yourself in another’s shoes is to put yourself at another’s screen and keyboard. Empathy, in programming, is more than a feeling. Its goal is to anticipate how users will respond to your software—on their machines.

Multiple windows on a screen were not conceived so that each window would take over the whole screen. Yet that’s what I often see. Graphical user interfaces are highly customizable, but many programs ignore the possibilities.

My personal pet peeve is windows that ignore the location of the task bar. It can be top, bottom, left, or right, but more than half the programs I use expect it to be at the bottom. (Good if you sit above the screen looking down. I sit below and look up.)

While clarity and empathy are crucial, it takes imagination to create a proper response for every possible situation. Users interact with a total environment, not just this program’s code.

Aside from the purpose of the program, and the myriad things that can go wrong, there is the universe of possible program events—including the illogical. The goal of imagination is to make it more likely these events will be handled safely.

Finally, let’s return to the last character trait: Prejudice. I mean this in the most general sense, i.e., all assumptions are bad. Lack of clarity assumes a message is clear when it’s not. Lack of empathy assumes every user’s machine is just like yours.

Assuming the unlikely will never happen, and therefore needs no code to handle it, cripples imagination. Nothing makes code worthless quicker than programming only for the probable.
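
A small sketch of the alternative (a hypothetical shipping-rate function; the numbers mean nothing): the improbable input still gets an explicit, safe response instead of silently corrupting the result.

    def shipping_rate(weight_kg):
        """Return a shipping rate, with explicit handling of 'impossible' input."""
        if weight_kg < 0:
            # "Can't happen" -- until a sensor glitch or unit-conversion bug makes it happen.
            raise ValueError(f"negative weight: {weight_kg} kg")
        if weight_kg <= 1:
            return 5.00
        if weight_kg <= 10:
            return 12.50
        # Unusually heavy, but still a defined answer rather than an accident.
        return 12.50 + 2.00 * (weight_kg - 10)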

More Than Engineering


What do great engineering feats have in common with writing software? Let’s look at one of the biggest (many say the greatest engineering achievement of all time): going to the moon.

Engineering is often called a by-the-book discipline. The engineering books of the day couldn’t get us to the moon; there were far too many significant unknowns. NASA wrote the book as it went along.

There are two ways of handling unknowns. Science, with experimentation, testing, and verification, is one. Going to the moon required massive scientific investigation. No one knew the effects of prolonged weightlessness. (That’s what we call it, but it’s really falling—all the time, non-stop, 24/7.)

The other approach to unknowns requires human ingenuity and intuition. That’s why NASA wanted experienced pilots. As he approached the moon’s surface, Neil Armstrong was running out of fuel as numerous alarms sounded. He made the right choices.

Solving this second class of unknowns is sometimes called a Black Art. If its practitioners succeed more often than not, you can’t call it guesswork. Neil Armstrong was a highly experienced test pilot and he’d faced the unknown many times before.

In the 80s, the powers that be decided programming should become Software Engineering. Unfortunately, it didn’t connect to other engineering disciplines. At the college level, it was more likely found under Computer Science.

So, what is programming? Is it science? Engineering? Or a practical art? Bismarck said, “Politics is the art of the possible.” I say programming is the art of the practical. However, the practical arts don’t have the academic standing of science or engineering.

All engineering is based on scientific principles. All practical arts are based on engineering principles. But the final product comes from the hand of the practitioner, not the engineer or the scientist.

To design a house, an architect obeys the laws of physics, follows engineering guidelines, and uses 3-D computer modeling to feel what it’s like to walk through it. Programmers exercise the software to feel what users will feel.

The NASA astronauts provided that feedback. They took the work of the engineers and the scientists and refined it into workable systems. This is where software is weakest today. It’s time to move beyond Software Engineering.

Auto Autonomous, Part Three


Everyone refers to them as autonomous vehicles. Everyone is wrong. Why? Very simply, they are not autonomous. They are no more autonomous than iRobot’s Roomba vacuum cleaner.

Not everyone has a Roomba, but everyone knows it’s not autonomous. It’s a robot. The company’s website describes it as such, never using the word “autonomous.” What’s the difference?

A robot, says the Merriam-Webster dictionary, works automatically. Another good word might be “automaton,” something that acts as if by its own power. That’s a long way from being truly autonomous.

Actually, in the dictionary, it’s just two definitions away. In between is “automotive.” Then we have “autonomous,” defined as having the power or right to govern itself. Self? What self? These so-called autonomous cars have no more self than a Roomba does.

To emphasize my point, the word “autonomous” comes from the Greek “autonomos,” meaning having its own laws. Whose own? There’s no “who” here; it’s a machine. It’s a “what,” not a “who.”

Unlike that dramatic moment in the original Frankenstein film, no one will cry out, “It’s alive!” when the key is turned. The tissue, the hardware, will remain what it always was—dead.

Obviously, the solution has to be in the software. So, why does AI’s approach to intelligence not follow the only example we have, our own? Why does AI believe in a mythical “pure” intelligence, divorced from body, from emotion, from consciousness, from self?

An individual only becomes human (and intelligent) through the medium of other humans. However, AI prefers intelligence in isolation, as a philosophical ideal. No wonder they keep failing.

One thing is for sure: saying these cars are autonomous makes them sound smarter than they really are. Do the promoters want to deceive themselves or us? Either way, they’re not that smart.

Since many really big companies are determined to roll out autonomous cars, I’m sure they will appear in many different forms. Where they’re likely to succeed is as taxicabs in cities.

I can see people using these regularly and still being unwilling to buy one. Unwilling or unable. While it may seem logical to the car makers that cars made by robots should be driven by robots, who’s left with a job to buy the cars?

Auto Autonomous, Part Two


Regarding autonomous vehicles, it seems to me the first question should be: Can they be safer than cars driven by humans? Along with many of you, I think many people are poor drivers.

Most of the bad driving I see is just people ignoring the rules or taking unnecessary risks. Following the rules is the very essence of what computers do best. No doubt automated vehicles could follow them not only better than humans but to perfection.

But what about risks? Most risks, like tailgating, can be reduced by following guidelines for safe driving. Again, for a computer this is just obeying the rules given it. Yet, this is the essential problem of programming.

Can we think of all the possible situations the machine might encounter and supply it with instructions on how to respond? For example, a light rain on asphalt brings up oil slicks and makes the road very slippery.

This is further compounded by over-inflated or worn tires. That’s a lot of data requiring accurate sensors. Finally, the vehicle must weigh all the factors to determine the safest action.
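
A toy sketch of that weighing, in Python (every factor and number here is invented for illustration, not an actual traffic rule or vehicle algorithm):

    def following_gap_seconds(road_is_wet, tires_worn, visibility_m):
        """Start from a three-second following gap and add margin for each risk factor."""
        gap = 3.0                  # baseline gap to the car ahead, in seconds
        if road_is_wet:
            gap += 2.0             # light rain on asphalt: assume reduced grip
        if tires_worn:
            gap += 1.0
        if visibility_m < 100:
            gap += 2.0
        return gap

    print(following_gap_seconds(road_is_wet=True, tires_worn=True, visibility_m=80))  # 8.0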

The list of problematic situations is very long, from variations in snow and ice to degrees of visibility. The latter requires judgments as to how visible this vehicle is under different weather and lighting conditions. Is the sun in other drivers’ eyes?

There are, however, even more challenging risks in being a driver, computer or human. I could describe them as decision making under extreme uncertainty. I would rather question the premise that computers make decisions in any way like humans.

Human decision making is always deeper than choosing a flavor of ice cream. All human decisions take into account—usually at a very deep non-conscious level—our survival. Choosing an ice cream could involve health (calories) and even relationships.

What comes naturally to us is precisely what’s most difficult to program into a computer. AI ignores the concept of self, preferring to see intelligence as something abstract, i.e., beyond the need of a self.

A computer doesn’t know what risk means because nothing has meaning. How could it without involvement, without caring? The machine has no skin in the game. If it fails disastrously, is destroyed, it couldn’t care less. Hell, it can’t care at all.

The driver of a car not only wants to avoid injury (and damage to the car) but also to protect any passengers, especially children. Without these concerns, how can autonomous vehicles be trusted to make decisions that might mean life or death?

Auto Autonomous, Part One

Strange week. All kinds of items related to autonomous machines appeared from many different sources. Some were cars, some were trucks, and some were even weapons. Along with stories about super-intelligent computers, it was a chilling week.

First was a tiny link in my AAA magazine about the history of autonomous vehicles. For example, “1939: GM’s World’s Fair exhibit predicts driverless cars will be traveling along automated highways by 1960.”

The link also had this entry: “2035: By this date, experts predict 75 percent of cars on roadways will be autonomous.” Nearby in the magazine was an article on the latest muscle car. I wonder how those will get along with autonomous cars.

On PBS this week, I learned about autonomous trucks and weapons (two separate stories). Driverless semis are scary enough without thinking about weapons deciding who’s a target.

I apologize if this is too much information, but I have more. In a word: taxicabs. Autonomous vehicles that will pick you up and deliver you to your destination. Didn’t we see that in the first Total Recall movie? After hearing about trucks and weapons, sounds very reasonable, doesn’t it?

What’s not reasonable is the talk about super-intelligent machines. It’s not coming from the people who want you to be passive passengers. No, it’s coming from those who can’t wait to worship the machine.

This attitude is rarely found among those studying artificial intelligence (AI) or those who are working to implement it. Rather, it comes from philosophers, pundits, and self-proclaimed futurists who know a little about AI and less about computers.

Led by Ray Kurzweil of Singularity fame, these predictions are based on a single insight known as Moore’s Law. It says the number of transistors on a chip (integrated circuit) doubles every two years. Ray et al. claim this means computers are becoming exponentially more powerful.
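
The arithmetic behind that claim is easy to check (a back-of-the-envelope sketch, assuming a strict two-year doubling):

    def transistor_multiple(years, doubling_period_years=2):
        """How many times the transistor count multiplies after a span of years."""
        return 2 ** (years / doubling_period_years)

    print(transistor_multiple(20))   # 1024x in twenty years
    print(transistor_multiple(40))   # about a million-fold in forty years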

They fail to comprehend that the Law applies only to the hardware side of computers. Software is another kettle of badly-cooked fish. No one is foolish enough to suggest software is similarly improving.

Don’t take my word for the state of AI. Listen to an actual AI expert. Here’s the TED talk by Dr. Fei-Fei Li, Director of the Stanford Artificial Intelligence Lab.

Clouds and Grasshoppers


Last week’s post tried to shed some light on how much we don’t know about Clouds. It was a revelation for many and a shock for some. Just yesterday, one friend asked, “What’s new?”

So I showed him what I had seen just the day before: the CuBox-i, a two-inch cube computer that runs Android and Linux. Starting at $45, it can be built up, all the way to a keyboard and monitor, into a full desktop computer.

Two things. At 8 cubic inches, this is not the smallest computer out there. Many are not much bigger than a flash drive. Also, this cube is not that new; it’s the second generation of the device.

I’m sure you’re aware of the computers inside your tablets and smart phones. These smaller computers actually began with netbooks (I still have mine). Well, the computers in those devices have become—surprise!—smaller and more powerful.

I was only vaguely aware of this trend and didn’t discover the extent of it until last week. One reason is no one is really sure what to call these little demons. Many say mini PC, but how is a mini PC smaller than the original micro-computer PC?

Some say tiny, because that’s more descriptive. However, without a common label how and where do you go to learn about them? One thing is for sure. You won’t find them in the big box stores.

Computer magazines at newsstands used to be a good source for new technology, both announced and advertised. No more. How many newsstands can you find? How many computer magazines?

Like everything else, it’s all online. If you can find it. (For these newer, littler guys you might try Laptop Magazine.) To help, I’ve decided to call them grasshoppers. Why? Because the first one I saw actually reminded me of a grasshopper. In Florida, I’ve seen them this big. So why not?

The real, more serious question is what are people doing with them? The phrase I keep seeing is “TV Box.” As to exactly what that is, I can only guess. Something to do with streaming media, I suppose.

That capability is the “why” of this post. With 32GB of storage, these grasshoppers will be using the Cloud. Sure, you can hang a terabyte of storage on its USB connector, but you’re quadrupling its bulk.

The Cloud may be selling storage and remote computing but it’s the perfect source for streaming to millions of grasshoppers. Of course, a virus might organize all these devices to stream at once, sucking The Cloud dry like a plague of locusts.

Life In The Cloud


Seeing the ads on TV (and every other ad-infested medium), you’d think companies like Microsoft want us to move all our computing to The Cloud. (Actually, to Their Cloud.) As if we weren’t already there.

If you spend more time streaming than downloading, you’re already living in The Cloud. If what you’re looking at on your desktop, laptop, smart phone, etc., isn’t stored on that device, then you’re already living in The Cloud.

If what you’re seeing comes from somewhere on the Internet, then you’ve bought into The Cloud. “Wait a Googley-minute,” I hear you objecting. “Where else would I find things?”

Well, once upon a time we created things ourselves and sent them to one another. That was before we had access to the Internet, and through the Internet to one another. Instead of individuals and institutions loosely connected, the Internet became another Big Medium dominated by Big Players.

“So what’s the big deal about The Cloud?” you ask. Think of it in terms of real estate. You know the mantra: location, location, location. In this case, the real estate is all that memory and storage sitting on all those personal devices we use. That’s the real estate we own.

Now look at the big Cloud players. A current list of the top 100 shows Microsoft at 23 and even the oh-so-huge Google only at number 12. While Amazon may be number one, most of the other names in this list are unrecognizable to me and you.

And they want you. OK, not so much you, as millions of you’s, to use their real estate for the things you want to do with your devices. Of course, in this game, you—even millions of you’s—are small potatoes. What the big Cloud players want is millions of organizations to put their data and computing into The Cloud.

Maybe, I’d better reword that. The big Cloud players want businesses, nonprofits, NGOs, and even governments to buy real estate in their Cloud. Amend that: buy or rent. Never ignore rentals.

Where does location come into this game? Isn’t it obvious? The more traffic coming in and out of any big Cloud player’s location, the more valuable the location. Or at least that’s the smoke they’re blowing.

As it is in real real estate, the big Cloud players don’t have to own what they’re peddling. They can be middle-men, wheeling and dealing, pushing their locations as valuable properties.

But the same questions about real real estate still apply. Does this location have sufficient infrastructure? Are the services reliable? Are you being locked in by the high cost of moving? Is the location secure? Will your stuff be safe?
