Digital Minefield

Why The Machines Are Winning

Worst Idea Ever


I assume by now you’ve heard about the ban on AI weapons proposed in a letter signed by over 1000 AI experts, including Elon Musk, Steve Wozniak, and Stephen Hawking. The letter was presented last week at the International Joint Conference on Artificial Intelligence in Buenos Aires.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The rest of us have been occupied with the threats posed by various kinds of autonomous vehicles (cars, trucks, drones), and we failed to see what was right around the corner. Autonomous weapons are more than battlefield robot-soldiers (unlikely) or gun-toting drones (likely). And weapons are more than guns.

Robots have no qualms about sacrificing themselves. It’s a different kind of warfare when the weapons deliverer is the weapon. It’s Kamikaze on steroids.

Only now do I realize, after writing a dozen posts about autonomous autos, that they and their ilk are merely stalking horses for the worst idea ever. Autos were just the beginning. Add long-haul trucks, obviously, because they’re already being tested.

How about local trucks? Post Office? UPS? How about city buses? School buses? Don’t forget Amazon’s drones. Can larger autonomous drones replace helicopters? For news? Police surveillance? Medevacs?

Look at it another way. Instead of using robots to replace humans, autonomous vehicles are replacing the entire job, e.g., driving a truck. The idea behind all these autonomous devices is to get the public used to the concept, so they won’t question any of them, even those that go way over the top.

Aside from giving the people in control more control using the leverage of computers, there’s the general degradation of the populace by making them less valued than the robots that replace them.

How did humans come to this insane position? Here’s how. People who control the machines not only think they are so much smarter than other people (e.g., the ones they want to replace with robots), they also think they can make computers smarter than other people. This is the AI they seek.

And there are some so enamored of intelligence in any form that if they succeed at making a superhuman artificial intelligence—one even smarter than themselves—they will bow down and worship it. Even as it destroys them.

Optimizing Windows


Optimizing windows is a bad idea. Not Windows the operating system (although I could give good reasons for that), but the concept of windows employed by every graphic user interface (GUI).

I’ve used the term “optimize” a lot in recent posts. So much that I began to wonder if it’s the right word. Compare this definition, “to make as perfect, effective, or functional as possible” to this one, “to make the most of.” The first was for “optimize” but the second was for “maximize.” Not much difference.

In WindowsSpeak, to “Maximize” a window means to enlarge it to fill the display. To fill a window with content is to optimize its space on screen. Or you could say this was maximizing the content.

Two points: One, the intent of early GUI designers was to have many windows on screen. Two, every bad web page designer wants his window to fill the screen and to fill that window with content. Why bother with windows? Just call them screens.

The idea of a windows-based GUI (or WIMP: window, icon, menu and pointing device) began at Xerox PARC (Palo Alto Research Center) in 1973. Apple’s Mac system showed up in 1984, followed by Microsoft’s Windows the next year.

There have been a score of GUIs. Many, like the Unix-based X Windows, were far superior graphically. At the time, the displays on our desktops could not compete with those of mainframe terminals.

Since they lacked the real estate and the resolution, early PC programmers needed bigger windows (i.e., more of the screen). Now our screens are easily the equal of those earlier terminals in resolution and size. But PC programmers, especially web page designers, still—unnecessarily—want it all.

It’s like a war: Every window (program) for itself. And all clamoring for your attention. Almost every morning when I boot up, some program wants my immediate attention. As if what I might want to do could not possibly be more important.

At one end of their arsenal are endless upgrades. At the other end are endlessly annoying pop-ups. (How long have we been trying to kill pop-ups? I forget.) No program can win this war. All this conflict achieves is ever-worsening computer experiences.

To ask some programmers not to optimize is an affront to their egos. Yet, optimizing desktop web pages is why other programmers must create entirely separate and independent mobile web pages. This takes twice the effort (and more than twice the cost), but don’t ever ask ego-driven programmers to settle for less than all the pixels.

Women Programmers


In an era short of capable programmers (this one), we often hear the question, Why aren’t there more women programmers? Having observed major changes in the industry in recent years, I think a better question would be, Why are there so many male programmers?

In my post of June 15, I described programmer character traits. I think if we examine these as to gender, we can see the negative effects of intellectually capable but socially awkward nerds.

For example, nerds overdo the positive character trait of Problem-Solving. Brain teasers can be fun, but too much becomes an anti-social obsession. Any positive character trait carried to extremes can become a negative.

Persistence is necessary, but when focus on a problem becomes so narrow that a programmer can’t admit he needs help, persistence is no longer positive. (I use the masculine pronoun because far more men have this failing.)

Another thing that defines nerds is a foolish pride in their own cleverness. As I said in the post of June 22, “[c]lever programmers rarely see beyond their own egos. They would rather their code dazzles the user than be transparent and do the job with a minimum of fuss and muss.”

In the post of June 8, I pointed out programmers need more than logic. They need clarity, empathy, and imagination. Unlike most women, nerds are unaware of the feelings of others. They are apathetic, not empathetic.

Women programmers care about the users of their code. Nerds only want to impress them. The immaturity of nerds creates a need to feel superior to users. Women care about the user’s experience.

Our society has become so dependent on computers, we’re willing to tolerate the socially inept as long as we can pick their brains. This is cute when sub-teens help a clueless adult figure out email. It’s something else when it’s the prevailing climate of the programming workplace.

By focusing on code instead of users, computing has become dependent on nerds. Our attitude towards technology is now the nerd’s attitude. Social media has replaced face-to-face interaction because nerds find social risk uncomfortable. Movies are no longer character or plot driven—they are CGI driven. Nerds seek style over substance. And so it is in programming.

Because most women aren’t nerds, there are too few women in programming. Nerds dictate the climate. They don’t need to be in charge or make the decisions, because they define the choices for management. The nerds have won.

How To Code

If “What Is Code?” is not the answer for suffering VPs and SVPs, what should the answer look like? How about “How to”? Or have we forgotten the effectiveness of learning by doing?

People without any programming training can produce acceptable web pages. In fact, the original intent of HTML was to give non-programmers a very forgiving language so that even with many mistakes, anyone could create web pages.

Experienced programmers found it too forgiving. Any mistake might slip by—or completely ruin a page. It had no logical syntax. Programmers need structure. Nowhere in this article’s 38,000 words does it present any of the following how-to tips.

Along with learning by doing, learning from others is a beginning programmer’s mainstay. The easiest introduction to code is modifying it. Find code somewhat similar to your needs; figure out how it works; change it to solve your problem.
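
As a toy illustration of that approach (the file name, function names, and task are hypothetical), here is a minimal sketch: suppose the code you found totals the numbers in a file, one number per line, and your problem is to compute the average instead.

```python
# Code you might find: totals the numbers in a file, one number per line.
def total_of_file(path):
    with open(path) as f:
        return sum(float(line) for line in f)

# Once you see how it works, a small change solves your own problem:
# collect the numbers, then divide the total by how many there are.
def average_of_file(path):
    with open(path) as f:
        numbers = [float(line) for line in f]
    return sum(numbers) / len(numbers) if numbers else 0.0
```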

This is really no different from finding an example in a book. But there’s far more code out in the real world than in books. And book examples tend to be overly simplistic.

It’s the same, really, as when beginning writers are told to steal from the best. Considering the author is both a programmer and a writer, it’s surprising he doesn’t talk more about the connections between the two.

For example, the creative aspect of programming uses the same parts of the brain as creative writing. Not to mention the goal in prose is the same as in programming: clarity.

Beyond modifying other people’s code, programming requires much basic knowledge from books. One should have a perspective from the abstract Turing Machine to the different levels of abstraction of various languages.

Never limit testing and debugging to your primary computer. Know the higher level languages down to assembler and all the way down to machine code. Spend some time with a disassembler to see what compilers actually do with your code.
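
As a rough analogue of that habit (in Python rather than C and assembler), the standard dis module will show the bytecode the interpreter actually runs for a function, the same look-under-the-hood exercise a disassembler gives you for compiled code.

```python
import dis

def average(values):
    # A small function to inspect.
    return sum(values) / len(values)

# Print the low-level instructions generated from the source above,
# the interpreter's equivalent of the assembler a compiler would emit.
dis.dis(average)
```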

Finally, minimize the code that depends on outside programs. First, employ the old adage (it was old when I started in the 60s) Keep It Simple, Stupid or K.I.S.S. Any optimization of code that depends on outside programs is unlikely to last very long.

Optimizing increases maintenance. Upgrades for outside programs (operating systems, browsers, and even PDF readers) require more testing of your code. Outside changes keep increasing, forcing new versions of your code every few years.

It shouldn’t be too hard to write good code because there are so many examples out there of bad code. Unless you can’t distinguish the good from the bad. If that’s the case, then perhaps you should find another line of work.

Beyond Code


The article “What Is Code?” in the June 11 issue of Bloomberg’s BusinessWeek starts out saying you’re a VP doing an SVP’s job. Your problem is overseeing software development.

Across your desk is “The man in the taupe blazer (TMitTB) [who] works for the new CTO. … [who says] ‘All of the computer code that keeps the website running must be replaced. At one time, it was very valuable … .’ [T]he new CTO thinks it’s garbage. She said the old code is spaghetti … .”

When I hear statements like this, I have to wonder when did the code become garbage? Was there no way to fix it (or at least the critical parts) before it was totally spaghetti?

In short, TMitTB is saying it can’t be fixed. And, apparently, no one knows why. Yet, the decisions that created it are still in place waiting to build the next, newer version of the company’s software. Why will those results be any better?

Addressing this article to VPs (and SVPs), Bloomberg editor Josh Tyrangiel thinks the solution is in the answer to the question, What Is Code? It’s not. Code is only a small part of the problem, although easily the most obscure part.

Code is complicated, but the bigger problem is that it runs in a complicated environment—much of it unknown to the coders. But even the known parts are very complicated. For example, there are many different versions of many different browsers.

Beyond code is this larger environment people (and VPs) need to understand. Unless it’s our job, we don’t need to know the technical innards of our TVs, microwaves, or car engines. We just need to know what to expect and how to get it.

There are fundamental lessons learned over the decades about the total software environment. These are the basics needed to control every company’s software development. First and foremost of these is program maintenance.

Maintenance is not even mentioned in this article. Obscure code is nearly impossible to maintain, i.e., it can’t be fixed so it must be replaced—with a new, equally obscure language.

Old code may not be popular but it can be maintained. COBOL (created by Grace Murray Hopper) has lasted over 55 years because it was essentially self-documenting. Yet the article says it’s “verbose” and “cripples the mind.”

If code is maintained by the people who wrote it, then their careers are tied to the project and won’t advance until the project dies. It’s unlikely they’ll stay long. And if skilled programmers know that the new languages get the better jobs, they won’t do maintenance.

The way to quickly kill maintenance is to assign the job to junior programmers. Add obscure code and it’s two-thirds dead. Then optimize for no reason. The plug is pulled and the code must be replaced by newer, more expensive code.

Why Code Is Important


Modern industrialized nations have based their societies on technology run by software. This code compiles into digital computer instructions. Digital is made of binary switches (or bits) of ones and zeros.

Code, if not written to respond safely to all possible contingencies, may produce a zero when the computer expects a one, or vice versa. This simple error may halt the computer—or worse.

Digital, due to its binary nature, is inherently brittle: it either works or not—there is no sort-of-works. Unlike analog, which degrades gradually over time, digital (if not programmed for all contingencies) may suddenly and unexpectedly stop.

Compare VCRs to DVDs. Tape stretches but it isn’t very noticeable. Tape breaks but can be easily spliced with very little loss of signal. Merely scratching a DVD can make it unplayable. Everything may appear intact, but it just takes one lost bit.

The programs we depend on daily are also brittle but unlikely to lose bits. Or so we think. A sufficient electromagnetic pulse or EMP (think small nuclear device) will destroy those bits unless your machine is “hardened” as are many military computers.

Once upon a time, dividing by zero would stop a computer cold. Since there were too many ways this could occur, the solution was to have the hardware check every division to make sure the divisor was not zero. If it was, the program—not the computer—was halted and an error message displayed.
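
A minimal sketch of the same principle at the language level (the names here are illustrative): the division is trapped, the offending operation is halted, and the program reports an error instead of taking the whole machine down.

```python
def safe_divide(dividend, divisor):
    try:
        return dividend / divisor
    except ZeroDivisionError:
        # The contingency described above: report it and carry on safely.
        print(f"error: cannot divide {dividend} by zero")
        return None

print(safe_divide(10, 2))   # prints 5.0
print(safe_divide(10, 0))   # the error message is printed, then None
```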

A program and its data on our hard drives are inaccessible to other programs that write to hard drives. Those programs are constrained by system software, prevented from writing where they shouldn’t.

This solution depends on correct system software. It is not as safe as hardware trapping division by zero. Programs that write where they shouldn’t are classed as viruses; they know how to get around system software protection.

These are simple examples of potential problems for programmers. Above, I used the word “contingencies” twice. To grasp the extent of possible contingencies, a programmer must be aware of the total environment in which the code must run.

There will be many different computers, each with a unique hardware configuration. Different CPUs mean different instruction sets. Many computers have multiple processors (e.g., quad) requiring multi-threaded code.
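
As a minimal sketch of code written with several processors in mind (the per-item work is a placeholder), the job is handed to a pool of workers instead of assuming a single CPU; in Python, a process pool stands in for threads, since that is the idiomatic way to use multiple cores here.

```python
from concurrent.futures import ProcessPoolExecutor

def square(n):
    # Placeholder for real per-item work.
    return n * n

if __name__ == "__main__":
    # The pool sizes itself to however many CPUs this machine actually has.
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(square, range(10))))
```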

Code must run on many different operating systems. Even the same operating system is configured differently according to installed updates, hardware drivers, customization, etc.

Then there’s the problem of many unknown programs running simultaneously or interrupting according to their own needs. Or crashing and interfering with the code. Or the intentional disruption of hackers.

It’s easy to see that code running successfully under all these contingencies is far more valuable to society than code that chooses cleverness over safety. Since digital is inherently brittle, code must be robust. Simpler code is easier to secure than complex code.

What Is Code?

This is the question asked by the title of the main (38,000 words!) article in the June 11 issue of Bloomberg’s BusinessWeek. The question is answered thoroughly (38,000 words!) and with patience for the lay reader. But is it the right question?

While it’s useful to know what is code, it’s more important to address the whys of code. Why is code so expensive? Why is code so obtuse? Why is code getting worse? And why is code not the answer?

The question, why is code so expensive? could be better framed as why is the new code so expensive? The answer is because it has so little in common with the old code. The article talks of a newly hired CTO, who inevitably will hire programmers proficient in the new language.

The next question is, why is code so obtuse? It’s as though programmers are a cult believing in the mysticism of the cryptic. Look at the examples in the article and you’ll see a trend over the years towards more concise code. If it’s a game, it was done fifty years ago by Ken Iverson with APL—and done better.

As mentioned in last week’s post, Pride is a bad character trait for programmers. Their goal is to say: I can write that code in one line. (Remember the quiz show “Name That Tune”?)

The preference for cryptic code is a serious problem. For decades it has been estimated that eighty percent (80%) of programming costs go to maintenance. Some other programmer has to modify the code. Unless the laconic language is supplemented by extensive comments, it’s a mind-numbing job (often solved by rewriting whole sections of code).
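
A small illustration of that cost, under the assumption that both versions count how often each word appears in a string: the one-liner works, but the next programmer has to untangle it; the longer version can be read and changed a line at a time.

```python
# The "I can write that code in one line" version.
def word_counts_clever(text):
    return {w: text.split().count(w) for w in set(text.split())}

# The maintainable version: one step per line, easy to modify later.
def word_counts_clear(text):
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts
```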

The question, why is code getting worse? has been partially answered by the responses to the previous two questions. However, there are many more reasons, far too many to enumerate all of them.

The push to newer, more obtuse languages is just one reason. The biggest contributor to worsening code is the increasing inability of programmers (and their managers and project leaders) to see their job as serving users first and foremost.

Clever programmers rarely see beyond their own egos. They would rather their code dazzles the user than be transparent and do the job with a minimum of fuss and muss.

The question, why is code not the answer? is not simply all of the above. It lies within the much larger picture of the history of programming (of which code and languages are just a part).

Many useful methods have been invented over the decades but fewer and fewer of them are being applied. For example, new software never replaced the old without first being extensively tested in the real world.

I can’t say this is never done, but more and more I see instances of software changes put into use without any testing. I’ve even seen bank software being changed as I was using it. There are no excuses for such foolishness in a professional environment.

Programmer Character Traits


The last post listed eight programmer character traits. I’d written about these some thirty years ago. There still seems to be a need to elaborate them, and this seems to be the right time and place.

The positive character traits are Persistence, Problem-Solving, Planning, and Play. It is doubtful that any person completely lacking in any one of these could become a capable programmer.

On Persistence: I have always maintained that the most important muscle used in programming is the one we sit upon. Knowing when to persevere and when to seek help is what gets the job done.

On Problem-Solving: Sometimes we solve problems just for fun, sometimes it can be an obsession. Either way, it must be appropriate to the situation. It is possible to have too much fun finding too clever a solution.

On Planning: Although necessary, it should not expand to consume the resources needed for implementation. Here, too little is as dangerous as too much. The problem is that many people with programming skills may have even better planning skills, which demand to be exercised.

On Play: The freedom of play must not turn into anarchy. Fear of trying cannot keep a good programmer down. But the joy of experimentation must have some boundaries or else the problem at hand is just a game.

The negative character traits are Pride, Pedantry, Perfectionism, and Prejudice. Few of us are completely free of these, but anyone who is ruled by any one of them will never be a good programmer.

On Pride: It keeps us from asking questions. Pride prizes the clever above the practical. Pride is our personal spotlight. If we shine it outwards, it blinds the rest of the world to the fact that we are in the dark. If we shine it on ourselves, we are blinded to what is out there in the dark, the reality beyond our brilliant little circle.

On Pedantry: A pedant is best described as one who cannot learn, not as one who knows. Pedantry is usually seen as forcing your answers upon others. It also can be hiding answers with a knowing nod of the head. Whether concealing or denying doubts about yourself, the result is the same: ignorance.

On Perfectionism: It is the infinite regress of unreachable excellence. Making a program “better” can be a never-ending process, a process that none of us can afford. While we all desire improvement, the action we take must fit our limited resources.

On Prejudice: This is all the assumptions that must be removed before we can see reality. The more that is hidden about a program, the less it will be understood. Assuming that Jones cannot write good code, or that Smith never makes a mistake, is obvious prejudice. Prejudgment is the enemy of thought.

More Than Logic


Last week’s post said programming was “More Than Engineering.” I believe there are eight essential programmer character traits. Four are positive: Persistence, Problem-Solving, Planning, and Play. Four are negative: Pride, Pedantry, Perfectionism, and Prejudice.

Beyond character traits, which can be enhanced or suppressed, there are three additional skills that programmers must practice to their utmost. These are Clarity, Empathy, and Imagination.

Clarity is a two-fold goal. The behavior of the program and expected responses from users must be clear and unambiguous. Also, the code itself must be clearly written for any programmer to maintain it.

Messages, especially error messages, are useless if not clear. A recent attempt to copy a file told me only that it could not be copied. Was this a source or a destination problem? I could see the file wasn’t copied, so what was the purpose of the message?
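
A minimal sketch of the kind of message that incident called for (the file names are made up): when a copy fails, say which end failed.

```python
import os
import shutil

def copy_with_clear_errors(src, dst):
    # Distinguish the failure modes the vague message left ambiguous.
    if not os.path.exists(src):
        print(f"copy failed: source '{src}' does not exist")
        return
    try:
        shutil.copy(src, dst)
    except PermissionError:
        print(f"copy failed: no permission to write destination '{dst}'")
    except OSError as err:
        print(f"copy failed: {err}")

copy_with_clear_errors("notes.txt", "backup/notes.txt")
```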

The computer equivalent of putting yourself in another’s shoes is to put yourself at another’s screen and keyboard. Empathy, in programming, is more than a feeling. Its goal is to anticipate how users will respond to your software—on their machines.

Multiple windows on a screen were not conceived so that each window would take over the whole screen. Yet that’s what I often see. Graphic User Interfaces are highly customizable, but many programs ignore the possibilities.

My personal pet peeve is windows that ignore the location of the task bar. It can be top, bottom, left, or right, but more than half the programs I use expect it to be at the bottom. (Good if you sit above the screen looking down. I sit below and look up.)

While clarity and empathy are crucial, it takes imagination to create a proper response for every possible situation. Users interact with a total environment, not just this program’s code.

Aside from the purpose of the program, and the myriad things that can go wrong, there is the universe of possible program events—including the illogical. The goal of imagination is to make it more likely these events will be handled safely.

Finally, let’s return to the last character trait: Prejudice. I mean this in the most general sense, i.e., all assumptions are bad. Lack of clarity assumes a message is clear when it’s not. Lack of empathy assumes every user’s machine is just like yours.

Assuming the unlikely will never happen, and therefore needs no code to handle it, cripples imagination. Nothing makes code worthless quicker than programming only for the probable.
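
A tiny sketch of what that looks like in practice (the function is a made-up example): the input that “can never happen” still gets a safe, sensible answer.

```python
def percent_complete(done, total):
    # Guard the "unlikely" case instead of assuming it never happens.
    if total <= 0:
        return 0.0
    return min(100.0, 100.0 * done / total)

print(percent_complete(3, 10))   # 30.0
print(percent_complete(5, 0))    # 0.0 instead of a crash
```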

More Than Engineering


What do great engineering feats have in common with writing software? Let’s look at one of the biggest (many say the greatest engineering achievement of all time): Going to the moon.

Engineering is often called a by-the-book discipline. The engineering books of the day couldn’t get us to the moon; there were far too many significant unknowns. NASA wrote the book as it went along.

There are two ways of handling unknowns. Science, with experimentation, testing, and verification, is one. Going to the moon required massive scientific investigation. No one knew the effects of prolonged weightlessness. (That’s what we call it, but it’s really falling—all the time, non-stop, 24/7.)

The other approach to unknowns requires human ingenuity and intuition. That’s why NASA wanted experienced pilots. As he approached the moon’s surface, Neil Armstrong was running out of fuel as numerous alarms sounded. He made the right choices.

Solving this second class of unknowns is sometimes called a Black Art. If its practitioners succeed more often than not, you can’t call it guesswork. Neil Armstrong was a highly experienced test pilot and he’d faced the unknown many times before.

In the 80s, the powers that be decided programming should become Software Engineering. Unfortunately, it didn’t connect to other engineering disciplines. At the college level, it was more likely found under Computer Science.

So, what is programming? Is it science? Engineering? Or a practical art? Bismarck said, “Politics is the art of the possible.” I say, programming is the art of the practical. However, the practical arts don’t have the academic standing of science or engineering.

All engineering is based on scientific principles. All practical arts are based on engineering principles. But the final product comes from the hand of the practitioner, not the engineer or the scientist.

To design a house, an architect obeys the laws of physics, follows engineering guidelines, and uses 3-D computer modeling to feel what it’s like to walk through it. Programmers exercise the software to feel what users will feel.

The NASA astronauts provided that feedback. They took the work of the engineers and the scientists and refined it into workable systems. This is where software is weakest today. It’s time to move beyond Software Engineering.
