Digital Minefield

Why The Machines Are Winning


The Software Did It!


Up to now, computers have been blamed only for unintentional failures. Crashes have led the list of computer malfunctions, signified by the phrase “the computer is down.” But these too were unplanned, accidental events.

Now, after years of investigation in many countries, Volkswagen has finally come clean about their dirty diesels. More than merely admitting guilt, they pointed their corporate finger at the actual culprit: the software did it.

Not only acknowledging that the behavior was not accidental but intentionally criminal, they went public, throwing the offending software under the worldwide VW bus.

In one sense, the most significant aspect of this story was further confirmation that even the biggest companies feel compelled to cheat. Are they so insecure, or do they think they’re too big to be caught?

The most surprising aspect of the story, for most people, is learning just how sophisticated software can be when you want it to cheat. These engines ran clean only when they were being tested for emissions.

The strangest aspect of the story is what made them think they could get away with it. As long as testing was required, the software would cheat. Did they not think it would eventually be found out?

Or did they think they were just buying time? That they could eventually find a fix—in hardware or software—that would produce clean engines? And then replace the cheating software?

The most disturbing aspect of the story is realizing that software is sophisticated beyond our imagination. What are the odds that such cheating can be detected, much less brought to justice?

The most obvious aspect of the story is the question of why they test in a way that programs can cheat. Is there no method equivalent to the random drug test for humans? Which brings up another set of questions.

When will we see nano-bots doing that drug testing? Then, how long before someone creates software to cheat on its programming? And the obvious final question, how do we test the testers and their tools, i.e., their testing software?


Spies Like Us


Not too long ago, I suggested drones were out of control. I had no idea. Saw an ad on TV last week for the High Spy Drone. You can continue reading this post or go to their site.

If you’ve seen the video and paid attention, you heard the announcer say “… spy on your neighbors.” Yes, folks, for the small price of only two payments of $19.95 (plus S & H), you too can invade the privacy of anyone living next door.

Why stop there? You can take it on vacation (as the ad suggests) and spy on total strangers. Why not pull up outside a house with a pool and spy on the sunbathers? (Just be prepared to abandon the drone and make a quick getaway.)

Okay, so it’s just a toy (less than a foot square). And it’s likely that the batteries won’t keep it airborne for its full 75 minutes of video. But for that low, low price you get TWO high spy drones.

Like I said, it’s just a toy and you couldn’t add an ounce of payload to cause any real (physical) damage. The damage will depend on the pictures you take and what you do with them.

Speaking of pictures, the ad says the device’s range is 160 feet. So if you want video from 50 feet high, you can be up to 152 feet away from your target. You might even be able to spy on people two houses away.
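That figure is just the Pythagorean theorem at work: the ad’s 160 feet is the slant range from you to the drone, so the horizontal reach at a given altitude is the remaining leg of the triangle. A quick sketch (the function name is mine; the 160-foot range and 50-foot height come from the ad):

```python
import math

def horizontal_reach(slant_range_ft, height_ft):
    """Horizontal distance to the drone when it flies at height_ft,
    given the controller's maximum slant range."""
    return math.sqrt(slant_range_ft**2 - height_ft**2)

# 160-foot range, drone hovering 50 feet up:
print(round(horizontal_reach(160, 50)))  # about 152 feet
```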

Speaking of ads, I didn’t catch the product name in the ad and my online search failed the first day. The next day I came up with a better search phrase, “spy drone tv ad,” and wondered if anyone had a site for things seen on TV.

They did. It’s called ispot.tv and here’s the link for the high spy drone. However, this site is not just for toys. It’s for “Real-Time TV Advertising Metrics.” It’s an actual tool for media planning, ad effectiveness, and competitive analysis.

Interesting, but I digress. The issue here, the non-toy, non-joke concern is privacy. The fruits of the ever-shrinking world of digital are just beginning to appear. The technology that stabilizes this drone is very high-tech—and getting smaller.

As for privacy, the odds are against us. Since it wasn’t mentioned explicitly in the Constitution, government is slow to derive, and enforce, privacy rights. It’s not much help when it comes to electronic invasion, so why expect any when it comes to physical spying?

If you find your airspace being invaded by a spy drone, I don’t recommend the family shotgun. Instead, I’d get a T-shirt gun, like they use at concerts. For ammo, you’ll need a net with small weights in the corners. Maybe you’ll see it advertised on TV.

Worst Idea, Part Two


There are so many things wrong with the idea of autonomous weapons, it’s hard to know where the list ends. For example, take every bad news story involving guns, drones, or even high-speed chases, and add AI. That future is chaos.

Drones interfering with fire-fighting planes in California is just a beginning. Soon the news will be filled with more drones in more situations generating more chaos. AI is just itching to get control of drones.

If a weapon is truly autonomous, won’t it be able to determine its own targets? If not, then how can all its possible targets be programmed in advance? Either method of targeting is risky.

Will such weapons have defensive capabilities? Given what they will cost, I’m sure their designers will build in whatever defenses they consider sufficient to carry out the mission.

How much of that defense will be directed at deceiving computer systems? How much to deceive humans? Think transformers. Not the gigantic CGI silliness of the movies, but smaller, unobtrusive objects—like a London phone booth.

Deceptions are only one part of the AI puzzle. Can the designers guarantee any autonomous weapon will be unhackable? And if not hackable, are they protected against simple sabotage?

To put this in another context: If the device has a mind, it can be changed. And if it’s changed in ways not detectable by its makers, it will wreak havoc before it can be destroyed.

Autonomous weapons are just another step in technology’s climb to superiority. But we already have overwhelming weapons superiority—and it doesn’t bring victory, or even peace of mind.

We are currently engaged with an enemy, IS, over which we have an enormous technological advantage. Yet we have no strategic advantage, and the outcome is unpredictable. How will more technology help?

Who really thinks that if our weapons don’t risk lives on a battlefield, the enemy will do likewise? We’re already struggling with a relative handful of terrorists, whose primary targets are humans.

The bottom line in the use of autonomous weapons is their offensive use cannot stop the enemy from targeting our civilians. Autonomous weapons can’t prevent the random acts of terrorism we now encounter on our home soil.

Unless some AI genius decides autonomous weapons should be employed in defending our civilians. Remember the huge crime-fighting robot (ED-209) in the first RoboCop movie, the one that went berserk? Will that fictional past become our real future?

Optimizing Windows


Optimizing windows is a bad idea. Not Windows the operating system (although I could give good reasons for that), but the concept of windows employed by every graphic user interface (GUI).

I’ve used the term “optimize” a lot in recent posts. So much that I began to wonder if it’s the right word. Compare this definition, “to make as perfect, effective, or functional as possible” to this one, “to make the most of.” The first was for “optimize” but the second was for “maximize.” Not much difference.

In WindowsSpeak, to “Maximize” a window means to enlarge it to fill the display. To fill a window with content is to optimize its space on screen. Or you could say this was maximizing the content.

Two points: One, the intent of early GUI designers was to have many windows on screen. Two, every bad web page designer wants his window to fill the screen and to fill that window with content. Why bother with windows; just call them screens.

The idea of a windows-based GUI (or WIMP: window, icon, menu and pointing device) began at Xerox PARC (Palo Alto Research Center) in 1973. Apple’s Mac system showed up in 1984, followed by Microsoft’s Windows the next year.

There have been a score of GUIs. Many, like the Unix-based X Window System, were far superior graphically. At the time, the displays on our desktops could not compete with those of mainframe terminals.

Since they lacked the real estate and the resolution, early PC programmers needed bigger windows (i.e., more of the screen). Now our screens are easily the equal of those earlier terminals in resolution and size. But PC programmers, especially web page designers, still—unnecessarily—want it all.

It’s like a war: Every window (program) for itself. And all clamoring for your attention. Almost every morning when I boot up, some program wants my immediate attention. As if what I might want to do could not possibly be more important.

At one end of their arsenal are endless upgrades. At the other end are endlessly annoying pop-ups. (How long have we been trying to kill pop-ups? I forget.) No program can win this war. All this conflict achieves is ever-worsening computer experiences.

To ask some programmers not to optimize is an affront to their egos. Yet, optimizing desktop web pages is why other programmers must create entirely separate and independent mobile web pages. This takes twice the effort (and more than twice the cost), but don’t ever ask ego-driven programmers to settle for less than all the pixels.

Women Programmers


In an era short of capable programmers (this one), we often hear the question, Why aren’t there more women programmers? Having observed major changes in the industry in recent years, I think a better question would be, Why are there so many male programmers?

In my post of June 15, I described programmer character traits. I think if we examine these as to gender, we can see the negative effects of intellectually capable but socially awkward nerds.

For example, nerds overdo the positive character trait of Problem-Solving. Brain teasers can be fun, but too much becomes an anti-social obsession. Any positive character trait carried to extremes can become a negative.

Persistence is necessary, but when focus on a problem becomes so narrow that a programmer can’t admit he needs help, persistence is no longer positive. (I use the masculine pronoun because far more men have this failing.)

Another thing that defines nerds is a foolish pride in their own cleverness. As I said in the post of June 22, “[c]lever programmers rarely see beyond their own egos. They would rather their code dazzles the user than be transparent and do the job with a minimum of fuss and muss.”

In the post of June 8, I pointed out programmers need more than logic. They need clarity, empathy, and imagination. Unlike most women, nerds are unaware of the feelings of others. They are apathetic not empathetic.

Women programmers care about the users of their code. Nerds only want to impress them. The immaturity of nerds creates a need to feel superior to users. Women care about the user’s experience.

Our society has become so dependent on computers, we’re willing to tolerate the socially inept as long as we can pick their brains. This is cute when sub-teens help a clueless adult figure out email. It’s something else when it’s the prevailing climate of the programming workplace.

By focusing on code instead of users, computing has become dependent on nerds. Our attitude towards technology is now the nerd’s attitude. Social media has replaced face-to-face interaction because nerds find social risk uncomfortable. Movies are no longer character or plot driven—they are CGI driven. Nerds seek style over substance. And so it is in programming.

Because most aren’t nerds, there are too few women in programming. Nerds dictate the climate. They don’t need to be in charge or make the decisions, because they define the choices for management. The nerds have won.

How To Code

If “What Is Code?” is not the answer for suffering VPs and SVPs, what should the answer look like? How about “How to”? Or have we forgotten the effectiveness of learning by doing?

People without any programming training can produce acceptable web pages. In fact, the original intent of HTML was to give non-programmers a very forgiving language so that, even with many mistakes, anyone could create web pages.

Experienced programmers found it too forgiving. Any mistake might slip by—or completely ruin a page. It had no logical syntax, and programmers need structure. Nowhere in this article’s 38,000 words does it present any of the following how-to tips.

Along with learning by doing, learning from others is a beginning programmer’s mainstay. The easiest introduction to code is modifying it. Find code somewhat similar to your needs; figure out how it works; change it to solve your problem.
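A toy illustration of that find-and-modify loop (both snippets are mine, not from any real source): start with found code that sums a list, work out how it does it, then bend it to the problem you actually have, say, an average.

```python
# Found code: sums a list of numbers.
def total(values):
    result = 0
    for v in values:
        result += v
    return result

# My modification: reuse the part I now understand to solve my problem.
def average(values):
    if not values:
        raise ValueError("average of an empty list is undefined")
    return total(values) / len(values)

print(average([2, 4, 6]))  # prints 4.0
```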

This is really no different from finding an example in a book. But there’s far more code out in the real world than in books. And book examples tend to be overly simplistic.

It’s the same, really, when beginning writers are told to steal from the best. Considering the author is both a programmer and a writer, it’s surprising he doesn’t talk more about the connections between the two.

For example, the creative aspect of programming uses the same parts of the brain as creative writing. Not to mention the goal in prose is the same as in programming: clarity.

Beyond modifying other people’s code, programming requires much basic knowledge from books. One should have a perspective from the abstract Turing Machine to the different levels of abstraction of various languages.

Never limit testing and debugging to your primary computer. Know the higher level languages down to assembler and all the way down to machine code. Spend some time with a disassembler to see what compilers actually do with your code.
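You don’t need a hardware disassembler to start; an interpreted language can make the same point. Python’s standard dis module, for instance, shows the bytecode its compiler emits for your source (bytecode, not machine code, but the lesson about what compilers actually do with your code is the same):

```python
import dis

def celsius(f):
    return (f - 32) * 5 / 9

# Print the bytecode the Python compiler generated for this one-liner.
dis.dis(celsius)
```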

Finally, minimize the code that depends on outside programs. First, employ the old adage (it was old when I started in the ’60s): Keep It Simple, Stupid, or K.I.S.S. Optimizing any code that depends on outside programs is unlikely to last very long.

Optimizing increases maintenance. Upgrades for outside programs (operating systems, browsers, and even PDF readers) require more testing of your code. Outside changes keep increasing, forcing new versions of your code every few years.

It shouldn’t be too hard to write good code because there are so many examples out there of bad code. Unless you can’t distinguish the good from the bad. If that’s the case, then perhaps you should find another line of work.

Beyond Code


The article “What Is Code?” in the June 11 issue of Bloomberg’s BusinessWeek starts out saying you’re a VP doing an SVP’s job. Your problem is overseeing software development.

Across your desk is “The man in the taupe blazer (TMitTB) [who] works for the new CTO. … [who says] ‘All of the computer code that keeps the website running must be replaced. At one time, it was very valuable … .’ [T]he new CTO thinks it’s garbage. She said the old code is spaghetti … .”

When I hear statements like this, I have to wonder: when did the code become garbage? Was there no way to fix it (or at least the critical parts) before it was totally spaghetti?

In short, TMitTB is saying it can’t be fixed. And, apparently, no one knows why. Yet the decisions that created it are still in place, waiting to build the next, newer version of the company’s software. Why will those results be any better?

Addressing this article to VPs (and SVPs), Bloomberg editor Josh Tyrangiel thinks the solution is in the answer to the question, What Is Code? It’s not. Code is only a small part of the problem, although easily the most obscure part.

Code is complicated, but the bigger problem is that it runs in a complicated environment—much of it unknown to the coders. But even the known parts are very complicated. For example, there are many different versions of many different browsers.

Beyond code is this larger environment people (and VPs) need to understand. Unless it’s our job, we don’t need to know the technical innards of our TVs, microwaves, or car engines. We just need to know what to expect and how to get it.

There are fundamental lessons learned over the decades about the total software environment. These are the basics needed to control every company’s software development. First and foremost of these is program maintenance.

Maintenance is not even mentioned in this article. Obscure code is nearly impossible to maintain, i.e., it can’t be fixed so it must be replaced—with a new, equally obscure language.

Old code may not be popular but it can be maintained. COBOL (whose design owed much to Grace Murray Hopper’s work) has lasted over 55 years because it was essentially self-documenting. Yet the article says it’s “verbose” and “cripples the mind.”

If code is maintained by the people who wrote it, then their careers are tied to the project and won’t advance until the project dies. It’s unlikely they’ll stay long. Skilled programmers know that those working in the new languages get the better jobs, so they won’t do maintenance.

The way to quickly kill maintenance is to assign the job to junior programmers. Add obscure code and it’s two-thirds dead. Then optimize for no reason. The plug is pulled and the code must be replaced by newer, more expensive code.

Why Code Is Important


Modern industrialized nations have based their societies on technology run by software. This code compiles into digital computer instructions. Digital is built of binary switches (or bits): ones and zeros.

Code, if not written to respond safely to all possible contingencies, may produce a zero when the computer expects a one, or vice versa. This simple error may halt the computer—or worse.

Digital, due to its binary nature, is inherently brittle: it either works or not—there is no sort-of-works. Unlike analog, which degrades gradually over time, digital (if not programmed for all contingencies) may suddenly and unexpectedly stop.

Compare VCRs to DVDs. Tape stretches but it isn’t very noticeable. Tape breaks but can be easily spliced with very little loss of signal. Merely scratching a DVD can make it unplayable. Everything may appear intact, but it just takes one lost bit.
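The “one lost bit” point is easy to demonstrate. Flip a single bit and the data means something else entirely; any checksum or decoder downstream sees a different value. A minimal sketch (the names are mine):

```python
def flip_bit(byte_value, bit_index):
    """Return byte_value with one bit inverted."""
    return byte_value ^ (1 << bit_index)

original = 0b01000001            # 65, the ASCII code for 'A'
damaged = flip_bit(original, 5)  # one bit lost in transit
print(chr(original), "->", chr(damaged))  # A -> a
```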

The programs we depend on daily are also brittle but unlikely to lose bits. Or so we think. A sufficient electromagnetic pulse or EMP (think small nuclear device) will destroy those bits unless your machine is “hardened” as are many military computers.

Once upon a time, dividing by zero would stop a computer cold. Since there were too many ways this could occur, the solution was to have the hardware check every division to make sure the divisor was not zero. If it was, the program—not the computer—was halted and an error message displayed.
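Modern languages keep that same guard but surface the trapped division as an exception the program can catch, so it need not halt at all. A minimal Python sketch (safe_divide is my name, not a standard function):

```python
def safe_divide(dividend, divisor):
    """Return dividend / divisor, or None when the divisor is zero."""
    try:
        return dividend / divisor
    except ZeroDivisionError:
        # The runtime traps the division; the program, not the machine,
        # decides what happens next.
        return None

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # None
```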

A program and its data on our hard drives are protected from other programs that write to hard drives. Those programs are constrained by system software, prevented from writing where they shouldn’t.

This solution depends on correct system software. It is not as safe as hardware trapping division by zero. Programs that write where they shouldn’t are classed as viruses; they know how to get around system software protection.

These are simple examples of potential problems for programmers. Above, I used the word “contingencies” twice. To grasp the extent of possible contingencies, a programmer must be aware of the total environment in which the code must run.

There will be many different computers, each with a unique hardware configuration. Different CPUs mean different instruction sets. Many computers have multiple processors (e.g., quad-core), requiring multi-threaded code.

Code must run on many different operating systems. Even the same operating system is configured differently according to installed updates, hardware drivers, customization, etc.

Then there’s the problem of many unknown programs running simultaneously or interrupting according to their own needs. Or crashing and interfering with the code. Or the intentional disruption of hackers.

It’s easy to see that code running successfully under all these contingencies is far more valuable to society than code that chooses cleverness over safety. Since digital is inherently brittle, code must be robust. Simpler code is easier to secure than complex.

What Is Code?

This is the question asked by the title of the main (38,000 words!) article in the June 11 issue of Bloomberg’s BusinessWeek. The question is answered thoroughly (38,000 words!) and with patience for the lay reader. But is it the right question?

While it’s useful to know what is code, it’s more important to address the whys of code. Why is code so expensive? Why is code so obtuse? Why is code getting worse? And why is code not the answer?

The question, why is code so expensive? could be better framed as why is the new code so expensive? The answer is because it has so little in common with the old code. The article talks of a newly hired CTO, who inevitably will hire programmers proficient in the new language.

The next question is, why is code so obtuse? It’s as though programmers are a cult believing in the mysticism of the cryptic. Look at the examples in the article and you’ll see a trend over the years towards more concise code. If it’s a game, it was done fifty years ago by Ken Iverson with APL—and done better.

As mentioned in last week’s post, Pride is a bad character trait for programmers. Their goal is to say: I can write that code in one line. (Remember the quiz show “Name That Tune”?)

The preference for cryptic code is a serious problem. For decades it has been estimated that eighty percent (80%) of programming cost goes to maintenance. Some other programmer has to modify the code. Unless the laconic language is supplemented by extensive comments, it’s a mind-numbing job (often solved by rewriting whole sections of code).
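A contrived Python example of the trade-off (both versions are mine): the two functions compute the same character counts, but only one is kind to the programmer who inherits it.

```python
# The one-liner that wins bar bets:
f = lambda s: {c: s.count(c) for c in set(s)}

# The version the maintainer can actually read and modify:
def letter_frequencies(text):
    """Count how many times each character appears in text."""
    counts = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    return counts

print(letter_frequencies("abba"))  # {'a': 2, 'b': 2}
```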

The question, why is code getting worse? has been partially answered by the responses to the previous two questions. However, there are many more reasons, far too many to enumerate all of them.

The push to newer, more obtuse languages is just one reason. The biggest contributor to worsening code is the increasing inability of programmers (and their managers and project leaders) to see their job as serving users first and foremost.

Clever programmers rarely see beyond their own egos. They would rather their code dazzles the user than be transparent and do the job with a minimum of fuss and muss.

The question, why is code not the answer? is not simply all of the above. It lies within the much larger picture of the history of programming (of which code and languages are just a part).

Many useful methods have been invented over the decades but fewer and fewer of them are being applied. For example, new software never replaced the old without first extensively testing in the real world.

I can’t say this is never done, but more and more I see instances of software changes put into use without any testing. I’ve even seen bank software being changed as I was using it. There are no excuses for such foolishness in a professional environment.

Programmer Character Traits


The last post listed eight programmer character traits. I’d written about these some thirty years ago. There still seems to be a need to elaborate them, and this seems to be the right time and place.

The positive character traits are Persistence, Problem-Solving, Planning, and Play. It is doubtful that any person completely lacking in any one of these could become a capable programmer.

On Persistence: I have always maintained that the most important muscle used in programming is the one we sit upon. Knowing when to persevere and when to seek help is what gets the job done.

On Problem-Solving: Sometimes we solve problems just for fun, sometimes it can be an obsession. Either way, it must be appropriate to the situation. It is possible to have too much fun finding too clever a solution.

On Planning: Although necessary, it should not expand to use the resources necessary for implementation. Here, too little is as dangerous as too much. The problem is that many people with programming skills may have even better planning skills, which demand to be exercised.

On Play: The freedom of play must not turn into anarchy. Fear of trying cannot keep a good programmer down. But the joy of experimentation must have some boundaries or else the problem at hand is just a game.

The negative character traits are Pride, Pedantry, Perfectionism, and Prejudice. Few of us are completely free of these, but anyone who is ruled by any one of them will never be a good programmer.

On Pride: It keeps us from asking questions. Pride prizes the clever above the practical. Pride is our personal spotlight. If we shine it outwards, it blinds the rest of the world to the fact that we are in the dark. If we shine it on ourselves, we are blinded to what is out there in the dark, the reality beyond our brilliant little circle.

On Pedantry: A pedant is best described as one who cannot learn, not as one who knows. Pedantry is usually seen as forcing your answers upon others. It also can be hiding answers with a knowing nod of the head. Whether concealing or denying doubts about yourself, the result is the same: ignorance.

On Perfectionism: It is the infinite regress of unreachable excellence. Making a program “better” can be a never-ending process, a process that none of us can afford. While we all desire improvement, the action we take must fit our limited resources.

On Prejudice: This is all the assumptions that must be removed before we can see reality. The more that is hidden about a program, the less it will be understood. Assuming that Jones cannot write good code or that Smith never makes a mistake, are obvious prejudices. Prejudgment is the enemy of thought.
