Digital Minefield

Why The Machines Are Winning

Social Media’s Biggest Lie


This time it was the news that made the news. This time, instead of hearing about the killer’s social media from an investigation, we heard it in real time from the killer himself. He made social media the very essence of his crime.

I wondered how a psychopath could have a social media network. Then it all came back to me. News reports of senseless killings over many decades. And how, until now in the age of social media, all those killers were described as “loners.”

Maybe it began with Columbine (April 20, 1999), although the influence of social media wasn’t as obvious there: it was a shared psychosis, seen as an extreme folie à deux. Maybe, but they were loners.

However, since Columbine, the extensive usage of social media has been the common element the news has given us in lieu of the more cryptic term, loner. Yet, for all this data we have learned nothing about how these disturbed people became out-and-out psychopaths.

Instead, we are left with a pile of meaningless social media connections. As though there was some understanding of the actions of these psychopaths that could be gained by exploring their social media movements.

Far too many people seem unaware that we become human only through interaction with other humans. This interaction is not only what makes us human, it’s what keeps us human.

It would also seem that most people are unable to distinguish the unreality of social media’s virtual interactions from actual face-to-face, one-on-one human interaction. The news media acts as though social media gives loners real connections.

What nonsense! It’s their actions, not their social media connections, that identify people as loners. It’s their lack of real human interactions that labels them. But what is real for such disturbed people?

They each have their own reality. The rest of the world calls it virtual but that has no effect since the disturbed think it’s real—just as they believe their grievances justify the use of weapons.

Social media is “… the illusion of companionship without the demands of friendship.” —Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other.

Disturbed people without real friends are as likely to harm themselves as others. Using social media to deceive ourselves into thinking virtual contact is actual human contact will end in disaster.

We could avoid some future disasters by removing the possibility of hijacking live broadcasts. The seven-second or profanity delay that would do this has been available for decades.

The NRA’s big lie is that anyone can own a gun without any need for proper training. That is the same as saying any idiot can use one, which turns out to be true. Using it correctly is another story.

Social media’s biggest lie is that virtual friends can help with real problems. Guns don’t solve personal problems, people do. That is, real people, not virtual people.

Technology’s Fatal Flaw


Two ideas came to me last week and I struggled with them until I realized they were both the same idea, just expressed differently. For this post in Digital Minefield, it is expressed as “Technology’s Fatal Flaw.” For my post in Pelf and Weal, it is called “Gambler’s Paradise.”

Daily in the news we hear of technological failures. Only the problems are attributed to separate specific sources, like drones and abandoned mines. No one sees the risk-taking of technology as the common element.

It’s easier to blame technology, that is, new technology, for society’s inability to control drones. It’s not so easy to see that exactly the same moral approach has led to a quarter of a million abandoned mines here in the US.

Where do we draw the line between scientific experimentation and technological innovation? In the eighteenth century, chemists rushed to discover new elements. Often they did so by smelling the results of chemical reactions. It killed some of them.

Many, however, got rich. In England, the best were made peers of the realm. Most were not simply chemists but also inventors, lecturers, and famous authors. We remember the successful ones and forget the risks they took.

No one has forgotten the risks taken by Marie Curie. The radioactive elements she discovered—and that killed her—made her famous in her lifetime (two Nobel Prizes). We forget such risk-taking was the norm.

Most of the risk-taking in the days of get-rich-quick mining centered around success or failure. Less discussed were the actual physical dangers. Never mentioned were the costs to posterity.

This was true for the chemists’ precursors, the alchemists, and it remains true for their modern-day atomic wizards. Society has committed to the risk of nuclear reactors without any viable solution for their extraordinarily dangerous waste product, plutonium—deadly for 25,000 years.

It is obvious that any new technology (and science) has always been ahead of laws to regulate it. By definition, if it’s really new, how could there be laws in place to deal with it? We have no answer, because we are technology’s fatal flaw.

Who’s In Control?


I’ve written a lot lately about autonomous vehicles, weapons, etc. In the news right now are remote-controlled drones interfering with California firefighters. What’s the connection? Whether you’re on the annoying end of a self-driving car or a human-driven drone, it’s all the same.

What’s my point? When it comes to laws prohibiting or regulating their actions, devices must be treated based on their actions and capabilities. The state of their autonomy has nothing to do with the case.

This is also true when it comes to finding the culprit. If a device breaks the law (or a regulation) then the responsible party must pay the penalty. If the device is autonomous, it will be hard to determine who sent it on its independent mission.

In other words, before we can have any kind of autonomous device, we need enforceable laws to connect the intent of the person controlling the device to its actions. As you might imagine, this will be more difficult than identifying a person with a drone.

Wait a minute! Can we even do that? If a drone can be connected to a controller device—and then to the person using that device—then why are California firefighters having these problems?

It seems implausible the drone controller could possibly control more than one drone. However, instead of a unique identifier between each drone and its controller, suppose the manufacturer uses only a hundred unique identifiers for the thousands of drones it makes. Or maybe only a dozen.

Inasmuch as drone buyers do not have to register the identifier (nor is there a law requiring sellers to keep such records), the only way an errant drone’s operator could be prosecuted would be to get the drone’s identifier and find its controller.

The latter task requires searching an area whose radius is the maximum control distance for this model. Assuming the drone owner is stupid enough to keep the controller after the drone didn’t come back. Assuming the drone owner was operating from a house and not a car.

Without a complete registry of totally unique drone and controller ids, these devices are free to fly wherever the owner wants. Unlike a gun, which leaves identifying marks on the bullets it fires, a drone can’t be traced back to its controller.

These rabbits have clearly left Pandora’s hat. Short of declaring all existing drones (i.e., those without totally unique identifiers) illegal, there is no way for society to control the use of these devices.

However, we have the opportunity to pass such laws for autonomous devices not yet on the market. The real question is: Does society have the will? I doubt it, since it’s not too late to redo the drones and I see no inclination to do so.

Who would have thought that a technology as innocuous as toy drones could expand into total social chaos? As for banning of autonomous weapons, the military will resist ids. And I can see the NRA in the wings salivating at the chance to put in its two cents.

Worst Idea, Part Two


There are so many things wrong with the idea of autonomous weapons, it’s hard to know where the list ends. For example, take every bad news story involving guns, drones, or even high-speed chases, and add AI. That future is chaos.

Drones interfering with fire-fighting planes in California is just a beginning. Soon the news will be filled with more drones in more situations generating more chaos. AI is just itching to get control of drones.

If a weapon is truly autonomous, won’t it be able to determine its own targets? If not, then how can all its possible targets be programmed in advance? Either method of targeting is risky.

Will such weapons have defensive capabilities? Given what they will cost, I’m sure their designers will build in whatever defenses they consider sufficient to carry out the mission.

How much of that defense will be directed at deceiving computer systems? How much to deceive humans? Think transformers. Not the gigantic CGI silliness of the movies, but smaller, unobtrusive objects—like a London phone booth.

Deceptions are only one part of the AI puzzle. Can the designers guarantee any autonomous weapon will be unhackable? And if not hackable, are they protected against simple sabotage?

To put this in another context: If the device has a mind, it can be changed. And if it’s changed in ways not detectable by its makers, it will wreak havoc before it can be destroyed.

Autonomous weapons are just another step in technology’s climb to superiority. But we already have overwhelming weapons superiority—and it doesn’t bring victory, or even peace of mind.

We are currently engaged with an enemy, IS, where we have an enormous technological advantage. Yet we have no strategic advantage, and the outcome is unpredictable. How will more technology help?

Who really thinks that if our weapons don’t risk lives on a battlefield, the enemy will do likewise? We’re already struggling with a relative handful of terrorists, whose primary targets are humans.

The bottom line in the use of autonomous weapons is their offensive use cannot stop the enemy from targeting our civilians. Autonomous weapons can’t prevent the random acts of terrorism we now encounter on our home soil.

Unless some AI genius decides autonomous weapons should be employed in defending our civilians. Remember, in the first RoboCop movie, the huge crime-fighting robot (ED-209) that went berserk? Will that fictional past become our real future?

Worst Idea Ever


I assume by now you’ve heard about the ban on AI weapons proposed in a letter signed by over 1000 AI experts, including Elon Musk, Steve Wozniak, and Stephen Hawking. The letter was presented last week at the International Joint Conference on Artificial Intelligence in Buenos Aires.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The rest of us have been occupied with threats of various kinds of autonomous vehicles (cars, trucks, drones), and we failed to see what was right around the corner. Autonomous weapons are more than battlefield robot-soldiers (unlikely) or gun-toting drones (likely). And weapons are more than guns.

Robots have no qualms about sacrificing themselves. It’s a different kind of warfare when the weapons deliverer is the weapon. It’s kamikaze on steroids.

Only now do I realize, after writing a dozen posts about autonomous autos, that they and their ilk are merely stalking horses for the worst idea ever. Autos were just the beginning. Add long haul trucks, obviously, because they’re already being tested.

How about local trucks? Post Office? UPS? How about city buses? School buses? Don’t forget Amazon’s drones. Can larger autonomous drones replace helicopters? For news? Police surveillance? Medevacs?

Look at it another way. Instead of using robots to replace humans, autonomous vehicles are replacing the entire job, e.g., driving a truck. The idea behind all these autonomous devices is to get the public used to the concept, so they won’t question any of them, even those that go way over the top.

Aside from giving the people in control more control using the leverage of computers, there’s the general degradation of the populace by making them less valued than the robots that replace them.

How did humans come to this insane position? Here’s how. People who control the machines not only think they are so much smarter than other people (e.g., the ones they want to replace with robots); they also think they can make computers smarter than other people. This is the AI they seek.

And there are some so enamored of intelligence in any form that if they succeed at making a superhuman artificial intelligence—one even smarter than themselves—they will bow down and worship it. Even as it destroys them.

Optimizing Windows


Optimizing windows is a bad idea. Not Windows the operating system (although I could give good reasons for that), but the concept of windows employed by every graphic user interface (GUI).

I’ve used the term “optimize” a lot in recent posts. So much that I began to wonder if it’s the right word. Compare this definition, “to make as perfect, effective, or functional as possible” to this one, “to make the most of.” The first was for “optimize” but the second was for “maximize.” Not much difference.

In WindowsSpeak, to “Maximize” a window means to enlarge it to fill the display. To fill a window with content is to optimize its space on screen. Or you could say this was maximizing the content.

Two points: One, the intent of early GUI designers was to have many windows on screen. Two, every bad web page designer wants his window to fill the screen and to fill that window with content. Why bother with windows; just call them screens.

The idea of a windows-based GUI (or WIMP: window, icon, menu and pointing device) began at Xerox PARC (Palo Alto Research Center) in 1973. Apple’s Mac system showed up in 1984, followed by Microsoft’s Windows the next year.

There have been a score of GUIs. Many, like the Unix-based X Window System, were far superior graphically. At the time, the displays on our desktops could not compete with those of mainframe terminals.

Since they lacked the real estate and the resolution, early PC programmers needed bigger windows (i.e., more of the screen). Now our screens are easily the equal of those earlier terminals in resolution and size. But PC programmers, especially web page designers, still—unnecessarily—want it all.

It’s like a war: Every window (program) for itself. And all clamoring for your attention. Almost every morning when I boot up, some program wants my immediate attention. As if what I might want to do could not possibly be more important.

At one end their arsenal has endless upgrades. At the other end are endlessly annoying pop-ups. (How long have we been trying to kill pop-ups? I forget.) No program can win this war. All this conflict achieves is ever-worsening computer experiences.

To ask some programmers not to optimize is an affront to their egos. Yet, optimizing desktop web pages is why other programmers must create entirely separate and independent mobile web pages. This takes twice the effort (and more than twice the cost), but don’t ever ask ego-driven programmers to settle for less than all the pixels.

Women Programmers


In an era short of capable programmers (this one), we often hear the question, Why aren’t there more women programmers? Having observed major changes in the industry in recent years, I think a better question would be, Why are there so many male programmers?

In my post of June 15, I described programmer character traits. I think if we examine these as to gender, we can see the negative effects of intellectually capable but socially awkward nerds.

For example, nerds overdo the positive character trait of Problem-Solving. Brain teasers can be fun, but too much becomes an anti-social obsession. Any positive character trait carried to extremes can become a negative.

Persistence is necessary, but when focus on a problem becomes so narrow that a programmer can’t admit he needs help, persistence is no longer positive. (I use the masculine pronoun because far more men have this failing.)

Another thing that defines nerds is a foolish pride in their own cleverness. As I said in the post of June 22, “[c]lever programmers rarely see beyond their own egos. They would rather their code dazzles the user than be transparent and do the job with a minimum of fuss and muss.”

In the post of June 8, I pointed out programmers need more than logic. They need clarity, empathy, and imagination. Unlike most women, nerds are unaware of the feelings of others. They are apathetic not empathetic.

Women programmers care about the users of their code. Nerds only want to impress them. The immaturity of nerds creates a need to feel superior to users. Women care about the user’s experience.

Our society has become so dependent on computers, we’re willing to tolerate the socially inept as long as we can pick their brains. This is cute when sub-teens help a clueless adult figure out email. It’s something else when it’s the prevailing climate of the programming workplace.

By focusing on code instead of users, computing has become dependent on nerds. Our attitude towards technology is now the nerd’s attitude. Social media has replaced face-to-face interaction because nerds find social risk uncomfortable. Movies are no longer character or plot driven—they are CGI driven. Nerds seek style over substance. And so it is in programming.

Because most women aren’t nerds, there are too few women in programming. Nerds dictate the climate. They don’t need to be in charge or make the decisions, because they define the choices for management. The nerds have won.

How To Code

If “What Is Code?” is not the answer for suffering VPs and SVPs, what should the answer look like? How about “How to”? Or have we forgotten the effectiveness of learning by doing?

People without any programming training can produce acceptable web pages. In fact, the original intent of HTML was to give non-programmers a very forgiving language so that, even with many mistakes, anyone could create web pages.

Experienced programmers found it too forgiving. Any mistake might slip by—or completely ruin a page. It had no logical syntax. Programmers need structure. Nowhere in this article’s 38,000 words does it present any of the following how-to tips.

Along with learning by doing, learning from others is a beginning programmer’s mainstay. The easiest introduction to code is modifying it. Find code somewhat similar to your needs; figure out how it works; change it to solve your problem.
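
To make the modify-first approach concrete, here is a minimal sketch in Python (my illustration, not from the original post). Imagine you have found a snippet that counts the words in a file; a small change turns it into a solution for a slightly different problem. The function names and the file name are hypothetical.

    # A "found" snippet: count all the words in a text file.
    def count_words(path):
        with open(path) as f:
            return sum(len(line.split()) for line in f)

    # The same snippet, modified to solve a different problem:
    # count the lines that mention a particular term.
    def count_matching_lines(path, term):
        term = term.lower()
        with open(path) as f:
            return sum(1 for line in f if term in line.lower())

    # Usage (the file name is hypothetical):
    # print(count_matching_lines("server.log", "error"))

The point is not the code itself but the habit: understand what the original does before you change it, and change as little as you can.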

This is really no different from finding an example in a book. But there’s far more code out in the real world than in books. And book examples tend to be overly simplistic.

It’s the same, really, when beginning writers are told to steal from the best. Considering the author is both a programmer and a writer, it’s surprising he doesn’t talk more about the connections between the two.

For example, the creative aspect of programming uses the same parts of the brain as creative writing. Not to mention the goal in prose is the same as in programming: clarity.

Beyond modifying other people’s code, programming requires much basic knowledge from books. One should have a perspective from the abstract Turing Machine to the different levels of abstraction of various languages.

Never limit testing and debugging to your primary computer. Know the higher-level languages down to assembler and all the way down to machine code. Spend some time with a disassembler to see what compilers actually do with your code.
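
As one easy way to try this, here is a small sketch (my illustration, not the author’s) using Python’s standard dis module. It shows bytecode rather than machine code, but the exercise is the same: see what the compiler actually produces from your source. The example function is hypothetical.

    import dis

    def add_tax(price, rate=0.07):
        """A trivial function to disassemble."""
        return price * (1 + rate)

    # Print the bytecode the CPython compiler generates for this function.
    dis.dis(add_tax)

For compiled languages, a tool such as objdump or the compiler’s own assembly output serves the same purpose.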

Finally, minimize the code that depends on outside programs. First, employ the old adage (it was old when I started in the 60s) Keep It Simple, Stupid, or K.I.S.S. Any optimization of code that depends on outside programs is unlikely to last very long.

Optimizing increases maintenance. Upgrades for outside programs (operating systems, browsers, and even PDF readers) require more testing of your code. Outside changes keep increasing, forcing new versions of your code every few years.

It shouldn’t be too hard to write good code because there are so many examples out there of bad code. Unless you can’t distinguish the good from the bad. If that’s the case, then perhaps you should find another line of work.

Beyond Code


The article “What Is Code?” in the July 11 issue of Bloomberg Businessweek starts out saying you’re a VP doing an SVP’s job. Your problem is overseeing software development.

Across your desk is “The man in the taupe blazer (TMitTB) [who] works for the new CTO. … [who says] ‘All of the computer code that keeps the website running must be replaced. At one time, it was very valuable … .’ [T]he new CTO thinks it’s garbage. She said the old code is spaghetti … .”

When I hear statements like this, I have to wonder when did the code become garbage? Was there no way to fix it (or at least the critical parts) before it was totally spaghetti?

In short, TMitTB is saying it can’t be fixed. And, apparently, no one knows why. Yet, the decisions that created it are still in place, waiting to build the next, newer version of the company’s software. Why will those results be any better?

Addressing this article to VPs (and SVPs), Bloomberg editor Josh Tyrangiel thinks the solution is in the answer to the question, What Is Code? It’s not. Code is only a small part of the problem, although easily the most obscure part.

Code is complicated, but the bigger problem is that it runs in a complicated environment—much of it unknown to the coders. But even the known parts are very complicated. For example, there are many different versions of many different browsers.

Beyond code is this larger environment people (and VPs) need to understand. Unless it’s our job, we don’t need to know the technical innards of our TVs, microwaves, or car engines. We just need to know what to expect and how to get it.

There are fundamental lessons learned over the decades about the total software environment. These are the basics needed to control every company’s software development. First and foremost of these is program maintenance.

Maintenance is not even mentioned in this article. Obscure code is nearly impossible to maintain, i.e., it can’t be fixed so it must be replaced—with a new, equally obscure language.

Old code may not be popular but it can be maintained. COBOL (based on Grace Murray Hopper’s FLOW-MATIC) has lasted over 55 years because it was essentially self-documenting. Yet the article says it’s “verbose” and “cripples the mind.”

If code is maintained by the people who wrote it, then their careers are tied to the project and won’t advance until the project dies. It’s unlikely they’ll stay long. If skilled programmers know that working in the new languages gets the better jobs, they won’t do maintenance.

The way to quickly kill maintenance is to assign the job to junior programmers. Add obscure code and it’s two-thirds dead. Then optimize for no reason. The plug is pulled, and the code must be replaced by newer, more expensive code.

Why Code Is Important


Modern industrialized nations have based their societies on technology run by software. This code compiles into digital computer instructions. Digital is made of binary switches (or bits) of ones and zeros.

Code, if not written to respond safely to all possible contingencies, may produce a zero when the computer expects a one, or vice versa. This simple error may halt the computer—or worse.

Digital, due to its binary nature, is inherently brittle: it either works or not—there is no sort-of-works. Unlike analog, which degrades gradually over time, digital (if not programmed for all contingencies) may suddenly and unexpectedly stop.

Compare VCRs to DVDs. Tape stretches but it isn’t very noticeable. Tape breaks but can be easily spliced with very little loss of signal. Merely scratching a DVD can make it unplayable. Everything may appear intact, but it just takes one lost bit.

The programs we depend on daily are also brittle but unlikely to lose bits. Or so we think. A sufficient electromagnetic pulse or EMP (think small nuclear device) will destroy those bits unless your machine is “hardened” as are many military computers.

Once upon a time, dividing by zero would stop a computer cold. Since there were too many ways this could occur, the solution was to have the hardware check every division to make sure the divisor was not zero. If it was, the program—not the computer—was halted and an error message displayed.
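
Today that trap usually surfaces as an exception the program can handle. Here is a minimal Python sketch (mine, not the author’s) of the difference between a program that halts on the error and one written for the contingency:

    def safe_ratio(numerator, denominator):
        """Return numerator / denominator, or None when the division is impossible."""
        try:
            return numerator / denominator
        except ZeroDivisionError:
            # The runtime traps the bad division; the program, not the
            # machine, decides what happens next.
            return None

    print(safe_ratio(10, 2))  # 5.0
    print(safe_ratio(10, 0))  # None, instead of a halted program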

A program and its data on our hard drives are inaccessible to other programs that write to hard drives. Those programs are constrained by system software, prevented from writing where they shouldn’t.

This solution depends on correct system software. It is not as safe as hardware trapping division by zero. Programs that write where they shouldn’t are classed as viruses; they know how to get around system software protection.
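
As a rough illustration of that system-software barrier (a sketch under my own assumptions, not the author’s example), the following Python shows the operating system refusing a write outside the locations a normal user may touch. The paths are hypothetical, and the result depends on the OS and the user’s permissions.

    def try_write(path, data):
        """Attempt a write and report how the operating system responds."""
        try:
            with open(path, "w") as f:
                f.write(data)
            return "write allowed"
        except PermissionError:
            # System software steps in: this program may not write here.
            return "write blocked by the operating system"

    # Hypothetical paths; on most Unix-like systems a normal user cannot write in /etc.
    print(try_write("/etc/some-protected-file.conf", "x"))
    print(try_write("local_notes.txt", "x"))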

These are simple examples of potential problems for programmers. Above, I used the word “contingencies” twice. To grasp the extent of possible contingencies, a programmer must be aware of the total environment in which the code must run.

There will be many different computers, each with a unique hardware configuration. Different CPUs mean different instruction sets. Many computers have multiple processors (e.g., quad-core), requiring multi-threaded code.

Code must run on many different operating systems. Even the same operating system is configured differently according to installed updates, hardware drivers, customization, etc.

Then there’s the problem of many unknown programs running simultaneously or interrupting according to their own needs. Or crashing and interfering with the code. Or the intentional disruption of hackers.

It’s easy to see that code running successfully under all these contingencies is far more valuable to society than code that chooses cleverness over safety. Since digital is inherently brittle, code must be robust. Simpler code is easier to secure than complex code.
