Digital Minefield

Why The Machines Are Winning

In Memoriam Lee Frank

The author of this blog, Lee Frank, passed away November 6, 2015. See his website LeeFrank.com for more of his thoughts and ideas as well as more about him.

The Software Did It!


Up to now, computers have been blamed only for unintentional failures. Crashes have led the list of computer malfunctions, signified by the phrase “the computer is down.” But these too were unplanned, accidental events.

Now, after years of investigation in many countries, Volkswagen has finally come clean about their dirty diesels. More than merely admitting guilt, they pointed their corporate finger at the actual culprit: the software did it.

They acknowledged not only that the behavior was not accidental but that it was intentionally criminal, and they went public, throwing the offending software under the worldwide VW bus.

In one sense, the biggest aspect of this story was the further confirmation that the biggest companies feel compelled to cheat. Are they so insecure, or do they think they’re too big to be caught?

The most surprising aspect of the story, for most people, is learning just how sophisticated software can be when you want it to cheat. These engines ran clean only when they were being tested for clean emissions.
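
To make that concrete, here is a purely illustrative sketch in Python of how test-detection logic could work. The sensor checks, thresholds, and mode names are my own invention for the sake of the example; they are not VW's actual code or criteria.

```python
# Purely illustrative: a hypothetical "defeat device" decision, not VW's actual code.
# The idea is simple: infer from sensor readings whether the car looks like it is
# sitting on an emissions test stand, and only then run the engine in its clean mode.

def looks_like_emissions_test(speed_mph: float, steering_angle_deg: float,
                              wheels_driven: int) -> bool:
    """Guess whether the car is on a dynamometer (hypothetical heuristics)."""
    steady_speed = 0 < speed_mph < 60            # typical test-cycle speeds
    wheel_centered = abs(steering_angle_deg) < 1  # nobody steers on a test stand
    two_wheel_dyno = wheels_driven == 2           # only the drive wheels are turning
    return steady_speed and wheel_centered and two_wheel_dyno

def choose_engine_mode(speed_mph: float, steering_angle_deg: float,
                       wheels_driven: int) -> str:
    if looks_like_emissions_test(speed_mph, steering_angle_deg, wheels_driven):
        return "clean mode: full exhaust treatment, lower performance"
    return "road mode: better mileage and power, far higher emissions"

print(choose_engine_mode(30, 0.2, 2))   # looks like a test -> clean mode
print(choose_engine_mode(70, 12.0, 4))  # looks like the open road -> road mode
```

The point is only that a few lines of ordinary branching logic are enough to behave one way under test and another way on the road.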

The strangest aspect of the story is what made them think they could get away with it. As long as testing was required, the software would cheat. Did they not think that eventually it would be found out?

Or did they think they were just buying time? That they could eventually find a fix—in hardware or software—that would produce clean engines? And then replace the cheating software?

The most disturbing aspect of the story is realizing that software is sophisticated beyond our imagination. What are the odds that such cheating can be detected, much less brought to justice?

The most obvious aspect of the story is the question of why they test in such a way that programs can cheat. Is there no method equivalent to the random drug test for humans? Which brings up another set of questions.

When will we see nano-bots doing that drug testing? Then, how long before someone creates software to cheat on its programming? And the obvious final question, how do we test the testers and their tools, i.e., their testing software?


Spies Like Us


Not too long ago, I suggested drones were out of control. I had no idea. I saw an ad on TV last week for the High Spy Drone. You can continue reading this post or go to their site.

If you’ve seen the video and paid attention, you heard the announcer say “… spy on your neighbors.” Yes, folks, for the small price of only two payments of $19.95 (plus S & H), you too can invade the privacy of anyone living next door.

Why stop there? You can take it on vacation (as the ad suggests) and spy on total strangers. Why not pull up outside a house with a pool and spy on the sunbathers? (Just be prepared to abandon the drone and make a quick getaway.)

Okay, so it’s just a toy (less than a foot square). And it’s likely that the batteries won’t keep it airborne for its full 75 minutes of video. But for that low, low price you get TWO high spy drones.

Like I said, it’s just a toy and you couldn’t add an ounce of payload to cause any real (physical) damage. The damage will depend on the pictures you take and what you do with them.

Speaking of pictures, the ad says the device’s range is 160 feet. So if you want video from 50 feet high, you can be up to 152 feet away from your target. You might even be able to spy on people two houses away.
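
The arithmetic behind that 152 feet is just the Pythagorean theorem, assuming the advertised 160 feet is the straight-line distance from controller to drone:

```python
# If the advertised 160 ft is straight-line range, the horizontal standoff
# available at a given altitude follows from the Pythagorean theorem.
import math

range_ft = 160     # advertised control range (assumption: straight-line distance)
altitude_ft = 50   # desired camera height

horizontal_ft = math.sqrt(range_ft**2 - altitude_ft**2)
print(round(horizontal_ft))  # about 152 feet of horizontal distance
```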

Speaking of ads, I didn’t catch the product name in the ad and my online search failed the first day. The next day I came up with a better search phrase, “spy drone tv ad,” and wondered if anyone had a site for things seen on TV.

They did. It’s called ispot.tv and here’s the link for the high spy drone. However, this site is not just for toys. It’s for “Real-Time TV Advertising Metrics.” It’s an actual tool for media planning, ad effectiveness, and competitive analysis.

Interesting, but I digress. The issue here, the non-toy, non-joke concern is privacy. The fruits of the ever-shrinking world of digital are just beginning to appear. The technology that stabilizes this drone is very high-tech—and getting smaller.

As for privacy, the odds are against us. Since privacy isn't mentioned explicitly in the Constitution, government is slow to derive, and enforce, privacy rights. Government isn't much help when it comes to electronic invasion, so why expect any when it comes to physical spying?

If you find your airspace being invaded by a spy drone, I don’t recommend the family shotgun. Instead, I’d get a T-shirt gun, like they use at concerts. For ammo, you’ll need a net with small weights in the corners. Maybe you’ll see it advertised on TV.

Oh, The Humanities!


In 1959, British scientist and novelist C. P. Snow gave a lecture titled “The Two Cultures.” He said British education was split into Science and the Humanities—and that the latter saw itself as superior.

His critique claimed the essentials of the Humanities were the central pillar of British education, and that most educated Brits regarded the essentials of Science as little more than incidental.

The lecture became a widely read book, then a widely discussed controversy, and even a follow-up book. In less than sixty years, not only has it been forgotten, but the tables have completely turned.

Today not only is Science king, but very few people (even outside of Science or Technology) see any value whatsoever in the Humanities. As I said in my post of July 20, 2015, “The nerds have won.”

However, having everything your own way is rarely the path to victory. I could just mention the name Midas, or point to the endless stories and fables meant to teach us a little wisdom.

I could give endless examples of how our technology could be improved by adding a human element. There are many related to programming in this blog. However, the big picture is the robots that will be built on the assumptions of artificial intelligence.

The intelligence sought by AI is abstract. AI scientists don’t see the distinct value of human intelligence. They think that somehow a machine can make decisions or solve problems without a concept of self, without consciousness—without empathy for humans.

Empathy is exactly what our current technology lacks. It can be learned directly from experience or indirectly from education. But it can only be learned, directly or indirectly, from humans.

Intelligence without empathy is merely data. How many times have you heard the phrase “thinking outside the box”? Einstein said, “The true sign of intelligence is not knowledge but imagination.” Using imagination is box-free thinking.

Wikipedia defines “[I]ntelligence … [as] logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving.” Yet, without imagination, all of these are useless.

Imagining how humans will respond is necessary for human-friendly technology. If we can apply our humanity, we can empathize with how people will feel using a technological product or device. We can, if our science is balanced by our humanity.

Robo-Management


Once upon a time (preemptive pun), there was a genius named Frederick Winslow Taylor. Equipped with a clipboard and stopwatch, he revolutionized office and manufacturing procedures in the early part of the last century. (The Principles of Scientific Management, 1911.)

I learned this as a young teen by reading Cheaper By The Dozen. The book was about applying time-study methods to life with the twelve children of the husband and wife efficiency team of Frank and Lillian Gilbreth. (The 1950 movie starred Clifton Webb; the remake in 2003 starred Steve Martin.)

Fifteen years later, I learned of another goal, effectiveness, from the top management guru, Peter Drucker. Taylor preached efficiency, but effectiveness was more important. Yet, many organizations prefer efficiency over effectiveness.

In Taylor’s day, efficiency was symbolized by the stopwatch. Today’s efficiency is a quantity that can be measured more accurately by computers. Effectiveness is a quality determined by humans making value judgments.

Efficiency is easy to measure; it’s what is happening now. It’s harder to measure tomorrow’s consequences of today’s actions. Effectiveness is about judging consequences. It requires humans to make those judgments. Efficiency can be reduced to numbers churned out by computers.

Computer numbers are easy to acquire, super-fast to calculate, and can be analyzed a million different ways. The human judgments necessary for effectiveness are hard to acquire, slow to evaluate, and difficult to analyze.

In any discussion of the woes of modern workers, two companies stand out as manipulative in the extreme: Walmart and Amazon. Their success is built on the diminution of human margins.

Is it any wonder that companies like these are using the computer as a modern stopwatch? In the name of efficiency, they’re pushing their workers to act like machines. To what end?

Using Taylor’s Scientific Management, companies are reshaping human jobs to better fit the robot workers of tomorrow. You could say the jobs are being tailored to suit the robots. (Begin with a pun; end with a pun.)

Social Media’s Biggest Lie


This time it was the news that made the news. This time, instead of hearing about the killer’s social media from an investigation, we heard it in real time from the killer himself. He made social media the very essence of his crime.

I wondered how a psychopath had a social media network. Then it all came back to me. News reports of senseless killings over many decades. And how, until now in the age of social media, all those killers were described as “loners.”

Maybe it began with Columbine (April 20, 1999), although the influence of social media wasn't as obvious there, because it was a shared psychosis, seen as an extreme folie à deux. Maybe, but they were loners.

However, since Columbine, the extensive usage of social media has been the common element the news has given us in lieu of the more cryptic term, loner. Yet, for all this data we have learned nothing about how these disturbed people became out-and-out psychopaths.

Instead, we are left with a pile of meaningless social media connections. As though there was some understanding of the actions of these psychopaths that could be gained by exploring their social media movements.

Far too many people seem unaware that we become human only through interaction with other humans. This interaction is not only what makes us human, it's what keeps us human.

It would also seem that most people are unable to distinguish the unreality of social media’s virtual interactions from actual face-to-face, one-on-one human interaction. The news media acts as though social media gives loners real connections.

What nonsense! It's their actions, not their social media connections, that identify people as loners. It's their lack of real human interactions that labels them. But what is real for such disturbed people?

They each have their own reality. The rest of the world calls it virtual but that has no effect since the disturbed think it’s real—just as they believe their grievances justify the use of weapons.

Social media is “… the illusion of companionship without the demands of friendship.” —Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other.

Disturbed people without real friends are as likely to harm themselves as others. Using social media to deceive ourselves into thinking virtual contact is actual human contact will end in disaster.

We could avoid some future disasters if we removed the possibility of interrupting live broadcasts. This seven-second or profanity delay has been available for decades.

The NRA's big lie is that anyone can own a gun without any need for proper training. That is the same as saying any idiot can use one, which turns out to be true. Using it correctly is another story.

Social media’s biggest lie is that virtual friends can help with real problems. Guns don’t solve personal problems, people do. That is, real people, not virtual people.

Technology’s Fatal Flaw


Two ideas came to me last week and I struggled with them until I realized they were both the same idea, just expressed differently. For this post in Digital Minefield, it is expressed as “Technology’s Fatal Flaw.” For my post in Pelf and Weal, it is called “Gambler’s Paradise.”

Daily in the news we hear of technological failures. But the problems are attributed to separate, specific sources, like drones and abandoned mines. No one sees the risk-taking of technology as the common element.

It's easier to blame technology, that is, new technology, for society's inability to control drones. It's not so easy to see that exactly the same moral approach has led to a quarter of a million abandoned mines here in the US.

Where do we draw the line between scientific experimentation and technological innovation? In the eighteenth century, chemists rushed to discover new elements. Often they did so by smelling the results of chemical reactions. It killed some of them.

Many, however, got rich. In England, the best were made peers of the realm. Most were not simply chemists but also inventors, lecturers, and famous authors. We remember the successful ones and forget the risks they took.

No one has forgotten the risks taken by Marie Curie. The radioactivity she discovered—and that killed her—made her famous in her lifetime (two Nobel prizes). We forget such risk-taking was the norm.

Most of the risk-taking in the days of get-rich-quick mining centered around success or failure. Less discussed were the actual physical dangers. Never mentioned were the costs to posterity.

This was true for the precursors of the chemists, the alchemists, and it remains true for their modern-day atomic wizards. Society has committed to the risk of nuclear reactors without any viable solution for its extraordinarily dangerous waste product, plutonium—deadly for 25,000 years.

It is obvious that any new technology (and science) has always been ahead of laws to regulate it. By definition, if it’s really new, how could there be laws in place to deal with it? We have no answer, because we are technology’s fatal flaw.

Who’s In Control?


I've written a lot lately about autonomous vehicles, weapons, etc. In the news right now are remote-controlled drones interfering with California fire fighters. What's the connection? Whether you're on the annoying end of self-driving cars or human-driven drones, it's all the same.

What’s my point? When it comes to laws prohibiting or regulating their actions, devices must be treated based on their actions and capabilities. The state of their autonomy has nothing to do with the case.

This is also true when it comes to finding the culprit. If a device breaks the law (or a regulation) then the responsible party must pay the penalty. If the device is autonomous, it will be hard to determine who sent it on its independent mission.

In other words, before we can have any kind of autonomous device, we need enforceable laws to connect the intent of the person controlling the device to its actions. As you might imagine, this will be more difficult than identifying a person with a drone.

Wait a minute! Can we even do that? If a drone can be connected to a controller device—and then to the person using that device—then why are California fire fighters having these problems?

It seems implausible that one controller could control more than one drone, so you would expect each drone and its controller to share a unique identifier. However, suppose that instead the manufacturer uses only a hundred unique identifiers for the thousands of drones it makes. Or maybe only a dozen.
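
A back-of-the-envelope calculation shows why shared identifiers defeat tracing. The production numbers here are purely hypothetical:

```python
# Hypothetical numbers: how much does a recovered identifier narrow the search?
drones_sold = 10_000   # assumed production run for one model
unique_ids = 100       # identifiers the manufacturer actually uses

suspects_per_id = drones_sold / unique_ids
print(f"Each identifier is shared by about {suspects_per_id:.0f} drones.")
# With only a dozen identifiers, that becomes roughly 833 drones per identifier.
```

In other words, even if investigators recover the errant drone's identifier, it points to a crowd of owners, not to one.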

Inasmuch as drone buyers do not have to register the identifier (nor is there a law requiring sellers to keep such records), the only way an errant drone's operator could be prosecuted would be to recover its identifier and find its controller.

The latter task requires searching an area whose radius is the maximum control distance for this model. Assuming the drone owner is stupid enough to keep the controller after the drone didn’t come back. Assuming the drone owner was operating from a house and not a car.

Without a complete registry of totally unique drone and controller ids, these devices are free to fly wherever the owner wants. Unlike a gun, which leaves identifying marks on the bullets it shoots, a drone controller can't be traced.

These rabbits have clearly left Pandora's hat. Short of declaring all existing drones (i.e., those without a totally unique identifier) illegal, there is no way for society to control the use of these devices.

However, we have the opportunity to pass such laws for autonomous devices not yet on the market. The real question is: Does society have the will? I doubt it, since it’s not too late to redo the drones and I see no inclination to do so.

Who would have thought that a technology as innocuous as toy drones could expand into total social chaos? As for banning of autonomous weapons, the military will resist ids. And I can see the NRA in the wings salivating at the chance to put in its two cents.

Worst Idea, Part Two


There are so many things wrong with the idea of autonomous weapons, it’s hard to know where the list ends. For example, take every bad news story involving guns, drones, or even high-speed chases, and add AI. That future is chaos.

Drones interfering with fire-fighting planes in California is just a beginning. Soon the news will be filled with more drones in more situations generating more chaos. AI is just itching to get control of drones.

If a weapon is truly autonomous, won’t it be able to determine its own targets? If not, then how can all its possible targets be programmed in advance? Either method of targeting is risky.

Will such weapons have defensive capabilities? Given what they will cost, I'm sure their designers will build in whatever defenses they consider sufficient to carry out the mission.

How much of that defense will be directed at deceiving computer systems? How much to deceive humans? Think transformers. Not the gigantic CGI silliness of the movies, but smaller, unobtrusive objects—like a London phone booth.

Deceptions are only one part of the AI puzzle. Can the designers guarantee any autonomous weapon will be unhackable? And even if it is unhackable, is it protected against simple sabotage?

To put this in another context: If the device has a mind, it can be changed. And if it’s changed in ways not detectable by its makers, it will wreak havoc before it can be destroyed.

Autonomous weapons are just another step in technology’s climb to superiority. But we already have overwhelming weapons superiority—and it doesn’t bring victory, or even peace of mind.

We are currently engaged with an enemy, IS, where we have an enormous technological advantage. Yet we have no strategic advantage and the outcome is unpredictable. How will more technology help?

Who really thinks that if our weapons don’t risk lives on a battlefield, the enemy will do likewise? We’re already struggling with a relative handful of terrorists, whose primary targets are humans.

The bottom line in the use of autonomous weapons is their offensive use cannot stop the enemy from targeting our civilians. Autonomous weapons can’t prevent the random acts of terrorism we now encounter on our home soil.

Unless some AI genius decides autonomous weapons should be employed in defending our civilians. Remember, in the first RoboCop movie, the huge crime-fighting robot (ED-209) that went berserk? Will that fictional past become our real future?

Worst Idea Ever


I assume by now you’ve heard about the ban on AI weapons proposed in a letter signed by over 1000 AI experts, including Elon Musk, Steve Wozniak, and Stephen Hawking. The letter was presented last week at the International Joint Conference on Artificial Intelligence in Buenos Aires.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The rest of us have been occupied with threats of various kinds of autonomous vehicles (cars, trucks, drones), and we failed to see what was right around the corner. Autonomous weapons are more than battlefield robot-soldiers (unlikely) or gun-toting drones (likely). And weapons are more than guns.

Robots have no qualms about sacrificing themselves. It’s a different kind of warfare when the weapons deliverer is the weapon. It’s kamikaze on steroids.

Only now do I realize, after writing a dozen posts about autonomous autos, that they and their ilk are merely stalking horses for the worst idea ever. Autos were just the beginning. Add long haul trucks, obviously, because they’re already being tested.

How about local trucks? Post Office? UPS? How about city buses? School buses? Don’t forget Amazon’s drones. Can larger autonomous drones replace helicopters? For news? Police surveillance? Medivacs?

Look at it another way. Instead of using robots to replace humans, autonomous vehicles are replacing the entire job, e.g., driving a truck. The idea behind all these autonomous devices is to get the public used to the concept, so they won’t question any of them, even those that go way over the top.

Aside from giving the people in control more control using the leverage of computers, there’s the general degradation of the populace by making them less valued than the robots that replace them.

How did humans come to this insane position? Here's how. People who control the machines not only think they are so much smarter than other people (e.g., the ones they want to replace with robots); they also think they can make computers smarter than other people. This is the AI they seek.

And there are some so enamored of intelligence in any form that if they succeed at making a superhuman artificial intelligence—one even smarter than themselves—they will bow down and worship it. Even as it destroys them.
