Digital Minefield

Why The Machines Are Winning

Worst Idea, Part Two


There are so many things wrong with the idea of autonomous weapons, it’s hard to know where the list ends. For example, take every bad news story involving guns, drones, or even high-speed chases, and add AI. That future is chaos.

Drones interfering with firefighting planes in California are just the beginning. Soon the news will be filled with more drones in more situations generating more chaos. AI is just itching to get control of drones.

If a weapon is truly autonomous, won’t it be able to determine its own targets? If not, then how can all its possible targets be programmed in advance? Either method of targeting is risky.

Will such weapons have defensive capabilities? Given what they will cost, I’m sure their designers will build in whatever defenses they consider sufficient to carry out the mission.

How much of that defense will be directed at deceiving computer systems? How much to deceive humans? Think transformers. Not the gigantic CGI silliness of the movies, but smaller, unobtrusive objects—like a London phone booth.

Deceptions are only one part of the AI puzzle. Can the designers guarantee any autonomous weapon will be unhackable? And if not hackable, are they protected against simple sabotage?

To put this in another context: If the device has a mind, it can be changed. And if it’s changed in ways not detectable by its makers, it will wreak havoc before it can be destroyed.

Autonomous weapons are just another step in technology’s climb to superiority. But we already have overwhelming weapons superiority—and it doesn’t bring victory, or even peace of mind.

We are currently engaged with an enemy, IS, over which we have an enormous technological advantage. Yet we have no strategic advantage, and the outcome is unpredictable. How will more technology help?

Who really thinks that if our weapons spare us from risking lives on a battlefield, the enemy will do likewise? We’re already struggling with a relative handful of terrorists, whose primary targets are humans.

The bottom line: the offensive use of autonomous weapons cannot stop the enemy from targeting our civilians. Autonomous weapons can’t prevent the random acts of terrorism we now encounter on our home soil.

Unless some AI genius decides autonomous weapons should be employed in defending our civilians. Remember the huge crime-fighting robot (ED-209) that went berserk in the first RoboCop movie? Will that fictional past become our real future?
