Digital Minefield

Why The Machines Are Winning


Oh, The Humanities!

In 1959, British scientist and novelist C. P. Snow gave a lecture titled “The Two Cultures.” He said British education was split into Science and the Humanities—and that the latter saw itself as superior.

His critique was that the Humanities were treated as the central pillar of British education, while most educated Britons regarded the essentials of Science as little more than incidental.

The lecture became a widely read book, then a widely discussed controversy, and even spawned a follow-up book. Less than sixty years later, it is not only forgotten; the tables have completely turned.

Today not only is Science king, but very few people (even outside of Science or Technology) see any value whatsoever in the Humanities. As I said in my post of July 20, 2015, “The nerds have won.”

However, having everything your own way is rarely the path to victory. I could just mention the name Midas, or point to the endless stories and fables meant to teach us a little wisdom.

I could give endless examples of how our technology could be improved by adding a human element; there are many related to programming in this blog. The big picture, however, is the robots that will be built on the assumptions of artificial intelligence.

The intelligence sought by AI is abstract. AI scientists don’t see the distinct value of human intelligence. They think that somehow a machine can make decisions or solve problems without a concept of self, without consciousness—without empathy for humans.

Empathy is exactly what our current technology lacks. It can be learned directly from experience or indirectly from education. But it can only be learned, directly or indirectly, from humans.

Intelligence without empathy is merely data. How many times have you heard the phrase “thinking outside the box”? Einstein said, “The true sign of intelligence is not knowledge but imagination.” Using imagination is box-free thinking.

Wikipedia defines “[I]ntelligence … [as] logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving.” Yet, without imagination, all of these are useless.

Imagining how humans will respond is necessary for human-friendly technology. If we can apply our humanity, we can empathize with how people will feel using a technological product or device. We can, if our science is balanced by our humanity.

Worst Idea Ever

I assume by now you’ve heard about the ban on AI weapons proposed in a letter signed by over 1000 AI experts, including Elon Musk, Steve Wozniak, and Stephen Hawking. The letter was presented last week at the International Joint Conference on Artificial Intelligence in Buenos Aires.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The rest of us have been occupied with threats of various kinds of autonomous vehicles (cars, trucks, drones), and we failed to see what was right around the corner. Autonomous weapons are more than battlefield robot-soldiers (unlikely) or gun-toting drones (likely). And weapons are more than guns.

Robots have no qualms about sacrificing themselves. It’s a different kind of warfare when the weapons deliverer is the weapon. It’s kamikaze on steroids.

Only now do I realize, after writing a dozen posts about autonomous autos, that they and their ilk are merely stalking horses for the worst idea ever. Autos were just the beginning. Add long haul trucks, obviously, because they’re already being tested.

How about local trucks? Post Office? UPS? How about city buses? School buses? Don’t forget Amazon’s drones. Can larger autonomous drones replace helicopters? For news? Police surveillance? Medevacs?

Look at it another way. Instead of robots replacing individual humans, autonomous vehicles are replacing an entire job, e.g., driving a truck. The idea behind all these autonomous devices is to get the public used to the concept, so they won’t question any of them, even those that go way over the top.

Aside from giving the people in control more control using the leverage of computers, there’s the general degradation of the populace by making them less valued than the robots that replace them.

How did humans come to this insane position? Here’s how. The people who control the machines think not only that they are much smarter than other people (e.g., the ones they want to replace with robots), but that they can make computers smarter than other people. This is the AI they seek.

And there are some so enamored of intelligence in any form that if they succeed at making a superhuman artificial intelligence—one even smarter than themselves—they will bow down and worship it. Even as it destroys them.

Romancing the Bot, Part 2

Movie robots don’t have to be sexual objects to be considered sexy. Take Fritz Lang’s Metropolis (1927). A robot is created with the face of the heroine, Maria. Although built for evil, the robot is also erotic, with a Madonna-like armor-plated look.

There were plenty of hot bots in Westworld (1973), Michael Crichton’s view of a future Disney-like adult playground. If you have the technology, sex bots aren’t just for vacations. The Stepford Wives (1975) replaced everyday spouses with robots.

In 1982, Blade Runner gave us replicants, genetically engineered organic robots. (I guess “clone” wasn’t popular when Philip K. Dick wrote Do Androids Dream of Electric Sheep? in 1968.) But the movie did give us a Pleasure Unit named Pris.

Arguably, the most human of the screen’s artificial creatures was Data of Star Trek: The Next Generation. I submit the believability of this character owed more to the scripts and Brent Spiner’s acting than to any plausible future science.

Data’s technology is straight from Isaac Asimov’s 1950s science fiction, not current research in Artificial Intelligence. Data not only has Asimov’s imaginative but undefined “positronic” brain, he also obeys Asimov’s Three Laws of Robotics.

Why was Data based on old fiction rather than current state-of-the-art AI? Because in the late 80s, AI had nothing better to offer as a basis for a believable future android. Nor does it now.

Artificial Intelligence is still struggling with machine intelligence distinct from human intelligence. The latter requires a self, consciousness, and emotion among other qualities.

Intelligence is meaningless without decision-making. Yet AI fails to grasp that human decisions depend upon meaning and meaning does not exist in a vacuum, i.e., without emotion.

If we seek the equality of other, even artificial, creatures, intelligence is only a starting point. To get from there to a shared morality is a formidable task. Can fiction offer any shortcuts?

In one episode, Picard argues Data’s right to self-determination is not limited by being a machine. Surely Data has more rights than a toaster, but how—and where—do we draw the line?

Data is human enough for sex, rights, and fatherhood. As to how human he is, I’ll let Shylock answer: He may have hands and dimensions, but no organs, nor will he bleed. Grant him senses, but affections? Passions? And if we tickle him, will he laugh?

Romancing the Bot

You probably missed it, but back in 2008 International Chess Master David Levy wrote a book titled Love and Sex with Robots. He is not just a chess expert: according to Wikipedia, Levy has written over 40 books, mostly on computer chess.

I missed it, probably because even though I was heavily into the links between computing, consciousness, artificial intelligence, robots, and mind/brain, I saw nothing of his book or work. (Or even his 2005 book, Robots Unlimited: Life in a Virtual Age).

When I learned of this book last week, I wondered why I hadn’t heard of it in any book I’d read. Its Amazon page gave no references to any current work in the above-mentioned fields.

Last week was when I began Sherry Turkle’s Alone Together (2011) and found mention of Levy’s book. Since I was posting about real versus virtual, why not add human and non-human?

I’ve known sex robots were around the corner, but love? Is he kidding? Anyway, these sex devices are much closer to their blow-up predecessors than anything human-like. Or they were.

The makers of “the world’s first sex robot,” Roxxxy, say it’s no more than a “life-size rubber doll . . . designed to engage the owner with conversation rather than lifelike movement.”

Introduced at the Adult Entertainment Expo in Las Vegas this month, for less than ten large, Roxxxy sounds far less satisfactory than the eponymous Her, of the new Spike Jonze movie.

No robot or real woman could compete with the fictional, idealized Her. This is a computer not only “designed to meet his every need,” but one that is also “human and intuitive.”

Human and intuitive software? Why put it in anything less than artificial humans? For “companions” you can touch and vice versa, see 1987’s Cherry 2000. So why talk with a computer?

The real question for us is, are we becoming less human by romancing the bot? Every year, we retreat further from the world. Can we continue and not become slaves of the machine?

Oh, did I forget to mention that David Levy believes these robots should not only be loved and laid but married! If he has his way, what’s next? Toasters going on strike for their rights?

Is This Smart?

Today I asked a very smart piece of software a very (I thought) simple question: how many bicycles are there in the world? The software was Wolfram Alpha (available free online). I quote the answer: “Wolfram|Alpha doesn’t understand your query[.]” Really? Is this truly the sad state of the art of artificial intelligence?

Slightly incredulous, I rephrased the question: In the world, how many bicycles are there? And I got this choice: Olympic medals, world cycling, and number of medals. Really? But before I close the lid and get out the nails, I should explain why I was there and why I was asking.

I went to Wolfram|Alpha (as it calls itself) to see if its Kindle app was worth $3.99. I also went out of curiosity, since it had been on my horizon for a few years. Claimed to be better (!) than search engines, it is “a knowledge base of curated, structured data.” —Wikipedia

I was also curious to see the latest from the special mind of Stephen Wolfram, a former wunderkind (Ph.D. from Caltech at 20). A few years ago, I had fumbled through all 1192 pages of his book, A New Kind of Science. (I won’t try to explain it.)

So much for why I was there. The reason I was asking was that I was sure there were more bicycles than cars in the world. Any search engine will tell you there are about twice as many bicycles as cars (and bicycles outproduce cars by 5 to 2).

I was also asking as a follow-up to last week’s post about haves and have-nots. How’s this for ubiquity: there is at least one transistor radio for every person on the planet. Cell phones come close, with about six billion (but those figures are probably inflated). If you doubt the numbers, all you need to know is that China is the foremost producer of transistor radios and bicycles.

Like the radio, initially the phone was a shared device. Like today’s transistor radio, today’s phone might be shared but it’s clearly designed to be a personal device—your contacts, your apps, your text messages, etc. Even more personal is the smart phone; many people say it’s their life. But there are only a billion of them. The haves, as they have always been, are in the minority. The have-nots, in poor societies, still listen to transistor radios. Few have individual cell phones, never mind smart phones.
