Could robots take over the world?

It feels a bit like the End Times these days, what with assault rifles flying off the shelves and the markets swinging and the honeybees dying and the national deficit growing and the polar ice caps melting and wars raging and the missionaries from the panhandle cults filling our mailboxes with doomsday lit. The beast may be coming, and we wonder if we'll recognize it.

And all this at a time when we're weak. We're complacent. Few of us have ever gone hungry. Wars play out too far away for us to hear any actual gunfire. Hardly anyone really works anymore, except maybe chimney sweeps (seasonal) and plumbers. We have mastered the art of pretending to work, of minimizing windows when we hear the swoosh of footsteps outside the cubicle.

That's why the coming robot takeover is so scary.

• • •

A peek into the future:

Within the next few years, we predict there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight.

Science fiction?

The prediction comes in a new book, Moral Machines: Teaching Robots Right from Wrong, by Wendell Wallach, a consultant and writer in Yale's Interdisciplinary Center for Bioethics, and Colin Allen, professor of cognitive science and history and philosophy of science at Indiana University.

"Robots and computer systems are all over the place and making all kinds of decisions," Wallach says on the phone. "And the machines are getting more and more autonomous."

The introduction to the book includes a plausible doomsday scenario that features widespread blackouts, computer-caused stock market fluctuations, a plane crash and automated machine guns firing on tourists at an Arizona border crossing.

Wallach says the book is not intended to scare people, but to make the point that we should be talking about robot morality, before it's too late.

Never have we been more dependent on machines.

Computers run our trains, give us directions (GPS), accept and dispense our money (ATM), entertain our children (Tickle Me Elmo), vacuum our floors (Roomba). Robotic arms build our cars and perform delicate surgeries. Soon, robotic assistants will be helping the elderly and incapacitated with chores around the house. The Japanese are close to building robots that appear indistinguishable from humans.

A robot designed by David Hanson, president of Dallas-based Hanson Robotics, looks like Albert Einstein and can hold a conversation accompanied by hand gestures. According to Time, the robot understands 130,000 words. ("This moment is the Kitty Hawk of androids," Hanson told the magazine in January 2007. "We're seeing the arrival of conversational robots that can walk in our world. It's a golden age of invention.")

We have deployed robots to Iraq and Afghanistan for surveillance and to dismantle roadside bombs. Unmanned Predator drones used by the military have the ability to strike. Warring robots built by the same company that makes the Roomba can be fitted with machine guns.

By the end of 2007, 6.5-million robots were in operation worldwide, according to the International Federation of Robotics. By 2011, the group predicts, more than 18-million robots will populate the Earth.

And never have machines had as much autonomy. Try calling your cable company after hours.

So it's easy to take the next logical step. Something bad is going to happen between us and the robots.

• • •

They're already armed.

Wallach points to an incident reported by Wired magazine, in which a semiautonomous robotic cannon went haywire recently and shot and killed nine South African soldiers.

There are still questions about whether the hardware or software malfunctioned, but the lesson was there: Think twice before you give a robot a gun.

"If they aren't restricted," Wallach says, "these machines are going to be dangerous."

So how do you teach morals to a robot? And with whose morals do you program them?

Do you program them with rules such as the Ten Commandments or principles such as the Golden Rule? Educate robots like children, so they learn that their actions have consequences?

Do we even want computers making ethical decisions?

"We're trying to set a framework for people to begin thinking about this in a broad way," Wallach says.


In an essay in the New York Times last month, Richard Dooling, author of Rapture for the Geeks: When A.I. Outsmarts I.Q., suggests we may be closer to the robo-apocalypse than we think.

Look at the financial markets.

He writes that the best and brightest quantitative analysts Wall Street could afford "fed $1-trillion in sub-prime mortgage debt into their supercomputers, added some derivatives, massaged the arrangements with computer algorithms and — poof! — created $62-trillion in imaginary wealth."

The problem is that these derivatives are too complex for anyone to understand and are therefore unregulated by humans.

"That left nobody but the machines in charge," Dooling wrote.

• • •

But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. . . . Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

Theodore Kaczynski, the Unabomber

• • •

"We are living, we have long been told, in the Information Age. Yet now we are faced with the sickening suspicion that technology has run ahead of us," Dooling wrote in the New York Times. "Man is a fire-stealing animal, and we can't help building machines and machine intelligences, even if, from time to time, we use them not only to outsmart ourselves but to bring us right up to the doorstep of Doom."

Then again, these End Times could be just another End Times.

In late October 1987, another market drop was upon us, and a Washington Post reporter went in search of reaction to the spasms of that day.

"What is frightening is that the computers have so much to do with it," said Lady Acland, wife of Sir Antony, the British ambassador. "One feels one's almost gotten in the hands of a robot."

Ben Montgomery can be reached at [email protected] or (727) 893-8650.



The Gunslinger

Yul Brynner plays a nightmarish android cowboy who stalks humans through a futuristic theme park called Westworld.


Rosie

She served the Jetsons hot coffee from her bosom and short-circuited when she shed tears.


R2-D2

The astromech droid from Star Wars was inducted into the Robot Hall of Fame in 2003.

HAL 9000

Heuristically programmed ALgorithmic computer on the spaceship Discovery in Arthur C. Clarke's Space Odyssey saga.


WALL-E

Love-starved trash bot assigned to clean up Earth in the 2008 Disney/Pixar film of the same name.


RoboCop

Officer Alex Murphy is killed by a Detroit gang and is brought back as part man, part machine, all cop.

Johnny 5

Sentient robot bent on peace in the 1986 movie Short Circuit.

On the Web: Meet some of the world's coolest robots — Jules, Einstein and BigDog — at

The book

Moral Machines: Teaching Robots Right from Wrong

By Wendell Wallach and Colin Allen

Oxford University Press, 288 pages, $29.95

Could robots take over the world? 11/26/08

© 2017 Tampa Bay Times
