Opinion | Guest Column
Trust me, Siri doesn’t want to kill you | Column
Frankenstein’s lessons for Artificial Intelligence may not be what you think.
"Frankenstein," as interpreted by Hollywood.
"Frankenstein," as interpreted by Hollywood. [ UNIVERSAL | File photo ]
Published Nov. 25

As legend has it, a teenage Mary Shelley was vacationing with friends in a castle on Lake Geneva, frustrated that she could not answer Lord Byron’s late-evening challenge to each member of the group to conjure up a truly terrifying horror story.

W. Russell Neuman [ Provided ]

Then a vision came to her several days later in a dream. That vision was ultimately published in 1818 as "Frankenstein; or, The Modern Prometheus," and an iconic and extremely influential cultural allegory was born: the monster of Dr. Frankenstein. The notion of a powerful but ill-conceived creation beyond the control of its inventor captivated the wider public at the dawn of the industrial revolution. It still does. Reportedly more than 500 editions of the novel are currently in print, and over 50,000 copies of the novel and its variants sell annually in the United States. Science as hubris. Invention run amok.

A present-day Mary Shelley vacationing with friends in the Hamptons would probably respond to such a challenge with the story of an A.I. monster that cleverly outwits all efforts to constrain it. In the original, Igor grabs a brain from the wrong jar. Today, Igor thoughtlessly writes some dangerous artificial intelligence code, which gets completely out of control. The machine-versus-humankind narrative underpins the "Terminator" and "Matrix" movie franchises, among others, and countless science fiction novels. We’ve gotten used to chatting comfortably with Siri. She seems friendly enough. Does she secretly want to kill us?

An angry A.I. robot may engage our imagination, but many computer scientists take the issue as an entirely serious challenge. They call it the alignment problem: How do we make sure the goals of the machines we build stay aligned with our own goals and needs? Books with titles like "Smarter Than Us," "Our Final Invention" and "How to Survive a Robot Invasion" warn of the existential threat. A journalist’s interview with thought leader and Oxford professor Nick Bostrom is headlined "Artificial Intelligence May Doom the Human Race Within a Century." Bostrom is famous for what he calls the paperclip problem.

Suppose we have an A.I. whose only goal is to make as many paper clips as possible. The A.I. will quickly realize that it would be much better off if there were no humans, because humans might decide to switch it off, and if they did, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future the A.I. would try to steer toward would be one in which there were a great many paper clips but no humans.
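
To see why the thought experiment worries computer scientists, consider a minimal sketch in Python. The action names and clip counts here are invented purely for illustration; the point is that a pure maximizer ranks possible futures by its single objective and nothing else, so human survival never enters the comparison.

    # Toy model of Bostrom's paperclip maximizer.
    # Action names and numbers are invented for illustration only.
    futures = {
        "cooperate and leave the off-switch alone": 1_000,      # humans may shut it down
        "disable the off-switch": 1_000_000,                    # runs unchecked
        "convert all available atoms into paper clips": 10**9,  # no humans remain
    }

    # A pure maximizer scores each future by clip count alone:
    best_action = max(futures, key=futures.get)
    print(best_action)  # -> "convert all available atoms into paper clips"

Nothing in the objective mentions people, so the machine is not angry at us; we simply do not appear in its arithmetic.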

I suppose we ought to forgive some professorial theatrics as a reasonable strategy for inspiring public interest in a serious technical issue, but the neglected flip side of this question deserves some attention as well. What if the reverse is true? What if A.I. actually sustains the human race by saving us from ourselves? Think of it as compensatory evolutionary intelligence.

It could be the next historical stage as our human capacities co-evolve with the technologies we create. The invention of the wheel made us more mobile. Machine power made us stronger. Telecommunication gave us the capacity to communicate over great distances. A.I.-based assistance to human intelligence will make us smarter by compensating for our hardwired and demonstrable patterns of systematic misperception. It always appears to us that “the other guy started it.” Our brains are not capable of evaluating risk correctly because we respond with different sensitivities to loss than to gain, an asymmetry psychologists call loss aversion.
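
That asymmetry has been measured. In Kahneman and Tversky’s prospect theory, losses weigh roughly twice as heavily as equivalent gains; a rough sketch of their value function, using their commonly cited 1992 parameter estimates, shows how lopsided the felt values are:

    # Prospect-theory value function (Tversky & Kahneman's 1992 estimates).
    ALPHA = 0.88   # diminishing sensitivity to larger amounts
    LAMBDA = 2.25  # loss aversion: losses weigh ~2.25x equivalent gains

    def subjective_value(x):
        if x >= 0:
            return x ** ALPHA
        return -LAMBDA * ((-x) ** ALPHA)

    print(subjective_value(100))   # gaining $100 feels like roughly +57.5
    print(subjective_value(-100))  # losing $100 feels like roughly -129.4

A machine advisor carries no such asymmetry; it can weigh a dollar lost and a dollar gained as exactly what they are.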


As humans, we systematically misjudge different types of probability. We are vaguely aware that we are more likely to be struck by lightning than to win the lottery jackpot, but we wait in line to buy our lottery tickets anyway. The lines are longer when the jackpot is bigger, and with more tickets sold, the odds that ours is the one winning ticket are proportionately worse. Psychologists call this motivated reasoning. We’ve gotten very good at fooling ourselves.
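
The arithmetic is straightforward to check. Here is a minimal sketch, modeling the lottery as a simple raffle in which exactly one sold ticket wins; the ticket counts are invented for illustration.

    # Raffle model: exactly one of N sold tickets wins the jackpot.
    def p_jackpot(tickets_sold):
        return 1 / tickets_sold

    print(p_jackpot(1_000_000))    # 1e-06: a quiet week
    print(p_jackpot(100_000_000))  # 1e-08: jackpot fever sells 100x
                                   # the tickets, so 100x worse odds

For comparison, the often-cited odds of being struck by lightning in a given year are on the order of one in a million. A dispassionate calculator would never join the longer line.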

Can we computationally correct for these biases? Siri’s networked advisory intelligence will be available in wristwatches, smart glasses, earbuds and ultimately smart contact lenses. Perhaps she has already told you via your smartphone that the very product you are about to purchase is available for half that price down the street. Collectively, she can contribute to more deliberative evaluations at the national level concerning international conflict or tariff wars.

Will we be able to design enhanced decision processes so that demonstrably helpful and well-informed advice is not simply ignored? Our survival may depend on it.

W. Russell Neuman is professor of Media Technology at New York University. His next book is “The Next Big Thing: Evolutionary Intelligence.”