Human beings have always built machines, and they have always feared their own creations. And so it is with Artificial Intelligence. AI is not new. The current mix of excitement and fear surrounding it arose once before, in the 1960s, shortly after the field's invention. At the time, experts of all sorts predicted that a revolution in machine intelligence was fast approaching. It wasn't. People lost interest in the technology as expectations foundered. Neither the over-excitement nor the disregard that followed was fair. Both were purely emotional.
But Artificial Intelligence does differ from previous human creations. It does not simply surpass our physical capacities, as mechanical inventions such as the pedal loom, the printing press or the car did, to cite just a few. It challenges humans in our preeminent domain: thinking. We consider, with some good reason, that higher-level intelligence is a unique trait of humankind. The possibility of losing this superlative quality frightens us.
However, Artificial Intelligence is not about building intelligent machines. It is about having machines do things that would require intelligence if done by humans. The difference might appear subtle, but it is not. There is a huge gap between thinking and computing. Edsger Dijkstra, one of the most famous computer scientists, compared the question "Can a machine think?" to another question: "Can a submarine swim?" Submarines go underwater, and they do it very well. But they do not swim.
There is no doubt about it: Artificial Intelligence systems will surpass us in many domains. Artificial Intelligence already beats us at most strategy games.
When we drive, it can guide our trips via navigation assistance, keep us from drifting out of our lane, or stop our car before we rear-end someone. On our smartphones, we even rely on it to select the best information for us on social networks. But all of this is achieved using algorithms that were patiently built by computer scientists. What we consider "intelligent" is actually a program with no more autonomy than a cash register.
There is, however, a difference between today's AI revolution and the previous one: the possibility of using huge amounts of data to set the parameters of a program, instead of having to implement decision rules by hand, as was done in the 1980s. This does not mean that the machine learns by itself. The term "Machine Learning," like "Artificial Intelligence," is largely misleading. The AI program simply builds its decision process by finding the most probable patterns in the data it is given. It does so at a scale no human ever could, and the result is often so complex that no human can understand it. Still, this is neither learning nor thinking. It is computation.
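To make this concrete, here is a minimal sketch, in Python, of what "learning" amounts to in the simplest case. The scenario (a spam filter picking a score threshold from labeled examples) is hypothetical and invented for illustration; real systems fit millions of parameters, but the principle is the same: a plain search for the parameter values that best fit the data, with no understanding involved.

```python
def fit_threshold(examples):
    """Pick the spam-score threshold that makes the fewest mistakes.

    examples: list of (score, is_spam) pairs. The "learning" here is
    nothing but arithmetic: try each candidate threshold and count errors.
    """
    candidates = sorted(score for score, _ in examples)
    best_threshold, best_errors = 0.0, len(examples) + 1
    for t in candidates:
        # Classify a message as spam when its score reaches the threshold,
        # then count how many labeled examples that rule gets wrong.
        errors = sum((score >= t) != is_spam for score, is_spam in examples)
        if errors < best_errors:
            best_threshold, best_errors = t, errors
    return best_threshold

# Toy data: low scores are legitimate mail, high scores are spam.
data = [(0.1, False), (0.2, False), (0.8, True), (0.9, True)]
threshold = fit_threshold(data)  # the program "learned" a number: 0.8
```

The program ends up with a decision rule it was never explicitly given, which is why the process is called learning; but every step is ordinary computation.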
We are afraid of what we do not understand. Understanding how AI actually works is thus a priority, so that society can decide, in an informed way, what to do with this technology.
Nicolas Sabouret, a professor of computer science at the University of Paris-Saclay, France, is in St. Petersburg for a mini conference on artificial intelligence hosted by the St. Petersburg Conference on World Affairs.