Opinion | Guest Column
Is AI coming for your job? How does AI work, anyway?
“Machine learning” may not work the way you think. Here’s why AI can seem human but also hallucinate.
 
Text from the ChatGPT page of the OpenAI website. [ RICHARD DREW | AP ]
Published Jan. 31

Editor’s note: The St. Petersburg Conference on World Affairs has brought together diplomats, military, media, scientists and experts for more than a decade to work together at better understanding and operating in the world. It will be held at the University of South Florida St. Petersburg Student Center from Feb. 6-8. This column is by a participant. This year’s theme is “Rethinking.” Find out more here.

Last week, my friend asked if he should worry that artificial intelligence will take over his job. This is the most common question I get when I talk about this evolving technology. Answering it requires first knowing a few basic facts about AI.

Nicolas Sabouret [ Provided ]

AI is a field of computer science dedicated to having machines perform tasks that typically require human intelligence. This is no small feat, as every aspect of the task must be based on numbers and computations, from the user’s request to actually performing that task.

To overcome this complexity, AI researchers have developed a powerful method called “machine learning.” Instead of directly programming humanlike reasoning into the system, large amounts of data are used to set the algorithm’s parameters automatically. This works very well in many application fields, such as natural language processing (to have machines understand, interpret and generate human language), computer vision (image processing and generation) and recommender systems (finding the most relevant information), to name a few.
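
To make this concrete, here is a toy sketch in Python (my own invented example with made-up numbers, not code from any real AI system). Instead of programming the rule "the output is roughly twice the input, plus one," we let the data set the two parameters automatically:

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]   # (input, expected output) examples

w, b = 0.0, 0.0       # the model's two parameters, initially arbitrary
learning_rate = 0.01

for step in range(5000):                 # "learning": repeatedly adjust the parameters
    for x, y in data:
        prediction = w * x + b           # the model itself: a single weighted sum
        error = prediction - y
        w -= learning_rate * error * x   # nudge each parameter to shrink the error
        b -= learning_rate * error

print(f"learned rule: output = {w:.2f} * input + {b:.2f}")   # roughly 2 * input + 1

After a few thousand small adjustments, the program has "learned" the rule from examples alone. Machine learning at industrial scale works on the same principle, only with vastly more parameters and data.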

For the nonspecialist, all this can seem a bit magical. But I assure you it isn't as complex as you might think. You don't need a precise understanding of the programming behind it, just as you don't need an engineering degree to know how to drive a car. A little bit of knowledge helps, though, so let me demystify these AI models and give you an understanding of how they work.

So, let's talk about the AI stars of the moment: generative models, the most well-known being ChatGPT. To generate content similar to what humans produce, these systems rely on a type of algorithm called a deep neural network. It has only a little to do with the human brain: Instead of neurons, it is composed of a large number of weighted sums, which are really just collections of addition and multiplication calculations, each one modified by a parameter called a "weight."
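
To show how unmysterious those weighted sums are, here is a minimal sketch of a single artificial "neuron" in Python (an invented example, not code from any real network):

def neuron(inputs, weights, bias):
    # Multiply each input by its weight and add everything up...
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w
    # ...then keep the result only if it is positive (one common, simple rule).
    return max(0.0, total)

print(f"{neuron([0.5, -1.2, 3.0], [0.8, 0.1, 0.4], bias=0.2):.2f}")   # prints 1.68

A deep neural network is simply an enormous number of these little calculations wired together in layers. Nothing in there thinks; it only multiplies and adds.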

To achieve a given task properly, one must set billions of parameters to the right values. This is the job of the machine-learning algorithm, which explores a vast amount of data to find the best combination of values for each parameter. There is no fancier technical term for this: Computer scientists simply call it "training." In the case of GPT, the network is trained to produce the most probable words that follow a given prompt. If you write "Who is Captain Ahab?" the most probable words to start the response would be "Ahab is," likely followed by "the main character of the book," and ending with "Moby-Dick by Herman Melville." All of this was extracted from the thousands of billions of words of text found on the Internet and encoded into the network's parameters.
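
Here is a toy illustration in Python of that "most probable next word" idea. The probabilities below are invented for this column; a real GPT model encodes the equivalent information across billions of parameters:

import random

# Invented next-word probabilities; a real model learns these from text.
next_word = {
    "Who is Captain Ahab?": [("Ahab", 0.92), ("He", 0.05), ("Captain", 0.03)],
    "Ahab": [("is", 0.97), ("was", 0.03)],
    "is": [("the", 0.8), ("a", 0.2)],
    "the": [("main", 0.7), ("captain", 0.3)],
}

def probable_next_word(context):
    words, probs = zip(*next_word.get(context, [("...", 1.0)]))
    return random.choices(words, weights=probs)[0]   # pick a word by probability

answer = [probable_next_word("Who is Captain Ahab?")]
for _ in range(3):
    answer.append(probable_next_word(answer[-1]))
print(" ".join(answer))   # most often: "Ahab is the main"

Notice that nothing in this process checks whether the resulting sentence is true. The program only knows which words tend to follow which.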


ChatGPT's ability to generate humanlike responses stems from its exposure to diverse linguistic structures, styles and contexts during training. It applies the grammar rules, semantics and contextual cues learned from content on the Internet to generate responses that are coherent and contextually appropriate. However, it is essential to remember that ChatGPT and similar models lack true understanding or consciousness: They rely solely on patterns learned from data.

They may produce incorrect responses without noticing: To the model, text is just a series of symbols with no meaning in it. Computer scientists call this "hallucination." The system simply has no way to decide whether the text it generates contains true information or is just grammatically correct smooth talk. AI models cannot guarantee the accuracy of their results!

In the end, what AI researchers like me are trying to build is not an intelligent machine. We are developing algorithms that use computation to solve problems that would require intelligence if performed by humans. But what exactly is "intelligence," you might ask? That is a topic for another time. In the meantime, see if you can figure out which sentences in the text above were actually written by ChatGPT.

Nicolas Sabouret is a professor at Université Paris-Saclay in France and the director of its Graduate School of Computer Science. He is an expert in artificial intelligence whose research focuses on modeling and simulating human behavior, combining AI models with theories of human cognition. His book "Understanding Artificial Intelligence" (CRC Press, 2020), written for a general audience, demystifies AI and makes it accessible to everyone.