Sentient chatbots, Douglas Hofstadter, and why general AI is still a long way off

IE sits down with leading AI expert Melanie Mitchell to talk the future of the field.
Eric James Beyer

Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute. She is the author and editor of six books, including Artificial Intelligence: A Guide For Thinking Humans, the author of numerous scholarly papers, and one of the world’s leading thinkers on artificial intelligence systems. 

Much of Mitchell’s current research centers on conceptual abstraction, analogy-making, and visual recognition in AI systems. She also created the Santa Fe Institute’s Complexity Explorer platform, which provides online courses and resources related to the discipline of complex systems. Her own course on that platform, “Introduction to Complexity,” has been taken by over 25,000 students and is one of Class Central’s top 50 online courses of all time. 

In both her writing and speaking on AI systems, Mitchell displays a razor-like focus that allows her to explain the core concepts that underpin those systems, even the more technically dense ones. Her book, Artificial Intelligence: A Guide For Thinking Humans, is neither a prohibitively dense and jargon-laden tome nor a breezy walk in the park. 

Adept at addressing both the mechanical nature and philosophical implications of AI systems, Mitchell represents the best aspects of scientific inquiry — she is both incisively skeptical and unpretentiously open to new ideas and possibilities should the evidence present itself.

Mitchell recently sat down with Interesting Engineering to talk about her background in the field, the influence of Douglas Hofstadter, why Google’s AI chatbot isn’t sentient, and what it would take for humanity to truly realize general AI. 

The following conversation has been lightly edited for clarity and flow. 

Interesting Engineering: How is it that you came to work in AI and cognitive science?

Melanie Mitchell: My undergraduate major was mathematics, but I was also very interested in physics and astronomy. Right after I graduated, I read Douglas Hofstadter’s book, Gödel, Escher, Bach, which got me completely enamored with the topic of cognitive science, AI, and thinking about intelligence. I decided I wanted to work in the field of AI and wanted to work for Hofstadter in particular.

So, I sought him out and became a graduate student in his group in the computer science department at the University of Michigan, even though I didn’t have a background in computer science. I had to struggle to get up to speed in those courses. 

IE: You open your book, Artificial Intelligence: A Guide For Thinking Humans, with a story that shows how much Hofstadter worried that humanity might lose something if general AI were one day realized. His fear wasn’t that machines would be smarter than us, but that the things that make humans unique might be depressingly easy to mechanize. Do you ever have such existential worries about AI and its capabilities? 

I do. It’s even gotten more intense in the last few years. We’ve seen these unbelievable generative AI systems like GPT-3 and other language models that can generate very human-like, seemingly creative text, [as well as] these text-to-image systems like DALL·E and others that can generate amazing images in all kinds of different styles from a text prompt. It never seemed possible before that language and artistic expression could be mechanized using relatively simple ideas combined with huge amounts of data.

It really surprised me how far these systems could go just by learning from human-created data. The idea of mechanizing human creativity is a worrying thing to me. There are still a lot of things these systems can’t do, but they’re getting better and better as they get larger and larger and are trained on more and more data. 

IE: The exact processes that allow a lot of language processing models to operate are actually quite opaque, and it’s unclear whether they are simply memorizing or learning some kind of conceptual representation of the world through the vast amount of language they digest and train on. Where does that ambiguity come from, exactly?

I would say it’s mostly the fact that these models are so huge and opaque, even to the people who created them. GPT-3 has billions of parameters, and these new models have close to a trillion parameters. Here a parameter means a weight in a neural network. So, you train these systems on terabytes of human-created images and text, and what they learn is a new set of values for those billions of weights.
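
To make the idea of parameters concrete, here is a minimal sketch in Python/NumPy of a toy two-layer network. The layer sizes are invented for illustration; the point is only that every entry of the weight matrices is one “parameter” that training adjusts, and at GPT-3 scale there are hundreds of billions of such numbers.

```python
import numpy as np

# A toy two-layer neural network: every entry of W1, b1, W2, b2 is one "parameter".
# Large language models work the same way in principle, just with billions of such numbers.
rng = np.random.default_rng(0)

vocab_size, hidden_size = 1000, 64                 # illustrative sizes, far smaller than GPT-3
W1 = rng.normal(size=(vocab_size, hidden_size))    # input-to-hidden weights
b1 = np.zeros(hidden_size)                         # hidden biases
W2 = rng.normal(size=(hidden_size, vocab_size))    # hidden-to-output weights
b2 = np.zeros(vocab_size)                          # output biases

def forward(x_onehot):
    """Map a one-hot input word to scores over the vocabulary."""
    h = np.tanh(x_onehot @ W1 + b1)
    return h @ W2 + b2

scores = forward(np.eye(vocab_size)[42])           # scores for the word with index 42

n_params = sum(p.size for p in (W1, b1, W2, b2))
print(f"This toy model has {n_params:,} parameters")   # ~129,064 vs. GPT-3's 175 billion
```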

So, it’s hard to look inside and say, what did they learn? And when they produce something, like an incredible image or an essay or a conversation, you can’t really look inside and say what it’s using from its training data to create this. No one can do that. No one knows how to do that, and no one knows very well how to probe these systems to understand what they did learn. 

That being said, there is some lack of transparency from the companies that make these systems, because they don’t tell us what training data they used. They don’t release the models themselves; they give you access through an API, but you can’t really look under the hood. Still, the main problem is that the models themselves are so huge and opaque to anyone.

IE: Google recently suspended one of its engineers for claiming its chatbot had “gained sentience.” What is your take on that claim? 

The story is a little complicated. People have been fooled by AI chatbots for decades, even going back to the early days of ELIZA, the dumbest chatbot ever. Its creator, Joseph Weizenbaum, was so alarmed by people’s reactions to it that, back in the 1970s, he wrote a kind of anti-AI book about it. He thought it was going to be very dangerous that humans could be taken in so easily. This was in the ’70s. Things have only gotten worse, because these chatbots only get better and more and more human-like. 

I don’t know that much about the specific engineer except what I’ve read in the media. He himself identified as a very religious person who was making this assessment of sentience in his role as a Christian priest as opposed to his scientific training. That complicates it a little bit. It colors his view of it. If you read the transcript that he published, it’s a very leading conversation, led by him.

He didn’t really try to probe it in any skeptical fashion. Most AI people I know, including all the people at Google, said no, it’s not sentient. It’s computing probabilities over words to figure out what word to output next. It doesn’t have any activity or memory between conversations. There’s no way it could be sentient in any meaningful sense of the word. 
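
As a rough illustration of “computing probabilities over words,” here is a minimal sketch of next-word sampling. The five-word vocabulary and the scores are made up for the example; a real model like LaMDA derives its scores from billions of trained weights rather than a hand-written list.

```python
import numpy as np

# Toy illustration of next-word prediction: the model assigns a score (logit)
# to every word in its vocabulary, converts the scores to probabilities with a
# softmax, and samples one word. These scores are invented for the example.
rng = np.random.default_rng(0)

vocab  = ["sentient", "a", "program", "happy", "the"]
logits = np.array([1.2, 0.1, 2.5, 0.7, -0.3])     # hypothetical scores for the next word

probs = np.exp(logits - logits.max())
probs /= probs.sum()                              # softmax: scores -> probabilities

next_word = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```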

That being said, we don’t have a rigorous definition of what it is to be sentient. We can’t even agree among ourselves as humans as to whether certain animals are sentient. The word itself is not well-defined. That doesn’t mean it has zero meaning, but I think people can agree that this system is not sentient, but also say that we don’t really know how to determine if somebody or something is “sentient.” That word just isn’t a very scientific one. 

IE: You mention in your recent article, "Why AI Is Harder Than We Think," that the common wisdom in AI research is that easy things are hard, and vice-versa. In the course of your career, what progress have you seen regarding AI being able to do tasks that are simple for humans but actually quite involved in their mechanisms? 

There has been a lot of progress. You can give a photograph to a deep neural network and it will tell you what all the objects are, and sometimes it can create a caption describing the photograph. We have systems that can hold conversations with us, though they’re not perfect.

If you had a conversation with Google’s LaMDA, you could easily get it to say things that show it’s not really understanding human concepts. It’s a very sophisticated kind of simulacrum of understanding. You can show that it lacks a lot in terms of its ability to reason and make sense of human concepts. 

The thing is that the systems we build with these very large deep neural networks are very good, but they are not always reliable. They can make errors that are unpredictable. For instance, there was a self-driving car system that would slam on the brakes for no [apparent] reason. After some investigation, they figured out that there was a billboard of a sheriff holding up a picture of a stop sign, saying “Stop drugs,” or something. The car thought it was a real stop sign. 

So, those kinds of errors are very possible with these systems. They don’t have the common sense understanding that we humans have. So they aren’t completely reliable, and they are vulnerable to people manipulating their input data in interesting ways that wouldn’t fool a human but can fool these systems. We’re not there yet. We don’t have reliable intelligent AI systems. How long until we get them? It’s hard to say. I personally think the problem is more difficult than a lot of other people in the field [believe].
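
The stop-sign failure is related to what researchers call adversarial examples: tiny, targeted changes to an input that a human would never notice but that can flip a model’s output. Below is a minimal sketch of the classic fast-gradient-sign recipe, using a toy untrained classifier and a random image purely as stand-ins; with a real trained model, the perturbed image is often misclassified even though it looks identical to the original.

```python
import torch
import torch.nn as nn

# Fast Gradient Sign Method (FGSM): nudge every input pixel slightly in the
# direction that most increases the model's error. The tiny model and random
# image here are stand-ins for illustration only.
torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # toy image classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)              # stand-in "photo"
true_label = torch.tensor([3])

loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()                                   # gradient of the loss w.r.t. the pixels

epsilon = 0.03                                    # perturbation too small for a human to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax().item())
print("adversarial prediction:", model(adversarial).argmax().item())
```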

IE: Do you think that AI systems getting better at doing simple tasks represents a significant step in the direction of achieving general AI? Or do they still need to develop more in terms of being flexible in what they learn and are able to do?

We need both. We need systems that can learn to do new things without huge amounts of training, without billions of examples, and that have a richer understanding of the way the world works. I look at you on the screen and I know that I’m not seeing the real you. I know why you have plants behind you: you probably think of them as decorative, you’re not going to eat them, you’re not raising them for food. There are just so many things that machines don’t know about the world, and that limits their understanding.  

IE: You've written that we should be careful in using anthropomorphized terms to describe AI because it misconstrues what’s really going on in these systems. But, given that at least some natural language processing AI seems to mimic how the brain processes language, and some convolutional neural networks trained to recognize photos do so similarly to how the brain processes visual stimuli, is this anthropomorphizing always a misstep, in your view? 

Interesting question. I think we’re kind of at the beginning of figuring out how to compare these systems and brains, and there’s quite a bit of controversy about some of those results. That being said, we don’t really understand a lot about the brain and, for instance, how it processes language. We can see similarities between activations in some areas of the brain, detected by imaging like fMRI, and activations in neural networks. But there are a lot more complicated things going on. 

I take those similarity comparisons a little bit with a grain of salt right now. Maybe there are some cases where the comparisons are fair, but on the other hand, we use human mental terms to describe AI systems quite glibly sometimes. In the neural network literature, for example, people talk about “neurons,” but these simulated neurons are very different from real neurons.

Or learning: we talk about the machine notion of learning, which is a very different process in machines than in humans. But because we use the same term, we kind of assume that if a machine has learned how to play chess better than humans, its learning will of course allow it to play slight variations on chess, like a human could. But it turns out they can’t. 

There’s a whole area of AI called transfer learning: using something you’ve learned and applying it to something quite similar but not exactly the same. That is what learning is in humans, but it’s not what it is in machines. So I do worry about using some of these terms that we apply to human mental states but that don’t really apply to machines, because they give the wrong impression. 
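
For readers curious what the machine version of transfer learning looks like in code, here is a minimal sketch of the standard recipe, assuming PyTorch and a recent torchvision are installed: take a network pretrained on ImageNet, freeze its learned features, and train only a small new output layer for a related task. It illustrates the general technique, not code from Mitchell’s own research.

```python
import torch
import torch.nn as nn
from torchvision import models

# Standard transfer-learning recipe (illustrative): start from a network whose
# weights were learned on ImageNet, keep those features fixed, and train only
# a new classification head for a different but related image task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False        # freeze the pretrained feature extractor

num_classes = 10                       # hypothetical number of classes in the new task
model.fc = nn.Linear(model.fc.in_features, num_classes)   # fresh, trainable output layer

# Only the new head's parameters are updated during training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable parameters")
```

Even here, the “transfer” is narrow: the new task has to be closely related to the old one, which is part of why Mitchell argues machine learning is a very different process from human learning.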

IE: The fourth fallacy that you wrote about in that essay says that we shouldn’t assume that intelligence only resides in the brain. What led you to take this position and what kind of pushback have you gotten from the scientific community for doing so?

There’s a long history of the idea of embodiment in cognition. That it’s not just the brain that’s causing intelligent behavior in us. It’s our brains, it’s our bodies, it’s our interaction with the environment, including social interactions, and so on.

And there’s a lot of psychological evidence for that. I think that to ignore that and try to just recreate a brain in a vat — which is essentially what AI is trying to do in a way — is maybe missing a big part of what makes it possible to be generally intelligent. That’s something that has been debated in cognitive science for a long time. 

People in AI assume that intelligence is something separable from the body, a thing of the brain alone that we can implement on computers. I don’t know [what role] embodied intelligence has to play; that’s a scientific question I don’t think we’ve answered yet. But I think it’s wrong for people in AI to just dismiss that idea.

And I think more and more people are realizing that embodiment is important. But what exactly does that mean in AI systems? Can you be embodied in a virtual world, a metaverse, or do you actually have to have a body and interact with the world? 

IE: What does the near and distant future of AI look like to you? Yes, general AI certainly seems like it’s very far off, maybe even unreachable. But is there a version of the future in which we get there? Do you think it’s possible? 

There’s nothing in the laws of physics or mathematics that would preclude that possibility. I have a very mechanistic philosophy; I think that all of us are essentially machines in some sense. Our intelligence emerges from the non-conscious electrical and chemical activity of our brains and bodies. So, I don’t see why we couldn’t create it. I don’t think that 15 or 20 years is going to be enough. It’s a very hard problem.

The philosopher Hubert Dreyfus, who wrote a lot about AI before he died, said that the big problem was the common sense knowledge problem. Meaning, how babies learn about the world and objects and how other agents have goals, and so on. We learn all of this very rich conceptual structure of how the world works and how the social world works. That is the common sense knowledge problem and we don’t know how to give that to machines. I agree with Dreyfus that that’s one of the biggest obstacles. 
