Put Not Your Trust in ChatGPT, for Now

Q&A with a veteran AI engineer and entrepreneur, Tom Kehler, about the limits of the popular chatbot and the wonders of the human brain.
Image: Illustration by Mallory Rentsch / Source Images: Getty

Tom Kehler has worked in artificial intelligence for more than 40 years, as a coder and a CEO. He grew up a preacher’s kid and got into mathematical linguistics in high school. After earning a PhD in physics, he wanted to do linguistics with Wycliffe Bible Translators, but “God kept closing that door,” he says, and instead he found himself working with natural language processing in computing.

He had a stint in academia before joining Texas Instruments in 1980, where he began working with top AI researchers. He ended up in Silicon Valley, founding and leading several startups involving AI, including IntelliCorp and CrowdSmart.

Developments in AI appear to be speeding along: This week Microsoft announced that it is investing $10 billion in OpenAI, which created the popular chatbot ChatGPT. One of OpenAI’s top researchers described current neural networks as “slightly conscious.” Kehler has his doubts.

What were the questions about AI in the 1980s when you were first working on it?

“Is it going to replace my job?” In many cases, the answer is yes. We need to be thinking about continuous education—you may not be working the assembly line, but you may be operating the machinery that runs it. The other question that comes up is this notion of the singularity [when AI outstrips humans]. The sentient question comes up. But I think we’re a very long way from that.

Why is there an obsession with sentient AI?

If you are a person of nonbelief, you want to create something that gives you hope in the future. On the AI side, we want something that will cause us to have eternal life—my consciousness is going to go into eternity because it’s in a machine. I think that drives some of these notions of generalized AI, like [Ray] Kurzweil’s singularity obsession. It speaks more of the human desire than of where we are in terms of our progress.

What do you make of the Google engineer last year who said his chatbot had become sentient and had a soul?

I think he spent too much time with his laptop, honestly. We work with the same kind of large language models. They’re called transformer models. My whole career, I’ve been focused on natural language processing—a field that’s been around for some time. All of those models were built by aggregating information from things like Wikipedia. It was the echo of human intelligence.

The way these systems work, we’ll say, “This is the number seven.” We keep reinforcing until the neural network can recognize that seven. That correlation of events is the core way AI works now.
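To make that description concrete, here is a minimal sketch (an illustration of the general idea, not anything Kehler describes building): a single artificial neuron repeatedly nudged toward the right answer on a few made-up 3×3 pixel patterns until it correlates one of them with the label “seven.”

```python
# A toy sketch of "keep reinforcing until the network recognizes the seven."
# One artificial neuron, hypothetical 3x3 pixel patterns, plain Python.
import random

# Made-up training data: flattened 3x3 grids (1 = ink, 0 = blank).
SEVEN = [1, 1, 1,
         0, 0, 1,
         0, 1, 0]                      # a crude "7"
OTHERS = [
    [1, 1, 1, 1, 0, 1, 1, 1, 1],       # a crude "0"
    [0, 1, 0, 0, 1, 0, 0, 1, 0],       # a crude "1"
]
examples = [(SEVEN, 1)] + [(pattern, 0) for pattern in OTHERS]

weights = [0.0] * 9
bias = 0.0
LEARNING_RATE = 0.1

def predict(pixels):
    """Say 'this is a seven' (1) when the weighted evidence crosses zero."""
    score = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if score > 0 else 0

# Each wrong answer nudges the weights toward the correlation we want;
# repeated often enough, the neuron "recognizes that seven."
for _ in range(100):
    random.shuffle(examples)
    for pixels, label in examples:
        error = label - predict(pixels)
        if error != 0:
            weights = [w + LEARNING_RATE * error * p for w, p in zip(weights, pixels)]
            bias += LEARNING_RATE * error

print("'7' pattern recognized as a seven:", predict(SEVEN) == 1)
print("'0' pattern rejected:", predict(OTHERS[0]) == 0)
```

The point of the toy example is the mechanism Kehler names: nothing is “understood,” the weights simply come to correlate a pattern of inputs with a label after enough reinforcement.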

Here is a system that would turn my head: you take an empty system, and it has the capability of learning language at the speed of a child. The way kids acquire language is truly mind-blowing. And not just language, but even opening a cupboard door—they see it done once, and they figure out how to do it.

The system that this Google engineer was talking about, it was given trillions of examples in order to get some sense of intelligence out of it. It consumed ridiculous amounts of energy, whereas a little kid’s brain requires the power of a flashlight, and it’s able to learn language. We’re not anywhere close to that kind of general AI.

We underestimate how little we know about how the brain works. And there’s overconfidence in the tools we have so far. [Computer scientist] Judea Pearl in The Book of Why makes a case that deep learning gives us animal levels of intelligence, which is correlation on inputs and data. That’s going to get really good. That’s what helps us with cool things like ships that can go across the ocean now without any pilot. What we can do with AI is incredibly powerful, but it’s not the same thing as saying this is now an intelligent being.

So where do you want AI to go?

There is scientific evidence that the problems we need to solve are way too complex for any one person. And we need to use collective intelligence to figure out how to solve some of these big problems. I believe AI can be a huge benefit, and not a threat, to human development.

The popular new AI chatbot, ChatGPT—what’s good about it?

It puts in a very accessible form the knowledge that has been captured for a very long time. It’s a very useful utility if you’re asking it general-knowledge questions, like an encyclopedia. It presents it in a much better form than doing a search where you’re going to get a group of links and you have to put the story together yourself.

But you think it has problems too.

It’s taking inputs to build its knowledge. It doesn’t check the truth value or what, in information systems, is called data lineage: Where did this data come from? Do we know it’s true? It’s translating input text to output text based on some objective.

Let’s say you’re using ChatGPT to take action or make a decision. Over the last six or seven years—in the bad old days of AI—AI was used to manipulate people’s opinions. There were campaigns to mimic the truth but twist it.

You can do that with something like ChatGPT. You can’t use it the way we would use science, where we might make a decision based on science to create a drug. There needs to be a human process of finding out whether it’s trustworthy or not.

How do we create a trustworthy chatbot?

If you think about how scientific knowledge or medical knowledge was developed, it’s by peer review. We as a human race have considered that trustworthy. It’s not perfect, but that’s how we normally build trust. If you have 12 of the world’s best cardiac surgeons say a certain procedure is good, you’re going to say, “Yeah, that’s probably good.” If ChatGPT told you to do that procedure, you’d better have it reviewed by somebody, because it could be wrong.

I believe it’s ethically critical that we keep humans in the loop with developing artificial intelligence technology. We’ve seen where AI systems can beat somebody at chess, but that’s a skill set. That’s not demonstrating that they can be trusted for the things we humans call wisdom—how to live.

Why does truthfulness in chatbots matter to Christians?

Faith is about the evidence of things hoped for. When people think of faith as just a leap, that’s not quite true. We make decisions in faith because of evidence.

Now, think about what happened in Christianity when there was misinformation. It caused fragmentation, right? QAnon stuff started to get propagated. Information is getting propagated where its truth value hasn’t been determined. It causes divisions in families.

At the very core, we should be focused on what is true. This notion fits in with Philippians—what is true, what is of good report, what is creating a greater common good. This is the original plan of Christianity, the kingdom of God emerging.

This belief that we can try to find agreement and come together is at the very core of what I have envisioned for artificial intelligence. It’s a scientific principle: We have peer-reviewed evidence that we trust and that moves science forward.

You’ve argued certain mathematical models themselves help build more trustworthy AI.

Every AI engineer on the planet knows about Bayesian applications, because that’s fundamental to most of AI now. Bayesian learning has its roots in the work of a Presbyterian minister in the 1700s.

Bayesian thinking says, “How does evidence change my beliefs?” I form new beliefs based on evidence. You can use Bayesian models to build much richer kinds of intelligent systems, and you can have it be explainable. It can tell you how it got an answer.
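As an illustration of that mechanism (a sketch with made-up numbers, not a system Kehler describes), Bayes’ rule fits in a few lines: a prior belief is updated by each new piece of evidence, and you can read off exactly how much each observation moved the number, which is what makes the reasoning explainable.

```python
# A minimal sketch of Bayesian updating: evidence changes a prior belief.
# The probabilities below are invented for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    marginal = numerator + p_evidence_if_false * (1 - prior)
    return numerator / marginal

# Start 20% confident a claim is true. Suppose each reviewer endorses true
# claims 90% of the time and false ones 30% of the time.
belief = 0.20
for endorsement in range(1, 4):        # three independent endorsements
    belief = bayes_update(belief, p_evidence_if_true=0.9, p_evidence_if_false=0.3)
    print(f"belief after endorsement {endorsement}: {belief:.2f}")
```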

The Silicon Valley crowd that really believes in the singularity—that this is the way that we’re going to achieve eternal life—they don’t realize that a lot of the underpinnings of this were invented by people who had a deep faith in God. I find that interesting and fun.

Have you used ChatGPT?

I asked it to write an essay about how large language models will destroy human society as we know it. And it does a beautiful job of saying why this will destroy human society as we know it.

It’s a great piece of technology. I’m not trying to negate it but to say, “This is how far you can go with it.” And I’ve got a deeper ethical problem: It’s a bit of a dangerous thing to start thinking of the machine as superior in intelligence to humans, particularly if it’s not based on any foundation of ethics.

You don’t want an automaton that starts to do things and you don’t know why or how it’s doing them. Explainability is very important. There’s probably not enough elevation of thinking about where this is taking us and where we want to guide it. We need people who are thinking deeply about the spiritual implications.

[ This article is also available in Français. ]
