Rise of the Machines: New Book Applies Christian Ethics to the Future of AI

John Lennox reflects on questions of consciousness in computers, enhancing humans, and other quandaries.

Christianity Today September 8, 2020
Illustration by Rick Szuecs / Source images: Andriy Onufriyenko / Getty / Ashton Bingham / Unsplash

Once viewed as the stuff of science fiction, artificial intelligence (AI) is steadily making inroads into our everyday lives—from our social media feeds to digital assistants like Siri and Alexa. As helpful as AI is for many aspects of our lives, it also raises a number of challenging moral and spiritual questions. Facial recognition can be used to locate fugitive criminals, but also to suppress political dissidents. Various apps and platforms can anticipate our preferences, but also harvest data that invades our privacy. Technology can speed healing, but many are hoping to use it to enhance natural human abilities or eliminate “undesirable” emotions.

In his recent book 2084: Artificial Intelligence and the Future of Humanity, Oxford professor emeritus John Lennox surveys the current and future landscape of AI and addresses these and related issues. Lennox is a mathematician who has spoken internationally on the philosophy of science, written books addressing the limits of science, and debated high-profile atheists Richard Dawkins and Christopher Hitchens. In the new book, he acknowledges the many benefits AI can offer, but he also critiques the worldview behind many secular visions of AI that seek to transform humans into gods and create utopias through technology.

Christopher Reese spoke with Lennox about his book and how Christians should think about a number of issues related to this rapidly accelerating technology, including “upgrading” humans, whether computers can become conscious, and how Christians should weigh the pros and cons of AI.

Many negative scenarios involving AI have played out in popular movies. In your opinion, are these the kinds of outcomes that we should be concerned about?

In the opinion of the top thinkers in this area, we’re nowhere near these negative scenarios yet. But there is enough artificial intelligence that actually works at the moment to give us huge ethical concern.

There are two main strands in artificial intelligence. There’s narrow AI, which is very successful in certain areas though it raises deep problems in others. This is simply a powerful computer working on huge databases, with a programmed algorithm that looks for particular patterns. Let’s suppose we have a database of a million X-rays of lung diseases labeled by the best doctors in the world. Then you get an X-ray at your local hospital, and an algorithm compares yours with the database in just a few seconds and comes up with a diagnosis. So, that’s a very positive thing.

But then you move on to the more questionable things—today the main one has to do with facial recognition. There again, you’ve got a huge database of millions of photographs of faces labeled with names and all kinds of information. You can immediately see that a police force would find that useful in checking for terrorists and criminals. But it can be used for suppressing people and manipulating and controlling them. In China today, there’s every evidence of extreme surveillance techniques being used to subdue the Uyghur minority. That has raised ethical questions all around the world.

The Big Brother of 1984 is no longer fiction; we’re already there. But it’s not yet 2084. That’s where the second strand comes in: artificial general intelligence, where we develop a superintelligence that controls the world. That’s sci-fi stuff.

C. S. Lewis worried that technological advances might lead to the “abolition of man.” Can you elaborate on what he meant by that?

One reason I wrote the book was my familiarity with C. S. Lewis. In the 1940s, he wrote two books, The Abolition of Man and That Hideous Strength, which is a science fiction novel. His concern was that if human beings ever managed to do this kind of thing, the result wouldn’t be human at all. It would be an artifact.

If you start to play about with humans as they are and introduce genetic engineering, what happens is you create an artifact—that is, something you have made that is not greater than human, but subhuman. In other words, you abolish human beings in that sense. You’ve made something that you think is more than human, but it’s actually less than human because you, who are not God, have contributed to its specification. Lewis thus talks about how the final “triumph” of humanity will be the abolition of man. I think that ought to concern us.

The Bible affirms that human beings have souls and are made in the image of God. How do those ideas factor into your view of machines and their ability (or inability) to imitate humans?

We are made in the image of God. That gives us dignity and value. Some aspects of surveillance seem to infringe on that. They seem to be invasions of privacy, the space that God has given us to function. Certainly, when it comes to controlling people and getting them not to act out of their consciences but to do what the state requires, that can be a very dangerous path. We’re seeing that already, as I have mentioned.

But, of course, once we begin to talk about artificial general intelligence, there are two strands again. The first one is to bioengineer existing humans and turn them into gods. The Israeli historian Yuval Noah Harari has written a book called Homo Deus, which means “the man who is God.” He very straightforwardly says that there are two major agenda items for the 21st century. One is to solve the problem of death as a physical problem, meaning that medicine will go so far that you don’t need to die. You could die, but you don’t need to.

The second is to bioengineer human life to enhance it through technology, drugs, and other things so that we create a superhuman intelligence. That’s where all the dystopian scenarios come from, and that fuels films like The Matrix and so on. They’re scary because we are on the cusp of permanently altering what it means to be human.

Human beings, “version 1.0” as created by God, are by nature special. The specialness of human beings is seen in the fact that God became one. The Word became human, became flesh, and dwelt among us. I take that extremely seriously, and therefore any attempt to make “humans 2.0” is going to be a step away from God’s design, not a step toward God.

Do you think it’s wrong in a Christian framework to try to enhance our abilities, or is there a place for that?

I have enhanced eyesight because I’m wearing a pair of glasses. At the moment, the technology is sitting on my nose and ears. It’s very crude, but I could be wearing contact lenses, which you might not even notice. That kind of enhancement is a very good thing because it’s simply helping with a deficiency in my own eyesight, as would a hearing aid, as would a prosthetic limb. So, there is a place for strengthening limbs, getting better eyesight, and of course dealing with chemical imbalances in our blood and in our brains and in our systems. We’re very grateful for medicine.

But to be very clear, there are pretty obvious limits beyond which we begin to transgress, where we are effectively saying, “God, you did your best, but we can do better. We can improve human beings.” That’s a very risky business. One of the central dangers is playing God by modifying the genetic germ line, which could impact all the generations that follow us.

What is your perspective on what’s been called the “hard problem of consciousness”? Can machines ever be conscious?

Here’s the problem: Nobody knows what consciousness is, let alone how to build it. If you’re going to make artificial general intelligence, then you will have to produce consciousness. So the arguments fly back and forth. When people say to me, “What do you think of it all?” I say that if you can first tell me what consciousness is, I will listen to you pretty seriously.

Of course, from a Christian perspective, the brain is physical, but the mind is not. We have lived to see the information age, where we realize that information, which is a non-material entity, has become fundamental to physics and our understanding of the universe. That accords exactly with Scripture, which tells us that in the beginning was the Word, not that in the beginning was the universe. The universe is derivative. All things came to be through the Word. So, God the Word is primary and the universe is derivative, whereas atheism believes the exact opposite: that the universe is primary and mind is derivative.

You observe that science substitutes as a religion for some proponents of AI, who see technology as a means of salvation. How do you see science functioning for them religiously?

If you deny God as creator, you don’t get rid of the idea of creation because you’ve got to explain life, and in particular human life and consciousness. So, you often end up endowing material elementary particles with creative powers—which there’s no evidence that they have—so that the material universe has got to, in some sense, create life and create itself, which is philosophical nonsense.

I’ve spent my life trying to unpack these things so that people can understand just how crazy some of them are, but they are what results from rejecting the creator. Paul put it well at the beginning of Romans. He says that rejecting the creator means you become intellectually darkened and you start talking nonsense. There’s a great deal of it around, but because it is said by powerful scientists, people take it seriously. They don’t remember what one of our most famous scientists, Richard Feynman, the Nobel laureate in physics, once brilliantly said: “Outside his or her field,” he said, “the scientist is just as dumb as the next guy.”

You write about possible connections between AI and events described in biblical prophecy. How do you see those things potentially fitting together?

Well, I’m very cautious here. There’s always a great deal of skepticism when you mention biblical scenarios of the future. But for me, the bottom line is this: If we are prepared to take seriously, as many people are, highly dystopian future situations in which a world dictator controls economics through some kind of implant in people’s skin or in their eyes, why don’t we go back to the scenario presented, at least in outline, in the Bible and compare it with these scenarios? Certain elements are very much in common. And this appears not simply in the apocalyptic literature of the book of Revelation, but in straightforward theological writing, as in Paul’s letter to the Thessalonians.

How should Christians weigh the potential benefits of AI against its possible—and actual—abuses?

Well, I would reply by saying, how do you weigh the benefits of a very sharp knife? A very sharp knife can be used to do surgery and save people’s lives. It can also be used for murder.

What I do with my Christian friends, if they’re scientifically inclined, is say: IT, computer technology, is a fascinating area to be in, and there’s so much good that can be done. One of the wonderful examples of that is at MIT, where Rosalind Picard, who is a brilliant scientist and a Christian, has developed her own field called affective computing. She’s using facial recognition techniques to find signs that children are about to have seizures and to prevent them.

But every technological invention has the potential for good and evil. The issue is not to resist advances but to learn to control them and set them within an ethical framework. The problem today is that the technology is outpacing the ethics at colossal speed. People haven’t had time to think.

Some people are concerned about what’s happening, and they’re trying to set up international boards and basic ethical principles that need to be built into AI. All that is well and good, but we’re operating at an international level, and it depends on who has the most power. If people don’t have normative ethical principles that are transcendent, as Christianity gives us, then of course power will determine what’s believed.

Christians need to be able to sit credibly at the table with their non-Christian colleagues, discuss these things sensibly, and help other people think through the ethical issues.

Christopher Reese is the managing editor of The Worldview Bulletin, co-founder of the Christian Apologetics Alliance, and general editor of Three Views on Christianity and Science (forthcoming from Zondervan, 2021).
