My son saw a shark today.
In his two short years, this was the first time he beheld the creature. His eyes went wide. Would he fear the thing?
His head tilted a few degrees left, betraying a smile. Not knowing what it was or its name, he christened it himself. Pointing, he proclaimed, “Agu dalpha!”—his toddler parlance for “alligator dolphin.”
As a linguist and translator, I was charmed. What a peculiar name. What a sensible name.
He saw the creation, and he named it, tapping into that ancient stream of humanity flowing back to our first ancestors.
In mere fractions of a second, his mind sifted his entire knowledge base. Visual, motor, linguistic, and associative information danced across the trillions of synapses in his brain, pivoted in his frontal cortex, then flew back through Wernicke’s area, the arcuate fasciculus, and Broca’s area, and was ultimately heralded out by the motor cortex controlling his speech articulators.
These so-called “basic” mental processes are extremely complicated. Legions of academics and corporations toil round-the-clock to teach computers to approximate just one of these steps. The who’s who of star tech companies together spend billions to crack even a sliver of the great problem of true intelligence.
But my toddler, unbidden and unsupervised, does this entirely of his own agency. He is an analog masterpiece. And he is not unique in this. In his young mind, unsupervised learning meets unsupervised creation. These are hallmarks of human intelligence.
Currently, we are the only created beings in the universe that can do these things—at least that we know of. Computers cannot perform unsupervised learning like we do. Even their supervised learning is shaky.
But every day, artificial intelligence gets closer. Will they ever match our curiosity? Our creativity?
Though we take it for granted, the method humans employ for learning is quite remarkable. Our learning is unstructured, unsupervised, creative, intuitive, and dripping with common sense. For example, most of what we learn in our lives is untaught. Ninety-five percent of our vocabulary is plucked out of the ether. Nobody taught it to us; we simply learned it along the way. We were encouraged to walk, to catch a ball, to make friends. We were given opportunities to try and fail, and then try again, but we were never strictly taught these things. The mind can never be taught how to catch a ball. We can learn where to place our hands or how to catch with better form, but there is simply no equation. Imagine explaining the midair kinematics to a toddler. The mind’s hidden algorithms must simply pick it up through repetition and by minimizing prediction error. A robot must have the numbers.
We are intuition machines, remarkably accurate in wild guesses with very little evidence behind our intuitions.
The way a computer learns is, shall we say, less poetic. In supervised learning, a computer is told, “This is a shark” and then shown millions of pictures of sharks. In the space of a few seconds, a computer analyzes more sharks than we could ever lay eyes on in a lifetime. The computer is then asked to identify sharks in unlabeled pictures and is corrected by a human or a human-written program when wrong. And still, computers are often quite awful at it.
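That label-then-correct loop can be sketched in a few lines of Python. This is a toy illustration, not a real vision system: the two “features” per picture are invented stand-ins for image measurements, and a simple perceptron plays the part of the computer being corrected each time it guesses wrong.

```python
# Toy supervised learning: a perceptron is told "this is a shark" (label 1)
# or "not a shark" (label 0) and is corrected whenever its guess is wrong.
# The two features per example are invented stand-ins (e.g. fin angle, length).

def train(examples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # learned weights
    b = 0.0         # learned bias
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = y - guess          # the human-written "correction" step
            w[0] += lr * error * x[0]  # nudge weights toward the right answer
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

sharks     = [(0.9, 0.8), (0.8, 0.9)]  # labeled 1
not_sharks = [(0.1, 0.2), (0.2, 0.1)]  # labeled 0
w, b = train(sharks + not_sharks, [1, 1, 0, 0])
```

After training, the perceptron labels new, unlabeled examples on its own; everything it “knows” came from being told the answers and corrected.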
But by playing to their strengths, they’re improving. Through something called generative adversarial networks, computer programs are made to compete against each other in a sort of war games scenario. They each attempt to fool the other, sometimes millions or even billions of times, and then they sharpen each other based on the results—all while their programmers sleep in their beds. This can be done with any set of data: facial recognition, biometrics and DNA, or pictures of sharks.
Skymind Inc. describes these AI duels as a cat-and-mouse game between a counterfeiter and a government intelligence agent. The counterfeiter keeps trying to create a forgery that passes for genuine, and the agent tries to discern the bogus from the authentic. As they compete, they both get sharper.
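The counterfeiter-versus-agent duel can be caricatured in Python. This is not a real generative adversarial network (no neural networks, no gradients): the one-dimensional “signature,” the tolerance, and the update rates are all invented for illustration of the adversarial back-and-forth.

```python
# A drastically simplified "counterfeiter vs. agent" duel on one number.
import random

random.seed(0)

GENUINE = 10.0  # the signature the counterfeiter is trying to copy

def detector(sample, center, tolerance=1.0):
    """The agent flags a sample as fake if it falls outside the learned band."""
    return abs(sample - center) > tolerance  # True means "caught you"

forgery = 0.0  # the counterfeiter's current best attempt
belief = 0.0   # the agent's running estimate of what genuine looks like

for _ in range(1000):
    # The agent studies a noisy genuine sample and refines its belief.
    genuine_sample = GENUINE + random.gauss(0, 0.1)
    belief += 0.05 * (genuine_sample - belief)
    # The counterfeiter submits a forgery; each time it is caught,
    # it nudges its work toward whatever the agent currently accepts.
    if detector(forgery, belief):
        forgery += 0.05 * (belief - forgery)
```

Run long enough, the forgery drifts to within the agent’s tolerance of the genuine signature: each side’s pressure improves the other, which is the core of the adversarial idea.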
But this playing to their strengths has only gotten them so far. Big data can approximate a kind of intuition, but it’s not intuition.
Maybe this is because the ultimate intuiters—humans—don’t think like this much at all. We don’t endlessly compete or gamify our world—we’re instead curious prophets.
A toddler’s fascination with a light switch isn’t founded upon a striving for survival or reproductive advantage in passing on genetic code. Rather, it’s founded upon curiosity—the desire to solve the world around us. It just so happens that some of the time, this behavior is also conducive to survival. Ask any parent of toddlers: that same curiosity is also what’s most likely to send a child to the hospital or worse. It seems survival isn’t our deepest driving force. Something more like curiosity is, or, more technically, the drive to reduce surprise in our environment.
The human mind’s foundational goal is to constantly predict what’s going to happen, measure what actually happens against that prediction, and then use whatever discrepancy remains to make better predictions next time. We do this millions of times per day. We are entropy reduction wizards.
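That predict, measure, correct cycle can be sketched as a minimal loop in Python. The steady signal and the learning rate are invented for illustration; the point is only that surprise shrinks as the discrepancy is folded back into the next prediction.

```python
# A minimal predict-compare-update loop: the learner guesses the next value
# of a signal, measures its surprise (prediction error), and folds that error
# back into its next guess.

def learn(signal, rate=0.3):
    prediction = 0.0
    errors = []
    for observed in signal:
        error = observed - prediction  # measure the surprise
        errors.append(abs(error))
        prediction += rate * error     # use the discrepancy to predict better
    return prediction, errors

# A steady "world" that always produces 5.0: surprise should shrink over time.
prediction, errors = learn([5.0] * 30)
```

On this unchanging signal the first guess is maximally surprising and the last is nearly surprise-free: the learner has, in a tiny way, modeled its world.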
This method of learning by reducing surprise—which many in the academy call the “free energy principle”—is beginning to be employed in AI. And as we’d expect, AI programs based on this model act less like competitive Darwinian nightmares and more like prediction engines. And not surprisingly, this makes them seem more alive—more akin to a curious cat or, in inspired moments, even a baby.
And here we have to ask, what will it mean for us if truly intelligent machines ever arrive? What if someday, they too name the animals for the pure joy of it?
This unsettling question reveals a primal uncertainty. What makes us unique? Are we, in fact, special?
Absolutely.
Though much of the dread around machine sentience is well placed and shared by brilliant scientists and innovators like the late Stephen Hawking and Elon Musk, most of it is misdirected. It’s easy to equate the image of God with creativity and reason. We fear that if another being joins us on the lonely pedestal of true intelligence, we’ll lose our uniqueness in God’s image. But this is not true.
For millennia in the West, and especially since the Enlightenment, the ability to reason and create and enact change in our world has been tacitly assumed as the meaning of the image of God. The fact that those who write philosophy and ethics and biblical commentaries make their entire living by enacting change in the world through reason certainly isn’t lost on us as we consider our intellectual heritage—or baggage—in this matter.
This, however, is not what the ancient Hebrews would have had in mind. God’s image is not the ability to reason, though that’s one of a thousand ingredients that may be part of it.
It’s so easy to lose ourselves in the difference between our humanness and the image of God. God imbued us with his spark, with artistic and aesthetic brilliance, with logic, reason, and language. These things are given by God that we might know him, know one another, and carry out his mandate. These talents enable us to care for his creation and subdue the earth as his ambassadors. They allow us to spread far and wide, filling the earth and building cities and creating culture. They allow us to receive his revelation and meticulously copy it and guard it and proclaim it so that every corner of the globe might hear his good news.
But these talents are not his image. They are his toolbox. His equipment on loan, for which someday we’ll give an account.
Rather, we recognize that just as the animals create after their own kind, so God created us, in a sense, after his own kind. We bear his stamp—his image and likeness. And even if we are in the throes of mental degeneration, or if we’re severely mentally handicapped, or if we’re a fetus in utero, and not yet or not ever capable of reason, we know we are equally made in God’s image.
My shark-naming son will always be my son. His standing before me centers on the fact that he is my son, entrusted by God to my wife and me, and not somebody else. He’s not the most creative, most curious, smartest, or most empathetic child in the world. But his significance to me as my son does not center on his mental prowess. The existence of a smarter child down the block does not mean my son is in danger of being given up to the state. He is my son. In him I am well pleased.
We stand in the same unique relation to God. We are his children whether or not we’re the smartest on the block—or the universe.
It’s too early to tell if artificial intelligence will ever join us on the solitary pedestal of true intelligence, but we have nothing to fear. We might make AI for ourselves, but God has made us for himself. Our identity as image-bearers is anchored in our Maker, not our talents.
Jordan Monson previously worked on Bible translation as a linguist and is now serving as a church planting pastor in St. Paul, Minnesota. Capital City Church launches in March 2019. Connect at @jordanmonson or via email.