To Navigate the Age of AI, the World Needs a New Turing Test

In humans, this would be known as dehumanization. Scholars have identified two principal forms of it: animalistic and mechanistic. The emotion most commonly associated with animalistic dehumanization is disgust; Roger Giner-Sorolla and Pascale Sophie Russell found in a 2019 study that we tend to view others as more machinelike when they inspire fear. Fear of superhuman intelligence is vividly alive in the recent open letter from Elon Musk and other tech leaders calling for a moratorium on AI development, and in our anxieties about job replacement and AI-driven misinformation campaigns. Many of these worries are all too reasonable. But the nightmare AI systems of films such as The Terminator and 2001: A Space Odyssey are not necessarily the ones we're going to get. It is an unfortunately common fallacy to assume that because artificial intelligence is mechanical in its construction, it must be callous, rote, single-minded, or hyperlogical in its interactions. Ironically, fear could cause us to view machine intelligence as more mechanistic than it really is, making it harder for humans and AI systems to work together and even eventually to coexist in peace.

A growing body of research shows that when we dehumanize other beings, neural activity in a network of regions that includes the mPFC drops. We lose access to our specialized brain modules for social reasoning. It may sound silly to worry about “dehumanizing” ChatGPT—after all, it isn’t human—but imagine an AI in 2043 with 10 times GPT’s analytical intelligence and 100 times its emotional intelligence whom we continue to treat as no more than a software product. In this world, we’d still be responding to its claims of consciousness or requests for self-determination by sending it back to the lab for more reinforcement learning about its proper place. But the AI might find that unfair. If there is one universal quality of thinking beings, it is that we all desire freedom—and are ultimately willing to fight for it.

The famous “control problem” of keeping a superintelligent AI from escaping its designated bounds keeps AI theorists up at night for good reason. Framed in engineering terms, it appears daunting: How do you close every loophole, anticipate every hack, block off every avenue of escape? But framed in social terms, it begins to look more tractable—perhaps akin to the problem a parent faces in setting reasonable boundaries and granting privileges in proportion to demonstrated trustworthiness. Dehumanizing AIs cuts us off from some of our most powerful cognitive tools for reasoning about and interacting with them safely.

There’s no telling how long it will take AI systems to cross over into something more broadly accepted as sentience. But it’s troubling to see the cultural blueprint we seem to be drawing up for when they do. Slurs like “stochastic parrot” preserve our sense of uniqueness and superiority. They squelch our sense of wonder, saving us from asking hard questions about personhood in machines and ourselves. After all, we too are stochastic parrots, complexly remixing everything we’ve taken in from parents, peers, and teachers. We too are blurry JPEGs of the web, foggily regurgitating Wikipedia facts into our term papers and magazine articles. If Turing were chatting with ChatGPT in one window and with me, on an average pre-coffee morning, in the other, how confident am I, really, about which of us he would judge more capable of thought?

Photograph: Francisco Tavoni