How to Start an AI Panic

The recommended next step takes place after we’ve turned off the AI faucet. We use that time to develop safety practices, standards, and a way to understand what bots are doing (which we don’t have now), all while “upgrading our institutions adequately to meet a post-AI world.” Though I’m not sure how you do the last part, pretty much all the big companies doing AI assure us they’re already working through the safety and standards stuff.

Of course, if we want to be certain about those assurances, we need accountability—meaning law. It's no accident that this week the Center repeated its presentation in Washington, DC. But it's hard to imagine ideal AI legislation from the US Congress. This is a body that's still debating climate change when half the country is either on fire, in a drought, flooded by rising sea levels, or boiling at temperatures so high that planes can't take off. The one where a plurality of members are still trying to wish away the reality of a seditious mob invading their building and trying to kill them. This Congress is going to stop a giant nascent industry because of a bunch of slides?

AI’s powers are unique, but the struggle to contain a powerful technology is a familiar story. With every new advance, companies (and governments) have a choice of how to use it. It’s good business to disseminate innovations to the public, whose lives can be improved and even made more fun. But when technologies are released with zero concern for their negative impact, those products are going to create misery. Holding researchers and companies accountable for such harms is a challenge that society has failed to meet. There are endless cases where the human beings in charge consciously decide that safeguarding human life is less important than, say, making a profit. It won’t be surprising if they build those twisted priorities into their AI. And then, after some disaster, claim that the bot did it!

I’m almost tempted to say that the right solution to this “dilemma” is beyond human capability. Maybe the only way we can prevent extinction is to follow the guidance of a superintelligent AI agent. By the time we get to GPT-20, we may have our answer. If it’s still talking to us by then.

Time Travel

Thirty years ago I wrote a book called Artificial Life, about human-made systems that mimicked—and possibly qualified as—biological entities. Many of the researchers I spoke to acknowledged the possibility that these systems would evolve into sophisticated beings that might obliterate humanity, intentionally or not. I had a lot of discussions with A-life scientists on that subject and shared some transcripts with the Whole Earth Review, which published them in the fall of 1992. Here’s a bit from an interview with scientist Norman Packard of the Santa Fe Institute.

Steven Levy: I’ve heard it said that this is potentially the next evolutionary step, that we’re creating our successors.

Norman Packard: Yeah.