Meet Pause AI, the Protest Group Campaigning Against Human Extinction

But the Pause AI founder also worries about a future in which AI advances enough to be classified as “super-intelligent” and decides to wipe out civilization once it understands that humans limit its power. He echoes an argument, also made by Hinton, that if humans ask a future super-intelligent AI system to fulfill any goal, it might create its own dangerous sub-goals in the process.

This concern dates back years and is generally credited to the Swedish philosopher and Oxford University professor Nick Bostrom, who first described in the early 2000s what could hypothetically happen if a super-intelligent AI were asked to create as many paper clips as possible. “The AI will realize quickly that it would be much better if there were no humans, because humans might decide to switch it off,” Bostrom said in a 2014 interview. “Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

AI research is a divided field, and some experts who might be expected to rip Meindertsma’s ideas apart instead seem reluctant to discredit them. “Because of the rapid progress, we just don't know how much of science fiction could become reality,” says Clark Barrett, co-director of Stanford University’s Center for AI Safety in California. Barrett does not believe a future where AI helps develop cyber weapons is plausible; this is not a field where AI has so far excelled, he claims. But he is less willing to dismiss the idea that an AI system that evolves to be smarter than humans could work maliciously against us. People worry that an AI system “could try to steal all of our energy or steal all of our compute power or try to manipulate people into doing what it wants us to do.” This is not realistic right now, he says. “But we don't know what the future can bring. So I can't say it's impossible.”

Yet other AI researchers have less patience with the hypothetical debate. “For me, it is a problematic narrative that people claim any kind of proof or likelihood that AI is going to be self-conscious and turn against humanity,” says Theresa Züger, head of Humboldt University's AI and Society Lab, based in Germany. “There is no evidence that this is going to appear, and in other scientific fields, we wouldn't discuss this if there is no evidence.”

This lack of consensus among experts is enough for Meindertsma to justify his group’s demand for a global halt to AI development. “The most sensible thing to do right now is to pause AI developments until we know how to build AI safely,” he says, claiming that leaps forward in AI capabilities have become divorced from research on safety. The debate about how the relationship between these two halves of the AI industry has evolved is also taking place in mainstream academia. “This is something that I've seen getting worse over the years,” says Ann Nowé, head of the Artificial Intelligence Lab at the Free University in Brussels. “When you were trained in the ’80s to do AI, you had to understand the application field,” she adds, explaining that it was normal for AI researchers to spend time speaking to people working in the schools or hospitals where their system would be used. “[Now] a lot of AI people are not trained in having this conversation with stakeholders about whether this is ethical or legally compliant.”