Bill Gates isn’t too scared about AI

The billionaire business magnate and philanthropist made his case in a post published today on his personal blog, GatesNotes. “I want to acknowledge the concerns I hear and read most often, many of which I share, and explain how I think about them,” he writes.

According to Gates, AI is “the most transformative technology any of us will see in our lifetimes.” That puts it above the internet, smartphones, and personal computers, the technology he did more than most to bring into the world. (It also suggests that nothing else to rival it will be invented in the next few decades.)

Gates was one of dozens of high-profile figures to sign a statement put out by the San Francisco–based Center for AI Safety a few weeks ago, which reads, in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But there’s no fearmongering in today’s blog post. In fact, existential risk doesn’t get a look-in. Instead, Gates frames the debate as one pitting “longer-term” against “immediate” risk, and chooses to focus on “the risks that are already present, or soon will be.”

“Gates has been plucking on the same string for quite a while,” says David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute in the UK. Gates was one of several public figures who talked about the existential risk of AI a decade ago, when deep learning first took off, says Leslie: “He used to be more concerned about superintelligence way back when. It seems like that might have been watered down a bit.”

Gates doesn’t dismiss existential risk entirely. He wonders what may happen “when”—not if—“we develop an AI that can learn any subject or task,” often referred to as artificial general intelligence, or AGI.

He writes: “Whether we reach that point in a decade or a century, society will need to reckon with profound questions. What if a super AI establishes its own goals? What if they conflict with humanity’s? Should we even make a super AI at all? But thinking about these longer-term risks should not come at the expense of the more immediate ones.”

Gates has staked out a kind of middle ground between deep-learning pioneer Geoffrey Hinton, who quit Google and went public with his fears about AI in May, and others like Yann LeCun and Joelle Pineau at Meta AI (who think talk of existential risk is “preposterously ridiculous” and “unhinged”) or Meredith Whittaker at Signal (who thinks the fears shared by Hinton and others are “ghost stories”).
