Marc Andreessen occasionally sets the world on its ear with a sweeping hypothesis about the dawn of a new technological era. In his legendary 2011 blog post “Why Software Is Eating the World,” the cofounder of Andreessen Horowitz made the then-novel, now-undeniable case that even the most old-school industrial companies would soon have to put software at their core. In 2020, as Covid-19 caught the world desperately short of masks and nasal swabs, he published “It’s Time To Build,” a call to arms for reviving investment in technologies that could solve urgent problems like pandemics, climate change, crumbling infrastructure, and housing shortages.
Now he’s back with a 7,000-word screed, another stab at framing the narrative; this time, the story is that “AI will not destroy the world, and in fact may save it.” Much of it is devoted to debunking AI doom scenarios, and the rest to touting AI as little short of a civilizational savior.
This is of course predictable. Andreessen invests in technological revolutions, so he has little incentive to do anything but hype them up. His post does have value, though, in two ways. First, its obvious blind spots are a useful guide to the thinking of the biggest AI hypesters and where they go astray. Second, its takedown of some of the more hysterical AI fears is actually (somewhat) on target.
So let’s dive in.
Andreessen tips his hand early by offering “a brief description of AI”: “The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it” (my emphasis).
This seemingly innocuous parallel with human thinking, much like the phrase “artificial intelligence” itself, elides the vast gulf in capability between human minds and the current state of machine learning. Large language models (LLMs) are statistical inference algorithms. They predict the next likeliest thing in a sequence of things, such as words in a sentence. They produce what looks very much like human writing because they’ve been trained on vast quantities of human writing to predict what a human would write.
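For readers curious what "predicting the next likeliest thing in a sequence" means mechanically, the idea can be sketched with a toy bigram model. This is a deliberate simplification: real LLMs use neural networks with billions of parameters over enormous vocabularies, not raw word counts, but the underlying objective is the same.

```python
# Toy illustration of next-token prediction: count which word most
# often follows each word in a tiny corpus, then predict accordingly.
# (Real LLMs learn these statistics with neural networks, not counts.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Tally how often each word is followed by each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically likeliest word to follow `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" — it follows "sat" in both sentences
```

The model has no idea what a cat or a rug is; it only knows which strings tend to follow which, which is the gulf the paragraph above describes.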
You’ll have already noticed that this is not even remotely similar to how you “understand, synthesize, and generate knowledge.” You, like every human, have learned about the world by directly interacting with it. You’ve developed conceptions of physical objects such as trees and tables, of abstractions such as poverty and ethics, and of other people’s thoughts and feelings. You’ve learned to use language to talk about and process those conceptions, but language is just a layer for you, a way to share and refine your mental picture of the world. For LLMs, there is no mental picture; language is all there is.
To be sure, LLMs have made surprising leaps in ability recently, leading Microsoft researchers to claim that GPT-4, the latest model from OpenAI, contains “sparks” of general intelligence. And LLMs are not the only avenue of AI research. It can’t be ruled out that machines will eventually develop something more like our intelligence—though there are also good reasons to think it will end up being more alien than human.