Executive Order on A.I. Tries to Balance Technology’s Potential and Peril

The Biden administration, like other governments, has been under pressure to do something about the technology since late last year, when ChatGPT and other generative A.I. apps burst into public consciousness. A.I. companies have been sending executives to testify in front of Congress and briefing lawmakers on the technology’s promise and pitfalls, while activist groups have urged the federal government to crack down on A.I.’s dangerous uses, such as making new cyberweapons and creating misleading deepfakes.

In addition, a cultural battle has broken out in Silicon Valley, as some researchers and experts urge the A.I. industry to slow down, and others push for its full-throttle acceleration.

President Biden’s executive order tries to chart a middle path — allowing A.I. development to continue largely undisturbed while putting some modest rules in place, and signaling that the federal government intends to keep a close eye on the A.I. industry in the coming years. In that respect it contrasts with social media, a technology that was allowed to grow unimpeded for more than a decade before regulators showed any interest in it; the Biden administration has no intention of letting A.I. fly under the radar in the same way.

The full executive order, which is more than 100 pages, appears to have a little something in it for almost everyone.

The most worried A.I. safety advocates — like those who signed an open letter this year claiming that A.I. poses a “risk of extinction” akin to pandemics and nuclear weapons — will be happy that the order imposes new requirements on the companies that build powerful A.I. systems.