The White House Already Knows How to Make AI Safer

Ever since the White House released the Blueprint for an AI Bill of Rights last fall (a document that I helped develop during my time at the Office of Science and Technology Policy), there’s been a steady drip of announcements from the executive branch, including requests for information, strategic plan drafts, and regulatory guidance. The latest entry in this policy pageant, announced last week, is that the White House got the CEOs of the most prominent AI-focused companies to voluntarily commit to being a little more careful about checking the systems they roll out.

There are a few sound practices within these commitments: We should carefully test AI systems for potential harms before deploying them; the results should be evaluated independently; and companies should focus on designing AI systems that are safe to begin with, rather than bolting safety features on after the fact. The problem is that these commitments are vague and voluntary. “Don’t be evil,” anyone?

Legislation is needed to ensure that private companies live up to their commitments. But we should not forget the federal government's outsize market influence on AI practices. As a large employer and user of AI technology, a major customer for AI systems, a regulator, and a source of funding for many state-level actions, the federal government can make a real difference by changing how it acts, even in the absence of legislation.

If the government actually wants to make AI safer, it must issue the executive order promised at last week’s meeting, alongside specific guidance that the Office of Management and Budget—the most powerful office you’ve never heard of—will give to agencies. We don’t need innumerable hearings, forums, requests for information, or task forces to figure out what this executive order should say. Between the Blueprint and the AI risk management framework developed by the National Institute of Standards and Technology (NIST), we already have a road map for how the government should oversee the deployment of AI systems in order to maximize their ability to help people and minimize the likelihood that they cause harm.

The Blueprint and NIST frameworks are detailed and extensive and together add up to more than 130 pages. They lay out important practices for every stage of the process of developing these systems: how to involve all stakeholders (including the public and its representatives) in the design process; how to evaluate whether the system as designed will serve the needs of all—and whether it should be deployed at all; and how to test and independently evaluate for system safety, effectiveness, and bias mitigation prior to deployment. These frameworks also outline how to continually monitor systems after deployment to ensure that their behavior has not deteriorated. They stipulate that entities using AI systems must offer full disclosure of where they are being used and clear and intelligible explanations of why a system produces a particular prediction, outcome, or recommendation for an individual. The guidelines also describe mechanisms for individuals to appeal and request recourse in a timely manner when systems fail or produce unfavorable outcomes, and what an overarching governance structure for these systems should look like. All of these recommendations are backed by concrete implementation guidelines and reflect over a decade of research and development in responsible AI.

An executive order can enshrine these best practices in at least four ways. First, it could require all government agencies developing, using, or deploying AI systems that affect people’s lives and livelihoods to ensure that these systems comply with best practices. For example, the federal government might make use of AI to determine eligibility for public benefits and identify irregularities that might trigger an investigation. A recent study showed that IRS auditing algorithms might be implicated in disproportionately high audit rates for Black taxpayers. If the IRS were required to comply with these guidelines, it would have to address this issue promptly.
