AI Desperately Needs Global Oversight

Every time you post a photo, respond on social media, make a website, or possibly even send an email, your data is scraped, stored, and used to train generative AI technology that can create text, audio, video, and images with just a few words. This has real consequences: OpenAI researchers studying the labor market impact of their language models estimated that approximately 80 percent of the US workforce could have at least 10 percent of their work tasks affected by the introduction of large language models (LLMs) like ChatGPT, while around 19 percent of workers may see at least half of their tasks impacted. We’re seeing an immediate labor market shift with image generation, too. In other words, the data you created may be putting you out of a job.

When a company builds its technology on a public resource (the internet), it's reasonable to argue that the technology should be available and open to all. But critics have noted that GPT-4 lacked any clear information or specifications that would enable anyone outside the organization to replicate, test, or verify any aspect of the model. Some of these companies have received vast sums of funding from other major corporations to build commercial products. For some in the AI community, this is a dangerous sign that these companies are going to seek profits above public benefit.

Code transparency alone is unlikely to ensure that these generative AI models serve the public good. There is little conceivable immediate benefit to a journalist, policy analyst, or accountant (all “high exposure” professions according to the OpenAI study) if the data underpinning an LLM is available. We increasingly have laws, like the Digital Services Act, that would require some of these companies to open their code and data for expert auditor review. And open source code can sometimes enable malicious actors, allowing hackers to subvert safety precautions that companies are building in. Transparency is a laudable objective, but on its own it won’t ensure that generative AI is used to better society.

In order to truly create public benefit, we need mechanisms of accountability. The world needs a generative AI global governance body to address these social, economic, and political disruptions beyond what any individual government is capable of, what any academic or civil society group can implement, or what any corporation is willing or able to do. There is already precedent for global cooperation by companies and countries to hold themselves accountable for technological outcomes. We have examples of independent, well-funded expert groups and organizations that can make decisions on behalf of the public good. An entity like this would be tasked with thinking of the benefits to humanity. Let’s build on these ideas to tackle the fundamental issues that generative AI is already surfacing.

In the nuclear proliferation era after World War II, for example, there was a credible and significant fear of nuclear technologies gone rogue. The widespread belief that society had to act collectively to avoid global disaster echoes many of the discussions today around generative AI models. In response, countries around the world, led by the US and under the guidance of the United Nations, convened to form the International Atomic Energy Agency (IAEA), an independent body free of government and corporate affiliation that would provide solutions to the far-reaching ramifications and seemingly infinite capabilities of nuclear technologies. It operates in three main areas: nuclear energy, nuclear safety and security, and safeguards. For instance, after the Fukushima disaster in 2011 it provided critical resources, education, testing, and impact reports, and helped to ensure ongoing nuclear safety. However, the agency is limited: It relies on member states to voluntarily comply with its standards and guidelines, and on their cooperation and assistance to carry out its mission.

In tech, Facebook’s Oversight Board is one working attempt at balancing transparency with accountability. The Board members are an interdisciplinary global group, and their judgments, such as overturning a decision made by Facebook to remove a post that depicted sexual harassment in India, are binding. This model isn’t perfect either; there are accusations of corporate capture, as the board is funded solely by Meta, can only hear cases that Facebook itself refers, and is limited to content takedowns, rather than addressing more systemic issues such as algorithms or moderation policies.

However flawed, both of these examples provide a starting point for what an AI global governance body might look like. An organization like this should be a consolidated, ongoing effort with expert advisers and collaborations, like the IAEA, rather than a secondary project for people with other full-time jobs. Like the Facebook Oversight Board, it should receive advisory input and guidance from industry, but have the capacity to make independent, binding decisions that companies must comply with.

This generative AI global governance body should be funded via unrestricted funds (in other words, no strings attached) by all of the companies engaged in at-scale generation and use of generative AI of any form. It should cover all aspects of generative AI models, including their development, deployment, and use as it relates to the public good. It should build upon tangible recommendations from civil society and academic organizations, and have the authority to enforce its decisions, including the power to require changes in the design or use of generative AI models, or even to halt their use altogether if necessary. Finally, this group should address reparations for the sweeping changes that may come: job loss, a rise in misinformation, and threats to free and fair elections among them. This is not a group for research alone; this is a group for action.

Today, we have to rely on companies to do the right thing, but aligning the greater good with stakeholder incentives has proven insufficient. With this structure, the AI oversight group would be positioned to take action as corporations can, but with the purpose of public good. Here’s one example of how it could work. First, through secure data-sharing, it could carry out research currently conducted by these companies. The OpenAI economic harms paper, while admirable, should be the remit of an impartial third party rather than a corporation. Second, this group’s job is not just to identify problems, but to experiment with novel ways to fix them. Using the “tax” that corporations pay to join, it might set up an education or living-support fund that displaced workers could apply to in order to supplement unemployment benefits; a universal basic income keyed to income levels, regardless of employment status; or a payout proportional to the data that can be attributed to you as a contributing member of digital society. Finally, based on collaboration with civil society, governments, and the companies themselves, it would be empowered to take action, perhaps requiring companies to slow down implementation in particularly high-impact industries and to support job transition programs.

The issues that generative AI developments raise are difficult to grapple with meaningfully, and as a society we currently lack the means to address them at the speed and scale at which new technology is being thrust upon us. Generative AI companies have a responsibility to entrust an independent body that speaks on behalf of the world with critical decisions on governance and impact.


