Meta to Require Political Advertisers to Disclose Use of A.I.
Meta is reckoning with a wave of A.I. tools that the public has embraced over the past year. As consumers have flocked to ChatGPT, Google Bard, Midjourney and other “generative A.I.” products, big tech companies such as Meta have had to rethink how to handle a new era of manipulated or outright false imagery, video and audio.
Political advertising has long been a contentious issue for Meta. In 2016, Facebook was criticized for a lack of oversight after Russians used the social network’s ads to sow discontent among Americans. Since then, Mark Zuckerberg, Meta’s founder and chief executive, has spent billions of dollars working to tamp down disinformation and misinformation on the company’s platforms and has hired independent contractors to closely monitor political ads that go through the system.
At the same time, the company has allowed politicians to make false claims in ads on the platform, a practice Mr. Zuckerberg has defended on the grounds of free speech and public discourse. Meta has also been reluctant to limit the speech of elected officials. Nick Clegg, Meta’s president of global affairs, has called for regulatory guidance on such issues instead of having tech companies determine the rules.
Those who run political ads on Meta are currently required to complete an authorization process and include a “paid for by” disclaimer on the ads, which are stored in the company’s public Ad Library for seven years so journalists and academics can study them.
When Meta’s new A.I. policy goes into effect next year, political campaigns and marketers will be required to disclose whether they used A.I. tools to alter their ads. If they have and the ad is approved, the company will run it with a notice that it was created with A.I. tools. Meta said it would not require advertisers to disclose alterations that were “inconsequential or immaterial to the claim, assertion or issue raised,” such as photo retouching and image cropping.