OpenAI loses its trust and safety leader

OpenAI’s head of trust and safety, Dave Willner, is stepping down from the company as the AI industry faces growing scrutiny from policymakers over privacy and security.

Willner has been working in trust and safety for nearly a decade. Before joining OpenAI, Willner was head of trust and safety at childcare startup Otter and led trust and community policy at Airbnb.

The FTC launched an investigation into OpenAI last week to determine if its training data collection methods violated consumer rights. The Securities and Exchange Commission also expressed concern that large AI models could centralize data and decision-making in finance, leading to a sort of groupthink akin to the events preceding the 2008 financial crisis. OpenAI, Meta, Microsoft, and other big names in the AI space signed a commitment with the White House this week to invest in security and discrimination research and a new watermarking system to identify AI-generated content.

OpenAI CEO Sam Altman has been vocal about AI regulations, calling on Congress to enact policies and, as reported by Bloomberg, backing an initiative to require licenses to develop powerful AI models. 

Willner said in a LinkedIn post that, after attending the most recent TrustCon, a conference for trust and safety professionals, he decided to prioritize watching his kids grow up while working with smaller companies.

“That moment of clarity allowed me to settle on a decision which probably feels counterintuitive to a lot of folks but feels incredibly right for me,” Willner said. “It ended up being a pretty easy choice to make, though not one that folks in my position often make so explicitly in public. I hope this post can serve to help normalize it.”
