This story first appeared in Hot Pod Insider, The Verge’s newsletter about podcasting and the audio industry.
iHeartMedia tells employees to steer clear of ChatGPT
iHeartMedia is joining companies like Apple, Spotify, and Verizon in restricting employee use of OpenAI’s ChatGPT, including barring the chatbot from company devices. According to an internal memo obtained by Hot Pod, iHeartMedia CEO Bob Pittman and CFO Rich Bressler emailed employees yesterday instructing them not to use ChatGPT, citing the risk of leaking iHeartMedia’s proprietary information. The news was first reported by RBR.
The executives wrote that, “as tempting as it is,” iHeartMedia employees are not to use ChatGPT or similar AI tools on company devices, for company work, or with any company documents. The policy is meant to protect iHeart’s intellectual property and other confidential information, as well as that of its partners.
The chief concern is that ChatGPT could expose proprietary information valuable to iHeartMedia’s competitors. While OpenAI’s chatbot was primarily trained on a dataset (including websites, Wikipedia articles, and other online records) that cuts off in 2021, the company also stores users’ conversations by default and uses them to train its AI systems.
Despite the ChatGPT ban, iHeartMedia has strongly embraced AI in the past: the broadcaster previously rolled out AI DJs, a move that led to layoffs of human staff. It is now building its own set of AI tools to be deployed across the business.
“Although AI, including ChatGPT and other ‘conversational’ AIs, can be enormously helpful and truly transformative, we want to be smart about how we implement these tools to protect ourselves, our partners, our company’s information and our user data. For example, if you’re uploading iHeart information to an AI platform (like ChatGPT), it will effectively train that AI so that anyone — even our competitors — can use it, including all our competitive, proprietary information,” wrote the two leaders in the memo.
The company plans to roll out a set of “iHeart-specific” AI tools designed for internal use rather than the general market. Those tools will include safeguards to keep iHeart’s confidential information and IP from leaking to the public, the memo said. Until they arrive, employees who want to use ChatGPT or another third-party AI tool must go through several approvals, including sign-off from iHeart’s legal and IT teams.
“In the meantime, to ensure our security around AI, and to ensure that we don’t harm our brands or our customers’ brands, or inadvertently disclose sensitive data, no engagement, development or specific project work which involves ChatGPT or other AI technology is permitted without explicit direction from your team lead. All projects will require an assessment of the business impact and value of the project, a plan for monitoring and evaluating and a prior documented approval from Legal and IT,” wrote Pittman and Bressler.
iHeartMedia joins a growing list of companies that are restricting or banning employee use of ChatGPT and other generative AI tools, largely over concerns that the chatbot could leak confidential information. Samsung, Apple, and Verizon have all blocked the site or made it inaccessible from corporate systems. Like iHeart, Apple is working on its own AI tools and is wary of proprietary data landing in the hands of competitors.
OpenAI has been working to address these concerns. ChatGPT users can now turn off chat history so their conversations aren’t used for training, and OpenAI is building business-focused tools that won’t train on customer data by default.