ChatGPT Plugins Pose Security Risks

“While ChatGPT plugins are developed externally to OpenAI, we aim to provide a library of third-party plugins that our users can trust,” Felix says, adding that the company is “exploring” ways to make plugins safer for the people using them. “For example, making it easier to provide a user confirmation flow if they intend for their plugin to take a significant action.” OpenAI has removed at least one plugin—which created entries on a developer’s GitHub page without asking users’ permission—for breaching its policy of requiring confirmation before taking action.

Unlike Apple’s and Google’s app stores, ChatGPT’s plugin library currently doesn’t appear to list the developers behind a plugin or provide any information about how they may use the data the plugin collects. Developers creating plugins, according to OpenAI’s guidance, must follow its content guidelines and provide a manifest file, which includes contact information for the plugin’s creators, among other details. When searching for and turning on a plugin in ChatGPT, only its name, a short description, and logo are shown. (An unaffiliated third-party website shows more information.)
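For context, OpenAI’s developer guidance describes this manifest as an `ai-plugin.json` file. The sketch below shows roughly what such a file looks like; the plugin name, URLs, and contact address are placeholders, not a real plugin:

```json
{
  "schema_version": "v1",
  "name_for_human": "Example Plugin",
  "name_for_model": "example_plugin",
  "description_for_human": "A hypothetical plugin used here for illustration.",
  "description_for_model": "Illustrative description the model reads to decide when to call the plugin.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Notably, the `contact_email` and `legal_info_url` fields exist in the manifest that developers submit, but, as described above, that information is not surfaced to users browsing the plugin library.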

When OpenAI launched plugins in March, researchers warned of potential security risks and the implications of connecting GPT-4 to the web. However, the issues with plugins aren’t confined to OpenAI and ChatGPT. Similar risks apply to any LLM or generative AI system connected to the web. It’s possible that plugins will play a big role in the way people use LLMs in the future. Microsoft, which has heavily invested in OpenAI, has said it will use the same standards for plugin creation as ChatGPT. “I think there’s going to eventually be an incredibly rich ecosystem of plugins,” Microsoft’s chief technology officer Kevin Scott said in May.

Chang Kawaguchi, vice president of AI security at Microsoft, says the firm is taking an “iterative” approach to launching support for plugins in its AI Copilot assistant tool. “We’ll extend our existing processes for publishing, validating, certifying, deploying, and managing product integrations to plugins, to ensure that customers of Microsoft Copilots have full control of their plugins, the data they can access, and the people authorized to deploy them,” Kawaguchi says, adding that the company will document security guidelines and work with external researchers on problems they find.

Many of the issues around plugins—and LLMs more widely—come down to trust. This includes whether people can trust these systems with their private and corporate data, and whether controls and measures are in place to make sure what is handed over can’t be improperly used or accessed.

“You're potentially giving it the keys to the kingdom—access to your databases and other systems,” says Steve Wilson, chief product officer at Contrast Security and the lead of a project detailing security risks with LLMs. Around 450 security and AI experts have come together to create a list of the 10 top security threats around LLMs as part of the Open Worldwide Application Security Project (OWASP), according to Wilson, the project’s coordinator.
