Uncovering Critical Security Flaws in ChatGPT Plugins: A Threat to User Accounts?

Salt Security has discovered critical security flaws in ChatGPT plugins that could allow attackers to access users' third-party accounts and sensitive data. The vulnerabilities were found in ChatGPT's plugin functionality, which lets the AI chatbot interact with external services. The flaws introduce a new attack vector: threat actors could take over users' accounts on third-party websites and gain access to personally identifiable information (PII) and other sensitive data stored within third-party applications. The discovery underscores the importance of securing the plugins used by generative AI tools, so that attackers cannot reach critical business assets or execute account takeovers.

In its investigation, the Salt Labs team identified three distinct types of vulnerabilities in the ChatGPT plugin ecosystem: a flaw within ChatGPT itself, a vulnerability in PluginLab (pluginlab.ai), and an OAuth redirection manipulation issue affecting multiple plugins. The findings come weeks after Imperva detailed two cross-site scripting (XSS) vulnerabilities in ChatGPT and several of its third-party plugins that risked leaking user conversations and other account contents.

The vulnerabilities were reported to OpenAI and the plugin developers in July and September 2023, respectively, and have since been resolved. Users are advised to update their apps as soon as possible, and developers are urged to better understand the internals of the generative AI (GenAI) ecosystem to prevent similar vulnerabilities in the future.
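
To make the OAuth redirection class of issue concrete, here is a minimal, illustrative sketch in Python (not Salt Labs' actual proof of concept). It assumes a hypothetical plugin that registers an allow-list of callback hosts; if the authorization endpoint skips this kind of check, an attacker can hand a victim a crafted link whose redirect target points at the attacker's own server, which then receives the authorization code.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of redirect hosts registered by a plugin developer.
ALLOWED_REDIRECT_HOSTS = {"plugin.example.com"}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Return True only if the redirect target is on the registered allow-list.

    An OAuth endpoint that skips this validation can be tricked into sending
    the authorization code (and ultimately the access token) to a URL chosen
    by an attacker, which is the essence of redirect manipulation.
    """
    parsed = urlparse(redirect_uri)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_REDIRECT_HOSTS

# Legitimate callback registered by the plugin: accepted.
print(is_safe_redirect("https://plugin.example.com/oauth/callback"))  # True

# Attacker-crafted link that would deliver the code to the attacker: rejected.
print(is_safe_redirect("https://attacker.example.net/steal?code="))   # False
```

The hostnames and the allow-list above are assumptions for illustration only; the general defense, strictly validating the redirect URI against values registered in advance, is standard OAuth practice rather than a detail disclosed in the Salt Labs report.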
