Severe ChatGPT Plug-in Security Flaws Leak Private Information

The vulnerabilities in ChatGPT plug-ins, which have since been patched, heightened the risk of account takeover and the theft of confidential information.

Three security flaws discovered in the plug-in functionality used by ChatGPT could have given unauthorized parties zero-click access to users’ accounts and services, including sensitive repositories on platforms such as GitHub.

OpenAI’s popular generative AI chatbot can interact with external services through ChatGPT plug-ins and developer-published custom versions of ChatGPT, known as GPTs. These add-ons are granted access and permissions to execute tasks on third-party websites, such as GitHub and Google Drive.

The first of three serious vulnerabilities discovered by researchers at Salt Labs lies in the plug-in installation flow: when a user installs a new plug-in, ChatGPT redirects them to the plug-in’s website to approve a code. Attackers could abuse this step to trick users into approving a malicious code instead, which would install an unwanted plug-in on the victim’s account automatically and potentially compromise other connected accounts.
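Salt has not published the vulnerable client code, but the weakness it describes resembles a classic OAuth code-injection problem: if the client never binds the approval code to the session that initiated the installation, an attacker can hand a victim a link that completes the flow with an attacker-controlled code. Below is a minimal sketch of the standard mitigation in Python; the function names and the plugin.example.com endpoint are illustrative assumptions, not OpenAI’s actual implementation:

```python
import secrets

# Hypothetical in-memory session; a real client would use the user's
# server-side session store.
SESSION = {}

def begin_install_flow() -> str:
    """Mint a one-time state token, bind it to the current session, and
    build the URL the user is sent to for plug-in approval."""
    state = secrets.token_urlsafe(32)
    SESSION["oauth_state"] = state
    return f"https://plugin.example.com/authorize?state={state}"

def finish_install_flow(returned_state: str, approval_code: str) -> str:
    """Accept an approval code only if the returned state matches the one
    minted for this session, which blocks injected attacker codes."""
    expected = SESSION.pop("oauth_state", None)
    if expected is None or not secrets.compare_digest(expected, returned_state):
        raise PermissionError("state mismatch: rejecting approval code")
    return approval_code  # only now is it safe to exchange for a token
```

Because the state token is single-use and tied to the victim’s own session, a code minted in an attacker’s flow fails the comparison and is discarded.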

Second, PluginLab, a platform for developing plug-ins, does not adequately authenticate users, allowing attackers to impersonate a victim and carry out an account takeover. The researchers demonstrated the flaw with “AskTheCode,” a plug-in that links ChatGPT and GitHub.
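In essence, the report suggests the PluginLab flow would hand out authentication artifacts without verifying that the requester actually owned the account involved. A minimal sketch of the missing server-side check follows; issue_auth_code and its parameters are assumptions for illustration, not PluginLab’s API:

```python
import secrets

def issue_auth_code(authenticated_user_id: str, requested_member_id: str) -> str:
    """Issue an authorization code only for the verified caller: the
    subject comes from the authenticated session, never from a
    client-supplied identifier."""
    if authenticated_user_id != requested_member_id:
        raise PermissionError("cannot request a code for another member")
    # A real service would mint a short-lived, signed code bound to
    # authenticated_user_id; a random token stands in for that here.
    return secrets.token_urlsafe(16)
```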

Finally, the Salt researchers discovered that certain plug-ins could be manipulated into redirecting OAuth requests, letting attackers inject malicious URLs, steal user credentials, and take over accounts.
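The standard defense against this class of redirect manipulation is to validate every OAuth redirect target against an exact allowlist registered ahead of time, rather than reflecting whatever URL arrives with the request. A minimal sketch, again with hypothetical names and endpoints:

```python
from urllib.parse import urlsplit

# Hypothetical allowlist, registered when the plug-in is created.
REGISTERED_REDIRECTS = {
    ("https", "plugin.example.com", "/oauth/callback"),
}

def is_safe_redirect(candidate: str) -> bool:
    """Accept a redirect target only on an exact (scheme, host, path)
    match; substring and prefix checks are famously bypassable."""
    parts = urlsplit(candidate)
    return (parts.scheme, parts.hostname, parts.path) in REGISTERED_REDIRECTS

# is_safe_redirect("https://plugin.example.com/oauth/callback")  -> True
# is_safe_redirect("https://evil.example.net/oauth/callback")    -> False
```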

According to Salt, the issues have been resolved, and there is no evidence that the vulnerabilities were ever exploited. Users are still advised to update their applications promptly.

GenAI Safety Concerns Endanger a Huge Ecosystem
The vulnerabilities the research team uncovered could put hundreds of thousands of individuals and organizations at risk, says Yaniv Balmas, vice president of research at Salt Security.

“Security leaders at any organization must better understand the risk, so they should review what plug-ins and GPTs their company is using and what third-party accounts are exposed through those plug-ins and GPTs,” he says. “As a starting point, we would suggest making a security review of their code.”

Balmas says that developers of GPTs and plug-ins should educate themselves on the inner workings of the GenAI ecosystem, including what security mechanisms are in place, how to use them, and how they can be misused. That includes details such as the data being transferred to GenAI and the permissions granted to the GenAI platform or to any third-party plug-ins, such as those for GitHub or Google Drive.

The findings point to a broader risk that applies to other GenAI platforms and to many current and future GenAI plug-ins, Balmas says, noting that the Salt research team evaluated only a small slice of this ecosystem.

To further mitigate these dangers, Balmas argues that OpenAI’s developer documentation should place a greater focus on security.

Security Concerns with GenAI Plug-ins Are Anticipated to Grow
The Salt Labs findings point to a broader security risk associated with GenAI plug-ins, says Sarah Jones, cyber threat intelligence research analyst at Critical Start.

“As GenAI becomes more integrated with workflows, vulnerabilities in plug-ins could provide attackers with access to sensitive data or functionalities within various platforms,” she says.

With GenAI platforms and their plug-in ecosystems becoming targets for attackers, stringent security requirements and frequent audits are necessary, she adds.

These vulnerabilities should push enterprises to strengthen their defenses, says Darren Guccione, CEO and co-founder of Keeper Security, who calls them a “stark reminder” of the inherent security risks of third-party applications.

“As organizations rush to leverage AI to gain a competitive edge and enhance operational efficiency, the pressure to quickly implement these solutions should not take precedence over security evaluations and employee training,” he says.

Software supply chain security has become a greater concern with the rise of AI-enabled applications, he adds, and businesses have had to adjust their data governance policies and security measures to address these new threats.

He notes that employees are increasingly entering sensitive information into AI tools, including intellectual property, financial data, and business strategies, and that this data could be devastating to a company if it fell into the hands of a malicious actor.

“An account takeover attack jeopardizing an employee’s GitHub account, or other sensitive accounts, could have equally damaging impacts,” he warns.
