
Security Awareness – Potential Risks Posed by AI/LLM Plugins and Integrations for Data Leakage or Account Takeover

Created: Thursday, March 14, 2024 - 10:37
Categories:
Cybersecurity, Security Preparedness

It’s difficult to keep track of the AI tools and LLMs that employees may be using (sanctioned or shadow AI) or that leadership may be pressing to adopt, and broadly discouraging AI use can push users toward unsanctioned “shadow AI” tools with unknown consequences. With the explosion of AI and the integration of AI functions, such as ChatGPT or other LLMs connected to collaboration platforms through external plugins, APIs, or extensions, it’s prudent to keep abreast of potential risks and vulnerabilities to organizational systems and of ways to reduce them.

A recent post at DarkReading discusses vulnerabilities found in ChatGPT plug-ins that heighten the risk of proprietary information being stolen and of account takeover attacks. While the researchers note the issues have been fixed and there was no evidence of exploitation, there are practical risks to be aware of.

Specifically, according to the post, Salt Labs researchers uncovered three critical vulnerabilities affecting ChatGPT. The first occurs during the installation of new plug-ins, when ChatGPT redirects users to plug-in websites for code approval. By exploiting this flow, attackers could trick users into approving malicious code, leading to automatic installation of unauthorized plug-ins and potential follow-on account compromise. For more, visit DarkReading and Salt Labs.
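The classic defense against this kind of approval-flow abuse is to bind each approval code to the user and session that initiated the installation, typically via an unguessable OAuth “state” value. The sketch below is illustrative only: it assumes a generic OAuth-style plugin install flow, and the Flask endpoints and session handling are hypothetical, not ChatGPT’s actual implementation.

    # Hypothetical sketch: binding an OAuth-style plugin approval to the
    # session that initiated it, so an attacker cannot substitute their
    # own approval code.
    import secrets
    from flask import Flask, request, session, abort

    app = Flask(__name__)
    app.secret_key = secrets.token_bytes(32)  # per-deployment secret

    @app.route("/plugin/install/start")
    def start_install():
        # Issue a single-use, unguessable state token tied to this
        # user's session before redirecting to the plugin site.
        state = secrets.token_urlsafe(32)
        session["oauth_state"] = state
        return f"Redirecting to plugin site with state={state}"

    @app.route("/plugin/install/callback")
    def finish_install():
        # Reject any approval code that does not carry the state token
        # we issued -- this stops a forged "approve my code" redirect.
        expected = session.pop("oauth_state", None)
        supplied = request.args.get("state", "")
        if expected is None or not secrets.compare_digest(expected, supplied):
            abort(403)
        code = request.args.get("code", "")
        # ... exchange `code` for tokens only after the state check passes
        return "Plugin approved for this session only"

The state check is what prevents an attacker from injecting their own approval code into a victim’s session; without it, the victim silently installs a plug-in under the attacker’s control.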

When it comes to AI/LLM integrations, consider the following methods to reduce the risk (excerpted from HelpNetSecurity):

  • If the LLM is allowed to call external APIs, request user confirmation before executing potentially destructive actions (see the sketch after this list).
  • Review LLM outputs before disparate systems are interconnected. Check them for potential vulnerabilities that could lead to risks like remote code execution (RCE).
  • Pay particular attention to scenarios in which these outputs facilitate interactions between different computer systems.
  • Implement robust security measures for all APIs involved in the interconnected system.
  • Use strong authentication and authorization protocols to protect against unauthorized access and data breaches.
  • Monitor API activity for anomalies and signs of suspicious behavior, such as unusual request patterns or attempts to exploit vulnerabilities.
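As an illustration of the first two items and the monitoring point, here is a minimal, hedged sketch. The tool names, ALLOWED_ACTIONS set, and confirm() prompt are assumptions for the example, not any vendor’s actual API: the model’s output is parsed as data rather than executed, checked against an allowlist, gated behind user confirmation when destructive, and logged for later anomaly review.

    # Hypothetical sketch of gating LLM-initiated API calls: parse (never
    # execute) model output, allowlist actions, require user confirmation
    # for destructive ones, and keep an audit trail for anomaly review.
    import json
    import logging

    logging.basicConfig(level=logging.INFO)
    audit = logging.getLogger("llm.audit")

    ALLOWED_ACTIONS = {"list_files", "read_file", "delete_file"}
    DESTRUCTIVE_ACTIONS = {"delete_file"}

    def confirm(prompt: str) -> bool:
        """Ask the human operator before any destructive action runs."""
        return input(f"{prompt} [y/N] ").strip().lower() == "y"

    def handle_llm_request(raw_output: str) -> str:
        # Treat model output as untrusted data: parse it, never eval()
        # it or pass it to a shell, and validate the action name first.
        try:
            call = json.loads(raw_output)
            action, args = call["action"], call.get("args", {})
        except (ValueError, KeyError, TypeError):
            audit.warning("rejected malformed LLM output: %r", raw_output[:200])
            return "error: malformed tool call"

        if not isinstance(action, str) or action not in ALLOWED_ACTIONS:
            audit.warning("rejected disallowed action %r", action)
            return "error: action not permitted"

        if action in DESTRUCTIVE_ACTIONS and not confirm(
            f"LLM wants to run {action} with {args!r}. Allow?"
        ):
            audit.info("user declined destructive action %r", action)
            return "declined by user"

        audit.info("executing %r with %r", action, args)  # anomaly-review trail
        return f"ok: would dispatch {action}"  # real dispatch goes here

The key design choice is that the model’s output never reaches an interpreter or shell directly; anything outside the allowlist is refused, and the audit log provides the request-pattern visibility the last item calls for.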

Additionally, members may wish to have the OWASP Top 10 for LLM Applications Cybersecurity and Governance Checklist on hand, regardless of current policies governing AI/LLM usage.

“The OWASP Top 10 for LLM Applications Cybersecurity and Governance Checklist is for leaders across executive, tech, cybersecurity, privacy, compliance, and legal areas, DevSecOps, MLSecOps, and Cybersecurity teams and defenders. It is intended for people who are striving to stay ahead in the fast-moving AI world, aiming not just to leverage AI for corporate success but also to protect against the risks of hasty or insecure AI implementations.

This checklist is intended to help these technology and business leaders quickly understand the risks and benefits of using LLM, allowing them to focus on developing a comprehensive list of critical areas and tasks needed to defend and protect the organization as they develop a Large Language Model strategy.”

Additional Resources