What is the security risk of AI?

It’s been just over six months since OpenAI’s ChatGPT, a chatbot built on a Large Language Model (LLM), erupted into the public consciousness. Google’s Bard quickly followed, along with many tools built by Microsoft using OpenAI technology and the opening salvo of an ever-growing number of web and mobile applications that incorporate LLM technology into their products. The metaphorical AI genie is out of the bottle, and even if people wanted to put it back, it wouldn’t be possible.

The discussions and pronouncements around ChatGPT and other LLM-based solutions have been loud and, at times, hyperbolic. Debates about whether LLMs are going to destroy the world belong in other forums (spoiler alert: they aren’t), but there are cybersecurity risks associated with the general availability of LLM-based solutions that do need to be considered and addressed.

What Are the Real Risks of LLM AI Systems?

Not all of the risks that flow from the use of LLMs should be dismissed. There are genuine issues that need to be addressed. We discussed some of the emerging threats from ChatGPT and other LLMs in February, while the hype cycle was still in full swing, in an article titled ChatGPT - Potentially Enabling Bad Actors (see reference 1 below for a link to that article). The topics examined in that article are still relevant, but in the time since, as more people have started to use LLM-based tools, additional risks have emerged. We discuss these below.

Personally Identifiable Information Exposure

Most organizations expend considerable effort protecting the data they hold on citizens in their jurisdiction or, if they are a business, on their customers. Having this data exposed via a data breach as part of ransomware or another cyberattack can have serious reputational and financial repercussions. This is especially true since the advent of industry and governmental regulations like the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS), and the California Consumer Privacy Act (CCPA).

Now that publicly available LLMs like ChatGPT are within everyone’s reach, it will be tempting for information workers to use these tools to generate personalized emails for customer queries and other work. Part of this process could involve including Personally Identifiable Information (PII) or other sensitive data in the prompt entered into the LLM to generate the response the worker wants. While many LLM providers say that they do not share entered information with other users of their tools and do not retain it beyond each user session, we all know that mistakes happen and no software is bug-free. There is an unquantifiable risk that restricted or sensitive data entered into an LLM is retained and then stolen by hackers or accidentally exposed in response to someone else’s use of the LLM. On the day this article was written, it was reported that 100,000 ChatGPT accounts had been stolen and sold on the dark web, showing that OpenAI is as susceptible to cyberattacks as any other organization and that the data and accounts it controls can be stolen (ref 2).
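
To make the exposure concrete, here is a minimal sketch in Python (using the publicly documented OpenAI chat completions endpoint as an example; the customer details and prompt are invented for illustration) of how a support worker’s prompt, PII and all, leaves the corporate network as an ordinary HTTPS request to a third-party service, where its logging and retention are governed by the provider rather than the organization.

    # Minimal sketch: a prompt containing customer PII is sent, verbatim, to a
    # hosted LLM API. Once it leaves the network, logging and retention are
    # governed by the provider's policies, not the organization's.
    import os

    import requests

    prompt = (
        "Write a polite reply to Jane Doe (DOB 1984-03-12, account 4532-9911-0042) "
        "explaining why her insurance claim was delayed."  # invented details
    )

    response = requests.post(
        "https://api.openai.com/v1/chat/completions",  # example endpoint
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    print(response.json()["choices"][0]["message"]["content"])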

Leaking of Company Secrets

In addition to the risk to PII, there is also a risk of exposing intellectual property if it is entered into an LLM. Samsung Semiconductor recently fell victim to this when its developers used ChatGPT to help modify source code. As part of this, the developers entered confidential data and the proprietary source code they were amending into the tool (see ref 3 for more). Samsung banned the use of ChatGPT after this incident.

Many other companies have restricted or banned the use of LLMs internally to prevent the kind of leak that happened at Samsung Semiconductor. The companies reported as banning ChatGPT (and presumably other LLMs) include Apple, Deutsche Bank, Northrop Grumman, and Verizon, in addition to Samsung. Others, such as JPMorgan Chase, Accenture, and Amazon, have restricted its use internally (see ref 4 for more).

Italy became the first Western country to block the use of ChatGPT entirely after the Italian data protection authority expressed concerns over privacy and the lack of the age restrictions required by Italian law for accessing similar systems. The ban was lifted after OpenAI made changes to comply (see ref 5).

Generated Code Vulnerabilities

Another issue with using LLMs to help developers create or update source code is that the generated code is often not very good and frequently includes known cybersecurity vulnerabilities. Research by academics from New York University and the University of Calgary found that the GitHub Copilot “AI pair programmer” generated code containing weaknesses from MITRE’s Common Weakness Enumeration (CWE) list in roughly 40% of the 1,689 programs they had Copilot complete (ref 6).

Copilot is an LLM trained on open-source code stored in GitHub. It uses the same underlying OpenAI technology as ChatGPT, so it’s probably safe to assume that ChatGPT would have a vulnerability rate similar to Copilot’s when used to generate code. Other LLMs, such as Google Bard, are also likely to introduce known (and unknown!) cybersecurity vulnerabilities into the code they generate.

It’s true that there are tools available that integrate into the development workflow and programmers’ IDEs to check source code for known and common vulnerabilities. But it’s vital that anyone using an LLM-based code assistant understands that the generated code needs to be checked by both automated tools and expert human developers in code review before it is deployed to production systems.
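
As a hedged illustration of the kind of weakness the researchers describe (the snippets below are written for this article, not taken from the study), an assistant asked to “look up a user by name” will often produce string-built SQL like the first function, which is open to SQL injection (CWE-89). The second function is what an automated scanner or a human reviewer should insist on: a parameterized query.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, name: str):
        # Typical assistant-style output: the user-supplied value is concatenated
        # into the SQL text, so a name like "x' OR '1'='1" returns every row (CWE-89).
        return conn.execute(
            f"SELECT id, name, email FROM users WHERE name = '{name}'"
        ).fetchall()

    def find_user_safe(conn: sqlite3.Connection, name: str):
        # The reviewed version: a parameterized query, so the driver treats the
        # value as data rather than executable SQL.
        return conn.execute(
            "SELECT id, name, email FROM users WHERE name = ?", (name,)
        ).fetchall()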

How to Mitigate the Risks from LLMs

Here are some steps that organizations can take to mitigate the risks that LLMs like ChatGPT introduce. The points below are not a comprehensive strategy to eliminate the risk.

The most draconian step is to block access to ChatGPT and other LLMs from corporate networks and company-supplied devices. This will become increasingly difficult as the major tech companies integrate LLM-based functionality into their core offerings. If you use Microsoft Office or Google Workspace, the tools will be built in, and you won’t be able to block access globally; hopefully, the administration consoles will allow access to be restricted.

Blocking on corporate IT also doesn’t address shadow IT: people using personal smartphones and tablets for work tasks over their own mobile data connections. But shadow IT needs to be addressed for many other reasons, irrespective of the potential for ChatGPT data leakage.

It may be that using an LLM tool is beneficial to an organization and that its use is approved and sanctioned. If so, it’s imperative to research the vendor’s data privacy policies and their history around data security, just as you should for any other vendor or software supplier you work with.

Training is essential in preventing the many cyberattacks that start with a human being tricked or making a mistake. Phishing is the canonical example, but we can now add training and awareness about what data should and should not be entered into an LLM prompt when asking it to generate text.

Anonymize data before it is used with an LLM. If your customer support team uses an LLM to generate customer emails or other responses, they should not include PII in the prompts. Use placeholders and dummy account details, and have a human operator replace them before the email or other response goes to the customer.
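
A minimal sketch of that placeholder approach follows; the regular expressions and field labels are illustrative assumptions, not a complete PII detector, and a real deployment would pair proper detection tooling with human review.

    import re

    # Illustrative patterns only -- not a complete PII detector.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "ACCOUNT_NO": re.compile(r"\b\d{4}-\d{4}-\d{4}\b"),
    }

    def anonymize(text: str):
        """Swap detected PII for placeholders; return the redacted text and a
        mapping so a human operator's tooling can restore the real values."""
        mapping = {}
        for label, pattern in PATTERNS.items():
            for i, value in enumerate(pattern.findall(text), start=1):
                token = f"[{label}_{i}]"
                mapping[token] = value
                text = text.replace(value, token)
        return text, mapping

    def restore(text: str, mapping: dict) -> str:
        """Put the real values back into the LLM's draft before it is sent out."""
        for token, value in mapping.items():
            text = text.replace(token, value)
        return text

    redacted, mapping = anonymize(
        "Customer jane.doe@example.com asked about account 4532-9911-0042."
    )
    # 'redacted' is what goes into the LLM prompt; the draft that comes back
    # still contains the placeholders, which restore() replaces with real values.
    print(redacted)
    print(restore(redacted, mapping))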

Keep data protection policies and advice to all staff up to date. The information worker technology landscape is changing as rapidly as the cybersecurity threat landscape, and no part of the government or business sectors can escape this rapid change. Just as your cybersecurity plans need constant review and updating to reflect current threats, your data protection and use guidelines for staff also need frequent reviews and updates to cover new technologies and services into which people can enter data.


References

  1. Critical Insight: ChatGPT - Potentially Enabling Bad Actors - https://www.criticalinsight.com/resources/news/article/chatgpt-potentially-enabling-bad-actors-2
  2. TechRadar Pro: Over 100,000 ChatGPT accounts stolen and sold on dark web - https://www.techradar.com/pro/over-100000-chatgpt-accounts-stolen-and-sold-on-dark-web
  3. TechRadar Pro: Samsung workers made a major error by using ChatGPT - https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt
  4. HR Brew: These companies have banned or limited ChatGPT at work - https://www.hr-brew.com/stories/2023/05/11/these-companies-have-banned-chatgpt-in-the-office
  5. BBC: ChatGPT accessible again in Italy - https://www.bbc.co.uk/news/technology-65431914
  6. arXiv: Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions - https://arxiv.org/abs/2108.09293