Why should I create an AUP for AI?

Generative AI tools like OpenAI’s ChatGPT, a Large Language Model (LLM) chatbot, have gone from nowhere to virtually ubiquitous in seven months. Many other LLMs, such as Google Bard, have followed in ChatGPT’s wake, as have many new and updated products that use the OpenAI API to embed LLM technology. Some are more useful and appropriate than others.

Risks associated with LLMs

There can be no doubt that the metaphorical AI genie is out of the bottle, and even if we wanted to put it back, it wouldn’t be possible. People will use the available technologies to make their jobs easier, but some risks come along with the use of LLMs. We have outlined these risks in more detail in another article, but to summarize, they include the following:

  • Sensitive data exposure - When people enter text into an LLM to generate a response, there is a risk that they will input some Personally Identifiable Information (PII) or other sensitive data. This PII could be retained, viewed, and possibly leaked to others by the LLM system.
  • Intellectual property exposure - In addition to PII input to an LLM, people may enter commercially sensitive or proprietary information. Samsung Semiconductor workers recently did this when they used ChatGPT to help change some source code. They uploaded the non-public code they wanted to amend and private internal information for the LLM to consider when writing the code. Samsung banned the use of LLMs after this incident.
  • Vulnerabilities in generated code - Using LLM-based tools such as GitHub Copilot and ChatGPT to generate code has become common, but the output can be problematic. A study that gave Copilot more than 1,600 code generation tasks found that around 40% of the generated code contained known weaknesses from MITRE’s CWE (Common Weakness Enumeration) list.
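To make the last risk concrete, here is a minimal illustrative sketch of the kind of weakness such studies flag most often, SQL injection (CWE-89). The function names and schema are our own invented examples, not code from the study; they simply contrast a string-built query, which assistants frequently emit, with the parameterized alternative.

```python
import sqlite3

# Vulnerable pattern (CWE-89): the user input is pasted straight into
# the SQL string, so a crafted value can rewrite the query.
def find_user_unsafe(conn, username):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# Safer pattern: a parameterized query lets the driver handle escaping.
def find_user_safe(conn, username):
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

# A classic injection payload: the unsafe version leaks every row,
# while the safe version treats the payload as a literal name.
payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))
print(len(find_user_safe(conn, payload)))
```

Code like the unsafe version often passes a quick review because it works for normal inputs, which is exactly why AI-generated code needs the same security scrutiny as human-written code.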

LLMs will have a place in business from now on. The first half of 2023 has shown an appetite for these tools. Leadership teams, specifically legal leadership, will need to determine where and how staff can use the tools.

Acceptable Use Policies

The rapid spread of LLM systems will introduce new risks to organizations, and those risks need to be considered and managed. There are technical steps that organizations can take, but a significant part of managing the risk will be training users on what data they should not input into LLMs, together with the creation and enforcement of an acceptable use policy (AUP).
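As one example of such a technical step, a simple pre-filter can screen prompts for obvious PII shapes before they reach an LLM. This is an illustrative sketch only: the pattern names, regexes, and `check_prompt` function are our own assumptions, and a real deployment would use a proper data-loss-prevention or PII-detection service rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for a few common PII shapes (illustrative only;
# real PII detection needs far more than simple regexes).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def pii_findings(prompt: str) -> list[str]:
    """Return the names of the PII categories detected in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

def check_prompt(prompt: str) -> str:
    """Block a prompt if any PII pattern matches, otherwise allow it."""
    findings = pii_findings(prompt)
    if findings:
        return f"BLOCKED ({', '.join(findings)})"
    return "ALLOWED"

print(check_prompt("Summarize this meeting about Q3 planning."))
print(check_prompt("Email jane.doe@example.com her SSN 123-45-6789."))
```

A filter like this catches only accidental, well-formed PII; it is a complement to an AUP and user training, not a substitute for them.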

Many organizations have AUPs covering other aspects of IT, such as laptop, smartphone, and internet use policies. These are in place to protect both the staff and the organization. Creating a new AUP to cover AI-based tools like ChatGPT and others is highly recommended and, at some point, will be essential.

What Should an AUP for AI Systems Include?

If you have existing AUPs covering IT, what you need to include in an AUP covering AI will likely be similar. If you don’t, you’ll need to create a new one from scratch to cover the data and operations that are unique to your organization.

When considering what to include, we can build on the public work of others who have thought about this area. One organization that has done so is the law firm Perkins Coie, one of the largest international firms, operating across the USA and the Asia-Pacific region. In June 2023, they published an article titled Ten Considerations for Developing an Effective Generative AI Use Policy (see ref 1 below).

The Perkins Coie article is a good starting point when considering an AI AUP. We won’t replicate the information and advice it contains here, as you can read the original via ref 1. If you have any questions about an AUP for AI, or any other aspect of IT that might impact your security, then reach out to us, and we’ll be happy to arrange a chat with our expert team, who can advise you on what should be covered.


Ref 1: Perkins Coie: Ten Considerations for Developing an Effective Generative AI Use Policy -