How to Build Your LLM Policy

As the efficacy and sophistication of statistical modeling have evolved into modern machine learning, generative AI has emerged as a major opportunity that’s garnered considerable interest. Employees know what these tools can do and want to take advantage of them.

While seemingly every organization is exploring how to reap the benefits of large language models (LLMs), companies should proactively invest the time to develop a meaningful LLM policy that encourages innovation in a secure setting.

The challenge with these models, as with machine learning in general, is that their use can involve consolidating and centralizing an enormous amount of data from across the business. From a security and compliance perspective, this creates a need to develop robust LLM policies that can mitigate risk without restricting progress.     

Why You Need an LLM Policy 

With the massive adoption and mainstream success of many generative AI tools, every business should be writing or implementing an LLM policy now. Research from Salesforce found that 3 in 5 workers (61%) already use generative AI or plan to do so. Yet the same survey found nearly 3 in 4 respondents (73%) believe generative AI introduces new security risks. Whether or not a company is actively developing an AI strategy, people are already using these tools.

Traditionally, data has been stored across a variety of locations and systems, such as customer relationship management (CRM) software or human resource management systems (HRMS). Businesses often use data lakes, or central repositories for structured and unstructured data, to enable machine learning, analytics and other data processing capabilities. These lakes also create a single entry point for an adversary to gain access to a wide range of your business data, if not all of it.

One of the best ways to keep your business safe is making sure that every business engagement with an LLM considers security, compliance and any contractual obligations or agreements your company needs to follow. Often an effective approach begins by writing open-ended policies that safely enable experimentation to better understand how generative AI can make an impact. Companies in regulated industries such as healthcare and finance will need more narrowly defined policies, while others may be less restrictive to encourage innovation. You can start by focusing on your company’s risk appetite and how you define acceptable AI use.

Consider the Landscape

The modern approach to data lakes is essentially dumping all of your unstructured data into one big bucket and building intelligence on top of that. From there, you can design custom algorithms to gain deeper insights into your customer base, for example, or build dashboards with tools like Amazon QuickSight. 

Generative AI is great at making content like that easily consumable. Analysis of an Excel spreadsheet, for example, could be completed immediately instead of taking hours, and you can generate an in-depth report on many topics within seconds. The benefits need to outweigh the risks, however, and employees need to understand the company’s position on opportunities and limitations.

Develop Your Policy 

A well-written LLM policy focuses on enabling innovation while balancing your desire to protect your data and other important considerations, such as fulfilling contractual obligations, maintaining client trust and supporting internal initiatives like carbon neutrality or diversity, equity and inclusion. For example, a business may want to consider minimizing the computational resources required by LLMs or optimizing energy consumption to avoid contradicting sustainability efforts. Similarly, the business might implement additional safeguards to ensure hiring decisions aren’t based on biased information. Remember that the goal of your policy is to mitigate risk without restricting progress or innovation.

Typically, it’s best to avoid adding unnecessary complications and red tape, and instead prioritize what’s really necessary to keep your business safe. This can allow you to start small and refine your policy over time. As we developed our LLM policy at Mission Cloud, we found the best approach was to focus on three core areas: data protection, human review and auditable processes.

Data Protection

Data protection is usually at the heart of an LLM policy, including how your business shares data with AI and how models consume the data. There are certain legal and contractual obligations you may need to consider when it comes to the data you share. For example, if you want to ask the AI a question about how to create an Amazon EC2 instance in Terraform or you’re looking for recipes for scones, it's not a problem because the information is publicly available. 

However, if you start sharing internal IP, such as a list of customers or log files from a customer server, you may need to determine the right contractual and legal structures to protect your data. Depending on where you're based, this could involve complying with regulations such as the California Consumer Privacy Act (CCPA) or the General Data Protection Regulation (GDPR). 
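
To make that distinction concrete, here’s a minimal sketch of what a pre-submission check might look like, assuming a hypothetical internal classification scheme. The labels, the regex patterns and the is_safe_to_share helper are illustrative inventions, not part of CCPA, GDPR or any specific framework:

    import re

    # Hypothetical classification labels an organization might assign to data.
    PUBLIC = "public"        # publicly available info, e.g. Terraform how-tos
    INTERNAL = "internal"    # internal IP such as customer lists or log files

    # Illustrative patterns suggesting a prompt contains internal data.
    SENSITIVE_PATTERNS = [
        r"\bcustomer\s+list\b",
        r"\b\d{1,3}(\.\d{1,3}){3}\b",    # IP addresses from server logs
        r"[\w.+-]+@[\w-]+\.[\w.]+",      # email addresses
    ]

    def classify_prompt(prompt: str) -> str:
        """Assign a coarse classification to text headed for an external LLM."""
        for pattern in SENSITIVE_PATTERNS:
            if re.search(pattern, prompt, re.IGNORECASE):
                return INTERNAL
        return PUBLIC

    def is_safe_to_share(prompt: str, provider_is_vetted: bool) -> bool:
        """Allow public prompts anywhere; internal data only to vetted providers."""
        if classify_prompt(prompt) == PUBLIC:
            return True
        return provider_is_vetted  # internal data needs a contracted provider

    print(is_safe_to_share("How do I create an EC2 instance in Terraform?", False))  # True
    print(is_safe_to_share("Summarize this customer list: jo@example.com", False))   # False

A production gate would lean on your existing data classification program rather than regex heuristics, but the principle holds: the policy decision happens before any data leaves your environment.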

Conversations around the use of LLM platforms can be similar to discussions about sharing data with any other third party. CISOs, CIOs and other technology leaders are responsible for creating guardrails and providing the necessary security while enabling the business to experiment within those guardrails. Most businesses aim to run as fast as possible while ensuring that data is secure. Rather than banning a particular provider or model, for example, you can develop a policy that outlines what’s required of any LLM the business uses.

At Mission Cloud, we require the same data controls for LLMs as for our other providers. We have a contractual relationship with each provider, and each goes through the security assessments in our procurement process. We recommend clients take a similar approach to data protection.

Human Review

AI can be incredibly helpful in streamlining processes, but it can also make mistakes or “hallucinate.” For example, chatbots have confidently cited court cases that don’t exist and referenced academic papers that were completely fabricated. Including a human review component in your LLM policy can help ensure that incorrect, biased or damaging output isn’t accepted.

Using AI can lead to mistakes, and most organizations take the position that if you use an AI to help you solve a problem, you’re still 100% responsible for the output. If the output is offensive or violates policy, then you, as the human who used the AI, will be held accountable. Similarly, if you ask the AI to do something you aren’t qualified to do, such as writing Python source code when you’re not a Python developer, the best approach is to find someone who’s qualified to review and verify the output. You can identify internal subject matter experts or establish a process to keep human experts in the loop and ensure accuracy.

Mission Cloud recommends that you, as a human, be responsible for any content or output you use from an LLM. This is often known as "human review" or "human in the loop," and it provides another safety net to prevent mistakes and mitigate risk. You can’t offload a problem to an AI and assume it will always produce the correct results.
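
As an illustration of what “human in the loop” can look like when LLM output feeds a workflow, here’s a minimal sketch of a review gate. The ReviewedOutput structure and the skill labels are hypothetical, not a prescribed implementation:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ReviewedOutput:
        """LLM output that can't be used until a qualified human approves it."""
        prompt: str
        response: str
        required_expertise: str               # e.g. "python", "legal"
        approved_by: str | None = None
        approved_at: datetime | None = None

        def approve(self, reviewer: str, reviewer_skills: set[str]) -> None:
            """Record sign-off, but only from someone with the right expertise."""
            if self.required_expertise not in reviewer_skills:
                raise PermissionError(
                    f"{reviewer} isn't qualified to review {self.required_expertise} output"
                )
            self.approved_by = reviewer
            self.approved_at = datetime.now(timezone.utc)

        @property
        def usable(self) -> bool:
            return self.approved_by is not None

    # Generated Python code must be signed off by a Python developer.
    draft = ReviewedOutput(
        prompt="Write a function that parses our log format",
        response="def parse(line): ...",
        required_expertise="python",
    )
    assert not draft.usable
    draft.approve("dana", {"python", "terraform"})
    assert draft.usable

The mechanics matter less than the invariant: nothing the model generates reaches production, a customer or a decision until a named, qualified person has accepted responsibility for it.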

Auditable Processes

Auditable processes provide a way to track the decisions made by the LLM. This is especially important for automation and software that uses LLMs to make decisions, as it documents all of the information about the model, as well as the inputs provided by the process or employee.

Create a procedure that outlines the steps involved, including the version of the model used, the prompt provided to the LLM and the response the LLM returned. It should also document how the response was used to achieve an objective. Consider the different types of responses the model may provide; output won’t always be what’s expected, and it’s difficult to control from a security perspective.
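
As a sketch of what such a record could look like, the structure below captures those elements as one JSON line per interaction. The field names and the file destination are illustrative choices, not a standard:

    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class LLMAuditRecord:
        """One auditable LLM interaction: who asked what, of which model, and why."""
        timestamp: str     # when the call was made (UTC, ISO 8601)
        actor: str         # employee or process that issued the prompt
        model: str         # provider and exact model version
        prompt: str        # full input sent to the model
        response: str      # full output the model returned
        purpose: str       # how the response was used to achieve an objective

    def log_interaction(record: LLMAuditRecord, path: str = "llm_audit.jsonl") -> None:
        """Append the record as a JSON line so auditors can replay the decision."""
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    log_interaction(LLMAuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        actor="billing-automation",
        model="example-provider/model-2024-01",
        prompt="Classify this support ticket: ...",
        response="Category: billing",
        purpose="Routed the ticket to the billing queue",
    ))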

Imagine a scenario where everyone in your organization has access to the same model, which was also trained on sensitive internal information. For instance, let's say you have data stored in Google Drive that is only shared with specific groups within your organization. If this data is used to train a custom model, there is a risk that the model's responses could inadvertently expose or disclose sensitive information to individuals who should not have access to it. Without guardrails in place, any employee could have access to that data, creating new opportunities for misuse, even if it’s unintentional.
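
One common guardrail for this scenario (sketched below with hypothetical names throughout) is to keep restricted documents out of shared training data altogether and instead enforce the source system’s permissions at retrieval time, so the model only ever sees documents the requesting user could already open:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Document:
        name: str
        text: str
        allowed_groups: frozenset[str]    # mirrors the sharing settings in Drive

    def retrieve_context(docs: list[Document], user_groups: set[str]) -> list[Document]:
        """Return only documents the requesting user is already allowed to read.

        A real system would also rank by relevance to the user's question; the
        permission filter shown here is the point of the sketch.
        """
        return [d for d in docs if d.allowed_groups & user_groups]

    docs = [
        Document("q3-forecast", "...", frozenset({"finance"})),
        Document("eng-handbook", "...", frozenset({"engineering", "finance"})),
    ]

    # An engineer's question never pulls finance-only data into the prompt.
    visible = retrieve_context(docs, {"engineering"})
    assert [d.name for d in visible] == ["eng-handbook"]

Paired with the audit trail above, this keeps the question of who saw what, and why, answerable even as more of the organization adopts the model.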

Share Your Policy 

Once you've written your policy, you’ll still need to figure out how to communicate it to your team. Every business has different communication styles and preferences, so there’s no one-size-fits-all strategy. The key is to connect with all stakeholders and share the purpose and value of the policy. 

It’s usually ineffective to just send out a policy and expect everyone to read it – most people won’t. An alternative is to engage people in the conversation and make it clear you’re there to help enable what they’re trying to accomplish.

At Mission Cloud, we found success in creating a team of “security champions” who volunteer to be a part of the mission and get involved in security-related conversations. This team gets to see behind the scenes, comment on policy and test out new tools. After incorporating the feedback of our champions, we shared our policy with more of our staff, including engineering and non-technical team members. Finally, we hosted an informational lunch and learn for everyone to explain the policy and encourage questions and feedback.

Stay Updated

Learning how to build your LLM policy is one of the first steps toward a successful business relationship with generative AI. But it can also be beneficial to leave room to evolve your policy along with the technology. Most organizations shouldn’t expect to write the policy once and be done. You can keep your policy updated and relevant by developing a plan to revisit it every six months, or on whatever recurring schedule works best for your organization. Now is the time to test use cases and poke holes in your policy to find where you need to modify and grow.

Many businesses are testing models, looking for use cases and building their generative AI and LLM knowledge before making substantial investments or commitments. For those organizations, it’s best to work with an experienced AI partner, such as Mission Cloud, to understand available options, limitations and other considerations. 

As an AWS Premier Tier Services Partner with the AWS Machine Learning Competency, we have the technical expertise and LLM experience you can rely on to help you on your AI journey. Connect with a cloud advisor and discuss your generative AI goals.


FAQ

  1. How can organizations measure the effectiveness of their LLM policies in mitigating risks associated with generative AI?

Organizations can measure the effectiveness of their LLM policies by regularly auditing AI-driven outputs against expected compliance and security standards. Metrics such as incident frequency, response times to policy breaches, and user feedback on AI interactions provide concrete data on policy performance. Additionally, continuous training and updates based on evolving AI capabilities and regulatory demands can further refine these measures.

  2. What are the potential legal consequences of failing to adhere to a robust LLM policy, especially in industries with strict data protection regulations?

Failing to adhere to a robust LLM policy can lead to significant legal consequences, especially in industries like healthcare and finance, where data protection is critical. Non-compliance can result in regulatory fines, legal disputes and reputational damage. Ensuring that LLM policies align with regulatory requirements and ethical standards is crucial to mitigate these risks.

Author Spotlight:

Jarret Raim
