
How to Create a Generative AI Use Policy

Generative AI is a form of artificial intelligence that creates content — such as a piece of writing, audio, or an image — in response to instructions that you provide. If you've ever used ChatGPT, Google Gemini, or Microsoft Copilot, you've used a tool based on this sort of technology. It's useful for all sorts of things — from creating pieces of content to generating summaries of information — and it's only going to become more prevalent over time.


But as exciting as these new technologies are, they're also full of risks. For instance, the underlying technologies are not designed to handle sensitive data, nor do they check for inaccuracies or biases. Generative AI apps also raise lots of questions around intellectual property.

To protect your organization against these sorts of risks — and to ensure that your staff knows how to use generative AI apps in an ethical, lawful, and effective manner — consider putting together an AI use policy for your organization. Here's how to get started.

What Should an AI Use Policy Accomplish?

As with any other acceptable use policy, the overall goal of an AI use policy should be to offer the reader clear and digestible guidelines, the rationale that informs those guidelines, and a way for your staff to affirm their understanding of and compliance with the policy.

It should also outline the scope of the policy: Does it apply to staff, or does it also extend to volunteers, or even to clients who have approved access to organization devices? You'll want to explicitly spell all this out in clear, easy-to-understand language.

Start with Your Guiding Principles

Your policy should center on the key principles that you choose to adopt regarding your use of generative AI. Consider what you want those principles to be: It’s helpful to think in terms of both what you want to prevent and what you want to achieve. Do some research into the risks and opportunities associated with AI and establish your guiding principles from there.

For example, if you want to discover prompts that you can use with natural language processing tools such as ChatGPT, one of your guiding principles might be "share useful prompts with colleagues." If you want to prevent data leaks when using similar tools, another principle could be "do not share personally identifiable information."

Align It with Other Organizational Policies

As you draft your policy, consider how it aligns and intersects with your other organizational technology policies, such as your computer use and data security policies. Rather than starting from scratch, use the new AI policy to build upon the rules and expectations those existing policies already establish.

Provide Clear, Actionable Guidelines for Use

Above all, an AI use policy needs to be actionable. Your policy needs to provide clear guidance on how generative AI tools should and should not be used at your organization. This can be as simple as a "dos and don'ts" list.

As you write your policy, consider who will be reading it and phrase it in a way that is simple to follow. Be clear and concise. Use direct language. And try to make a clear connection between your guiding principles and the actions you want your staff to take.

You may also want to include guidelines on how to mitigate some of the risks of using generative AI apps. Consider providing information on how to check for plagiarism or copyright infringement, as well as training on data security.

Include a Compliance Statement

To ensure that your team has read and understood the guidelines, include somewhere for them to sign. This encourages them to take personal responsibility for their AI use on a day-to-day basis and gives you something to fall back on if you encounter issues going forward.

Join Quad and Get an AI Use Policy Template

Looking for a little more guidance? On Quad, you can find a full editable template for a generative AI use policy, as well as a wealth of other information and resources to help you make the most of artificial intelligence at your nonprofit.

Quad is a membership service by TechSoup. As members, nonprofit organizations have access to a dedicated community of nonprofits solving similar challenges, unique content and events provided by experts, a dedicated member support team, and training courses made for nonprofits, including courses dedicated to helping you understand AI tools. Eligible nonprofits can also benefit from reduced admin fees for products and services available in the TechSoup catalog. Get access to Quad for 10 people in your organization for $200 per year.

