Should My Nonprofit Use AI? Ultimate AI FAQ for Nonprofits


Artificial Intelligence (AI) has taken the world by storm, and as nonprofit professionals, we cannot sit by and watch the world evolve without us. The current iteration of the AI boom, which went mainstream in 2020 and exploded toward the end of 2022, will revolutionize the internet, digital communication, and even offline activities in ways we may not yet realize.

However, every nonprofit organization should remember that, as with any new technology, there are pros and cons, opportunities for great use and misuse, and complex questions about how this technology will shape employees’ day-to-day work, or even the work itself.

In this article, we will explain why everyone is talking about AI and what has changed over the past few years, and we will evaluate opportunities for nonprofits to safely adopt AI to improve their mission-aligned outcomes.

What is AI, and why should I care?

AI, or artificial intelligence, is not some device or specific technology that has suddenly appeared. It exists in the algorithms that govern your Facebook timeline, explains why YouTube always seems to recommend the perfect video, and, more recently, powers the Large Language Models (LLMs) that have made headlines, like OpenAI’s ChatGPT, Google’s Bard, and Bing’s AI chatbot.

So while “AI” has been with us for some time, even in our everyday lives, these latest technologies have spurred something of an AI renaissance, a global tech upheaval, endless Twitter musings, and hyperbolic press stories, depending, of course, on who you ask.

What do these new AI tools do?

For the sake of simplicity, we will focus on OpenAI’s ChatGPT, as it is the most accessible (and, as of this writing, the most widely used) of these experimental AI platforms.

In essence, ChatGPT is a chatbot that can answer questions, conjure recipes, write songs, troubleshoot code, and execute pretty much any other text-based task. It can pass the bar exam, diagnose medical conditions with alarming accuracy, or search the internet for you with its newest plug-ins feature. For the nonprofit communications professional, ChatGPT can summarize meeting notes, create essay outlines, write emails, draft grant proposals, write social media posts, and provide a firm foundation for nearly any text-based assignment you can dream up.

AI is becoming the ultimate nonprofit personal assistant.

Not only can AI write text for you, but it can also conduct research. Take this example from The New Humanitarian, where reporters demonstrate how ChatGPT can research and summarize how grassroots nonprofits in Ukraine can access grants and funding from the Canadian government. The possibilities for nonprofits are nearly endless; the key is understanding how to use AI ethically and effectively to help your organization achieve its mission.

What Are The Limitations of AI Tools Like ChatGPT?

ChatGPT is a generative large language model. What does that mean? ChatGPT doesn’t actually “know” the things it appears to be telling you; it predicts the most likely next words based on patterns in its training data, and it does so with a high enough degree of accuracy, relevance, and continuity to execute assignments that, until a year ago, only humans could do.
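
To make that concrete, here is a minimal sketch of next-word prediction, assuming the Hugging Face transformers library and the small, openly available GPT-2 model as a stand-in (ChatGPT’s own model is not publicly inspectable):

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "Our nonprofit's mission is to"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits

    # The model assigns a probability to every possible next token;
    # generation is just repeatedly sampling from this distribution.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top_probs, top_ids = next_token_probs.topk(5)

    for prob, token_id in zip(top_probs, top_ids):
        print(f"{tokenizer.decode([int(token_id)])!r}: {prob.item():.1%}")

Every word the model “writes” is chosen this way, one token at a time, which is why fluent output can still be factually wrong.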

There are several key elements of these models that nonprofits need to understand:

  1. LLMs are only as good as the content they are trained on. 
  2. LLM answers are dependent on the quality of the questions. Writing quality “questions” is known as “prompt engineering” (see the sketch after this list).
  3. Because LLMs are trained on content (presumably) written by humans, they are subject to inaccuracies, prejudice, and bias.
  4. LLMs are subject to hallucinations; they may generate content that “sounds” right but is not factually accurate. Assume that 1 out of every 5 “facts” might be wrong and edit accordingly.
  5. LLMs can be coaxed into producing harmful content with enough prodding by humans.
  6. The investment in “trust and safety” (read: content moderation) among commercially available AI tools varies widely. Some companies have invested heavily in safety features and moderation; others, less so.
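
To illustrate point 2, here is a hypothetical sketch using the pre-v1.0 openai Python library (the model name and both prompts are our own examples, not recommendations) showing how much prompt quality shapes the answer:

    import openai

    openai.api_key = "YOUR_API_KEY"  # replace with your organization's key

    vague_prompt = "Write a donation email."
    specific_prompt = (
        "Write a 150-word year-end donation email for a food bank. "
        "Audience: past donors. Tone: warm and urgent. "
        "Include one statistic placeholder and a clear call to action."
    )

    for prompt in (vague_prompt, specific_prompt):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        print(response["choices"][0]["message"]["content"], "\n---")

The vague prompt forces the model to guess at audience, tone, and length; the specific prompt hands it those decisions, which is most of what “prompt engineering” amounts to.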

What Are Some “Do’s” When It Comes To Using AI At A Nonprofit?

  • DO talk to your employees about AI, and establish a plan or acceptable-use policy for AI tools within your organization. Odds are that at least one of your organization’s employees is already using these tools or has begun experimenting with them. Ignoring this can lead to the Grey Jacket Problem.
  • DO brainstorm how AI can increase your organization’s productivity or creative output, whether that means writing web content, social media posts, or grant proposals, or generating new ideas.
  • DO educate employees and yourself about the bias and inaccuracies AI can introduce, and require that content be reviewed for appropriateness.
  • DO encourage experimentation, innovation, and research into creative ways to use not only chat-based AI tools but also AI tools that can generate photo-realistic images, video, music, and more.
  • DO stay current on developments, news, policy, and other components related to AI development.

What Are Some “Don’ts” When It Comes To Using AI At A Nonprofit?

  • DO NOT ignore AI. Despite Silicon Valley’s penchant for hyperbole, AI has the power to reshape the internet and computer technology profoundly. Nonprofits, NGOs, and other organizations cannot afford to fall behind their for-profit counterparts!
  • DO NOT copy and paste text or any other response from an AI tool without thoroughly reviewing that content for appropriateness and factual accuracy—as well as ensuring no bias or hateful language has been generated.
  • DO NOT misuse AI, use AI for illegal or nefarious purposes, or instruct AI to generate harmful content or intentionally deceptive images. Never abuse AI in any way that can lead to digital or real-world harm.
  • DO NOT substitute AI for human emotion. Do not write condolence letters, emergency alerts, or other sensitive communications using AI. AI is intelligent, but it is not human. Nonprofits have a responsibility to uphold public trust. Do not let a computer become a replacement for tasks that require sensitivity or compassion.

What Are Some “Things To Consider” When It Comes To Using AI At A Nonprofit?

  • CONSIDER the ethical implications of aiding typically human tasks with Artificial Intelligence: will audiences respond differently to content or communications aided by AI? Is there a combination where AI aids human tasks instead of replacing them outright?
  • CONSIDER who or what is being replaced by AI-generated content, whether that content is text or visual-based. Asking AI to write text in the style of specific writers might be exploitative. Creating AI-generated photo-realistic creative might be a cost-efficient way for your organization to generate content, or it could be perceived as displacing photographers, visual artists, or models. Assess these tradeoffs with your organization’s mission in mind.
  • CONSIDER the implications of AI on communities, including those within and outside your organization’s sphere. AI technology and development will, for the foreseeable future, continue to grow into our daily ecosystems, and the benefits and detriments of this growth may not be realized yet. Consider, as your organization may have done with social media, the complicated social role that new technologies play in our daily lives.

Is AI content from GPT writers detectable?

Yes, to a large degree. Detection tools leverage two measures, perplexity and burstiness, to calculate the randomness of words and sentences. Perplexity quantifies how well a model predicts a given text by measuring the average uncertainty of its predictions. In other words, perplexity gauges how “surprised” or “confused” a model is by the words it encounters, based on the probability distribution it assigns to the possible next tokens. A second measure, burstiness, applies the same idea to whole sentences, capturing how much their length and structure vary.
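
For the technically curious, here is a minimal sketch of how perplexity can be computed, assuming the Hugging Face transformers library and the small open GPT-2 model (detectors like GPTZero use their own models and thresholds, which are not public):

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Passing the same ids as labels makes the model return its
        # average cross-entropy loss over next-token predictions;
        # exponentiating that loss yields perplexity.
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    print(perplexity("The quick brown fox jumps over the lazy dog."))

Lower scores mean the model found the text easy to predict, which is one signal (though not proof) that a machine wrote it.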

Tools like GPTzero.me and Originality.ai offer free checks for text up to a certain length.

Does Google Search penalize AI Content?

As of March 2023, Google has officially stated that it allows appropriately used AI-generated content. In its post about AI and content, Google explicitly states:

Appropriate use of AI or automation is not against our guidelines. This means that it is not used to generate content primarily to manipulate search rankings, which is against our spam policies.

Google Policy 2023

However, it is important to note that Google’s SpamBrain system has many years of experience dealing with automated content and is still in use. That is why we believe it is essential to create genuinely unique and valuable content that helps a user solve a problem or understand a topic.

Why does “perplexity” or “burstiness” in AI text matter?

Yeah, I guess we walked into that one with the last FAQ. It matters because if people use these AI tools out of the box, the text they generate can be very detectable. This may become a way that search engines like Google begin to penalize content (though they don’t officially penalize AI content today). More importantly, these measures signal bad writing, and people will build up a collective pattern recognition for this kind of content.

In plain English: Imagine you’re playing a guessing game where you have to predict the next word in a sentence. If you’re really good at the game, you can usually guess the right word or at least come very close. If you’re not so good, you might be “perplexed” or confused about what word comes next.

Perplexity is a way to measure how good a computer program, like a language model, is at this guessing game. The lower the perplexity score, the better the computer is at guessing the next word in a sentence. So, a computer with a low perplexity score is like a super-smart friend who always seems to know the right word to complete a sentence.

Burstiness is like perplexity but for sentences: it measures the variance of sentences in length, style, and structure. A whole page of 10-word sentences written in a monotone style would have low burstiness and would likely be flagged as AI-generated. Perplexity works the same way as a detection signal: a low perplexity score means the text was easy for the model to predict and may have been AI-generated. Here is a great deep-dive on burstiness and perplexity, and you can scroll to the bottom to see these measures for this article 🙂
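
As a rough illustration (this is a simplified proxy invented for this article, not GPTZero’s proprietary formula), burstiness can be approximated by the variation in sentence lengths:

    import re
    import statistics

    def burstiness(text: str) -> float:
        # Crude sentence split on ., !, and ?
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        if len(lengths) < 2:
            return 0.0
        # Standard deviation of sentence length: low values suggest a
        # uniform, machine-like rhythm; higher values read as more human.
        return statistics.stdev(lengths)

    human_ish = "Short one. Then a much longer, winding sentence follows it, full of clauses. Tiny."
    robot_ish = "We serve our community daily. We value every single donor. We welcome all new volunteers."
    print(burstiness(human_ish), burstiness(robot_ish))

Under this toy measure, the varied passage scores high and the uniform one scores zero, which mirrors the intuition detectors rely on.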

Perplexity and burstiness scores for this article, from GPTzero.me