Search blog

Is your nonprofit thinking about using ChatGPT? Your first step is to do no harm

A person presenting the virtual 3D rendered projection of cyberspace and AI.

In this article, we unpack the lessons learned from a nonprofit’s chatbot project to dispense mental health advice that went wrong, and how nonprofits can reduce the risks when adopting ChatGPT and other generative AI (artificial intelligence) technologies.

Over the past five years of researching and writing about AI adoption for the nonprofit sector, we have argued for nonprofits to use AI ethically and responsibly—which means taking a human-centered, strategic, and reflective approach. It is the responsibility of senior management to plan the use of ChatGPT or AI carefully in order to ensure that no harm is done to people inside and outside of the organization. 

While researching and writing our book, The Smart Nonprofit: Staying Human-Centered in an Age of Automation, we found many examples of chatbots misbehaving in the world. While some incidents were entertaining, others were potentially damaging and taken offline. These included Microsoft’s Tay, Facebook’s Negotiation Bots, and Scatter Lab’s Luda.

Chatbots developed by nonprofits for programs and services have faced challenges at times as well. For example, DoNotPay, a legal services chatbot designed to challenge parking tickets and navigate small claims courts, faced criticism for potentially oversimplifying legal issues and providing inaccurate advice.  

The recent release of ChatGPT, an incredibly potent example of the power of AI tools to mimic conversations and empathy, mental health providers have also been tempted to substitute bots for people. This is happening despite the fact that clinicians are still trying to figure out effective and safe use cases.

The demand for mental health hotlines skyrocketed during the pandemic and has continued ever since. At the same time, finding human counselors has been a struggle. Replacing staff or volunteer counselors with bots to save money has been viewed as a quick fix for some organizations.    

The most recent example comes from the National Eating Disorders Association (NEDA), the largest nonprofit organization dedicated to supporting individuals and families affected by eating disorders. For over 20 years, its hotline has provided advice and help to hundreds of thousands of people. Regardless of the technology (chat, phone, or text), a caller was always connected to a human being for advice. It is not unusual for hotline staff to talk to people who are suicidal, dealing with abuse, or experiencing some kind of medical emergency. 

The COVID-19 pandemic created a surge in hotline calls of over 100%. NEDA employees and approximately 200 volunteers could barely keep up with demand. In response, NEDA’s leadership unveiled Tessa, an AI-driven chatbot, and promptly handed out pink slips to hotline staff.

Replacing people this way is a dreadful idea on multiple levels.

First, a chatbot alone cannot understand the nuances of the pain hotline callers are in, nor can it provide the kind of empathy these callers need and deserve. This is not a hypothetical concern; one user reported trying the chatbot and stated, “[The chatbot] gave links and resources that were completely unrelated” to her questions. For counseling situations, body language and tone are important in traditional therapy and support; chatbots, which interact with text virtually, cannot (yet) observe or understand nonverbal communication. 

Second, chatbots and other AI technologies need to be carefully designed and tested for human interaction and bias to mitigate risks. Even if a chatbot was a good idea for providing sensitive counseling support, it does not appear that this chatbot was well designed or adequately tested. 

Third, AI technology should never be used primarily as a way for management to reduce headcount. As we wrote in The Smart Nonprofit, the goal of using AI-powered technology is to augment human efforts, not replace them. Co-botting is the sweet spot created when people do deeply human activities and bots perform rote tasks learned over millions of attempts with no harm.

Take the do no harm pledge

Artificial intelligence used for mission-driven work must be held to the highest ethical and responsible standards. As the example of NEDA’s chatbot Tessa clearly shows, AI cannot and should not be considered an inexpensive replacement for human staff or volunteers. The benefits the technology offers can only be achieved when an organization embraces a human-centered, “do no harm” approach. 

Here are the first steps for using AI in ways that do not cause harm:

  • Stay human-centered: Responsible adoption requires informed, careful, strategic thought to ensure the technology is used to enhance our humanity and enable people to do the kinds of relational, empathetic, and problem-solving activities we do best. Responsible use also requires people to be in charge of the bots at all times.
  • Lead with generative AI literacy: Increase your organization’s, including senior leadership’s, AI literacy. People in your organization do not need to know how to code, but they do need to understand what a chatbot is and isn’t capable of doing.  In the NEDA example, the team should have known about the potential danger of chatbots going off the rails. A simple query to ChatGPT about its potential dangers generated a half-dozen examples. Two of the examples it cited were made up, further reinforcing the idea that ChatGPT can give inaccurate information without proper training and supervision. 
  • Think through co-botting use cases: AI should be used sparingly and carefully. This begins by identifying specific pain points that are bottlenecks for the organization. The next step is to outline exactly what tasks and decision-making people will retain as well as what tasks AI will do or what will be automated and requires human oversight to prevent harm. It is also important to solve the right problem. For example, The Trevor Project uses a Chatbot as a training simulator to train more volunteers to help scale demand for services. The bots never interact with people seeking out advice from the helpline, and the organization did not reduce headcount.
  • Test, test, test: Generative AI is not a pot roast—you can’t set it and forget it. Chatbots need to be vigorously tested to mitigate risks. Humans need to check the answers the bots are providing for accuracy, bias, and hallucinations. They also need to put safeguards in place, especially if the chatbot is being used to provide advice for sensitive situations. The nonprofit and its technical partners also need to invest in ethical AI training, so they’re prepared to evaluate the chatbot’s interactions. 

There are a lot of decisions to be made in order to use ChatGPT responsibly. There are also important mistakes to avoid. We will save you and your organization an enormous amount of time and pain by sharing this: It is important NOT to wait for something bad to happen before looking for warning signs of potential harm. Do your homework, think through the process carefully, take your time, and above all, do your best to reduce risks. If your nonprofit understands where threats may be lurking, ill-understood, or simply unidentified, you have a better chance of catching them before they catch up with your nonprofit. 

Tags:

Leave a comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.

  • SEMA says:

    July 4, 2023 2:45 am

    Yes, as a nonprofit, we are considering using ChatGPT. Our first priority is to ensure that it does not cause any harm.
    https://sema-sy.org/

  • Coffee Funded says:

    June 26, 2023 4:30 pm

    Thanks for this information. Making decisions about AI is hard right now so the more information the better. Thank you for this.
    Coffee Funded
    www.coffeefunded.com

  • Peter Oyaro says:

    June 17, 2023 11:03 am

    I as chairman of Global disaster management and environmental mitigation foundation, is so much impressed with your work ,and I'm looking forward to working with you.