5 Ways For Civil Society To Engage With AI

We all woke up one morning to the possibility of chatting with our computers to access information, and generative AI suddenly seemed to be everywhere, ushering in a transformative experience. However, we quickly learned its limitations and memorized the necessary caveats.

Stories surfaced reminding us of its pitfalls — from a lawyer citing fabricated court cases (https://bit.ly/3uQGf6o) to harmful advice delivered by an AI bot replacing a nonprofit’s helpline (https://bit.ly/3SNuO7D).

The steady drumbeat of new features continues. AI is now a core part of Microsoft's suite. At Google, Bard became Gemini. Everywhere, AI is turning up in tools we've used for years, offering to summarize threads for us. The question now is — what do we do with all this?

The answer: Use it. 

There are many access points to this technology. Free-to-you versions include Copilot (formerly Bing Chat and Bing Chat Enterprise) from Microsoft, ChatGPT 3.5 from OpenAI, and (as mentioned above) Gemini from Google. Microsoft offers Copilot for Microsoft 365 as a paid add-on, OpenAI offers ChatGPT 4 as a paid subscription, and Google is making Gemini Advanced available for a monthly fee.

Many of these tools might already be available to you as AI add-ons in Zoom, Slack, and many other solutions. The truth is that you will encounter AI in a wide variety of places, and you need to decide how it works best for you — but don't just avoid it.

Engaging with AI thoughtfully is part of what makes it better for the sector. While there are major questions around ethics, disclosure, and responsible use, AI presents nonprofits with an opportunity to leverage powerful digital tools to enhance their capacity to do good in the world.

Here are a few uses to consider, along with some thoughts on how to improve AI tools — and the responses they generate.

  1. Personal And Organizational Productivity

AI excels at structuring documents where the content is noncontroversial and exists all over the Internet. Do you need a product manager job description? The initial draft will usually be pretty solid and ready for you to customize. Do you want to understand the difference between a task force, a working group, and a project team? AI can assist.

Push further by requesting editorial feedback. Paste in a concept note and ask it to identify potential logic flaws, and even suggest fixes. The key to maximizing AI's personal productivity benefit is stepwise prompting. First, communicate the intended structure, tone, and audience of the item you're producing. Then, make section-specific requests and correct as you go, as you would when guiding a talented intern. AI learns from your feedback.
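
For teams comfortable with a little scripting, the same stepwise pattern can be driven through an API. The sketch below assumes the official OpenAI Python client and an API key in the environment; the model name and prompts are illustrative examples, not recommendations.

    # A minimal sketch of stepwise prompting with the OpenAI Python client.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    messages = [
        {"role": "system",
         "content": "You are an editor helping a nonprofit draft a concept note."},
        # Step 1: establish structure, tone, and audience before any drafting.
        {"role": "user",
         "content": "Audience: local funders. Tone: plain and direct. "
                    "Sections: need, approach, budget summary. Outline only."},
    ]
    outline = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    messages.append({"role": "assistant",
                     "content": outline.choices[0].message.content})

    # Step 2: a section-specific request, correcting as you go.
    messages.append({"role": "user",
                     "content": "Draft only the 'need' section, under 150 words."})
    draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(draft.choices[0].message.content)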

A prompt library is key to ensuring consistent AI responses. Well-structured prompts will help you get the most out of any generative AI tool you are using. It's best to document common prompts and keep them organized in a central location. Also, create a process for reviewing prompts and providing feedback on what's working — and what isn't.
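
One lightweight way to start such a library is a shared file of named, documented templates that colleagues can reuse and refine. This sketch uses plain Python string templates; the prompt text and names are hypothetical examples.

    # A minimal prompt library: named templates kept in one central place.
    PROMPTS = {
        "job_description": (
            "Write a job description for a {title} at a nonprofit. "
            "Audience: candidates new to the sector. Tone: welcoming, concrete. "
            "Include responsibilities, qualifications, and a salary range."
        ),
        "thread_summary": (
            "Summarize the following email thread in five bullet points, "
            "flagging any decisions and open questions:\n\n{thread_text}"
        ),
    }

    def build_prompt(name: str, **fields: str) -> str:
        """Fill a named template; raises KeyError if the template is unknown."""
        return PROMPTS[name].format(**fields)

    print(build_prompt("job_description", title="Product Manager"))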

  2. Wrangling Data With AI

While content creation often dominates discussions about generative AI, diverse AI tools can aid in data organization. Many have been subtly lurking for years as features in productivity suites: tools that search for colleague-related emails, files, and meetings, or those that suggest design elements in presentation software. But these capabilities easily extend to rows and columns of data.

Describe your data to a generative AI tool and ask for suggestions on organizing it. Ask for visualization recommendations to enhance data understanding. Seek assistance with spreadsheet manipulation tasks, including possible formulas. You can even ask for responses in tabular form, which you can then copy and paste directly into a spreadsheet.

Depending on the tools used, AI can support identifying anomalies in large datasets, such as data segments that deviate from broader trends. 
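
As a concrete illustration, here is the kind of small script an AI assistant might draft when asked to flag segments that deviate from the broader trend. It assumes pandas and a hypothetical monthly donations export; a knowledgeable human should still review whatever it flags.

    # Flag monthly donation totals that deviate sharply from the overall trend.
    import pandas as pd

    df = pd.read_csv("donations_by_month.csv")  # hypothetical export
    mean, std = df["total"].mean(), df["total"].std()
    df["zscore"] = (df["total"] - mean) / std

    # Rows more than two standard deviations from the mean deserve a human look.
    outliers = df[df["zscore"].abs() > 2]
    print(outliers[["month", "total", "zscore"]])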

Similar to visualizations, AI can help uncover key connections in your organizational data and isolate them for further analysis by knowledgeable humans. These might be tasks you already do, just significantly faster.

  3. Understand The Tools You Choose For Your Organization

Whether it’s a preexisting feature, a newly licensed add-on, or a standalone product harnessing generative AI, invest time in understanding its terms of service, data privacy expectations, and how your usage contributes to overall tool improvement.

This isn’t novel. It’s crucial for any product involving sensitive data, be it your organization’s intellectual property or donor information. With AI, however, the consequences of errors are magnified.

What to watch for:

  • Restricted data segregation: Can you compartmentalize sensitive data?
  • Data protection transparency: Clear explanations of how prompt content and other input are protected.
  • API usage transparency: For novel tools, understand any underlying APIs and potential data connections. Trace incoming data sources and destinations for your prompts.

  4. Be Selective In The Use Of Large Language Models

We’ve all heard it: Generative AI predicts the most likely response to your prompt based on what it has already consumed. To borrow from Emily M. Bender and Alexander Koller (https://bit.ly/49CEeKg), generative AI is a hyper-intelligent octopus: it can mimic what it has learned, but it cannot deal with genuine novelty. These tools are trained on vast amounts of data so they can produce the most likely best response.

Therefore, the underlying large language model (LLM) matters, as it shapes the “most likely” predictions. So, investigate the AI tools you use:

  • Which LLMs do they use?
  • What are their limitations?
  • Where might they fail in your specific context?
  • Can you restrict them using custom tools or data sources?
  • Can you direct them towards your own data for “most likely” predictions? (See the sketch after this list.)
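
Directing a model toward your own data can be as simple as retrieving a relevant passage and placing it in the prompt, a pattern often called retrieval-augmented generation. The sketch below is deliberately naive, matching on keyword overlap; it assumes the OpenAI Python client, the file names are hypothetical, and real deployments would use proper search or embeddings.

    # Ground the model's answer in your own documents via a simple retrieval step.
    from pathlib import Path
    from openai import OpenAI

    def most_relevant(question: str, docs: list[str]) -> str:
        """Naive keyword overlap; real systems use search engines or embeddings."""
        words = set(question.lower().split())
        return max(docs, key=lambda d: len(words & set(d.lower().split())))

    docs = [Path(p).read_text() for p in ["programs.txt", "faq.txt"]]  # your data
    question = "What age groups does our after-school program serve?"
    context = most_relevant(question, docs)

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context. Say so if unsure."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    print(reply.choices[0].message.content)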

The Tech Policy Lab at the University of Washington has developed a practice and guidance around writing data statements (https://bit.ly/3wuMQnJ). This work provides a way to examine large language models, as well as describe them so that others can benefit from that description.

  5. Feed LLMs With More Inclusive Data By Publishing Your Organization’s Important Work

This last one is a bit different from the first four. LLMs draw on published content — words, images, code snippets. Much of this originates from people privileged with long-term Internet access. Relying solely on LLMs’ “most likely” responses risks amplifying existing content that may be exclusionary, harmful, or biased. Directing your chosen tool to adjust word choice, tone, and the facts it provides improves the way the tool itself works.

But also, publishing civil society work, with the meticulous detail and context that is often invisible to others, fosters diversity in the data future LLMs learn from. This includes publishing in various languages, describing and sharing lived experiences within your community, and providing data with precise, descriptive metadata.
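
For the metadata piece, one widely used convention is the schema.org Dataset vocabulary, serialized as JSON-LD and embedded alongside the data you publish. The sketch below builds such a description in Python; every field value is a hypothetical example.

    # A minimal, precise metadata record for a published dataset (JSON-LD).
    import json

    metadata = {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": "Community Food Access Survey 2023",  # hypothetical
        "description": "Household survey on food access, in English and Spanish.",
        "inLanguage": ["en", "es"],
        "creator": {"@type": "Organization", "name": "Example Community Org"},
        "license": "https://creativecommons.org/licenses/by/4.0/",
        "temporalCoverage": "2023-01/2023-12",
        "spatialCoverage": "Example County, USA",
    }
    print(json.dumps(metadata, indent=2))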

This might spark worry about exploitation and wealth creation for entities outside the communities you serve, perhaps fueling more personalized ads or targeted political content. However, the alternative presents a greater danger: the increasing invisibility of your community’s voices within the dominant narrative, especially given the rapid innovation in this field.

Improving And Demystifying AI

These tactics for using, interrogating, and influencing our AI tools help to demystify them. The difficulty is taking this on, organization by organization. There is an opportunity for us to do this as a sector. We can facilitate sessions that examine LLMs, and share the results of those sessions with the broader community. We can contribute to existing prompt libraries and build new ones when needed. 

We can examine terms of service and privacy implications associated with different AI tools, and share the results in a common rubric. This empowers individual decision-making through shared resources. These actions feel key to taking advantage of generative AI as well as informing its future.

*****

Generative AI Disclaimer: The original draft of this article was fed into Gemini with the prompt “Please copy edit the following text.” Many suggested edits were rejected in favor of previous wording; however, some restructured sentences made the points in a clearer fashion. Version 2 was entered into the same chat with the prompt “Thank you for the help. I have taken some, but not all, of your suggestions. Please review this version and list any potential flaws in logic.” A few edits were made for clarity based on the responses. Humans took over the editing at that point.

*****

Marnie Webb is chief community impact officer for TechSoup and leads Caravan Studios. She is also a contributing editor to The NonProfit Times.