
Doing good with AI tools: Navigating ethical considerations for the social sector 


This is a time of accessible AI: anyone with an internet connection can use state-of-the-art tools to help with tasks like brainstorming ideas, generating art, and drafting documents. But that same ease of access means that as more people use these systems, the potential impact of unethical use grows.

A recent Salesforce survey found that more than a quarter of respondents use AI tools at work, and more than half of those are using AI their organizations have not approved; the social sector is no exception. As AI is increasingly used in high-impact ways, such as writing and assessing grant proposals, how can we make sure it is used ethically and the risk of harm is minimized?

Understanding bias and harm

Modern AI tools are trained on vast amounts of mostly human-generated data. In many cases, the training data is curated from the internet, with all the prejudices and biases of the humans creating it. 

Even in cases where subject matter experts carefully manage the data, biases can creep in, and AI often amplifies them. Even large tech companies have had trouble with AI bias, such as Amazon’s controversial hiring AI.

In the very early days of AI at Candid, our grants auto-coding system, which predicts Philanthropy Classification System (PCS) codes from grant description text, exhibited signs of social bias. For example, when presented with a description of programs serving low-income populations, the system would often also predict that the grant was serving “People of African Descent.” Similarly, we saw evidence that the model was associating “People of Latin American descent” with “Incarcerated people.” We traced these cases to records in our training data that the model had overemphasized during learning. We addressed the problems by investigating and correcting the data and by improving how we sample it, and we continue to refine the data we use to train and evaluate our grants auto-coding system.

The harmful impact of these early problems was clear. Our grants auto-coding system is used to code most of the grants data that goes into Candid products. Misclassifying sensitive population groups results in an incorrect understanding of the work being done in the sector, who is being served, and where grant dollars are still needed. This may perpetuate the cycle of underserved communities remaining underserved.  

Ethical use to reduce bias and harm

Bias and harm are defined by the individual use case. At Candid, our grant-coding AI can harm population groups if it miscodes them. Similarly, an AI tool that recommends potential funders to a nonprofit runs the risk of perpetuating historical biases in grantmaking. When we use AI to help draft a grant proposal, in a tool currently under development, we must consider the potential for AI-generated language to be harmful or offensive. There is also a risk that language AI systems will misunderstand specialized vocabulary or culturally specific language.

Large language models such as ChatGPT can make up facts and present them as true. These “hallucinations” can be harmful, especially in use areas where manual fact-checking is not possible. In such cases, users can reduce harm by including verified facts and data in the prompt they give, providing guardrails on the AI’s response.
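For example, a prompt can be grounded in facts you have already verified, with an explicit instruction to stay within them. Here is a minimal sketch of that approach; the facts shown and the commented-out call_llm() helper are hypothetical stand-ins for your own data and whichever LLM client your organization has approved.

```python
# Build a "grounded" prompt that embeds verified facts, leaving the model
# less room to hallucinate. The facts and the LLM client are placeholders.

def build_grounded_prompt(question: str, facts: list[str]) -> str:
    """Embed verified facts in the prompt and instruct the model to stay within them."""
    fact_block = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer the question using ONLY the facts listed below. "
        "If the facts are not sufficient, say so instead of guessing.\n\n"
        f"Facts:\n{fact_block}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    question="Summarize this organization's focus areas for a funder profile.",
    facts=[
        "The organization runs after-school tutoring programs in three counties.",  # illustrative
        "In 2023 it served roughly 1,200 students.",  # illustrative
    ],
)
# response = call_llm(prompt)  # hypothetical call to your approved LLM client
```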

After you define bias and harm for your use case, the next step is to establish standard evaluation procedures that test AI systems against those definitions. These evaluations can inform usage restrictions or even flag an AI tool as inappropriate to use. At an organizational level, they can help establish ethics policies for what AI can be used for and what appropriate use looks like. Candid has established AI policies for staff, one of which is that AI outputs must be verified for truth and accuracy.
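One simple, concrete evaluation is to compare error rates across population groups on a labeled test set. The sketch below assumes a hypothetical record format and a predict() function standing in for your classifier; it is an illustration of the idea, not Candid’s actual evaluation procedure.

```python
from collections import defaultdict

def error_rate_by_group(examples, predict):
    """Compare error rates across population groups to surface possible bias."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for example in examples:
        group = example["population_group"]   # hypothetical field name
        totals[group] += 1
        if predict(example["text"]) != example["true_label"]:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# A large gap between groups (say, 5% error for one group and 25% for another)
# is a signal to investigate the training data before relying on the model.
```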

Guidelines for building AI tools for good

Building ethical AI requires an interrogation of the data you plan to use for training and evaluation. What problem is your AI trying to solve, and what is the definition of bias and harm for that problem? What can bias look like in the data? Answering these questions can guide efforts to correct and adjust the data. 
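One concrete way to start answering these questions is to audit how sensitive labels co-occur in the training data, which can surface spurious associations like the ones described earlier. The sketch below uses a hypothetical record format and made-up example records purely for illustration.

```python
from collections import Counter
from itertools import combinations

def label_cooccurrence(records):
    """Count how often pairs of population labels appear together in training records."""
    counts = Counter()
    for record in records:
        labels = sorted(set(record["population_labels"]))  # hypothetical field name
        counts.update(combinations(labels, 2))
    return counts

# Illustrative records, not real data:
records = [
    {"population_labels": ["Low-income people", "People of African descent"]},
    {"population_labels": ["Low-income people"]},
    {"population_labels": ["Incarcerated people", "People of Latin American descent"]},
]
for pair, count in label_cooccurrence(records).most_common():
    print(pair, count)  # unexpectedly frequent pairs are candidates for review
```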

One method for better understanding the potential problems and limitations of an AI system is the model card. Model cards were introduced to make reporting on AI more systematic, including what data was used to train and evaluate the AI, the AI’s limitations, and appropriate usage guidelines. Candid produces model cards for all AI tools we build and deploy in our systems. Currently, our internal model cards guide data quality considerations and improvement planning, and we maintain a historical record of them to track each AI’s evolution over time.
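As a rough illustration of what such a card can capture, the sketch below expresses typical model card fields as a Python dataclass. The field names and example values are illustrative, not Candid’s actual template.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data: str                 # where the data came from and how it was curated
    evaluation_data: str
    known_limitations: list[str] = field(default_factory=list)
    usage_guidelines: list[str] = field(default_factory=list)

# Illustrative example, not an actual Candid model card:
card = ModelCard(
    model_name="grants-auto-coder",
    version="2024-01",
    intended_use="Predict PCS codes from grant description text.",
    training_data="Curated grant descriptions with human-reviewed PCS codes.",
    evaluation_data="Held-out sample stratified by subject area and population group.",
    known_limitations=["Unreliable on very short descriptions."],
    usage_guidelines=["Only auto-code descriptions above the documented minimum length."],
)
```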

When using a publicly available AI tool, either as a building block in a larger system or on its own, it is important to look for a model card so that you can better understand its limitations. Hugging Face, the largest open-source AI repository, provides guidance on reading and creating model cards.

Finally, once your AI is built and trained, a critical step before launching it for wider use is to test it and provide ethical usage guidelines in a model card. In the case of Candid’s grant-coding AI, we have established that very short text produces unreliable results, so one of the guidelines on its model card is a minimum length for the input text.
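A guideline like that can also be enforced in code before the classifier is ever called. The threshold and function below are hypothetical; the actual minimum would come from the model card.

```python
MIN_WORDS = 25  # illustrative threshold; use the value documented in your model card

def validate_description(description: str, min_words: int = MIN_WORDS) -> None:
    """Raise an error when a grant description is too short to classify reliably."""
    if len(description.split()) < min_words:
        raise ValueError(
            f"Description has fewer than {min_words} words; "
            "auto-coding would be unreliable. Route it to manual coding instead."
        )

# validate_description(grant_text)  # call before passing the text to the classifier
```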

Ethical decision making

Easy access to powerful AI means that we all need to be careful consumers and developers of the technology. Understanding the limits of a given AI is crucial in defining ethical use.  

Anyone considering how to use AI ethically should reflect on their work: what constitutes harm, and how AI can help. For those building AI tools, keeping harm reduction in mind at every stage of the development process is critical. This reflection should generate criteria by which to establish ethical guidelines. 

Photo credit: Drazen Zigic via Getty Images
