
Beyond efficiency: A human-first AI adoption strategy


When we first started researching the use of AI in nonprofit organizations, we described the key benefit of AI as the “dividend of time.” As we wrote in The Smart Nonprofit: Staying Human-Centered in an Age of AI, when used responsibly, AI can help nonprofit staff reduce the countless hours they spend on labor-intensive and often energy-draining tasks—so they can focus instead on mission-critical work. With many nonprofit workplaces adopting generative AI over the past year, what costs and benefits are staff seeing?

No longer an abstract concept, the “dividend of time” is something many nonprofit staff are now experiencing as they adopt tools such as Copilot and ChatGPT for scheduling and managing meetings, processing email, writing drafts, summarizing long reports, analyzing data, and creating efficiencies in other common workflows.

Researchers have measured the time savings created by generative AI. A recent Harvard Business School study compared two groups of consultants—one using AI tools and the other not—performing the same workplace tasks, such as writing, analysis, and strategy. The group that used AI tools completed, on average, 12% more tasks, 25% faster, and with 40% higher quality. Other studies have found similar results, including an MIT study that found generative AI tools substantially increased average productivity on professional writing tasks.

The human risks of over-reliance on AI tools

There is a risk, however, that the availability of low-cost AI tools and the productivity gains they provide could lead to an over-reliance on the technology. Nonprofit leaders could use AI to have staff do the same tasks they’ve always done, just faster, or to shift jobs from full-time work to part-time or contract work. Yet approaching AI adoption primarily as a way to maximize time and cost efficiency will only exacerbate the burnout crisis in the sector.

One area at particularly high risk of over-reliance on AI tools is internal collaboration and communication. We’re beginning to hear how AI tools could erode workplace relationships if they are used only to speed up work: as staff interact more with AI tools, they may interact less with one another. This will be especially detrimental for junior staff, who need human-to-human feedback to learn and improve their job performance.

If staff interact more with AI tools than with one another to complete tasks, the organization loses the well-documented benefits that informal conversations bring to building an effective workplace culture, onboarding new staff, and improving staff retention. We know from workplace research that a strong sense of community and connection with workplace peers has many benefits for well-being, including mental health.

Putting humans first

There is an alternative approach: a human-first AI adoption strategy. Here are the core elements:

Redefining “productivity.” Centering humans begins with expanding the definition of productivity to include both completing existing tasks (or learning new ones) faster and better and reallocating time from transactional activities to activities that deepen relationships. There is a balance to be struck internally—one in which humans and AI each do what they do best, with humans always in charge of the final product or action. This is called “co-botting” or “co-piloting” with AI.

The most important skill nonprofit leaders can learn and teach their teams is knowing when to use AI and when to use human skills. This process begins with learning technical skills such as “prompting,” which involves constructing the right questions to ask generative AI tools in order to get useful answers. It also involves learning when and how to use AI responsibly, based on acceptable use guidelines. More importantly, increasing human-to-human engagement must be part of any AI adoption strategy.

Disrupting job tasks, not eliminating jobs. During the recent Microsoft for Nonprofits “Global Nonprofit Leaders Summit,” Meg Garlinghouse, LinkedIn’s head of social impact, and Karin Kimbrough, its chief economist, shared some striking data from the LinkedIn Economic Graph about how the technology will specifically impact jobs in the nonprofit workplace: 12% of nonprofit job tasks will be “augmented or changed,” and 39% of nonprofit job tasks will be “disrupted.”

Disrupting job tasks is not the same as eliminating jobs. It means that job descriptions and tasks might shift and require using or learning different skills. For example, a fundraiser whose job is “disrupted” may spend much less time on desk-based prospect research and have more time for talking with donors and pursuing other cultivation strategies.

Investing in “soft skills.” A human-first AI adoption strategy also requires an investment in “soft skills.” Leaders will need a talent development strategy for upskilling employees. In other words, if AI tools can save an employee 10 hours per week, that is an opportunity to provide training in uniquely human tasks that require empathy and intelligence, such as giving and receiving feedback, mentoring for improved performance, listening and reflection, and building psychological safety. Prioritizing and fostering human skills among staff will not only help the organization become more human-centered; it will also help alleviate the anxiety AI adoption can cause staff.

LinkedIn data suggests that the most in-demand skills at nonprofits that adopt AI will be person-to-person communication, continuous learning and adaptability, and problem-solving. While generative AI tools can do many things, they do not yet possess human reasoning or empathy. In other words, working effectively with AI tools requires more human-centered and human-centering skills, not fewer.

The use of AI in the nonprofit workplace will create deep and profound changes. Nonprofit leaders must dig into the technology now to understand what it is, how it works, how it will change job tasks, and, most importantly, how to ensure that people always come first. Leading with a human-first approach that encourages the use of AI in ways that improve workplace relationships will create a renewed sense of purpose and, ultimately, more impact.

Photo credit: gorodenkoff via Getty Images


  • Kate, Digital Communications Manager, Candid says:

    March 28, 2024 10:53 am

    Thanks for the suggestion! We will share it with the team.

  • Jeannine 'Jacleyn' Roberts says:

    March 22, 2024 10:36 am

    Thank you for this article. As a writer who prides herself on her soft skills, I am wanting to know how to incorporate AI in my creative process without disrupting my flow--"Maxwell's Demon." Would Candid consider forming a workshop on how to do this in nonprofit proposal writing? I would attend...

  • Ed Miller says:

    March 15, 2024 6:17 pm

    Thanks. This was informative.

  • Navjeet Singh says:

    March 1, 2024 11:31 am

    Beth and Allison,
    Thank you for highlighting some of the challenges of AI in the nonprofit world. My concern is that addressing these challenges will be tricky, as learning and development already tends to be an area of weakness, especially for smaller nonprofits. Nonprofits, especially in areas such as workforce development, are not exactly well known for investing in professional development. The onus tends to be on the individual employee.

  • Art Taylor says:

    February 21, 2024 12:25 pm

    Allison and Beth - Thanks for sharing these insights. There will be lots of pressure on organizations to use AI primarily as a cost saving measure, which raises ethical issues. Centering AI on humans keeps ethical use in the forefront and cost savings as a by-product.