AI and Nonprofits: Frequently Asked Questions (FAQ)

Posted By: Molly O'Connell | Technology

Practical insights, ethical considerations, and high-impact use cases

As nonprofits face growing demand and new challenges, Artificial Intelligence (AI) offers opportunities to increase efficiency, inform strategy, and build capacity. Many nonprofit staff are stretched thin, and AI tools can provide administrative relief so people can spend more time on human connection, creativity, and care. However, adopting AI comes with its own challenges, from ethical considerations to practical implementation hurdles. AI use is not risk-free; it requires intention, training, oversight, and transparency.

Our MANP team is learning and experimenting (and worrying) right alongside our members! At our May MANP Connects, we had a great conversation about the potential and pitfalls of AI use, and as a follow-up, we’ve compiled resources on some of the questions we’re hearing from our network and exploring ourselves.

Frequently Asked Questions (FAQs)

Our #1 Tip
Develop an AI Use Policy and provide staff training on ethical AI use (samples below)

Our #1 Resource
NTEN’s AI for Nonprofits Hub

What's the best way for someone new to AI to start exploring?

Start small with low-risk tools. Experiment with AI assistants like Microsoft’s Copilot or Google’s Gemini, which are built into programs you are likely already using. These tools can summarize documents, draft emails, or brainstorm ideas. (Learn the basics of prompt engineering to get better outputs.)
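For example, rather than asking a tool to “write an email,” try a prompt with context and constraints (the details here are just illustrative): “Draft a warm, two-paragraph thank-you email to a first-time donor to our food pantry. Keep it under 150 words and invite them to our fall volunteer day.” The more specific the prompt, the more useful the draft.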

AI note-takers like Fathom, Otter.ai, or Zoom’s built-in AI assistant can help ensure the humans in the meeting can engage fully, though you’ll want to review and fine-tune the notes afterward to balance AI output with human judgment.

Suggested Reading & Resources:

Do you have recommendations for workplace AI policies? 

Yes: you need one! Your team is likely experimenting with AI already, even if only in small ways, so it is essential to develop, and regularly revisit, an AI acceptable use policy that defines boundaries for scope, privacy, transparency, and accountability that make sense for your mission and values.

Some tools store data to improve their models, though private or enterprise settings sometimes provide a way to limit how data is retained and used. Organizations may want to urge, or even require, staff to use AI only through team or enterprise accounts that allow the organization to set limits and safeguards on how data is used. Regardless, the bottom line is: do not share sensitive personal or organizational data in public-facing AI tools.

Luckily, there are some great resources to help.

Suggested Reading & Resources:

How can we balance using AI to find efficiencies with being energy conscious?

AI consumes substantial energy, particularly during the “training” of large AI models. To make each use count, focus on tasks where AI saves meaningful time, such as summarizing reports, writing first (not final!) drafts, or organizing data.

Suggested Reading & Resources:

Remember: AI is not the only way to leverage technology for administrative efficiencies. You may want to explore automation tools like Zapier, Calendly, or even Excel macros!
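For instance (this workflow is purely illustrative), a simple automation might watch for new volunteer sign-ups in a web form, add each person to a spreadsheet, and send a welcome email automatically, with no AI, and no code, required.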

Can’t AI reinforce bias and inequity?

Yes. AI reflects the data it’s trained on: if that data is biased (and it almost certainly is), its outputs will be, too. Training staff on how to recognize and mitigate bias when using AI tools is a must.

Suggested Reading & Resources:

What frameworks help evaluate the potential risks and return on investment of using AI tools?

Use a tech risk matrix to assess potential harms versus benefits, considering mission alignment, energy impact, community needs, and staff capacity. Tools like AI risk management guides can help frame these questions. Common mistakes include rushing adoption without a strategy, neglecting staff training (especially around ethical, privacy, and security considerations), or using tech that doesn’t align with mission goals.
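As a simple illustration (the examples here are hypothetical): plot each proposed use of AI on two axes, likelihood of harm and severity of harm. Using AI to summarize an internal report might land in the low/low corner and need only light review, while an AI chatbot answering clients’ benefits questions would land high/high and call for human oversight, or a pass.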

Suggested Reading & Resources:

Examples + Inspiration

Where Can I Get Help?