AI and Nonprofits: Frequently Asked Questions (FAQ)
Practical insights, ethical considerations, and high-impact use cases
As nonprofits face increasing demand and new challenges, the use of Artificial Intelligence (AI) offers opportunities to increase efficiency, inform strategy, and build capacity. Many nonprofits have staff who are stretched very thin, and AI tools can provide administrative relief so staff can spend more time on human connection, creativity, and care. However, adopting AI also comes with its own set of challenges, from ethical considerations to practical implementation hurdles. AI use is not risk-free and requires intention, training, oversight, and transparency.
Our MANP team is learning and experimenting (and worrying) right alongside our members! At our May MANP Connects, we had a great conversation about the potential and pitfalls of AI use, and as a follow-up, we’ve compiled resources on some of the questions we’re hearing from our network and exploring ourselves.
Frequently Asked Questions (FAQs)
Our #1 Tip: Develop an AI Use Policy and provide staff training on ethical AI use (samples below)
Our #1 Resource: NTEN’s AI for Nonprofits Hub
What's the best way for someone new to AI to start exploring?
Start small with low-risk tools. Experiment with AI assistants like Microsoft’s Copilot or Google’s Gemini that are built into programs you are likely already using. These tools can summarize documents, draft emails, or brainstorm ideas. (Learn the basics of prompt engineering to get better outputs.)
AI note-takers like Fathom, Otter.ai, or Zoom’s built-in AI assistant can help ensure the humans in the meeting can fully engage, though you’ll want to review and fine-tune notes afterward to balance AI with human input.
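To illustrate the prompt engineering basics mentioned above, one common pattern is to give the tool a role, a task, an audience, and a format. The example below is hypothetical, not official guidance from any vendor:

```text
Role: You are helping a small nonprofit's communications team.
Task: Summarize the attached board report in five bullet points.
Audience: Volunteers with no finance background.
Format: Plain language, under 150 words, no jargon.
```

Even small additions like an audience and a format constraint tend to produce more usable first drafts than a one-line request.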
Suggested Reading & Resources:
- Video: Ten Terms to Get Started with AI in the AI for Nonprofits Resource Hub (NTEN)
- Free Online Course: Elements of AI
- Free Recorded Webinars: Intro to AI and AI Experts Q&A Panel (RoundTable Technology)
- Automation Tool: Zapier Learn
- Copilot Adoption: Adoption Manager (Microsoft)
Do you have recommendations for workplace AI policies?
Yes: you need one! Your team is likely already experimenting with AI, even if only in small ways, so it is essential to develop, and regularly revisit, an AI acceptable use policy that defines scope, privacy, transparency, and accountability boundaries that make sense for your mission and values. Some tools store data to improve their models, though private or enterprise settings sometimes provide a way to limit how data is retained and used. Organizations may want to urge, or even require, staff to use AI only through team/enterprise accounts that allow the organization to set limits and safeguards on how data is used. Regardless, the bottom line is: do not share sensitive personal or organizational data in public-facing AI tools.
Luckily, there are some great resources to help.
Suggested Reading & Resources:
- Video/Articles/Samples: AI for Nonprofits Resource Hub (NTEN)
- Article + Sample: Should You Have an AI Policy? (RoundTable Technology)
- Article: How Nonprofits Can Create Ethical AI Policies (Nonprofit Quarterly)
- Article: How to Develop an Ethical AI Use Policy for a Nonprofit (ICFJ)
- Tool: Nonprofit AI Policy Builder (FastForward)
- Guide: Generative AI in a HIPAA Regulated Environment: A Guide for Non-Profits (VP of IT at Easterseals NJ)
How can we balance using AI to find efficiencies with being energy conscious?
AI requires substantial energy, particularly for “training” large AI models. To make that energy use worthwhile, focus AI on tasks where it saves meaningful time, such as summarizing reports, writing first (not final!) drafts, or organizing data.
Suggested Reading & Resources:
- Article: Explained: Generative AI’s environmental impact (MIT)
- Article: A Computer Scientist Breaks Down Generative AI’s Hefty Carbon Footprint (Scientific American)
- Article: AI’s environmental impact can’t be ignored. What can higher ed do? (EAB)
- Article: Will AI Help or Hurt Environmental Efforts? (Nonprofit Quarterly)
Remember: AI is not the only way to leverage technology for administrative efficiencies. You may want to explore automation tools like Zapier, Calendly, or even Excel macros!
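To give a sense of what light automation can look like without any AI at all, here is a minimal Python sketch of a mail-merge style task of the kind tools like Zapier or spreadsheet macros typically handle. The donor records are made up for illustration:

```python
# Hypothetical example: turn a list of donor records into
# personalized thank-you drafts -- a repetitive task well suited
# to simple automation, no AI required.

donors = [
    {"name": "Jordan", "gift": 50.00},
    {"name": "Riley", "gift": 125.00},
]

def draft_thank_you(donor):
    """Build a mail-merge style message from one donor record."""
    return (
        f"Dear {donor['name']},\n"
        f"Thank you for your gift of ${donor['gift']:.2f}. "
        "Your support makes our work possible."
    )

# Generate one draft per donor.
for donor in donors:
    print(draft_thank_you(donor))
    print("---")
```

The same pattern, in whatever tool your team already uses, frees staff time for the relationship work that automation can’t do.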
Can’t AI reinforce bias and/or inequity?
Yes. AI reflects the data it’s trained on: if that data is biased (and it almost certainly is), its outputs will be, too. Training staff on how to mitigate bias when using AI tools is a must.
Suggested Reading & Resources:
- Framework: Artificial Intelligence Framework for an Equitable World (NTEN)
- Guide/Tool: Equitable AI Project Planning Guide (PDF) (NTEN)
- Information + Advocacy Resources: Algorithmic Justice League - AJL’s mission is to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms.
- Free Recorded Webinar: Data Ethics in the Age of AI (RoundTable Technology)
- Research: AI Risk Repository (MIT)
What frameworks help evaluate the potential risks and return on investment of using AI tools?
Use a tech risk matrix to assess potential harms vs. benefits. Consider mission alignment, energy impact, community needs, and staff capacity. Tools like AI risk management guides can help frame these questions. Common mistakes include rushing adoption without a strategy, neglecting staff training (especially around ethical, privacy, and security considerations), or using tech that doesn’t align with mission goals.
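As one way to picture a simple risk matrix, the Python sketch below scores each AI use case on likelihood of harm and severity of impact, then ranks them. The use cases and scores are invented for illustration, not drawn from any of the guides below:

```python
# Hypothetical risk matrix: rate each use case on likelihood of
# harm and severity of impact (1 = low, 3 = high); the product
# gives a rough priority for oversight.

use_cases = {
    "Drafting internal meeting notes": (1, 1),
    "Summarizing public reports":      (1, 2),
    "Screening client intake forms":   (3, 3),
}

def risk_score(likelihood, impact):
    """Simple likelihood x impact score (1-9)."""
    return likelihood * impact

# Rank use cases from highest risk to lowest.
ranked = sorted(
    use_cases.items(),
    key=lambda item: risk_score(*item[1]),
    reverse=True,
)

for name, (likelihood, impact) in ranked:
    print(f"{name}: risk {risk_score(likelihood, impact)}")
```

Even a back-of-the-envelope exercise like this makes clear that a note-taking experiment and a client-facing screening tool deserve very different levels of scrutiny.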
Suggested Reading & Resources:
- Article: A Step-by-Step Framework to Mitigate AI Risk (Nonprofit Risk Management Center)
- Article: 8 Steps Nonprofits Can Take to Adopt AI Responsibly (Stanford Social Innovation Review)
- Article: Artificial Intelligence: Productivity Benefits and Risks (Systems Engineering)
- Article: Artificial Intelligence: Balancing AI Adoption and Cybersecurity (Systems Engineering)
- Tool: Equitable AI Project Planning Guide (PDF) (NTEN)
- Report/Recommendations: AI Compass for Nonprofits (by Microsoft and AI4SP)
- Article: Do's and don'ts for chatbots and similar AI tools (NTEN)
- Toolkit: AI Ethics for Nonprofits (NetHope)
- Podcast: How can your organization stay human-centered in an automated world? (Big Duck, featuring Beth Kanter and Allison Fine)
Examples + Inspiration
- Research: Mapping the Landscape of AI-Powered Nonprofits (Stanford Social Innovation Review)
- Article: How AI has become My Most Candid Colleague (Joshua Peskay)
- Podcast: The Smart Nonprofit: Staying Human-Centered in An Automated World (Beth Kanter and Allison Fine)
- Free Recorded Webinar: AI Do-Fest: Transforming Ideas into Reality with AI (RoundTable Technology)
- Examples: Climate Change AI – A global initiative using AI to tackle climate issues, with research and educational resources
- Case Study/Video: Predictive Tools to Protect Maine’s Woods and Waters (The State of AI in Maine Event, Roux Institute)
- Article: 5 Nonprofit Fundraising AI Application Examples (CCS Fundraising)
- Resource: FundraisingAI - an independent collaborative that exists to understand and promote the development and use of Responsible and Beneficial Artificial Intelligence for Nonprofit Fundraising.
Where Can I Get HELP?
- From Your Peers: NTEN has an AI online discussion group and answers community questions.
- From Technology Experts: RoundTable Technology and Systems Engineering are two examples of organizations working on AI issues with Maine nonprofits. You can also find additional technology consultants in our Business Finder directory who can help you identify ways technology can create administrative efficiencies, whether through AI or other tools.