Tips for using AI technology to do good—safely and ethically
Ensure your nonprofit harnesses AI technology for good with these expert tips and free resources, which clarify common questions about AI tools and offer a roadmap for responsible, ethical AI adoption that benefits all.

As generative AI tools like ChatGPT become more widely available, the need to ensure they’re used safely and ethically grows ever more urgent. While AI technology can create efficiencies for cash-strapped nonprofits, we need to make sure the way we use AI is aligned with our missions to do good. Here are some articles to help you get started.
How should we think about using AI technology to do good?
How can AI tools transform the work nonprofits do for the better and help advance social causes? These articles explore the ethical considerations AI technology raises for the sector and ways to address them.
Harnessing generative AI for good. Drawing on 20 years as a tech executive, Candid CEO Ann Mei Chang shares her unique perspective on the risks and opportunities generative AI technology poses for the social sector. She lays out three grounding principles for ensuring generative AI is used to help all people and our planet thrive: 1) fall in love with the problem, not the technology; 2) fast follow; and 3) keep equity front and center.
Using smart tech to reimagine nonprofit work. Allison Fine, co-author of The Smart Nonprofit: Staying Human-Centered in an Automated World, outlines how technologies like AI, machine learning, and natural language processing—which uses AI to analyze and interpret human language—can automate rote tasks, saving nonprofit staff time so they can focus on the mission-critical work only they can do. Fine suggests three ways nonprofits can use smart tech to shift work cultures: invest in chatbots, improve workflow, and focus on physical health.
How should nonprofits mitigate risks in adopting AI technology?
Now that we have an idea of the potential for AI technology to improve nonprofit work, let’s get into the nitty-gritty of how an organization goes about adopting or developing AI tools.
Doing good with AI tools: Navigating ethical considerations for the social sector. Is your organization planning to develop an AI tool? Candid’s data science manager explains how AI technology is trained on previously created data and can amplify the biases of its human creators—with harmful effects. The article outlines steps for establishing an evaluation process for bias and harm, building an ethical AI tool, and testing and providing ethical usage guidelines before launch.
Is your nonprofit thinking about using ChatGPT? Your first step is to do no harm. In this article, co-authors Beth Kanter and Fine tell a cautionary tale of the limitations of chatbots. With mental health hotlines increasingly relying on chatbots, the risk of harm is real; chatbots can’t read nonverbal communication to gauge the pain a caller is in, offer true empathy, or identify the most useful resources. The authors call on nonprofits using ChatGPT to stay human-centered, increase staff’s AI literacy, consider “co-botting” with humans, and test, test, test.
Mitigating future risks: Why your organization should have an AI policy. Having an organization-wide AI policy can help maximize the benefits of using these tools while minimizing the risks. Candid’s contracts and compliance manager describes how we developed guidelines for AI use—which focus on best practices, not specific tools—to mitigate risk factors, establish operational standards and procedures, and adapt to AI’s evolution.
How can we ensure AI technology actually benefits nonprofit staff?
The potential for AI-created efficiencies doesn’t have to mean staff cuts. These articles illustrate how we can implement inclusive AI adoption to support humans, not replace them.
Beyond efficiency: A human-first AI adoption strategy. In this article, Fine and Kanter warn against overuse of AI tools merely to drive efficiency and productivity, which can increase staff workloads and burnout, reduce human interaction, damage workplace culture, and lead to high turnover. The authors explore what “putting humans first” looks like, starting with redefining “productivity.”
Inclusive AI adoption to drive nonprofit missions. So, how does inclusive AI technology adoption (not the tools themselves) directly impact mission-driven work? An inclusive approach that values diverse perspectives, provides equitable access to AI tools and training, and considers the unique needs of different user groups can also help an organization address challenges such as reducing bias in service delivery models. The article outlines how nonprofits can implement inclusive AI adoption and how funders can support that effort.
Where can fundraisers and grant writers read more about using AI technology?
If you’re a fundraiser or grant writer looking for articles about using generative AI tools in your work, here are a couple of places to start:
Elevate your nonprofit grant writing success with expert tips. Here’s a roundup of articles with tips that combine innovative approaches with proven strategies, including how to use AI effectively to write compelling grant proposals.
The top 3 benefits of AI for fundraisers. In this 2020 article, Fine explains how AI can save staff time, increase online donor conversion and retention rates, and improve donor engagement and online support.
We hope these articles will help you make decisions about AI adoption—ensuring it’s used to build a more equitable society.