Back to human: Using AI responsibly to strengthen trust in nonprofits
Using AI responsibly in the nonprofit sector means prioritizing, not automating, the human authenticity and connection that build trust with community members and supporters.

Can you tell whether a donor email was written by a person or by a bot? Have you read a social post that felt just a little too polished to be real?
For those of us in the nonprofit sector, these questions strike at the heart of our work: trust.
Trust is what holds together our relationships with donors, volunteers, beneficiaries, and staff. But those foundational relationships are shifting as AI becomes woven into nearly every digital interaction. Many nonprofit leaders I talk with are trying to understand how to adopt AI responsibly—without losing the human authenticity their communities expect from them.
The Edelman Trust Barometer recently reported a sharp decline in public trust in institutions, and the use of AI has exacerbated that trend. The AI Equity Project 2025, led by Meena Das and Michelle Flores Vryn, reflects the same tension within the nonprofit sector: While 65% of nonprofits express curiosity about AI, only 9% feel ready to use it responsibly. That gap reveals something deeper than a tech readiness issue: a crisis of confidence in how we balance innovation with integrity while continuing to build trust.
Donors give because they believe in the stories we tell. Communities engage because they trust our mission. But with AI-generated language, visuals, and data shaping how we communicate, we risk losing the very thing that makes nonprofit work special: the human connection behind every message.
At the same time, the rise of AI could also remind us of what truly matters. In a world of automated content and algorithmic efficiency, where empathy can be simulated, the most valuable currency will be human authenticity.
Responsible AI use requires funding, infrastructure
Every week, I hear the same questions from nonprofit peers: How can AI help us save time? Could it reduce burnout for small teams? And most importantly, how do we make sure it aligns with our values of equity and trust?
The AI Equity Project found that only 6.9% of surveyed nonprofits had internal policies for responsible AI use, and among organizations familiar with data equity, the share actually implementing data equity practices dropped significantly in 2025. Many small and midsize organizations, particularly those outside North America, see AI as a way to bridge capacity gaps, but without access to equitable funding or infrastructure, that potential remains out of reach.
Recent analyses from the Center for Effective Philanthropy and NTEN echo these findings. While AI can streamline data analysis and reporting, most organizations face barriers like limited expertise, unclear ethical guidance, and tight budgets.
The capacity to use AI responsibly is critical for nonprofits. When a corporation misuses AI, it risks its reputation; when a nonprofit does, it risks the trust of the communities it serves.
I find it encouraging that nonprofits are approaching AI thoughtfully. The opportunity now is to use this moment to build shared learning spaces and ethical frameworks that make responsible innovation possible.
Back to human
AI can help us work faster. It can summarize meetings, analyze data, and even draft donor communications. But it cannot replicate the essence of nonprofit work—our ability to connect, empathize, and build belonging.
Through The Nonprofit Hive, I’ve seen how a simple 30-minute conversation can create something deeper than a professional connection. With no agenda and no algorithm shaping who speaks, people settle into real presence—showing up as they are, not as a polished version of themselves. That presence is what builds trust. It’s something AI can simulate in language but never truly replicate in lived experience.
When nonprofits use it with care, AI can make more of that human authenticity possible. It can free up time for staff to focus on storytelling, relationship building, and listening: the actual work of investing in human relationships and belonging.
Designing human-centered AI
If nonprofits are to continue building trust through human authenticity, the goal shouldn’t be faster AI adoption; it should be better alignment. Here’s what I believe that looks like:
Funders: Invest in readiness, not just innovation. Support training, data ethics, and equitable access to tools before expecting rapid adoption.
Nonprofit leaders: Create space for dialogue. Invite your teams to talk openly about how AI might affect your mission and your relationships. Transparency builds confidence.
Tech partners: Co-create with nonprofits, not for them. Collaboration ensures tools reflect the realities of limited budgets and diverse communities.
All nonprofit professionals: Pair automation with intention. Use AI to simplify administrative work (thank-you notes, translations, onboarding) so people can focus on empathy and connection. In other words, we can leverage AI to extend our human authenticity, not to imitate it.
And the nonprofit sector can take this opportunity to model a new kind of leadership, one that measures progress not only in efficiency but in empathy. Because what sustains our missions has never been code or content.
It has always been community, rooted in building trust, human connection, and belonging.