Nonprofits are building AI solutions to advance their mission—and ensure equity
Delve into data from Fast Forward’s 2025 AI for Humanity Report to learn about the nonprofits that are building AI solutions to responsibly and equitably advance their missions.

Fast Forward’s 2025 AI for Humanity Report analyzes data from 34 AI-powered nonprofits (organizations building AI solutions to advance their own mission-driven work, e.g., chatbots, recommendation engines, or systems that organize resources for end users), 83 AI-assisted nonprofits (organizations adopting AI tools to improve workflow efficiency), and another 73 nonprofits identifying as both AI-powered and AI-assisted.
Candid insights asked Fast Forward co-founders Shannon Farley and Kevin Barenblat, as well as Atif Javed, co-founder and CEO of Tarjimly, a nonprofit building AI solutions, for their thoughts on a few key findings.
Many nonprofits building AI solutions are small and new to the field
The survey found that 48% of nonprofits building AI solutions to advance their missions employ 10 or fewer people and that 30% have budgets of $500,000 or less. And two-thirds are relatively new to the work: 12% have been developing AI solutions for less than six months, 28% for between six months and a year, and 28% for between one and two years.
“It isn’t a surprise that the smallest, nimblest nonprofits are leading the way on AI,” said Barenblat. “Nonprofits have always looked for ways to do more with less. Tech nonprofits are the archetypal example of nonprofits who’ve unlocked how.”
48% of nonprofits building AI solutions cite costs as biggest challenge
Nearly half (48%) of nonprofits developing AI solutions cite initial costs—sourcing and cleaning data, developing benchmarks, fine-tuning models, and running experiments—as the biggest challenge in adopting AI. Eighty-four percent say what they need most to continue developing and scaling their tools is additional funding. It’s a catch-22: They need capital to prove impact but need proven impact to unlock capital.
“It costs money to deploy the technology responsibly, and it takes time for impact to follow,” Farley noted. In a whitepaper co-authored with Google.org, Farley and Barenblat argue for strategic risk capital: Grantmakers can provide pre-seed grants; sustain mid-stage growth with risk-tolerant capital for infrastructure, technical talent, and product iteration; and invest in scaling proven models.
Funders are starting to adopt this approach, according to Farley. The whitepaper highlights how Bloomberg’s Corporate Philanthropy team provides a blend of financial and human capital to Visilant, which helps health workers use smartphone images to detect eye disease in low-resource settings.
Nonprofits adopting existing AI tools lag in responsible AI practices
Among nonprofits using existing AI tools such as ChatGPT to improve efficiency, the smallest (with budgets under $100,000) report that an average of 82% of employees use AI tools regularly, compared with 67% at midsize nonprofits (budgets between $100,000 and $5 million) and 59% at larger ones.
However, nonprofits adopting existing AI tools are less likely than those building their own AI solutions to have policies for responsible AI use (35% vs. 69%). They are also less likely to have risk mitigation processes (39% vs. 75%) and privacy controls (44% vs. 69%). “Part of the gap is perception: back-office uses like content creation and marketing may feel ‘low risk,’” the report notes.
“Ultimately, AI outputs are only as good as the data and practices behind them,” said Barenblat. “Showing commitment to data privacy, security, and ethical development builds trust and transparency.” This includes curating datasets that reflect and enhance the lived experience of users; conducting bias testing; and soliciting feedback directly from the community.
“Our biggest worry isn’t data security; it’s inequity,” said Javed, whose nonprofit provides translation services to refugees. “At Tarjimly, our greatest concern is that nonprofits using AI could unintentionally deepen inequities, especially for Limited English-Proficient people, refugees, immigrants, and speakers of low-resource languages who are often invisible in data.”
Community feedback is essential to ensuring equity in AI tools
Seventy percent of nonprofits building AI solutions as part of their mission-driven work report regularly incorporating community feedback into system updates. What does “community feedback” look like? Farley said Fast Forward has seen nonprofits adopt two approaches: ecosystem feedback, which involves community partners early in the design process, where they can bring deep context and surface blind spots; and in-product feedback, which invites users to flag or rate AI-generated responses in real time so the nonprofit can fine-tune the model, improving the product for everyone.
Sixty-one percent also customize models with their own data to tailor their tools to the communities they serve. “At Tarjimly, incorporating community feedback means co-creating technology with the very people it serves,” Javed explained. “Indigenous and community translators review and label translations and interpretations, creating high-quality datasets that are then used to fine-tune our models. This human feedback loop ensures that our AI evolves with cultural nuance, linguistic accuracy, and trust from the ground up…they’re the driving force shaping the next generation of humanitarian AI.”
Barenblat noted that funders can help ensure nonprofits have the resources to prioritize AI equity and accountability. “In grantmaking, funders should encourage transparent policies, support ethics training, and offer governance templates, [signaling] that equity and accountability are as important as innovation.”
Photo credit: Courtesy of Tarjimly