Can nonprofits leverage AI to understand community feedback?
Find out how to use both generative AI and human experts to surface actionable insights on nonprofit work and community needs, based on our research team’s experiment analyzing interview data with AI.

To better understand community needs or evaluate program effectiveness, many nonprofits gather information from stakeholders through interviews or focus groups. This data can be informative, but it’s also time-consuming and labor-intensive to interpret. Could generative AI tools help nonprofits understand their data? The short answer is: Yes, but. Analyzing nonprofit data also requires human understanding of a community’s context and issues, gained over years of experience.
When thoughtfully implemented, a human-first AI strategy can help nonprofits understand their communities’ input quickly without sacrificing depth. At Candid, we experimented with using both AI (a large language model) and human experts (nonprofit researchers at Candid) to analyze 24 interviews with nonprofit leaders. Here’s what we learned:
AI and human experts identified similar patterns in the data
AI and human experts found remarkably similar results when analyzing the same interviews. Both identified comparable themes and highlighted similar quotes that exemplified those patterns. They also found similar counts (the number of interviews that mentioned a theme). Across 11 interview questions, their counts never differed by more than one for any theme. Both also identified the same unique experiences (e.g., relevant but less common responses). This alignment gave us greater confidence that our final findings accurately reflected nonprofit leaders’ experiences.
AI and human reviewers each brought strengths to data analysis
Although AI and human experts produced similar findings, each showed unique advantages. AI processed 24 hours of interview transcripts within minutes. It consistently applied the same approach across all responses, highlighted patterns across different questions, and quickly pulled relevant quotes.
Human reviewers, however, were able to identify important cultural context and nuance. They recognized when participants used coded language or euphemisms that current AI tools may not have the experience to detect. They also identified critical intersectional experiences. While AI identified themes related to gender in several responses, our researchers highlighted how gender-related experiences varied across leaders from different racial/ethnic backgrounds.
How to incorporate AI into data analysis
Our experience suggests that combining coding by AI and human experts can produce more efficient and comprehensive insights into open-ended data than either approach alone. Here’s how to get started:
1. Identify your reviewers. Decide which AI tool (e.g., Claude, ChatGPT) and human experts (e.g., staff members, community liaisons) could best analyze your data. For human reviewers, consider why you collected your information and what knowledge or skills are needed to analyze your data. When selecting an AI tool, consult your organization’s policies (especially your AI policy). Then, consider AI tools’ data security policies (some platforms store or train on your content) and ability to incorporate supporting documents to provide additional context. We used Claude because it can handle extensive transcripts, remember context across conversations, and incorporate supporting documents about our study.
2. Write clear instructions for analysis. This is critical when working with generative AI tools, as prompt engineering (how you word and structure the instructions you give an AI tool) has a substantial impact on the quality of your output. Think of this as writing instructions for someone who is unfamiliar with your data or context. We suggest creating separate instructions for each topic (e.g., question) to fully explore input and surface important issues.
Provide AI and human experts with the same data to analyze independently, so that their results can be compared. Include information about:
- Role: The viewpoint your reviewers should adopt to approach their task.
- Context: Details about the purpose of your work and general information about the individuals you collected information from.
- Question: The initial topic discussed or question asked, so reviewers can identify themes relevant to it.
- Output: Detailed description of your expected final product. This could include identifying themes, frequency counts, and representative quotes; writing a summary; or comparing results to other data. Specify your preferred format, along with quality standards and ethical guidelines to follow.
Elements of effective instructions for AI and human reviewers
| Role | Context | Question | Output |
| --- | --- | --- | --- |
| Who should your reviewers be? | What do reviewers need to know? | What data will reviewers analyze? | How should findings be shared? |
| *Example:* You are a qualitative researcher with expertise in nonprofit funding and leadership. | *Example:* We collected data on pandemic funding from 30 CEOs at mid-sized nonprofits. | *Example:* Participants were asked: How did you adapt to funding changes in 2020? | *Example:* Identify major themes, quotes, and counts, then provide a summary in 1-2 paragraphs. |
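If you automate the AI side of the analysis, the four elements above can be assembled into a single prompt string before sending it to your chosen tool. Here is a minimal sketch in Python; the function name and the filled-in study details are illustrative, not Candid’s actual instructions:

```python
def build_analysis_prompt(role, context, question, output_spec, transcript):
    """Assemble a structured analysis prompt from the four elements:
    role, context, question, and expected output, followed by the data."""
    return (
        f"Role: {role}\n\n"
        f"Context: {context}\n\n"
        f"Question analyzed: {question}\n\n"
        f"Expected output: {output_spec}\n\n"
        f"Interview responses:\n{transcript}"
    )

# Illustrative values modeled on the example row in the table above
prompt = build_analysis_prompt(
    role="You are a qualitative researcher with expertise in nonprofit funding.",
    context="We collected data on pandemic funding from 30 CEOs at mid-sized nonprofits.",
    question="How did you adapt to funding changes in 2020?",
    output_spec="Identify major themes, quotes, and counts; summarize in 1-2 paragraphs.",
    transcript="[de-identified transcript text here]",
)
print(prompt)
```

Keeping the four elements as separate arguments makes it easy to reuse the same role and context while swapping in a different question for each topic.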
3. Compare and combine insights. Have a human reviewer compare outputs from both AI and human experts, then combine their outputs into a final summary. Consider:
- Areas of convergence: Where AI and human experts agreed, increasing confidence in findings.
- Complementary insights: Where AI and humans identified different but valid patterns.
- Discrepancies: Where AI and humans disagreed, requiring further investigation, additional context, or expert judgment.
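If both reviewers report theme counts, the first pass of this comparison can be scripted, leaving human judgment for the discrepancies. A minimal sketch, assuming each reviewer’s output has been reduced to a dictionary of theme counts (the theme names and data below are illustrative; the one-count tolerance mirrors the alignment we observed):

```python
def compare_theme_counts(ai_counts, human_counts, tolerance=1):
    """Sort themes into convergent, complementary, and discrepant buckets."""
    convergent, complementary, discrepant = [], [], []
    for theme in sorted(set(ai_counts) | set(human_counts)):
        a, h = ai_counts.get(theme), human_counts.get(theme)
        if a is None or h is None:
            complementary.append(theme)  # only one reviewer surfaced it
        elif abs(a - h) <= tolerance:
            convergent.append(theme)     # counts agree within tolerance
        else:
            discrepant.append(theme)     # needs follow-up by a person
    return convergent, complementary, discrepant

ai = {"funding cuts": 12, "remote work": 8, "burnout": 5}
human = {"funding cuts": 11, "remote work": 8, "board support": 3}
conv, comp, disc = compare_theme_counts(ai, human)
print(conv)  # themes where both reviewers agree
print(comp)  # themes only one reviewer surfaced
print(disc)  # themes with conflicting counts
```

A script like this only flags where to look; deciding whether a discrepancy reflects AI error, coded language, or a genuine ambiguity remains a human task.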
Considerations for using AI with nonprofit data
When incorporating AI into data analysis, keep these safety measures in mind:
- Protect participant privacy. Remove identifying information from your data when using online AI platforms and be mindful of where sensitive information is stored.
- Address potential AI limitations. AI systems can miss important cultural context or perpetuate existing biases present in their training data. When analyzing feedback from diverse communities, human experts should look out for potential biases.
- Maintain human oversight. While AI can quickly identify patterns, people should determine whether those patterns represent real community experiences and needs.
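Some of the de-identification work can be scripted before transcripts are uploaded, though automated redaction will miss things (names especially) and should always be spot-checked by a person. A minimal regex-based sketch; the patterns are illustrative, not a complete privacy solution:

```python
import re

# Illustrative patterns only -- real redaction needs human review
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text):
    """Replace emails and US-style phone numbers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(redact("Reach me at jane@example.org or 555-123-4567."))
# -> Reach me at [email removed] or [phone removed].
```

Labeled placeholders (rather than deletion) preserve the shape of the response, so reviewers can still tell that contact details were mentioned.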
The goal of this approach is not to replace human insight but to combine AI’s efficiency with humans’ understanding to better interpret and respond to community feedback. We’ve learned that this combination can help nonprofits process data efficiently without sacrificing the nuanced understanding that effective community engagement requires.