Program evaluation basics every nonprofit can use
Learn nonprofit program evaluation basics, including defining success, setting outcomes, and creating a learning cadence to measure and improve impact.

Now more than ever, nonprofits are under pressure to understand where their resources can be best used and document their impact. Regardless of the organization’s evaluation budget or measurement and learning capacity, all nonprofits can use basic evaluation concepts to support their mission and goals.
In my experience as founder and CEO of Social Insights Research, I’ve found that most effective nonprofit program evaluation processes tend to share the same core components:
1. A shared definition of success
Despite being clear about the problem they want to solve, team members may hold various assumptions about what drives change, what program participants need, and what meaningful progress looks like. Additionally, participants and communities themselves hold different (and often unexplored) perspectives.
A program evaluation clearly documents what’s working and what isn’t. That understanding starts with a shared definition of success. When assumptions are surfaced and aligned, the evaluation process is more likely to collect the information that will most effectively assess the program and improve it for those whom it’s meant to impact.
A practical starting point is holding conversations with staff members and community stakeholders to align on a shared understanding of what success would look like in real terms and what would count as evidence of that success. Ultimately, everyone involved should be able to articulate something similar to: “This program exists [for this purpose], and we know it is a success when [these things happen].”
2. Clear outcomes and metrics
Outcomes and metrics are how your definition of success is operationalized. It’s easier to design your strategy and understand your mistakes and innovations when you have something concrete to measure against. For example, if success means improving community narratives about midwives, then the strategy has to directly support that outcome. You could provide local shops with handouts and invest in promoting the practice on social media. You likely wouldn’t invest in midwife trainings or interview doctors.
It’s important to understand the types of outcomes you’re aiming to create, because they directly inform how you measure progress toward each goal. Outcomes can be short, mid, or long term and may change over time. A useful way to categorize outcomes is by thinking about results that can be assessed through numbers versus those that require nuance. For example, the hoped-for outcome “Conduct successful ‘get out the vote’ efforts in 2026” could be measured by the numerical metric “Volunteers will register 250 people to vote by September.”
Outcomes that require nuance may begin with numbers but require additional context to determine progress. For example, a hoped-for outcome such as “Improved relationships between the 18 organizations working to end economic inequality in Atlanta” requires mixed methods to evaluate—including both quantitative metrics like “How would you rate your closeness to other organizations before and after the fellowship?” and qualitative ones like “Tell us about the relationships you developed during this fellowship.”
Either type of outcome, numerical or nuanced, can work when clearly defined and aligned with the overall strategy. The key is to take the time to think through how an outcome will show up in the real world and how you will recognize it when it happens.
3. A learning cadence
Having a set cadence for evaluation gives nonprofit staff opportunities to learn what’s working and what’s not as they implement not only the program but also the evaluation process. These planned moments for reflection help organizations make sense of the results of their work while there is still time to adjust. From what I’ve observed, many initiatives schedule a beginning, middle, and end check-in. Multiyear strategies typically use quarterly or biannual reviews.
Earlier learning reduces the risk of nonprofits spending years operating on assumptions that have not been properly tested. This cadence is not just for program evaluation, but also for communication. Including feedback loops and consistent habits of reflection and debriefing as a part of your evaluation plan not only gives you more opportunities to improve, but also nurtures team engagement and equitable practice. Decide what will be shared, with whom, and how feedback will be incorporated along the way.
At Social Insights Research, we worked with a policy organization in the Northwest, which shifted its governance model to align with a core value: those most impacted by a problem are best equipped to make a difference. With funding to cultivate parent leaders to drive policy change efforts in local schools, the team defined 19 outcomes, mixing straightforward targets like recruiting 20 new parent leaders with more complex outcomes tied to engagement. They worked through their assumptions about these outcomes and revisited their definitions of success throughout the year through surveys, interviews, and focus groups. Our final evaluation found strong evidence of progress on 13 outcomes and surfaced unintended gains such as increased pride in the work among parents. By having a shared and measurable definition of success, paired with clear outcomes and indicators, the organization built a strong foundation to tell the story of its mission.
There is no perfect approach to program evaluation, but the more effectively you can articulate the change you want to make and how you’ll measure your progress toward it, the better you’ll be able to serve the communities you care about.
Photo credit: Fernando Lopez