

Federal government AI adoption: What nonprofits need to know

Learn why federal government AI adoption may pose challenging implications for nonprofits in the U.S. and what leaders can do now to prepare and protect their missions.

September 10, 2025 By Gayle Roberts


The federal government’s AI adoption accelerated dramatically in 2025, with major AI companies competing for government contracts and new frameworks reshaping how agencies operate. What does this mean for nonprofits and their clients? Government grant reviewers might use AI to analyze funding proposals. Policy staff might use it to draft regulations affecting public benefits. Case workers might use it to process Medicaid or TANF applications. Here’s a quick overview of recent developments, their implications, and what you can do to prepare.

AI companies compete to dominate government AI adoption

The administration’s America’s AI Action Plan, released in July 2025, outlined more than 90 actions emphasizing rapid AI deployment by reducing or eliminating regulations at the federal, state, and local levels. Central to this strategy: OpenAI secured a deal to provide the federal workforce access to ChatGPT Enterprise for $1 per agency per year. Anthropic responded by offering Claude access to all three branches of government, also for $1.

These developments coincide with the launch of USAi.gov, a platform designed to streamline AI adoption across government. While these moves could improve efficiency, this concentration raises questions about government dependence on a small number of vendors and about accountability.

Government AI adoption disproportionately affects vulnerable populations

Public benefits recipients. Tennessee’s $400 million TennCare Connect system wrongfully denied Medicaid benefits to thousands of residents due to programming and data errors, a federal judge ruled in August 2024. While the lawsuit brought by Tennessee Justice Center—which helps Tennesseans navigate health care and nutrition public benefits programs—and other nonprofits protected affected families, it required years of legal resources that could have supported direct services.

Immigrants. Immigration and Customs Enforcement (ICE) has expanded AI-powered social media monitoring through programs that scan online content for enforcement purposes. While ICE maintains these tools support legitimate enforcement, civil rights advocates express concern about potential overreach affecting legal residents and citizens.

People of color targeted for investigation. The Department of Homeland Security continues to expand biometric data collection, using technologies that raise privacy concerns. A Government Accountability Office study also found that federal agencies need to better track employees’ use of facial recognition technologies and to assess their accuracy and privacy risks. Given documented performance disparities across demographic groups in facial recognition systems, expanded biometric data collection puts Black migrants and other people of color at higher risk of discrimination by border agents and other officials.

Nonprofits can build a strategic response to the impacts of government AI adoption

Here’s a systematic approach nonprofit leaders can take to assess and respond to expanded government AI adoption and protect their missions and client interests. I call it the S.A.F.E. framework:

1. Scan for systems and impact. Map AI systems at federal agencies that affect your organization through existing government relationships and public information requests—for example, through the Freedom of Information Act. Document current AI applications in grants review processes, compliance monitoring, and service delivery systems that interact with your organization. Understanding which systems affect your work enables you to detect potential issues early and conduct informed advocacy efforts.

2. Affirm community-centered policies. Develop organizational AI policies through inclusive consultation processes that engage community voices, not just board governance. Establish clear principles for engaging with AI-enabled government systems, including requirements for transparency and human review in decisions affecting vulnerable populations. Consider building these principles into funding agreements and memorandums of understanding with government partners.

3. Field-test for potential harm. Create internal systems to identify potential AI-related issues impacting clients early. Train staff to recognize patterns in application denials, service delays, or other outcomes that might indicate systemic problems requiring an organizational response. Develop partnerships with legal aid organizations and advocacy groups to deliver coordinated responses to identified issues.

4. Elevate through coalition and advocacy. Participate in coalitions focused on algorithmic accountability and transparent AI governance. Share documentation of concerning patterns with civil rights organizations, oversight bodies, and investigative journalists. Support policy measures that ensure that the voices of community members are included in decisions about AI governance and maintain accountability mechanisms for AI-enabled government services.

Balancing efficiency and accountability

The federal government’s AI adoption and expansion could reduce processing times, improve accessibility through language translation, and help agencies manage increasing service demands with limited resources. However, realizing these benefits while maintaining accountability requires active engagement from civil society organizations. As crucial intermediaries between vulnerable communities and government systems, nonprofits can identify problems early and advocate for responsive solutions.

Organizations that proactively engage with these developments will be best positioned to protect their missions while contributing to effective AI governance that serves their communities.

Photo credit: gorodenkoff/Getty Images

About the author


Gayle Roberts

she/her

Co-founder & AI Strategist, AIxImpact; Founder, Fundraising for Change

