What if my organization has no AI use policy?
Follow these steps to develop a personal AI use policy that is practical, responsible, and aligned with your organization’s values.

With approximately 80% of nonprofits still lacking an AI use policy, it’s likely that many nonprofit staff are using AI tools without organizational guidance. This presents not only a security risk but also a loss of collaborative learning and a missed opportunity to build alignment around responsible AI use. So, what should you do if your organization has no AI use policy?
The absence of a policy isn’t the absence of responsibility. By taking steps to use AI responsibly now, you’ll make better decisions and be able to offer valuable insight when (not if) your organization begins to work on a policy. If your organization is currently developing a policy, get involved. As an early experimenter, your perspective and risk-aware insight will be invaluable to the process.
1. Develop your own responsible use guardrails
If your organization has no AI use policy, you need to create your own guidance. As AI becomes increasingly integrated into our workflows, it’s essential that we make values-based decisions from the start. As AI ethics expert Renée Cummings reminds us, “Data is like DNA. How it’s collected and used determines the kinds of algorithms we can design—and those algorithms increasingly determine access to resources and opportunities.”
With that in mind, we suggest, at minimum:
- Pay for your tools. Never use free versions of AI tools for work purposes. Free tiers typically lack the data protection controls, such as opting out of model training, that paid and enterprise plans offer.
- Protect sensitive data. Never input sensitive organizational or stakeholder data into AI tools. If you must work with data, practice rigorous de-identification and verify your redactions more than once before submitting anything (a minimal de-identification sketch follows this list). Ensure “training” is turned off and check regularly, as terms of service can change.
- Review your terms of service. Understand what rights you’re granting to AI providers. Check terms regularly.
- Define your values and boundaries. The choices you make, even as an individual, can reinforce or interrupt AI bias. Consider how your values around equity, inclusion, and accountability show up in your personal AI boundaries. What will and won’t you use it for? For example: “I will use AI to help draft initial outlines, but I won’t use it to write stakeholder communications without significant editing.” Here’s a free (beta) tool to help you think through your personal AI ethos.
- Document your experiments. Keep a running list of ideas to try. Save prompts that work well and ones that fail. Reflect on what you’re learning. Note use cases that might be valuable to your organization later.
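On the data protection point above, here is a minimal sketch of what a first-pass de-identification step might look like before you paste text into an AI tool. The patterns and the `redact` helper are illustrative assumptions, not a vetted tool; regex alone will miss many identifiers, so pair any automated pass with manual review.

```python
import re

# Illustrative patterns only; real de-identification needs review
# beyond regex (names, addresses, case numbers, context clues).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text, extra_terms=()):
    """Replace common identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    # Also scrub organization-specific terms you maintain yourself,
    # e.g. entries from your "never share" list (donor names, program IDs).
    for term in extra_terms:
        text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

draft = "Contact Jordan Reyes at jreyes@example.org or 555-867-5309."
print(redact(draft, extra_terms=["Jordan Reyes"]))
# -> Contact [REDACTED] at [EMAIL] or [PHONE].
```

Even a simple pass like this makes it easier to honor a “never share” list consistently, but treat it as a backstop, not a substitute for keeping sensitive data out of prompts in the first place.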
2. Find your fellow experimenters (and skeptics)
Experimenting with AI isn’t just about finding ways to be more efficient. Experimentation informs how we understand guardrails, limitations, and where things can go wrong. One of the first things we recommend is finding out who else is experimenting. Sharing your experiments and seeing how others use AI is the fastest, easiest way to understand its potential uses. Plus, these exchanges build more than skills; they build trust. Learning about AI together invites transparency, vulnerability, and shared learning that strengthens your organization’s readiness for the future.
And skeptics can raise concerns you might not have considered. It’s important to bring them into the discussion to help sharpen and refine your approach to organizational AI use.
3. Start with your pain points
Think about points of friction: tasks that feel repetitive, places where you wish you had more time, or moments when you could use a thought partner or sounding board. Start experimenting with small, low-stakes tasks that could be made easier without risking harm. When you discover something useful (or something that doesn’t work), share it with colleagues. This builds collective wisdom and organizational readiness.
4. Stay curious about your concerns
What questions keep surfacing about AI? Commit to learning something new regularly (NTEN and TAG have great resources). Much of the current discourse around AI is polarized—either hype or fear. Seek sources that acknowledge nuance, ethics, and long-term impacts, especially in the context of social good.
5. Document now. Influence next.
When your organization does begin working on a policy, your documentation becomes valuable input. Your use cases can inform pilot programs. Your questions can shape better guidelines. Don’t underestimate the power of modeling thoughtful, values-aligned experimentation. Whether you’re on a program team or in operations, your work can influence how your organization builds trust in and around AI—especially for communities most impacted by its outcomes. If you’re in a position of leadership or influence, your hands-on experience directly informs what the organization tests first and how it thinks about the potential (and risks) of AI for all staff.
One small step at a time
Choose one micro-action this week to help yourself and your organization use AI responsibly, even if it currently has no AI use policy:
- Define your “never share” data list and keep it somewhere visible.
- Find one colleague who’s also experimenting and schedule a conversation.
- Document one use case you tried—what worked, what didn’t, what surprised you.
- Review the terms of service for an AI tool you’re considering.
- Identify one pain point in your work and design a small, low-stakes experiment.
Even without an organizational policy in place, you can still navigate AI use responsibly. And that, in turn, will help your colleagues, your organization, and the broader nonprofit sector do the same.

