AI Governance for Not-for-Profits: How to Start Without Stalling

Nic Miller

Candid developed an AI policy from scratch — here's the practical roadmap they shared.

Most non-profit teams know they should have an AI policy. Few have one. And the ones that do often find it sitting in a folder, unread.

At a recent Fundraising Jam session, Astrid Vinje (Senior Contracts and Compliance Manager) and Catalina Spinel (Director of Partnerships) from Candid shared how their organisation built an AI governance framework from scratch. Not as AI experts, but as a peer non-profit that went through the process and learned along the way.

Their experience offers a practical starting point for any fundraising team trying to adopt AI responsibly without dragging it out for six months.

The real risk isn't AI. It's having no guardrails.

The session opened with two scenarios. In the first, a grants manager uses AI to draft a proposal using only public information, then staff review and revise it before submission. The proposal wins the grant.

In the second, a staff member uses a free AI tool to draft a major donor appeal and sends it without review. The email contains a factual error. A long-time donor calls to express concern. Credibility takes a hit.

The difference isn't whether AI was used. It's whether anyone checked the output before it went out the door.

One attendee shared a first-hand example: early in their AI experimentation, they asked ChatGPT to draft a grant application using publicly available information. The result sounded great, but it claimed the organisation did things it doesn't actually do. The lesson was immediate. Substantive review is not optional.

Three buckets, not one scary blob

Candid's approach breaks AI governance into three areas:

  1. Risk mitigation. Protecting data, intellectual property and organisational reputation. Monitoring how AI tools evolve. Using training to reduce the chance of misuse.

  2. Governance. Deciding who makes AI-related decisions and how oversight works. For Candid, that meant forming a cross-functional AI governance committee. For smaller organisations, it might mean assigning one or two people to own the topic.

  3. Culture and values. Making sure AI use reflects the organisation's mission and ethics. This includes watching for bias in AI outputs and being honest about what AI can and can't do.

Breaking it down this way helps teams move past the "big scary risk bucket" that stalls adoption entirely.

Build a policy that people will actually use

Candid chose a flexible, simple policy over a rigid, detailed one. Their reasoning: staff need room to experiment, and a short policy is easier to remember.

Their policy focused on a few clear rules:

  • Don't put confidential information or intellectual property into AI tools

  • Don't break any laws

  • Always check the accuracy of the output

  • Make sure the tools you use have data protection in place

A few practical considerations came up in the discussion:

  • Free vs. paid tools matter. Paid tools typically give you more control over privacy and data handling. Review terms of service regularly because they change.

  • A policy on a shelf is not a policy. Communicate it to staff. Run Q&A sessions. Invite input during development so people understand where the organisation stands.

  • Make it a living document. Revisit and revise as AI evolves and your organisation's needs change.

One attendee noted that their policy includes a risk-rated list of tools at the end: low risk, medium risk, high risk. That reference format has been useful in practice.

Start with risks, not templates

Instead of copying a generic AI policy template, Candid recommends starting with your organisation's specific risks. For them, three stood out:

  • Data exposure. Once organisational data goes into an AI tool, it's difficult to take back. Protecting donor information and proprietary data was a top priority.

  • Reputational damage. Inaccurate or tone-deaf AI output could erode the trust they've built.

  • Staff misuse. Without clear guidance, well-intentioned staff can create liability.

Each element of the policy should trace back to a real risk. If it doesn't address something specific, it probably doesn't need to be there.

Watch for bias, and call it out

One of the most striking moments in the session was a real example of AI bias. During a Zoom meeting with more women than men, the AI-generated meeting notes disproportionately emphasised contributions from male participants. The team caught it, flagged it in their internal AI Slack channel and used it as a learning moment.

Catalina shared another example: when she asked AI for a list of experts in a particular field, it returned only men from the Global North. The fix was simple: ask better questions. But you have to notice the problem first.

This is why human oversight matters at every step. AI is a tool, not the final word.

Candid's governance journey

Candid didn't build everything at once. Their AI governance evolved in stages:

  1. AI policy. Guidelines for how staff can use AI.

  2. AI notice. An external-facing page explaining how Candid uses AI in its products.

  3. AI governance committee. A cross-functional group responsible for AI strategy, with representatives from multiple teams.

  4. Community of practice. A voluntary, staff-led space (Slack channel plus regular Zoom sessions) where people share use cases, prompts and learnings.

The community of practice started organically from a Slack channel and grew into structured sessions on specific topics. It serves two purposes: people learn from each other, and the governance committee gets a direct line to how AI is actually being used across the organisation.

What you can do this week

  • Write down your top three AI risks. Be specific to your organisation. Data exposure? Reputational harm? Compliance gaps?

  • Draft or revisit your AI policy. Keep it short. Tie every element to a risk.

  • Identify who owns AI decisions. It doesn't have to be a committee. It just can't be nobody.

  • Start a conversation with your team. A policy works only if people know it exists and understand why it matters.

  • Plan one training touchpoint. Formal or informal, make sure staff have a baseline understanding of what's allowed and what isn't.

This post is based on a session led by Astrid Vinje and Catalina Spinel from Candid at the Non-profit Fundraising Jam. Candid is a tech non-profit that provides comprehensive data on the social sector. Learn more at candid.org.
