Shadow AI: The Governance Gap Facing Nonprofits
I recently ran an AI readiness assessment for a mid-sized nonprofit. Before we started, the leadership team described their AI culture as "very early days"—a few curious staff members experimenting, nothing systematic.
Then we interviewed the staff. 95% were already using AI tools for work. Four different transcription tools across the team. DeepSeek. Custom chatbots people had built to navigate internal resources.
This isn't unusual. It's the norm. And it's exactly what IBM's 2025 Cost of a Data Breach Report describes as "shadow AI"—unregulated, unauthorised AI use that organisations don't know is happening.
Shadow AI Breach Costs: What the Numbers Show
One in five organisations in IBM's 2025 Cost of a Data Breach Report experienced a breach due to shadow AI. Those with high levels of shadow AI paid $670,000 more in breach costs than those with low levels or none.
Shadow AI incidents also compromised more personally identifiable information and intellectual property than typical breaches. And yet, according to Vital City, only 11% of nonprofit staff have received any guidelines about what AI use is or isn't permitted.
Why Impact Organisations Face Higher Stakes
I spent seven years evaluating early-stage innovations for an impact investment fund before starting ImpactAgent. One pattern I saw repeatedly: organisations that handle sensitive data about vulnerable populations face asymmetric consequences when things go wrong.
A fintech startup that leaks customer data faces fines and churn. A nonprofit that leaks beneficiary data faces something worse—a breach of trust with the very people it exists to serve. Nonprofits are the most trusted sector in the US. That trust is the foundation of the work. It's also fragile.
Crisis Text Line learned this the hard way. When news broke that they'd shared anonymised data from crisis counselling sessions with a for-profit spin-off, the backlash was severe, with people voicing anger and disgust at the betrayal of vulnerable people. And although the data was anonymised, previous studies have shown that supposedly anonymised records can often be traced back to specific individuals.
Shadow AI creates exactly this kind of exposure. Staff sharing beneficiary details with ChatGPT to draft case notes and emails. Programme teams uploading partner data to AI tools to generate reports. Finance staff feeding donor information into AI assistants to speed up acknowledgement letters.
None of this is malicious. Given heavy nonprofit workloads, most of it is done with the best of intentions. But it creates significant risk.
The Gap Between Policy and Practice
Most organisations I work with already have a data privacy policy in place and are starting to think through their AI policy.
An AI policy is only a starting point; staff need to know what it actually means in practice. They don't need philosophy. They need answers: Can I paste this grant report into ChatGPT? What about meeting notes that mention a partner organisation? Is it okay to use AI for donor communications?
A data classification scheme that distinguishes between public information (fine to use), internal information (probably fine), confidential information (ask first), and restricted information (never) is governance. Everything else is good intentions.
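To make that concrete, here is a minimal sketch of how those four tiers could be expressed as a simple lookup that staff (or an internal tool) check before pasting anything into an external AI service. The tier names, example data types, and rules are illustrative assumptions, not a prescribed scheme.

```python
from enum import Enum


class DataTier(Enum):
    """Illustrative classification tiers (assumed names, not a standard)."""
    PUBLIC = "public"                # published reports, website copy
    INTERNAL = "internal"            # meeting notes with no named individuals
    CONFIDENTIAL = "confidential"    # partner or donor details
    RESTRICTED = "restricted"        # beneficiary or case data


# Plain-language rule for each tier when using external AI tools.
AI_USE_RULES = {
    DataTier.PUBLIC: "Fine to use with external AI tools.",
    DataTier.INTERNAL: "Probably fine; strip names and identifiers first.",
    DataTier.CONFIDENTIAL: "Ask the data owner before sharing.",
    DataTier.RESTRICTED: "Never paste into external AI tools.",
}


def ai_use_guidance(tier: DataTier) -> str:
    """Return the rule a staff member should follow for a given data tier."""
    return AI_USE_RULES[tier]


if __name__ == "__main__":
    # Example: checking whether case notes can go into a public chatbot.
    print(ai_use_guidance(DataTier.RESTRICTED))
```

The same thing works just as well as a one-page table on the intranet; the point is that the answer to "can I paste this?" should be a lookup, not a judgment call made under deadline pressure.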
The IBM report also found that organisations using AI and automation extensively in their security operations saved an average of $1.9 million in breach costs. The problem isn't AI; it's AI use without structure behind it.
Three Questions to Ask
1. Do you actually know what tools your staff are using?
Not what's approved in the policy, but what they're actually using. The only way to find out is to ask—and to ask in a way that feels safe enough for people to be honest. Leadership surveys rarely surface the full picture, but conversations do.
2. Can your staff articulate what data they shouldn't share with AI?
If the answer is "I'm not sure" or "I assume anything confidential," you have a gap. Staff want to avoid tripping over your organisation's guidelines—but they can only do so if it's clear what those guidelines are.
3. Who owns this?
Who on the team is responsible for knowing whether your AI use is creating risk? If the answer is "no one," that's the first thing to fix.
What Comes Next
Building AI governance that actually works—that enables staff to be more productive while protecting the data you're trusted to hold—is not as simple as approving a policy. It requires understanding and buy-in from your staff, whilst also ensuring they can work with tools that make their day-to-day easier.
If your organisation is using AI without a strategy (and statistically, it probably is), that's not a moral failing. It's a common situation. But it's also a situation that gets more expensive to fix the longer you wait.
An AI Action Plan starts with listening to your staff—understanding what they're already doing, where the risks are, and what governance would actually help.
This post was written with AI assistance—I provided Claude with a detailed prompt and source documents, then edited the AI-produced draft to add personal examples and perspective. Total time saved: approximately 2 hours.