Artificial Intelligence (AI) has arrived in the Not-for-Profit (NFP) sector (76% of charities¹ are using it), and the atmosphere feels a lot like the start of the popular TV show The Traitors.
AI promises a huge prize pot of efficiency gains, increased income and greater impact, but lurking among the Faithful AI tools are hidden risks (the Traitors) that could betray an NFP's mission, compromise public trust and erode its core values.
The challenge for NFP leaders today is not whether AI should be used, but how to ensure that every AI tool invited into the castle is a Faithful, not a hidden Traitor ready to sabotage our work.
1. The opening ceremony: Why adoption is stalled
While the buzz around AI is high, strategic adoption in the NFP sector is moving at a snail's pace: only 2% of UK charities¹ are leveraging AI at a truly strategic level, and 74% of large UK charities¹ are worried about the implications of using AI.
The Traitors' secret weapons: Barriers to responsible AI
The following risks act as hidden Traitors, preventing NFPs from successfully completing their AI Missions and accumulating funds in the prize pot:
- The inaccuracy imposter: AI that 'hallucinates', making up facts and leading to errors and reputational damage when supporters, beneficiaries and donors are given inaccurate advice or information, risking that they never return to the charity.
- The personal data plunderer: Employees or volunteers input sensitive, confidential data about beneficiaries or vulnerable people into public AI tools, constituting an unauthorized disclosure and a breach of UK GDPR.
- The skills saboteur: Lack of technical and strategic knowledge among employees, executives and trustees to oversee AI safely or know where to start to truly unlock the potential value.
- The budget blocker: Financial constraints prevent investment in secure enterprise AI tools, leaving charities vulnerable to lower-grade, riskier solutions.
2. Defining the Faithful: What responsible AI looks like
For AI to be considered faithful to the mission, it must adhere to the following principles, making it difficult for an AI Traitor to blend in.
A. Mission alignment is the vow
AI must add value. A common pitfall we see is charities jumping straight to AI tool selection because they feel behind others in adopting AI. Instead, assess the need for AI against the outcome it will deliver and the end impact: does it genuinely advance the charity toward achieving its mission, or is it a shiny tool that won't deliver any value?
B. Human centricity is the shield
AI adoption is a catalyst for people-powered change that fundamentally alters team roles and dynamics. The purpose of AI is to make people more effective, not redundant.
- Reframing AI: AI can reduce the effort spent on mundane, repetitive tasks (e.g. entering donor data, summarizing beneficiary casework, researching and qualifying grants). This frees employees and volunteers to focus on complex activities that require personal, one-to-one interaction (e.g. spending more time providing tailored advice to beneficiaries).
- Human-in-the-loop: We must ensure that the valuable human elements (empathy, compassion, complex judgment and crisis management) remain in the final decision-making process. AI must complement, not replace, core human qualities like trust and lived experience (frontline insight).
- Put people first: To successfully manage AI transformation, leaders should engage their people from the beginning to understand their priorities, challenges, frustrations, and motivations. This ensures AI implementation addresses real pain points, maximizes adoption, and ultimately delivers better services that are profoundly more human.
C. Transparency is the pledge
Trust is the currency of the NFP sector. To prevent damaging public trust, we must be clear about when and how AI is being used in operations and services. Transparency is the essential shield against suspicion.
- Earn trust: Trust is earned through openness and effective communication. It is important to proactively engage everybody (employees, supporters, donors, volunteers and beneficiaries), explaining where the AI stops and where the human remains. Leaders must communicate early, often, and from the top, framing AI as an enhancement, not a replacement.
- No secret endorsements: We must disclose when AI has been used (e.g. in drafting a grant application, or where an AI chatbot is helping someone donate online), ensuring people never feel they have been misled or are dealing with an unseen algorithmic force.
- No hidden secrets: Teams must establish feedback loops that build trust through quick, honest, and situational feedback regarding AI performance and safety. This ensures accountability, continuous improvement, and honest conversations about how AI is helping versus hindering.
D. Data quality is the foundation
Data quality underpins every AI use case in the NFP sector. Poor, incomplete or inaccurate data is the very definition of a Traitor: it will inevitably produce biased, unfair or inaccurate outcomes.
- Data governance and hygiene are non-negotiable: Poor data (siloed, incomplete or biased) risks inaccurate outcomes and treating supporters, donors and beneficiaries unfairly or inappropriately. AI tools can themselves help to significantly improve data quality through automated data cleansing, transformation and monitoring (a simple sketch follows this list).
- Informed consent: Informed consent must be obtained from volunteers, donors and beneficiaries for the processing of their data by AI, respecting their privacy above all else, and adhering to strict data security protocols.
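To make the hygiene point concrete, here is a minimal sketch in Python of automated data-quality checks. It assumes a hypothetical donor extract with 'email', 'postcode' and 'consent_date' columns; a real implementation would live inside the charity's CRM or data pipeline.

```python
import pandas as pd

# Hypothetical donor extract; the file name and column names are illustrative.
donors = pd.read_csv("donors.csv")

# Standardize formatting before checking for duplicates.
donors["email"] = donors["email"].str.strip().str.lower()
donors["postcode"] = donors["postcode"].str.strip().str.upper()

# Flag, rather than silently drop, records that could mislead an AI tool:
# duplicates skew donor analysis; a missing consent date is a red line.
donors["is_duplicate"] = donors.duplicated(subset="email", keep="first")
donors["missing_consent"] = donors["consent_date"].isna()

# Route flagged records to a human for review instead of feeding them to AI.
print(donors[donors["is_duplicate"] | donors["missing_consent"]])
```

The design choice here is to flag and escalate suspect records rather than auto-correct them, keeping a human accountable for what the AI is ultimately trained or prompted on.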
3. The round table: Achieving responsible AI
To unmask the Traitors and build a high-impact AI capability, NFP leaders should consider the following steps toward responsible AI:
A. Establish an AI policy and guardrails
Responsible AI starts with formalizing the rules for your specific charity, protecting the mission, finances and beneficiaries from legal and ethical harm. These guardrails define what gets banished from the organization's tech stack.
i. Define the policy and mandate
- Mission-first mandate: State that your AI policy's primary purpose is to protect the charity and the trust of its beneficiaries, and that all AI use must directly align with your charitable goals, purpose and mission statement.
- Oversight and approval: Establish an AI governance body or council responsible for setting the direction for AI across the entire charity, ensuring AI is aligned with the mission and values and is legally and ethically compliant.
- Approved tools: Maintain a register of vetted and approved AI technologies that meet security and ethical standards for staff use (a simple sketch of such a register follows).
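A register does not need to be sophisticated to be useful. The sketch below shows the kind of information worth capturing per tool; every tool name, use and date here is a hypothetical placeholder.

```python
# A minimal approved-tools register; all entries are illustrative examples,
# not recommendations of specific products.
approved_ai_tools = [
    {
        "tool": "Enterprise assistant (example)",
        "approved_uses": ["drafting internal documents", "meeting summaries"],
        "permitted_data": "internal, non-personal data only",
        "reviewed_by": "AI Governance Council",
        "next_review": "2026-06-01",
    },
    {
        "tool": "Public chatbot (example)",
        "approved_uses": ["research using public information"],
        "permitted_data": "public data only; never personal or sensitive data",
        "reviewed_by": "AI Governance Council",
        "next_review": "2026-06-01",
    },
]
```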
ii. Implement non-negotiable red lines
- Data protection: Be clear about how data can and can't be used, e.g. never input personal or sensitive data into public, third-party generative AI tools (see the enforcement sketch after this list).
- Human accountability: Prohibit AI from making final decisions in high-risk scenarios, building in human review of critical AI-assisted outcomes.
- Draw the absolute red line: Clearly state the specific scenarios where AI can't be used, so there is no doubt among employees and volunteers (e.g. providing complex mental health support to a vulnerable person must always be done by a human, not an AI chatbot).
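These red lines can be partially enforced in software. Below is a minimal sketch, assuming (hypothetically) that all prompts bound for external AI tools pass through a single internal gateway; the patterns and red-line terms are illustrative placeholders, not a complete PII detector or safeguarding policy.

```python
import re

# Illustrative patterns only; a real deployment would use a proper
# PII-detection service and a safeguarding-approved term list.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),   # email addresses
    re.compile(r"(?:\+44|0)\d{9,10}\b"),       # UK phone numbers (rough)
]
RED_LINE_TERMS = ["mental health crisis", "safeguarding", "suicide"]

def check_prompt(prompt: str) -> str:
    """Classify an outbound AI prompt as 'blocked', 'human_review' or 'allowed'."""
    text = prompt.lower()
    if any(term in text for term in RED_LINE_TERMS):
        return "blocked"       # absolute red line: route to a human, never to AI
    if any(p.search(prompt) for p in PII_PATTERNS):
        return "human_review"  # possible personal data: pause and check consent
    return "allowed"

# Example: this prompt contains an email address, so it needs human review.
print(check_prompt("Summarize the case notes for jane.doe@example.org"))
```

In practice, the 'blocked' and 'human_review' outcomes would route to a named person, keeping a human accountable for every high-risk decision.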
B. Build knowledge and train the human skillset
We must stop the Traitors from recruiting the Faithful through ignorance or fear by equipping employees and volunteers with the confidence to manage AI effectively.
- Roll out AI training: Education must go beyond tool use to cover responsible use, risk recognition, and ethical considerations.
- Master prompt engineering: Invest in training staff in the art of asking good questions; using AI well requires iteration and refinement (see the example after this list).
- Encourage autonomy and mindset change: Train beyond the technology, supporting the core human skills that best unlock the value of AI: conceptual thinking, critical questioning and problem-solving. Shift the thinking from 'we've always done it this way' to 'how can we do this simpler, better and quicker with AI?'
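As a brief illustration of that refinement, compare the two prompts below; the charity scenario and wording are hypothetical.

```python
# A vague prompt leaves the AI guessing about audience, tone and facts.
weak_prompt = "Write a fundraising email."

# A refined prompt supplies role, audience, constraints and approved source
# material, and explicitly forbids invented statistics.
refined_prompt = """You are a fundraising officer at a UK homelessness charity.
Draft a 150-word email to donors who last gave more than 12 months ago.
Tone: warm and personal, not guilt-driven.
Use only the approved campaign summary pasted below; do not invent statistics.
Campaign summary: <paste approved copy here>"""
```

The refined prompt works because it gives the AI a role, an audience, constraints and vetted source material, rather than leaving it to guess.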
C. Create an AI-curious and experiment-driven culture
Leaders must act like the Traitors meeting in the Turret, strategizing in secret to plan how to win the game, but doing so for the greater good.
- Communicate and share: Frame AI as an enhancement, not a replacement. Run a leadership-led AI roadshow where leaders demonstrate how they personally use AI to succeed, role modelling and encouraging others to use AI.
- Give employees space and autonomy: Ask front-line employees to identify key use cases for AI based on their day-to-day challenges and problems, as they are closest to the end beneficiary, donor or supporter. A myriad of shadow AI is most likely already being used to serve people better, coupled with a fear of making it visible, so give staff a safe route to surface it.
- Recognize AI champions: Identify, recognize and reward individual 'AI champions' to help increase adoption; they become advocates for AI tools, influencing their colleagues.
- Build knowledge-sharing communities: Create communities of AI-curious employees who meet monthly to share wins, lessons, and ethical discussions. Encourage peer-led demos and “failure stories” to normalize experimentation.
By adhering to these principles and holding AI accountable, the NFP sector can successfully navigate the shadows of suspicion. We can ensure that our AI is always a Faithful, dedicated to amplifying our mission, protecting our values, and delivering the greatest possible impact for the people who need us most.
Ready to unmask the Traitors in your organization?
Don't let the hidden risks of AI betray your mission. If you're struggling with adoption roadblocks or need a clear strategy to implement the Faithful principles, our experienced AI and change consultants can help. Learn more about our services here: AI Consulting | Not for profit consulting
References
1. Charity Digital Skills Report (2025), Artificial Intelligence.