Ethical AI Use for Nonprofits: Key Considerations for Building Trust

AI offers immense potential for nonprofit operations, from streamlining administrative tasks to enhancing program delivery. However, its use comes with important ethical considerations, particularly for organizations committed to social good. Nonprofits must ensure that AI adoption aligns with their mission, values, and the trust they’ve earned from the public.

In 2024, nonprofits were the most trusted sector in the US, and Americans trust nonprofits to reduce national divisions more than they trust corporations, government, or media. Preserving this trust is critical, and ethical AI use is instrumental to doing so: a majority of Americans say their trust in a nonprofit would increase if it committed to third-party standards for ethical operations and good governance practices.

This post explores key ethical considerations nonprofits should keep in mind when implementing AI, provides practical guidance, and highlights real-world examples to guide responsible AI usage.

Part I: Key Ethical Considerations

  1. Data Privacy and Security: Safeguarding Sensitive Information

Nonprofits often work with sensitive personal data, from donor information to beneficiary details, and must make data privacy and security top priorities. This means adhering to strict regulations such as the GDPR and ensuring informed consent for how data is collected, used, and stored.

Example: Crisis Text Line, a nonprofit whose service connects people over text with counselors who can provide support on issues such as depression, self-harm, and suicide, received widespread backlash when it shared its anonymized data with Loris.ai, its for-profit spin-off that develops AI for customer service teams. Despite assurances that the data was anonymized, concerns arose about the privacy of vulnerable individuals: previous studies have indicated that it is still possible to trace anonymized records back to specific individuals.

Suggestions:

  • Define clear policies on data collection, storage, and sharing by AI tools.

  • Regularly review data protection measures to ensure compliance and safety.
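
For teams that paste notes or messages into third-party AI tools, one practical safeguard is to strip obvious identifiers before the data leaves your systems. Below is a minimal sketch in Python; the regex patterns and the redact_pii helper are illustrative assumptions, and regex-based redaction is not true anonymization (names and free-text details require dedicated PII-detection tools, and, as the Crisis Text Line case shows, even “anonymized” records can sometimes be re-identified).

```python
import re

# Illustrative patterns only: a regex pass catches obvious identifiers but is
# NOT full anonymization (names, addresses, and free-text details need
# dedicated PII-detection tooling).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace common identifiers with placeholder tokens before text is
    logged or sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Donor Jane Doe (jane.doe@example.org, 555-867-5309) pledged $500."
print(redact_pii(note))
# -> Donor Jane Doe ([EMAIL], [PHONE]) pledged $500.
```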

  2. Bias and Fairness: Ensuring Equity in AI Decisions

AI systems learn from data, but if the data is biased, the AI system will be too. This can result in decisions that perpetuate discrimination and inequality. Nonprofits must be vigilant in monitoring AI systems for bias, using diverse data sources for training, and ensuring transparency in AI-driven decisions.

Example: In 2019, a Dutch government agency used a risk-profiling algorithm that was found to discriminate based on race and socioeconomic status. This happened because the criteria in the algorithm’s risk profile (education, age, and distance to a parent’s address) were correlated with race. The algorithm also automatically scored vocational training as a hiring risk, which was stigmatizing. Amnesty International found that similar discriminatory algorithmic systems have been uncovered in France and Denmark, highlighting the importance of regular audits and bias mitigation in AI systems.

Suggestions:

  • Assemble a diverse task force or advisory committee to monitor AI tools for bias and transparency.

  • Audit AI systems regularly and ensure diverse representation in training data.
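
One concrete way to begin an audit is to compare an AI tool’s outcome rates across demographic groups. The sketch below is an illustrative Python example under assumed record fields, not a complete fairness audit; the four-fifths (0.8) threshold is a common heuristic borrowed from US employment law, not a universal standard.

```python
from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="selected"):
    """Share of positive outcomes per demographic group. The key names are
    hypothetical; match them to your AI tool's output fields."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += bool(r[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest. Values below 0.8 are a common
    red flag (the 'four-fifths rule' from US employment law)."""
    return min(rates.values()) / max(rates.values())

# Toy data: group A is selected twice as often as group B.
records = [
    {"group": "A", "selected": True},  {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]
rates = selection_rates(records)
print(rates)                          # {'A': 0.667, 'B': 0.333} (approx.)
print(disparate_impact_ratio(rates))  # 0.5 -> below 0.8, so investigate
```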

  3. Accountability and Transparency: Documenting and Communicating AI Use

When using AI, nonprofits must establish clear accountability structures. This includes ensuring that the decision-making processes of AI systems are documented and auditable, and communicating transparently with stakeholders about AI usage policies to maintain trust.

Example: A nonprofit using AI for beneficiary selection should clearly communicate the criteria and process involved, including documenting the prompts and AI tools used at each step.

Suggestions: 

  • Establish a practice of disclosing when AI is used to generate outputs, including documenting the prompts and the systems used to generate each output.

  • Keep stakeholders informed about when and how AI is being used in your organization.
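
One way to make both suggestions routine is an append-only usage log that records the tool, prompt, and human reviewer for every AI-generated output. The sketch below is a minimal Python example; the field names, file location, and example values are all hypothetical.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_usage_log.jsonl")  # hypothetical location

def log_ai_use(tool: str, prompt: str, output_summary: str, reviewer: str) -> None:
    """Append one audit record per AI-generated output so the prompts and
    systems behind any decision can be reviewed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "output_summary": output_summary,
        "human_reviewer": reviewer,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: every value here is an example, not a recommendation.
log_ai_use(
    tool="some-llm-v1",
    prompt="Summarize applicant essays against our published criteria.",
    output_summary="Draft shortlist of 12 applicants",
    reviewer="program.director@example.org",
)
```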

  4. Human Oversight: Augmenting, Not Replacing, Human Judgment

AI can automate tasks, but it should never entirely replace human judgment when it comes to sensitive decisions. Nonprofits must ensure that human oversight is part of the process, especially in high-stakes areas like child welfare or healthcare.

Example: The Allegheny Family Screening Tool (AFST) was used to flag cases of potential child abuse. Although the system was found to produce racially disproportionate scores, human case workers were able to put the algorithmic risk score in the context of holistic risk assessments; this ‘human in the loop’ approach reduced racial bias in the final outcomes.

Suggestions:

  • Implement processes for human review and oversight in critical AI decision-making areas.
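
In practice, oversight can be built into the workflow itself, so that an AI score never triggers an action on its own; it only determines how quickly a human looks at the case. The sketch below is a hypothetical Python example of such a routing gate; the fields and the 0.5 threshold are illustrative assumptions, not values from any real screening tool.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    risk_score: float  # 0.0-1.0 output of a hypothetical AI model

REVIEW_THRESHOLD = 0.5  # illustrative cutoff, to be set by your own policy

def route(rec: Recommendation) -> str:
    """The score never triggers an action by itself; it only determines how
    urgently a human reviews the case."""
    if rec.risk_score >= REVIEW_THRESHOLD:
        return "escalate_to_case_worker"    # human decides, with full context
    return "standard_human_review_queue"    # still human-reviewed, lower priority

for rec in (Recommendation("c-101", 0.82), Recommendation("c-102", 0.21)):
    print(rec.case_id, "->", route(rec))
```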

Part II: Resources for Drafting Responsible AI Policies

As your nonprofit explores the responsible use of AI, here are some resources and tips:

  • Ethical AI Toolkits and Templates:

  • As your organization develops its AI policy, keep in mind the following process steps:

    • Community Engagement: Involve beneficiaries, staff, and other stakeholders in discussions about AI implementation.

    • Legal and Regulatory Compliance: Consult with legal experts to ensure that AI practices comply with relevant data protection and privacy laws.

    • Ongoing Evaluation: Regularly assess the impact of AI systems on your beneficiaries and communities, and adjust policies as needed to address emerging ethical concerns.

Final Thoughts: Ethical AI as a Trust-Building Opportunity

The ethical use of AI can help nonprofits streamline their operations, improve services, and achieve greater impact. However, it’s essential that nonprofits take the necessary steps to ensure that AI is used responsibly and in alignment with their mission. By addressing key ethical considerations—such as data privacy, fairness, transparency, and human oversight—you can build trust with your stakeholders and create an AI strategy that truly serves your community.

Nonprofits are uniquely positioned to lead the way in ethical AI adoption. By committing to transparent, accountable, and inclusive AI practices, you can harness the power of AI while safeguarding the trust and values that define your work.

