AI Basics for Nonprofits: What You Should Know

By Brandon Sorrell, IT Specialist, The Dayton Foundation

Artificial Intelligence (AI) has rapidly become part of daily work across many sectors, and the nonprofit sector is no exception. AI tools promise efficiency and creativity, but their growing use also raises important questions about accuracy, ethics, climate impact, data security and organizational responsibility. The topic is controversial in many circles, but the tech companies that drive enterprise software are “all in” on AI. Understanding what modern AI is, how it works and what impacts it may have is essential for nonprofits seeking to protect their mission while benefiting from this new technology.

Modern AI differs significantly from the rule-based systems that have existed since the mid-20th century. Traditional AI implementations rely on rigid logic. If a specific condition exists, the system executes a predefined outcome. The AI tools gaining attention today, however, are powered by groundbreaking large language models (LLMs) and generative technologies that operate as statistical prediction engines. These systems analyze vast amounts of training data and predict the most likely next word, image or sound in the sequence, like a very advanced autocorrect. Although their outputs can seem sophisticated and even appear intelligent or empathetic, they do not possess true understanding, reasoning or awareness. AI is best described as a highly convincing parrot.
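
To make “statistical prediction” concrete, here is a deliberately tiny sketch in Python. It is not how real LLMs work internally (they use neural networks trained on billions of examples, not simple word counts), but it shows the core idea of predicting the most likely next word from training text. The corpus and the predict_next helper are invented for this illustration:

    from collections import Counter, defaultdict

    # Toy illustration only: count which word follows each word in a
    # tiny made-up "training corpus," then predict the most common one.
    # Real LLMs learn far richer patterns, but the principle is similar:
    # output whatever is statistically most likely to come next.
    corpus = ("the donor gave a gift . the donor signed a pledge . "
              "the volunteer gave a tour .").split()

    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the most frequent word seen after `word`, if any."""
        counts = next_word_counts.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))   # "donor" (seen twice, vs. "volunteer" once)
    print(predict_next("gave"))  # "a"

Notice that the program has no idea what a donor or a gift is; it only repeats the most common pattern in its training text. Scaled up enormously, that is the “convincing parrot” at work.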

Despite widespread usage, most nonprofits have not yet seen AI fundamentally transform their work. Many organizations use AI informally for drafting emails, summarizing content and brainstorming, but few have developed formal strategies or governance policies. This gap creates risk, particularly around data leakage and reputational harm. Most staff rely on free AI tools for these informal tasks, and free tools typically retain user inputs for future model training. Anything entered can potentially be reused, exposed or accessed by unintended third parties. If sensitive donor information, financial data or internal documents are entered into these systems, that information should be treated as if it had been made public.

Accuracy and bias pose additional risks for anyone who uses AI. AI models are trained on massive amounts of content, much of it scraped from the internet. That content reflects human biases, and biased or otherwise harmful material can sometimes surface in outputs. Because these systems prioritize plausible-sounding language over factual accuracy, hallucinated statistics and incorrect claims can appear polished and convincing. For these reasons, AI should never be the final decision maker. Human oversight, fact-checking and ethical human judgment remain central to all AI-assisted work.

Some examples of situations where AI should not be used:

  • High-stakes counseling
  • Crisis intervention
  • Legal review
  • Any work involving sensitive or poorly documented topics that require human empathy and accountability

Some low-risk applications of AI that nonprofits can use:

  • Turning messy notes into structured meeting minutes
  • Adapting stories or blog posts for multiple social media platforms
  • Assisting with grant writing through structured drafting
  • Reviewing materials from a funder or donor perspective

AI works best as an assistant that helps organize, format and refine content, while humans provide the direction, review and final approval. Responsible AI use in nonprofits depends on clear policies and shared standards. Define your forbidden data (data that should NEVER be entered into an AI tool), decide when to disclose AI use, add fact-checking to your workflow and mandate human oversight in your processes. These steps will help ensure that AI enhances your organization’s mission rather than undermining it.
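
For organizations with technical staff, parts of a “forbidden data” policy can even be automated. Below is a minimal, hypothetical Python sketch (the pattern list and the check_text helper are invented for this illustration, and the patterns are deliberately simplistic) showing one way to scan a draft for sensitive patterns before anyone pastes it into an AI tool:

    import re

    # Hypothetical example: each organization would define its own
    # forbidden patterns (donor IDs, account numbers, etc.) and should
    # use vetted tooling for anything beyond a first-pass check.
    FORBIDDEN_PATTERNS = {
        "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    }

    def check_text(text):
        """Return a list of forbidden-data patterns found in `text`."""
        return [label for label, pattern in FORBIDDEN_PATTERNS.items()
                if pattern.search(text)]

    draft = "Summarize: donor Jane Doe, SSN 123-45-6789, pledged $5,000."
    violations = check_text(draft)
    if violations:
        print("Do not submit. Found:", ", ".join(violations))
    else:
        print("No forbidden patterns found. Human review still required.")

A screen like this is a safety net, not a substitute for training and judgment; the human-oversight rule still applies.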