AI for Funders Webinar Series Q&A

On July 24, 2025, we hosted the first webinar in our series on AI for funders. The webinar was facilitated by Darian Rodriguez Heyman – bestselling author of the newly released book AI for Nonprofits and Founder & CEO of Helping People Help – with expert guest presenter Michael Belinsky, Director at the AI Institute at Schmidt Sciences. So many questions came up about AI and its role in the nonprofit sector that we couldn’t address them all during the webinar. The same was true for the rest of the webinars in the series, so we’ve compiled complete answers to all of the questions in this article.

“How Funders are Using AI to Multiply Mission Impact”

Are all of us going to be replaced by robots in the coming years? How do you expect this to impact the labor side of philanthropy?

Excellent question. Schmidt Sciences launched a program called AI at Work, which funds academic research into what happens as AI is further integrated into the workplace. The reality is, we don’t know, which is one of the reasons we launched the program. In some cases, AI will likely make certain roles – like software programmers – substantially more productive, which may mean one person could do the work of two. Naturally, leadership will only hire the number of people needed.

Another question we anticipate is: what new jobs will be created? Whether it’s managing a suite of AI agents doing work on behalf of a company or prompt engineering, we’re already seeing job descriptions along these lines. Put simply, some jobs may go away, and others will be created, just as with every disruptive technology in history.

Can you explain the difference between AI and LLM?
  • AI (Artificial Intelligence) is the broad field of creating machines or software that can perform tasks that typically require human intelligence, like recognizing speech, making decisions, or translating languages.
  • LLM (Large Language Model) is a specific type of AI trained on vast amounts of text data to understand and generate human-like language. GPT-4, the model behind ChatGPT, is an example of an LLM.

In short, an LLM is a type of AI focused specifically on language.

If you upload grant applications into an AI tool, does that data enter the public domain? And, in general, how can a foundation address privacy and data security concerns while benefiting from AI?

We don’t get into the specifics on data privacy, but different products have different permission settings you can toggle on and off. There are some AI products out there in which you can tell the system, “Don’t use this data to train future models,” or, in other words, “Don’t ingest this data.”

An important question to ask is, “What is your organizational policy on uploading information to this particular AI program or product?” To answer it, you need to consider the different versions: a consumer version and an enterprise version. In most cases, an enterprise license prohibits the AI developer from taking that data and doing anything with it. For instance, HIPAA compliance in healthcare, financial regulatory compliance at banks and investment banks, and so on all prohibit AI from using identifying information or personal/financial data. As a result, hospitals use AI knowing that patient data will not go beyond their walls, and banks can use AI knowing that their trading information and other personal information will remain on site.

Of course, the larger, more sophisticated organizations out there develop entire AI systems in-house so that they control everything. For the most part, enterprise versions offer the level of privacy foundations require. Still, it’s essential to consult your IT team and review your data privacy policy.

Can AI be trained to address both the head and the heart, and not just create rubrics and score applications? How do we include relationships and individuals?

Humans should remain in the loop. That said, AI is actually quite good at sentiment analysis: if you give it a particular document, it can pick up much of the writer’s sentiment. At the end of the day, this technology is trained on a vast amount of material that humans have produced, so it has captured much of that human sentiment. Still, the safe answer is to keep the human in the loop.

What are the underlying tools (ChatGPT? Other LLMs?) used to build these tools?

Future House’s public AI is built on top of Claude, Anthropic’s model. They’ve brought in software engineers and scientists who take the base model and build tools on top of it. Building those tools involves not just writing more software but also gathering more information and data.

What AI tools specific to foundations are recommended, such as for donor-related processes? I read that AI is now useful for relationships between donors and organizations, such as recommending causes that interest donors through database analysis.

AI can be used to track donors’ engagement, anticipate their future behavior, and create personalized marketing outreach plans. Some of the most helpful solutions for fundraising are data processors like IBM Watson or Google Cloud AI. These solutions leverage predictive analytics to help anticipate future donor actions. Predictive AI makes predictions based on historical data and patterns. It can help you understand your donors’ giving behaviors and preferences so you can confidently design your fundraising strategy.

  • Examples: DonorSearch AI, Enhanced CORE, ProspectView Online 2, Handwrytten.

Generative AI is focused on creation. It can help you generate text, images, video, and other types of original content.

  • Examples: ChatGPT, Gemini, Microsoft Copilot, Claude, Perplexity.
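
To make the predictive piece concrete, here is a minimal, hypothetical sketch of the kind of model these platforms run at much larger scale. It assumes a simple CSV export of donor history with made-up column names, and it is an illustration only – not how any particular vendor’s product works.

```python
# A minimal sketch of predictive donor scoring, assuming a CSV export of
# donor history with the hypothetical columns shown below. Dedicated
# platforms do this at far greater scale and sophistication.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

donors = pd.read_csv("donor_history.csv")          # hypothetical export
features = donors[["gifts_last_3_years", "avg_gift_amount",
                   "events_attended", "months_since_last_gift"]]
target = donors["gave_this_year"]                  # 1 = donated, 0 = did not

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_test, y_test))

# Score every donor with a probability of giving again, highest first.
donors["likelihood_to_give"] = model.predict_proba(features)[:, 1]
print(donors.sort_values("likelihood_to_give", ascending=False)
            [["donor_id", "likelihood_to_give"]].head(10))
```
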
What does an AI ethics task force look like?

An AI ethics task force should focus on the following elements of AI use:

  • Education and awareness:
    • Communicate clearly with people (externally and internally) about the organizational benefits and challenges, so you are transparent from the start.
    • Understand your purposes for using AI and define ethical boundaries.
  • Transparency:
    • Be honest (internally and externally) about how AI is being used.
    • Be clear with stakeholders about what data you are collecting, how that data is being used, and what benefits are realized because of it.
    • Organizations must understand how AI makes decisions and translate that understanding into something digestible for stakeholders. In other words, make it explainable.
  • Inclusivity:
    • Make sure AI is representative of society by including women and people of color in the process of working on AI.
    • Control for bias by training algorithms to reduce things like racial and gender biases.
  • Follow regulations:
    • Create an ethics council that evaluates each use case for ethical concerns.
    • AI should be designed in a way that respects laws, human rights, democratic values, and diversity. AI must function in a robust, secure, and safe way, with risks being continuously assessed and managed. Organizations developing AI should be held accountable for the proper functioning of these systems in line with these principles.

What kinds of ethical concerns should foundations pay attention to, as it relates to AI use? And, are there examples of useful policies that people can benefit from?

There are many ethical concerns, and I certainly won’t be able to cover them all. For more in-depth information and best practices on this topic, refer to Foundant’s webinar on AI Guardrails.

Examples:

First, there is a category of concerns relating to the organizations creating AI and their products. To what extent are they using energy and affecting the climate? Who are these organizations? What are their general priorities? Do they have policies in place you can align with? And so on.

Next, there are concerns relating to your data. When you submit data to these organizations, how is it used to train their models? What is their data policy?

Then there is the information you receive from the models. Is it appropriate to use and appropriate to trust? To what extent should you rely on it? Is it a starting point or the final answer in your workflow? How are you thinking about that? And then, of course, how might this impact your beneficiaries? What awareness do you owe the nonprofits about how you’re using these tools? Again, these are questions worth asking and answering.

Finally, in their day-to-day use of AI, foundations should consider privacy and data collection, algorithm bias, and accountability.

Privacy and data collection: Many artificial intelligence models are developed by training on large datasets, and that data can be misused or accessed without authorization. Best practices for foundations to mitigate this risk include:

  • Avoiding entering sensitive personal or financial data into AI systems
  • Conducting regular risk assessments
  • Collecting only the minimum necessary data
  • Confirming and documenting use consent
  • Encrypting data during storage and transmission
  • Anonymizing or pseudonymizing data where possible
  • Limiting access to sensitive data and reporting on data collection and storage practices.

Algorithm bias: AI systems can be biased, producing discriminatory and unjust outcomes pertaining to hiring, lending, law enforcement, health care, and other important aspects of modern life. These biases typically arise from the training data used. If the training data contains historical prejudices or lacks representation from diverse groups, then the AI system’s output is likely to reflect and perpetuate those biases. Best practices for preventing algorithm bias include using diverse and representative data, implementing mathematical processes to detect and mitigate biases, developing transparent, explainable algorithms, adhering to fair, ethical standards, and engaging in ongoing learning.

Accountability: This concern stems partly from the lack of transparency in how AI systems are built. Many AI systems, especially those that use deep learning, operate as “black boxes” for decision-making. AI decisions are frequently the result of complex interactions between algorithms and data, making it difficult to attribute responsibility. Best practices to ensure accountability include following ethical design principles that prioritize accountability, documenting the responsibilities of all stakeholders, and ensuring that system design includes meaningful human oversight.

We are seeking AI tools that ingest handwritten notes (from site visits), create one-pagers of learnings, and then publish that one pager to the team. Do you have any insight into tools that do this well?

A tool would need several key technological components to deliver these capabilities: computer vision (to analyze the visual characteristics of handwritten text), deep learning (neural networks for pattern recognition), and natural language processing (to understand context and improve accuracy in text interpretation).
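
As a rough illustration of how those components fit together, the sketch below assumes you use Google’s Cloud Vision API (one of the options listed next) to convert a photographed page of handwritten site-visit notes into plain text, which you could then hand to an LLM to draft the one-pager. The file name is a placeholder, and a Google Cloud account with credentials is assumed.

```python
# A minimal sketch, assuming a Google Cloud project with the Vision API
# enabled and credentials configured (pip install google-cloud-vision).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("site_visit_notes.jpg", "rb") as f:        # placeholder file name
    image = vision.Image(content=f.read())

# document_text_detection is tuned for dense, handwritten, or scanned text.
response = client.document_text_detection(image=image)
if response.error.message:
    raise RuntimeError(response.error.message)

notes_text = response.full_text_annotation.text
print(notes_text)   # hand this transcript to an LLM to draft the one-pager
```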

Examples of software that fit these needs include:

  • Mazaal: a pre-trained AI model that offers handwriting recognition inside its AI automation platform – free to use.
    • Specialized in recognizing handwritten text in images
    • High accuracy in converting handwritten documents to digital formats
    • Supports 300+ languages
    • Additional automation features to handle hundreds of images
  • Google: brings the power of its vast AI capabilities to handwriting recognition.
    • Seamless integration with Google’s ecosystem
    • High accuracy rates for diverse handwriting styles
    • Developer-friendly with extensive documentation
    • Offers customization options
  • Instabase Handwriting Recognition:
    • Designed for enterprise-level document processing
    • Integrates with existing business workflows
How would a foundation use AI most effectively for place-based philanthropy?

Place-based philanthropy involves significant community engagement. AI can be used to increase these engagement efforts and create community assessments. AI can process and analyze large datasets including social media, surveys, and user interactions. This data analysis can provide you with valuable insights into member behavior, preferences, and trends, helping you tailor engagement strategies accordingly.

Humanized chatbots and virtual assistants can provide real-time support to community members, promptly addressing their queries and concerns. With a well-programmed and maintained chatbot, you can easily improve your overall community support experience and ensure members feel heard. AI even helps identify new online community members based on their interests and online behavior, enabling you to easily recruit for panels or focus groups.

An example of software that may fit these needs is FranklyAI. This platform actively listens by conducting user-led discussions, and it is much more conversational than a standard chatbot because it is tailored to the needs of your community and consultation.

How is AI being used to code data? For example, would having AI analyze and categorize set grants into specific focus areas be a recommended use case?

That would be an excellent use case. AI code generation uses large language models (LLMs) trained on huge datasets to learn patterns between human input and code output, so when you give them a prompt, they predict the most likely, most useful code in return, building it token by token. Modern models use attention mechanisms to focus on the most relevant parts of your prompt, making the output more accurate and context-aware. Software solutions that can assist in data coding include GitHub Copilot, Cursor, Codeium, and Bolt.new.
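
As a concrete illustration of the kind of script a code assistant could help you write for this use case, here is a minimal, hypothetical sketch that asks an LLM to assign each grant description to one of a fixed set of focus areas. The OpenAI Python SDK, the model name, and the focus areas are assumptions – any comparable API or a locally hosted model would work – and a human should review the labels.

```python
# A minimal sketch of LLM-assisted grant categorization (assumes
# `pip install openai` and an OPENAI_API_KEY environment variable).
from openai import OpenAI

client = OpenAI()
FOCUS_AREAS = ["Education", "Health", "Environment", "Arts & Culture", "Housing"]

def categorize(grant_description: str) -> str:
    """Ask the model to pick exactly one focus area for a grant."""
    prompt = (
        "Assign this grant to exactly one of the following focus areas: "
        f"{', '.join(FOCUS_AREAS)}.\n\n"
        f"Grant description: {grant_description}\n\n"
        "Reply with only the focus area name."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",            # assumed model name; use what you license
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                  # deterministic, consistent labels
    )
    return response.choices[0].message.content.strip()

print(categorize("Funding for after-school tutoring in rural school districts."))
```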

We, as a funder, are receiving far too many requests for funding, and do not trust AI to analyze the nuance of our grant applications to find a fit. Can you speak to these concerns? All these AI webinars are super supportive of what it can do, but it all sounds like it will remove people from the nonprofit sector, while we are already facing extensive competition in the job market.

It is correct that AI lacks the nuance and emotional intelligence of humans. AI should not be used to replace human judgment; it should be used to relieve the burden of administrative tasks. Final decisions should always be made by humans. As mentioned in the first question above, in some cases AI automates jobs, depending on the industry, and new jobs will also emerge that are created by AI. Look back in history: what emerging technologies “took away” jobs, and what new jobs were created as a result of those same technologies?

Is it worth buying the “premium” of any AI platforms?

It would depend on your needs. Free tools are a great way to explore AI with no upfront cost. They’re best for casual users, students, or small projects. But when time, scale, or output quality matters, paid AI tools deliver serious value. Whether you’re designing brand assets, writing content, or building virtual environments, premium features often save more time and offer better results.

As AI tools become increasingly integral to various industries, determining when to invest in a premium plan is a decision that requires careful consideration. AI consultants recommend using a “4P Framework” to evaluate whether a paid AI tool is truly worth the investment: Precision, Privacy, Productivity, and Profit. This helps professionals assess the true value of AI tools and their impact on workflow, compliance, and revenue generation.

How do you ensure confidentiality of content you provide AI to assess/report on? How do you maintain confidentiality of the applicant’s information if you’re dropping the proposals into AI?

Best practices for ensuring confidentiality in AI content include:

  • Conducting risk assessments
  • Limiting data collection
  • Seeking and confirming consent
  • Following security best practices
  • Providing more protection for data from sensitive domains
  • Reporting on data collection and storage

Always review a tool’s security measures and never input personal information into AI software. If necessary, create a fictionalized or anonymized version of the data so you can analyze its substance without compromising personal information. As always, quality-check all data before finalizing.
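
One practical way to follow this advice is to strip obvious identifiers from text before it ever reaches an AI tool. The sketch below is a minimal, illustrative redaction pass using regular expressions; the patterns and example names are made up, and it only catches straightforward identifiers, so treat it as a starting point rather than a guarantee.

```python
# A minimal sketch of pre-submission redaction. Patterns are illustrative
# and will not catch every identifier; review output before sharing.
import re

KNOWN_NAMES = ["Jane Doe", "Acme Family Foundation"]   # hypothetical examples

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)                   # emails
    text = re.sub(r"\(?\b\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)   # US phone numbers
    for name in KNOWN_NAMES:                                                     # named people/orgs
        text = text.replace(name, "[REDACTED]")
    return text

original = "Contact Jane Doe at jane@example.org or (406) 555-0199 about the proposal."
print(redact(original))
# -> "Contact [REDACTED] at [EMAIL] or [PHONE] about the proposal."
```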

If I am interested in training myself and my staff on how to use AI more effectively in our philanthropic work, are there courses or programs that you recommend?

Of course! Here are some options for courses/programs.

  • Unlocking AI for Nonprofits – NetHope & Microsoft
    • Format: Free, self-paced online courses
    • Who it’s for: Beginners to advanced users
    • Platform: Kaya
  • PLAI – AI Courses for Nonprofits, NGOs, and Social Sector Professionals
    • Format: Immersive workshops, coaching, and masterclasses
    • Audience: From volunteers to C-suite leaders
    • Website: PLAI Learning H
  • NTEN – AI for Nonprofit Decision-Makers
    • Format: Online course (~60 minutes of video content)
    • Website: NTEN Course Page
  • FormusPro – AI Academy for Nonprofits
    • Format: Free training sessions and on-demand classes
    • Audience: All staff levels, including leadership
  • Microsoft Learn – AI Skills for Nonprofits
    • Website: Microsoft Learn
    • Format: Beginner-friendly learning path
    • Focus: Using Microsoft 365 Copilot across Word, Excel, PowerPoint, Teams
If we don’t want to feed sensitive application details into a public AI and we can’t afford to create our own LLM, is there a way to use AI that doesn’t put those application details out into the world?

Yes – you can use a private or on-premises AI solution. There are several options:

  • Self-hosted, open-source models: Run models like LLaMA, Mistral, or GPT-J on your own servers or cloud infrastructure. This keeps all data within your control and avoids sending anything to external providers. It requires some technical setup, but there are managed services that simplify it.
  • Enterprise-grade cloud tools: Many cloud providers offer AI tools that do not train on your data, do not store your inputs, and comply with privacy regulations.
  • Retrieval-Augmented Generation (RAG): If you want to use AI over internal knowledge or applications, you can combine an LLM with a private document store (like a vector database). The system retrieves relevant information from your internal data without ever training on it.
  • Local AI assistants: Tools like LM Studio, Ollama, or PrivateGPT let you run models locally on your own machine (see the sketch below).
  • Finally, check “No Data Retention” policies whenever you do use public AI tools.
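
For illustration, here is a minimal sketch of the local-assistant option using Ollama’s built-in REST endpoint. The model name and file name are assumptions; because the model runs on your own machine, the application text never leaves it.

```python
# A minimal sketch, assuming Ollama is installed and a model has already
# been pulled locally (e.g., `ollama pull llama3`). The request goes only
# to the local endpoint, so nothing is sent to an outside provider.
import requests

with open("application.txt") as f:                    # placeholder file name
    application = f.read()

response = requests.post(
    "http://localhost:11434/api/generate",            # Ollama's local REST API
    json={
        "model": "llama3",                            # assumed locally pulled model
        "prompt": "Summarize this grant application in three bullet points:\n\n"
                  + application,
        "stream": False,                              # return one complete answer
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])                    # the model's summary
```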

“Creating Policies That Align Technology with Your Mission”

On September 4th, 2025, we hosted the second webinar in our series on AI for funders. The webinar was facilitated by Darian Rodriguez Heyman – bestselling author of the newly released book AI for Nonprofits and Founder & CEO of Helping People Help – and expert guest presenter Nathan Chappell, Chief AI Officer at Virtuous. This webinar discussed governance and acceptable use standards for AI.

What are the key criteria you recommend for the “sniff test” to determine whether an AI platform is legit?

Nathan developed a vendor questionnaire that lives on Virtuous’ website.

At the end of the day, because we operate in the currency of trust, we have to flip the usual approach. Many organizations hold values like “trust and verify” or “assume best intent,” but with AI we have to verify, then trust. It’s hard to tell if someone is lying to you, but from a data security perspective there are certifications, like SOC 2, that meet a very rigorous standard for security. It comes down to transparency and explainability, and that’s what you’ll find in the fundraising.ai framework. That framework looks very different from Microsoft’s, Amazon’s, or Salesforce’s, because for organizations in the business of selling software, services, or products, the stakes are different. It’s best to buy AI from organizations that are fully transparent.

With predictive AI – for example, if you’re using AI to predict donors – it’s imperative that you demand transparent models, meaning they’re explainable and the math behind them can be shown to you. If a vendor won’t show you, you shouldn’t hire them.

Generative AI is much harder. You have to put a lot more trust in organizations like Microsoft, AWS, and Meta when using their models, because generative AI is not very transparent. OpenAI says you should be able to trust them, yet they have also been accused of training on enormous amounts of YouTube video without permission, so no organization is without its flaws. Take caution, make wise decisions, and then be willing to adapt your governance and acceptable use policy.

Do you have recommendations for organization-wide AI trainings?

It’s a bit overwhelming, which is honestly why Nathan started the fundraising.ai podcast two years ago (available on Apple, YouTube, and Spotify). There is so much information about AI that it becomes completely overwhelming, but not a lot of it has direct application to the nonprofit sector, and that’s what he tries to provide on the podcast. The key is to create a regimen of continuous learning, because if you wait, everything will have changed.

Find someone that you resonate with and then follow them. There are a few people Nathan recommends:

  • Ethan Mollick, who wrote Co-Intelligence, a great book that demystifies what AI is and isn’t. Mustafa Suleyman’s The Coming Wave is another great depiction of the future and what’s at stake.
  • Allie K. Miller offers a balanced perspective. Even though she doesn’t focus on the nonprofit sector, her work is very aligned: she takes a highly responsible approach to using AI and, essentially, to using AI to unlock the potential of your organization.
Can you speak at all to the environmental cost of AI? I’m not sure if what I’ve read about its energy usage is true or not. If it is accurate, how should we approach that in our governance policies?

Energy is the biggest roadblock right now to the full advancement of AI and to where AI companies want to go. You can’t dismiss the tremendous environmental impact of building out this power infrastructure, and that’s not unique to the U.S. – it’s global. At the same time, AI may itself play a formative role in creating clean energy; fusion energy, for example, is seeing renewed research and investment, and some even predict data centers will eventually move into space, which would make this a temporary issue. And if AI is saving 20%, 30%, or 40% of your time, the dividend you receive back is profound. So there’s no easy answer, but it comes up every single time.

Do not let the environment be the reason you don’t use AI. If you’re concerned about the environment, or you’re an organization that funds environmental causes, there are many other ways to offset your carbon footprint in this in-between time – focus on those instead.

How would you recommend addressing concerns in order to create a culture of innovation with AI?

The biggest thing you can do as a person trying to influence up is to find the highest-yield but lowest-risk problem, get together with a few folks, and solve it with AI – that will get the attention of a leader. Then you can move from that low-risk, high-yield starting point to medium-risk problems and work your way up. You have to influence from within, but you have to show the value.

How would you address how to enforce a policy, e.g., if staff are caught breaking policy guidelines?
  • Clarify the Policy
    • Clearly written and accessible.
    • Communicated regularly through onboarding, training, and reminders.
    • Understood by staff, with opportunities to ask questions or seek clarification.
  • Establish Enforcement Procedures
    • Documented steps for investigation and response.
    • Defined roles for who handles enforcement (e.g., HR, supervisor).
    • Consistency in how violations are addressed to avoid favoritism or ambiguity.
  • Respond to Violations
    • Investigate fairly: Gather facts, hear all sides, and document findings.
    • Assess severity: Was it a minor oversight or a serious breach?
    • Apply consequences: These should be proportionate and aligned with the policy. Examples include:
      • Verbal or written warnings
      • Mandatory retraining
      • Suspension or termination (for serious or repeated violations)
  • Communicate Clearly
    • Be respectful and direct in communication.
    • Explain the violation, the impact, and the consequence.
    • Offer a path forward (e.g., improvement plan, support resources).
  • Review and Improve
    • Periodically review enforcement outcomes.
    • Adjust policies or procedures if patterns of misunderstanding or noncompliance emerge.
  • Follow Up
    • Monitor behavior after enforcement.
    • Provide support or coaching if needed.
    • Reiterate the importance of the policy and its role in organizational culture.
We have never used AI. Is the free version of ChatGPT adequate? Is that the best option to start with?

The reality is, if you’re not paying for the product, you are the product – it’s important to remember that. When you pay for a product, you gain somewhat better and faster features, but mainly you gain security: in a paid version of ChatGPT, Gemini, or Anthropic’s Claude, you get more control over privacy settings, so your data isn’t used to train new models or retained. It’s important to understand those privacy settings; they’re not complicated to turn on or off. If you’re not familiar with the privacy settings for a given product, err on the side of caution and don’t put any confidential information into it. But if the stakes are very low and you’re not using confidential information, free products are great. At the end of the day, a paid plan is only about $20 a month.

What do you think about putting confidential information into a paid AI tool (i.e., Enterprise or Team accounts of ChatGPT)?

It is never recommended to enter confidential information into any AI tool. The best practices for these situations include:

  • Use Enterprise-grade tools only (not free/public versions).
  • Avoid inputting PII, financials, or donor data unless absolutely necessary.
  • Train staff on what’s safe to share and what’s not.
  • Use anonymization or redaction techniques when possible.
  • Review vendor documentation on data handling and retention policies.
With so many new startups entering the market, such as Virtuous, how can we be confident that the privacy and security commitments they make are both accurate and truly effective in protecting our data?
  • Use Industry Frameworks
    • OWASP AI Security & Privacy Guide: Offers actionable steps for evaluating AI systems, including data minimization, fairness, and transparency.
    • NIST Privacy Framework and ISO 27701: These help assess whether a vendor’s practices align with global privacy standards.
  • Check for Independent Audits
    • Ask if the company has undergone third-party audits for SOC 2, ISO 27001, or other certifications.
    • Verify if their HIPAA and GDPR assessments were conducted by reputable firms and request summaries or attestations.
  • Review Sub-Processor Lists
    • Virtuous maintains a list of sub-processors at trust.virtuous.org, which should include privacy commitments from vendors they rely on.
  • Understand State-Level Laws
    • U.S. states like California, Colorado, and Virginia now require opt-in consent, data minimization, and privacy risk assessments for AI systems.
    • Ensure Virtuous complies with these if your organization operates in or serves residents of those states.
  • Consult legal counsel to validate compliance with your specific obligations (especially if you handle donor or health-related data).
  • Request a Data Protection Impact Assessment (DPIA)
  • Ask for breach notification protocols and historical incident reports.
  • Include privacy clauses in your contract that specify data handling, retention, and breach response.
We are in the middle of strategic planning and preparing for a phase focused on how we enable our strategy through the use of AI tools/agenda/etc. What is your best advice on how to maximize the planning process?
  • Start with Strategic Alignment
    • Clarify your goals: What are the core strategic objectives AI should support? (e.g., donor engagement, operational efficiency, impact measurement)
    • Map AI capabilities to strategy: Use a matrix to link AI tools to specific strategic outcomes.
  • Build a Cross-Functional AI Task Force
    • Include voices from marketing, IT, programs, fundraising, and legal.
    • Ensure representation from data governance and ethics to guide responsible AI use.
  • Audit Current Tools & Data Readiness
    • Inventory existing tools and platforms.
    • Assess data quality, accessibility, and privacy compliance.
    • Identify gaps where AI could add value (e.g., predictive analytics, automation, personalization).
  • Define Use Cases with Impact Potential
    • Prioritize use cases based on:
      • Strategic relevance
      • Feasibility
      • Risk level
      • ROI or mission impact
    • Examples:
      • AI for donor segmentation and personalized outreach
      • NLP tools for grant writing or impact reporting
      • Chatbots for volunteer or donor support
  • Establish Governance & Ethical Guardrails
    • Create an AI ethics framework tailored to your values.
    • Define policies for data privacy, bias mitigation, transparency, and human oversight.
    • Include training plans for staff to use AI responsibly.
  • Create a Phased Implementation Roadmap
    • Break it into:
      • Pilot phase: Test 1–2 high-impact use cases.
      • Scale phase: Expand successful pilots.
      • Optimize phase: Refine processes and measure outcomes.
    • Include milestones for:
      • Tool selection
      • Staff training
      • Data integration
      • Evaluation metrics

“How Foundation Boards Can Lead with Values in the Digital Age”

Q&A from this webinar, featuring Althea Hannemann, is coming soon!

AI is on the rise in the philanthropic sector, and with that change comes uncertainty. We hope this Q&A provides more clarity on AI for funders. Consider using AI to streamline processes and reduce administrative burden so you can get your time back to focus on your mission and impact. Utilize these insights and tips to revolutionize your organization with AI.

Start a conversation to learn how Foundant can make your funding process easier.