AI for Funders Webinar Series Q&A

On July 24, 2025, we hosted the first webinar in our series on AI for funders. The webinar was facilitated by Darian Rodriguez Heyman – bestselling author of the newly released book AI for Nonprofits and Founder & CEO of Helping People Help – and expert guest presenter Michael Belinsky, Director at the AI Institute at Schmidt Sciences. Numerous questions arose about AI and its place in the nonprofit sector – so many that we couldn't address them all during the session. The same was true for the rest of the webinars in the series, so we've compiled complete answers to every question asked across the series in this article.

“How Funders are Using AI to Multiply Mission Impact”

Are all of us going to be replaced by robots in the coming years? How do you expect this to impact the labor side of philanthropy?

Excellent question. Schmidt Sciences launched a program called AI at Work, which funds academic research into what happens to work as AI becomes further integrated. The reality is, we don't know, which is one of the reasons we launched the program. In some cases, AI will likely make certain roles – like software programmers – substantially more productive, which may mean one person could do the work of two. Naturally, leadership will only hire the number of people needed.

Another question we anticipate is: what new jobs are going to be created? Whether it's managing a suite of AI agents doing work on behalf of a company or prompt engineering, we're already seeing job descriptions like these. Put simply, some jobs may go away and others will be created, just as has happened with every disruptive technology in history.

Can you explain the difference between AI and LLM?

  • AI (Artificial Intelligence) is the broad field of creating machines or software that can perform tasks that typically require human intelligence, like recognizing speech, making decisions, or translating languages.
  • LLM (Large Language Model) is a specific type of AI trained on vast amounts of text data to understand and generate human-like language. GPT-4, the model behind ChatGPT, is an example of an LLM.

In short, an LLM is a type of AI that is focused specifically on language.

If you upload grant applications into an AI tool, does that data enter the public domain? And, in general, how can a foundation address privacy and data security concerns while benefiting from AI?

We won't get into the specifics of data privacy here, but different products have different permission settings you can toggle on and off. There are some AI products in which you can tell the system, “Don't use this data to train future models,” or, in other words, “Don't ingest this data.”

An important question to ask is, “What is our organizational policy on uploading information to this particular AI program or product?” When answering it, you need to consider the different versions: a consumer version and an enterprise version. In most cases, an enterprise license prohibits the AI developer from taking that data and doing anything with it. For instance, HIPAA compliance in healthcare, financial regulatory compliance for banks and investment banks, and so on all prohibit the AI provider from using any identifying information or personal/financial data. As a result, hospitals can use AI knowing that patient data will not go beyond their walls, and banks can use AI knowing that their trading information and other personal information will remain on site.

Of course, the larger, more sophisticated organizations develop entire AI systems in-house so that they control everything. But for the most part, enterprise versions allow for the kind of privacy that a foundation would want. Again, check with your IT team and review your data privacy policy.

Can AI be trained to address both the head and the heart, and not just create rubrics and score applications? How do we include relationships and individuals?

Humans should remain in the loop. That said, AI is actually quite good at sentiment analysis: if you give it a particular document, it can derive much of the sentiment of the writer. At the end of the day, this technology is trained on an enormous amount of material that humans have produced, so it has captured much of that human sentiment. However, the safe answer remains: keep the human in the loop.
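
For readers curious what sentiment analysis looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library; the example excerpts are made up, and any real use should keep a human reviewing the results.

# Minimal sentiment-analysis sketch using the Hugging Face transformers pipeline.
# The excerpts below are hypothetical; real use would pull text from applications or reports.
from transformers import pipeline

# Loads a default pre-trained sentiment model the first time it runs.
classifier = pipeline("sentiment-analysis")

excerpts = [
    "Our after-school program gave 40 students a safe place to learn and grow.",
    "Despite repeated setbacks, staff burnout has made this year extremely difficult.",
]

for text in excerpts:
    result = classifier(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")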

What are the underlying tools (ChatGPT? Other LLMs?) used to build these tools?

FutureHouse and its public AI platform are built on top of Claude, which is Anthropic's model. They've brought in software engineers and scientists who take the base model and then build tools on top of it. Building those tools involves not just writing more software but also gathering more information and more data.

What AI tools specific to foundations are recommended, such as for donor-related processes? I read that AI is now useful for relationships between donors and organizations, such as recommending causes that interest donors through database analysis.

AI can be used to track donors' engagement, anticipate their future behavior, and create personalized marketing outreach plans. Some of the most helpful solutions for fundraising are data processors like IBM Watson or Google Cloud AI, which leverage predictive analytics to help anticipate future donor actions.

  • Predictive AI makes predictions based on historical data and patterns. It can help you understand your donors' giving behaviors and preferences so you can confidently design your fundraising strategy (see the sketch after this list). Examples: DonorSearch Ai, Enhanced CORE, ProspectView Online 2, Handwrytten.
  • Generative AI is focused on creation. It can help you generate text, images, video, and other types of original content. Examples: ChatGPT, Gemini, Microsoft Copilot, Claude, Perplexity.
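
To illustrate the predictive side, here is a minimal, hypothetical sketch that scores donors on their likelihood to give again using scikit-learn; the column names and data are invented, and dedicated platforms like those listed above do this far more robustly.

# Hypothetical sketch: predicting which donors are likely to give again,
# using scikit-learn on made-up historical giving data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative donor history; a real dataset would come from your donor database.
donors = pd.DataFrame({
    "gifts_last_3_years": [1, 5, 2, 0, 7, 3, 0, 4],
    "avg_gift_amount":    [50, 250, 100, 0, 500, 150, 25, 300],
    "attended_event":     [0, 1, 1, 0, 1, 0, 0, 1],
    "gave_this_year":     [0, 1, 1, 0, 1, 1, 0, 1],   # target we want to predict
})

X = donors.drop(columns="gave_this_year")
y = donors["gave_this_year"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Probability that each held-out donor gives again; higher scores could be prioritized for outreach.
for donor_id, prob in zip(X_test.index, model.predict_proba(X_test)[:, 1]):
    print(f"Donor {donor_id}: {prob:.0%} likelihood of giving")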

What does an AI ethics task force look like?

An AI ethics task force should focus on the following elements of AI use:

  • Education and awareness:
    • Communicate clearly with people (externally and internally) about the organizational benefits and challenges to be transparent from the start.
    • Understand your purposes for using AI and define ethical boundaries.
  • Transparency:
    • Be honest (internally and externally) about how AI is being used.
    • Be clear with stakeholders about what data you are collecting, how that data is being used, and what benefits are realized because of it.
    • Organizations must understand how AI makes decisions and translate that understanding into something digestible for stakeholders. In other words, make it explainable.
  • Inclusivity:
    • Make sure AI is representative of society by including women and people of color in the process of working on AI.
    • Control for bias by training algorithms to reduce things like racial and gender biases.
  • Follow regulations:
    • Create an ethics council that evaluates each use case for ethical concerns.
    • AI should be designed in a way that respects laws, human rights, democratic values, and diversity. AI must function in a robust, secure, and safe way, with risks being continuously assessed and managed. Organizations developing AI should be held accountable for the proper functioning of these systems in line with these principles.

What kinds of ethical concerns should foundations pay attention to, as it relates to AI use? And, are there examples of useful policies that people can benefit from?

The ethical concerns are numerous, and I certainly won't be able to cover them all. For more in-depth information and best practices on this topic, refer to Foundant's webinar on AI Guardrails.

Examples:

First, there is a category of concerns relating to the organizations creating AI and their products. To what extent are they using energy and affecting the climate? Who are these organizations? What are their general priorities? Do they have policies in place you can align with? And so on.

Next, there are concerns relating to the data you provide to the product. When it comes to submitting data to these organizations, how is that data used to train their models? What is their policy about data?

Then, there is the information you receive from the models. Is it appropriate to use it and appropriate to trust it? To what extent should you rely on it? Is it the starter or the final answer in your workflow? How are you thinking about that? And then, of course, how might this impact your beneficiaries? What awareness do you owe to the nonprofits about how you’re using these tools? Again, questions worth asking and answering.

Finally, in their day-to-day use of AI, foundations should consider privacy and data collection, algorithm bias, and accountability.

Privacy/data collection: Many artificial intelligence models are developed by training on large datasets, and that data can be misused or accessed without authorization. Best practices for foundations mitigating this risk include the following (a brief redaction sketch follows the list):

  • Avoiding entering sensitive personal or financial data into AI systems
  • Conducting regular risk assessments
  • Collecting only the minimum necessary data
  • Confirming and documenting use consent
  • Encrypting data during storage and transmission
  • Anonymizing or pseudonymizing data where possible
  • Limiting access to sensitive data and reporting on data collection and storage practices.
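
As a rough sketch of what pseudonymizing data before it reaches an AI tool can look like, the example below replaces emails, phone numbers, and known names with placeholder tokens using Python's standard library; the patterns and sample text are hypothetical, and real redaction needs a more thorough approach plus a human check.

# Hypothetical sketch: pseudonymize obvious identifiers before text is sent to an AI tool.
# Only covers emails, US-style phone numbers, and a known list of names; real redaction needs more care.
import re

KNOWN_NAMES = ["Jane Doe", "Carlos Rivera"]  # e.g. applicants pulled from your grants database

def pseudonymize(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)            # email addresses
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)  # phone numbers
    for i, name in enumerate(KNOWN_NAMES, start=1):
        text = text.replace(name, f"[PERSON_{i}]")                        # consistent placeholder per person
    return text

sample = "Contact Jane Doe at jane.doe@example.org or 406-555-0142 about the site visit."
print(pseudonymize(sample))
# -> "Contact [PERSON_1] at [EMAIL] or [PHONE] about the site visit."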

Algorithm bias: AI systems can be biased, producing discriminatory and unjust outcomes pertaining to hiring, lending, law enforcement, health care, and other important aspects of modern life. These biases typically arise from the training data used. If the training data contains historical prejudices or lacks representation from diverse groups, then the AI system’s output is likely to reflect and perpetuate those biases. Best practices for preventing algorithm bias include using diverse and representative data, implementing mathematical processes to detect and mitigate biases, developing transparent, explainable algorithms, adhering to fair, ethical standards, and engaging in ongoing learning.

Accountability: this concern stems partly from the lack of transparency in how AI systems are built. Many AI systems, especially those that use deep learning, operate as “black boxes” for decision-making. AI decisions are frequently the result of complex interactions with algorithms and data, making it difficult to attribute responsibility. Best practices to ensure accountability include following ethical design principles that prioritize accountability, documenting the responsibilities of all stakeholders, and ensuring that system design includes meaningful human oversight.

We are seeking AI tools that ingest handwritten notes (from site visits), create one-pagers of learnings, and then publish that one pager to the team. Do you have any insight into tools that do this well?

There are key technological components a tool would need in order to provide these capabilities: computer vision (to analyze the visual characteristics of handwritten text), deep learning (neural networks for pattern recognition), and natural language processing (to understand context and improve accuracy in text interpretation).
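
As a rough sketch of how those components can fit together, the example below uses the open-source pytesseract OCR library to extract text from a scanned page of notes and then asks an LLM (OpenAI's API is used here purely as an example) to draft a one-pager; the file name, model name, and prompt are assumptions, and handwriting accuracy varies widely, so outputs should be reviewed before sharing.

# Hypothetical sketch: turn a scanned page of handwritten site-visit notes into a draft one-pager.
# Assumes the Tesseract OCR engine is installed locally and an OpenAI API key is configured.
import pytesseract
from PIL import Image
from openai import OpenAI

# Step 1: OCR the scanned notes (accuracy depends heavily on handwriting quality).
raw_text = pytesseract.image_to_string(Image.open("site_visit_notes.jpg"))

# Step 2: Ask an LLM to condense the notes into a one-page summary of learnings.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": "You summarize site-visit notes into a concise one-pager of key learnings."},
        {"role": "user", "content": raw_text},
    ],
)

one_pager = response.choices[0].message.content
print(one_pager)  # Step 3: review, then share with the team (e.g. paste into a doc or email).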

Examples of software that fit these needs include:

  • Mazaal: a simple, pre-trained AI model that offers handwriting recognition inside its AI automation platform – free to use.
    • Specialized in recognizing handwritten text in images
    • High accuracy in converting handwritten documents to digital formats
    • Supports 300+ languages
    • Automation features to handle hundreds of images
  • Google: brings the power of its vast AI capabilities to handwriting recognition.
    • Seamless integration with Google’s ecosystem
    • High accuracy rates for diverse handwriting styles
    • Developer-friendly with extensive documentation
  • Instabase Handwriting Recognition:
    • Designed for enterprise-level document processing
    • Integrates with existing business workflows
    • Offers customization options

How would a foundation use AI most effectively for place-based philanthropy?

Place-based philanthropy involves significant community engagement. AI can be used to increase these engagement efforts and create community assessments. AI can process and analyze large datasets including social media, surveys, and user interactions. This data analysis can provide you with valuable insights into member behavior, preferences, and trends, helping you tailor engagement strategies accordingly.

Humanized chatbots and virtual assistants can provide real-time support to community members, promptly addressing their queries and concerns. With a well-programmed and maintained chatbot, you can easily improve your overall community support experience and ensure members feel heard. AI even helps identify new online community members based on their interests and online behavior, enabling you to easily recruit for panels or focus groups.

One example of software that may fit these needs is FranklyAI. This platform actively listens by conducting user-led discussions, and it's much more conversational than a standard chatbot because it is tailored to the needs of your community and consultation.

How is AI being used to code data? For example, would having AI analyze and categorize set grants into specific focus areas be a recommended use case?

That would be an excellent use case. AI code generation uses large language models (LLMs) trained on huge datasets to learn patterns between human input and code output, so when you give them a prompt, they predict the most likely, most useful code in return, building it token by token. Modern models use attention mechanisms to focus on the most relevant parts of your prompt, making output more accurate and context-aware. Software solutions that can assist in data coding include GitHub Copilot, Cursor, Codeium, and Bolt.new.
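
For the grant-categorization use case specifically, a lightweight sketch might look like the following, using OpenAI's API as one example provider; the focus areas, grant descriptions, and model name are placeholders, and staff should spot-check the results.

# Hypothetical sketch: ask an LLM to assign each grant description to one of a fixed set of focus areas.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

FOCUS_AREAS = ["Education", "Health", "Arts & Culture", "Environment", "Economic Opportunity"]

grants = [
    "Funding for a mobile clinic serving rural families without regular access to care.",
    "A summer program teaching middle schoolers robotics and coding.",
]

for description in grants:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system",
             "content": f"Classify the grant into exactly one of: {', '.join(FOCUS_AREAS)}. Reply with the category only."},
            {"role": "user", "content": description},
        ],
    )
    print(response.choices[0].message.content.strip(), "-", description)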

We, as a funder, are receiving far too many requests for funding, and do not trust AI to analyze the nuance of our grant applications to find a fit. Can you speak to these concerns? All these AI webinars are super supportive of what it can do, but it all sounds like it will remove people from the nonprofit sector, while we are already facing extensive competition in the job market.

It is correct that AI lacks the nuance and emotional intelligence of humans. AI should not be used to replace human judgment; it should be used to relieve the burden of administrative tasks, and final decisions should always be made by humans. As mentioned in the first question above, in some cases AI automates jobs, depending on the industry, but new jobs will also be created by AI. Look back in history: what emerging technologies “took away” jobs? And, as a result of those same technologies, what new jobs were created?

Is it worth buying the “premium” version of any AI platforms?

It would depend on your needs. Free tools are a great way to explore AI with no upfront cost. They’re best for casual users, students, or small projects. But when time, scale, or output quality matters, paid AI tools deliver serious value. Whether you’re designing brand assets, writing content, or building virtual environments, premium features often save more time and offer better results.

As AI tools become increasingly integral to various industries, determining when to invest in a premium plan is a decision that requires careful consideration. AI consultants recommend using a “4P Framework” to evaluate whether a paid AI tool is truly worth the investment: Precision, Privacy, Productivity, and Profit. This helps professionals assess the true value of AI tools and their impact on workflow, compliance, and revenue generation.

How do you ensure confidentiality of content you provide AI to assess/report on? How do you maintain confidentiality of the applicant’s information if you’re dropping the proposals into AI?

Best practices for ensuring confidentiality in AI content include:

  • Conducting risk assessments
  • Limiting data collection
  • Seeking and confirming consent
  • Following security best practices
  • Providing more protection for data from sensitive domains
  • Reporting on data collection and storage

Always review a product's security measures and never input personal information into AI software. If necessary, create a fake scenario so you can analyze the data structure without compromising personal information. As always, quality-check all output before finalizing.

Are specialized AI tools like off-the-shelf products, or do they generally require some level of on-going integration? Thinking about some software upgrades we’ve done in the past that have dragged on for years trying to get the new system to properly carry out business processes.

Custom AI involves designing machine learning models, algorithms, or NLP/computer vision systems tailored specifically to your unique business needs. Because they need to be trained on your data, these tools may require 3-9 months of integration. Custom AI is ideal for enterprises with complex workflows, regulated industries (finance, healthcare, logistics), and businesses seeking long-term innovation and IP ownership. Off-the-shelf options are ideal for SMEs with limited resources, fast prototyping and automation, and standardized workflows. In short:

  • Choose Custom AI if:
    • You have access to proprietary data
    • You want to own your IP
    • You’re in a competitive or regulated industry
    • Long-term ROI > short-term savings
  • Choose Off-the-Shelf AI if:
    • You’re prototyping or testing AI use cases
    • You need instant functionality
    • Your use case is general (e.g., chatbots, CRM automation)
    • You lack technical resources

What are the ethics of using AI to review grant applications? Does anyone have policies and formal guidelines around this? Are there examples of AI policies you would recommend using as a starting point?

Ethical concerns surrounding AI include accountability measures, algorithm bias/fairness, and privacy concerns, as discussed earlier. There is no comprehensive, one-size-fits-all policy for AI use, but there are several guidelines you can follow. Foundant's blog on AI provides guidelines on how to use AI safely as a nonprofit. Project Evident's “Responsible AI Adoption in Philanthropy” also provides a framework for how AI should be used in philanthropy.

Even if you tell an AI tool not to train further on the data that you give it, wouldn’t the company that owns the AI tool still technically own the data and do whatever they want with it such as sell it?

Ultimately, you own your data. If you provide data to an AI tool, you typically retain ownership of it; however, the company may have usage rights depending on the tool's terms of service or privacy policy. This means they might gain certain rights to use, store, or analyze your data. If you tell the AI not to train on your data (and the company honors that), it should not use your data to improve its models. However, that doesn't necessarily prevent other uses, such as storing the data temporarily for processing, using it to provide services to you, or sharing it with third parties if you've agreed to that in the terms.

Whether a company can sell your data depends entirely on the agreement you accepted before using the tool. Reputable companies typically don't sell personal data, especially if it's sensitive or identifiable, but some services may sell aggregated or anonymized data, which is harder to trace back to individuals. Best practices to prevent unwanted data collection include reading the privacy policy and terms of service carefully and looking for phrases like “We do not sell your data,” “Your data will not be used to train our models,” and/or “You can opt out of data collection.” If you're using AI tools for work, your company might also have enterprise agreements that offer stronger protections.

Who is policing AI if data usage doesn’t comply?

There is no single federal law that comprehensively regulates AI or its use of data. Instead, oversight is fragmented across federal agencies, including the Federal Trade Commission, the Department of Commerce, and the Center for AI Standards and Innovation. Congress has passed targeted laws, such as one criminalizing non-consensual AI-generated intimate imagery, and has debated broader federal AI regulation. States like California, Illinois, Montana, and Tennessee have passed their own AI and privacy laws, creating a patchwork of regulations that can be confusing for companies and consumers alike. Globally, the EU's AI Act classifies AI systems by risk level and imposes strict requirements on high-risk systems.

If we don’t want to feed sensitive application details into a public AI and we can’t afford to create our own LLM, is there a way to use AI that doesn’t put those application details out into the world?

You can use a private or on-premises AI solution, such as a self-hosted model. You can run open-source models like LLaMA, Mistral, or GPT-J on your own servers or cloud infrastructure, which keeps all data within your control and avoids sending anything to external providers. It requires some technical setup, but there are managed services that simplify this. Many cloud providers also offer enterprise-grade AI tools that do not train on your data, do not store your inputs, and comply with privacy regulations. Another option is Retrieval-Augmented Generation (RAG): if you want to use AI for internal knowledge or applications, you can combine an LLM with a private document store (such as a vector database), and the model retrieves relevant information from your internal data without ever training on it. You can also use local AI assistants like LM Studio, Ollama, or PrivateGPT that let you run models locally on your machine. Finally, check “no data retention” policies when using public AI tools.
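
As a rough sketch of the RAG idea, the example below embeds a few internal documents locally with the open-source sentence-transformers library, retrieves the most relevant one for a question, and builds a grounded prompt that could be sent to a locally hosted model (for instance via Ollama); the documents, model name, and question are placeholders.

# Hypothetical sketch of Retrieval-Augmented Generation (RAG) over private documents.
# Embedding happens locally with sentence-transformers; nothing is used to train any model.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small model that runs locally

# Internal documents stay on your own infrastructure (illustrative snippets only).
documents = [
    "Our 2024 capacity-building grants prioritized rural arts organizations.",
    "Site visits are required for grants above $50,000 and documented within two weeks.",
    "The youth mental health initiative funds school-based counseling pilots.",
]
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)

question = "When are site visits required?"
question_embedding = embedder.encode(question, convert_to_tensor=True)

# Retrieve the most relevant document by cosine similarity.
scores = util.cos_sim(question_embedding, doc_embeddings)[0]
best_doc = documents[int(scores.argmax())]

# Build a grounded prompt; this could be sent to a locally hosted model (e.g. via Ollama)
# so the application details never leave your environment.
prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
print(prompt)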

If I am interested in training myself and my staff on how to use AI more effectively in our philanthropic work, are there courses or programs that you recommend?

Of course! Here are some options for courses/programs.

  • Unlocking AI for Nonprofits – NetHope & Microsoft
    • Format: Free, self-paced online courses
    • Who it’s for: Beginners to advanced users
    • Platform: Kaya
  • PLAI – AI Courses for Nonprofits, NGOs, and Social Sector Professionals
    • Format: Immersive workshops, coaching, and masterclasses
    • Audience: From volunteers to C-suite leaders
    • Website: PLAI Learning H
  • NTEN – AI for Nonprofit Decision-Makers
    • Format: Online course (~60 minutes of video content)
    • Website: NTEN Course Page
  • FormusPro – AI Academy for Nonprofits
    • Format: Free training sessions and on-demand classes
    • Audience: All staff levels, including leadership
  • Microsoft Learn – AI Skills for Nonprofits
    • Format: Beginner-friendly learning path
    • Focus: Using Microsoft 365 Copilot across Word, Excel, PowerPoint, Teams
    • Website: Microsoft Learn

“Creating Policies That Align Technology with Your Mission”

On September 4th, 2025, we hosted the second webinar in our series on AI for funders. The webinar was facilitated by Darian Rodriguez Heyman – bestselling author of the newly released book AI for Nonprofits and Founder & CEO of Helping People Help – and expert guest presenter Nathan Chappell, Chief AI Officer at Virtuous. This webinar discussed governance and acceptable use standards for AI.

What are the key criteria you recommend for the “sniff test” to determine whether an AI platform is legit?

Do you have recommendations for organization-wide AI trainings?

It's a bit overwhelming, and that's honestly why Nathan started a podcast two years ago called fundraising.ai, which is available on Apple Podcasts, YouTube, and Spotify. There's so much information about AI that it becomes completely overwhelming, but not a lot of it has direct application to the nonprofit sector, and that's what he tries to do on the podcast. The key is to create a regimen where you're continuously learning, because if you wait, everything will have changed. Find someone you resonate with and then follow them. There are a few people Nathan recommends, like Ethan Mollick, who wrote a book called Co-Intelligence – a great book that demystifies what AI is and isn't. The Coming Wave is a great depiction of the future and what's at stake. Also, Allie K. Miller is a balanced voice; even though she doesn't focus on the nonprofit sector, her work is very aligned. She takes a highly responsible approach to using AI and to showing how to use AI to unlock the potential of your organization.

Is it possible to address how to enforce a policy, e.g., if staff are caught breaking policy guidelines?

We have never used AI. Is the free version of ChatGPT adequate? Is that the best option to start with?

The reality is, if you're not paying for the product, you are the product; it's important to remember that. When you pay for a product, what you're gaining is usually access to somewhat better, faster features, but mainly you're gaining security. In a paid version of ChatGPT, Gemini, or Anthropic's Claude, you get more control over privacy settings, so you can ensure the provider isn't retaining your data or training new models on it. It's important to understand those privacy settings; they're not super complicated to turn on or off. If in doubt, don't put any confidential information into a large language model whose settings you're not familiar with – just err on the side of caution. But if the stakes are very low and you're not using confidential information, free products are great. At the end of the day, the paid version is only about $20 a month, so there's almost no excuse not to afford it for you and members of your team.

What do you think about putting confidential information into a paid AI tool (i.e., Enterprise or Team accounts of ChatGPT)?

I’m on board, but some members of my organization are deeply concerned about the environmental impacts of AI. How would you recommend addressing these concerns in order to create a culture of innovation with AI?

Can you speak at all to the environmental cost of AI? I’m not sure if what I’ve read about its energy usage is true or not. If it is accurate, how should we approach that in our governance policies?

With so many new startups entering the market, such as Virtuous, how can we be confident that the privacy and security commitments they make are both accurate and truly effective in protecting our data?

We are in the middle of strategic planning and preparing for a phase focused on how we enable our strategy through the use of AI tools, agents, etc. What is your best advice on how to maximize the planning process?

I would like to look into my team having an AI tool. Currently we have Microsoft's Copilot included in our subscription, but it's terrible, lol. What are others using?

As the Impact Director for a community foundation, I lead the nonprofit leadership series. We offered a 3-part series this spring for our nonprofits to attend and have had many requests to continue the series. What would the main themes be for a full series?

And what about the environmental concerns with AI? What if our constituents are concerned about the ecological impact of massive server farms?

Is anyone using AI to solve the energy issue?

AI is rising in the philanthropic sector, and there is uncertainty that comes with this change. We hope this Q&A provides more clarity on the topic of AI for funders. Consider using AI to help you streamline processes and reduce administrative burden so you can get your time back to focus on your mission and impact. Use these insights and tips to revolutionize your organization with AI.

Start a conversation to learn how Foundant can make your funding process easier.