Use of Artificial Intelligence

This version of the policy is for external use and omits internal confidential information.


Artificial Intelligence (“AI”) is evolving quickly, and AI Tools are now used across a wide spectrum of activities. These tools can be useful for the Foundation in our work and for our grantees and contractors. We want to take advantage of the benefits of AI but do so responsibly, ethically, and in keeping with our values.

This Policy applies to the use of AI Tools both internally by Staff and, as noted below, externally by grantees, vendors, and consultants when engaging with the Foundation. The Policy rests on the principles described below and the required steps found under the principle at issue. It should be read in conjunction with other Foundation policies, including the Technology Policy, the Confidentiality Policy, the Code of Conduct, and other applicable policies.

In recognition of the rapidity of change in the field, including that AI will be increasingly embedded in other platforms and tools, this Policy will be reviewed no less frequently than annually.

A glossary of terms is attached as Appendix 1, which will be updated as necessary. The AI Tool checklist, described below, is attached as Appendix 2.


It is the policy of the Foundation to permit the use of AI Tools to benefit the Foundation in its work through efficiencies, innovation, discovery, and productivity when it can do so consistent with the principles below and in accordance with the procedures herein.

Overarching Approach and Philosophy

Given the evolving and fast-moving developments in the AI field and the integration of AI Tools across sectors, the Foundation wishes to engage with the opportunities of AI through a principled experimentation approach as follows: 

  • AI Tools embedded in productivity programs or platforms that are in use, or put into use, Foundation-wide and that require a user to opt in should not be used unless approved by the IT Department in consultation with the Legal Department.
  • In considering possible uses of AI and benefits to their work, a department or program should consult the principles described below as reflected in the AI Checklist attached as Appendix 2 and make its best judgment whether the proposed use and benefits are consistent with the principles. Terms of service must be reviewed by one of the lawyers in the Legal Department, and a member of the IT Department should be consulted prior to use.
  • An AI Tool that is expected to be used beyond a single department or program will be considered a Material AI Tool and its use must be recommended by the AI Tool Recommendation and Use Committee (“AIRUC”) as described below.
  • If the determination is made by the department or program that the proposed use is consistent with the principles and is not a Material AI Tool, the department/program should notify AIRUC (and provide the checklist) and may proceed with the proposed use. 
  • The use of the AI Tool should be evaluated periodically to assess its intended benefits and identify any issues that may suggest further use, modification of the use, or cessation of the use. Such learnings should be communicated to AIRUC.
  • The Foundation should be alert to opportunities to allow the use of AI Tools to reduce burdens to grantees in the grant process, including applications and reporting.

Principles for Internal Use of AI

The following principles and required steps will guide the use of AI Tools at the Foundation:

Protect Confidential Information and Maintain Security and Privacy

  • Understand that information from AI Tools may not be accurate, may be offensive, may reflect racial, gender, or other biases depending on the sources from which the underlying information is drawn, and may not be consistent with our values.
  • Use caution when using AI Tools that might limit fairness and inclusion, such as in recruiting processes or in selecting vendors or investment managers.
  • Be aware that AI Tools can result in a significant carbon footprint connected to the electricity and computing resources needed to run the servers powering AI modeling tools.

Be Transparent and Disclose Use

  • If Staff use an AI Tool for Foundation work, you should disclose that the work product was based in part on information gleaned from an AI Tool, identify the tool, and explain how the information was generated.
  • The Foundation should consider disclosing to grantees and others if data provided by them will be used in an AI Tool, either in the grant agreement or on our website.
  • The Foundation should disclose to prospective employees if the Foundation is using an AI Tool to assess personal information submitted by prospective employees.

Do Not Use AI Content Verbatim

  • Do not rely exclusively on an AI Tool generated product if you are producing a written document for the Foundation or public consumption. Check your work and sources.
  • Generative AI tools (such as ChatGPT) can be useful for ideation or brainstorming or to create generic output that you can review, edit, and modify for appropriate use. AI Tools should never replace you as the author of original content or serve as your final work product. Examples of responsible use include:
    • Creating a first draft of an e-mail that you edit before sending.
    • Asking a question to elicit a response to spark your own thinking on a topic.
    • Asking a question that might identify gaps in your original content.

Staff Input and Engagement Are Encouraged, and Staff Should Share Best Practices and Results

  • Staff are encouraged to participate in discussions regarding possible AI Tools and to provide feedback and ideas on how AI Tools can be leveraged to improve our work and the organization.
  • If you find (or imagine) a use of AI Tools that you think would be particularly useful for the Foundation in our work, please let AIRUC know with a brief explanation of the use and its benefit.

Use Reputable AI Tools

  • New AI Tools emerge daily, and existing ones evolve. AIRUC maintains a list of permitted tools, and as principled experimentation ensues, this list will be modified by the Committee.
  • Do not use AI image generation or AI visual editing tools without prior approval of AIRUC and the Legal Department.

Respect Copyrights and Other Intellectual Property

  • Do not use AI Tools to generate outputs that might raise copyright questions because they are similar to or based on known works of art or other copyrightable materials (e.g., don’t ask an AI Tool to modify a known work, author, or character, real or imaginary, such as “make a picture of Barbie as an astronaut”). Any questions regarding uses of AI Tools that might raise copyright issues should be directed to the Legal Department.

Applications to Consultants, Vendors, and Grantees

The use of AI Tools for Foundation work by third parties engaged by the Foundation as part of our business operations, including consultants, vendors, and grantees, can have implications for, and create potential liability for, the Foundation. To balance the business needs of third parties while protecting the Foundation’s interests, agreements with third parties (except for general operating support grants) should include the following terms as applicable to the circumstances and identity of the third party:

  • Disclosure to the Foundation of any AI Tools used to generate work product that is provided to the Foundation, or that is produced with Foundation funding for the specific work product and made publicly available.
  • Representations that work product produced under the applicable agreement through an AI Tool and provided to the Foundation or made publicly available does not violate copyright or other intellectual property rights.
  • To the extent the Foundation may in the future make grants to support the production of an AI Tool, the grant agreement will include specific provisions consistent with the principles of this Policy.

Establishment of AI Tool Recommendation and Use Committee (“AIRUC”)

There is established an AI Tool Recommendation and Use Committee (“AIRUC”) to be appointed by the President. AIRUC shall be responsible for reviewing proposed uses of AI Tools by Staff that are not contained in software already authorized for use by Staff (such as Zoom or Teams) or that are Material AI Tools, and for formulating a recommendation to the President on the permitted use. In considering its recommendation, AIRUC shall weigh the benefits of the proposed use, including productivity enhancements, innovation, and efficiencies, against potential risks and in light of the principles described in this Policy.

AIRUC will maintain a use library of AI Tools and learnings associated with such use. A department or program using an AI Tool should provide learnings to AIRUC based on its experience.

AIRUC will develop a charter to describe more fully its purposes and procedures.

Appendix 1 - Glossary of Terms

  • AI (or artificial intelligence) refers to a constellation of computational technologies (e.g., machine learning, natural language processing, and deep learning) that make predictions based on data inputs and computing power. While often ascribed human-like characteristics, AI, as defined, does not have awareness or consciousness as humans do.
  • AI Tools means any platform, mechanism, bot, or machine learning idea generator that is triggered by a prompt, using an AI technology to formulate information or a response.
  • Generative AI refers to an advanced form of computation that requires enormous amounts of data and computing power to create a model that is able to generate text, images, audio, code, or other responses to prompts.
  • Sensitive information is:
    • Personal data, including confidential information about applicants, employees, grantees, or contractors.
    • Legal documents, including information that is subject to attorney-client privilege.
    • Third party confidential information, including information that is subject to a non-disclosure agreement or an expectation of confidentiality, such as grantee financial information and grantee strategies.
    • Internal process information or work product, including grant summaries, initiative strategies, and external and internal evaluations.
    • Any other confidential information, including board and committee records.

Appendix 2 - AI Checklist

  • The AI Tool will only be used by one department or program.
  • The AI Tool does not require the use of any proprietary or confidential information as defined in the Foundation’s Confidentiality Policy.
  • The AI Tool does not have a history of producing biased or inaccurate information.
  • Use of the AI Tool will not be inconsistent with the Foundation’s values.
  • The terms of service have been reviewed by the Legal Department.
  • The IT Department has been consulted.
  • The AI Tool has been approved for use by the AIRUC if applicable.
  • The use of the AI Tool is not likely to result in a copyright violation as determined in consultation with the Legal Department.
