Use of Artificial Intelligence

This version of the policy is for external use and omits internal confidential information.

Why We Post Our Policy on Use of Artificial Intelligence

The MacArthur Foundation values Integrity and Learning and, in that spirit, has made a range of our policies publicly available, including the Policy on the Use of Artificial Intelligence (AI) adopted two years ago. Our goal in sharing the initial Policy at an early stage was to benefit the field and other organizations wrestling with the appropriate use of AI tools. We received feedback that the initial Policy was helpful to some organizations.

We recently revised our Policy on the Use of AI. We recognize this is a fast-evolving field and many organizations have already adopted or are in the process of considering their own policies.

As a learning organization and in the spirit of transparency, we share our revised Policy and will share future updates publicly in case it might prove useful to other organizations. Some organizations may take different approaches to these complicated issues or may be further ahead of MacArthur in their use of AI tools. We look forward to working with peer organizations in ensuring the ethical and appropriate use of these powerful tools.

AI is evolving quickly and manifests in a range of cloud-based applications, tools, and platforms (collectively, AI tools). The purpose of this Policy is to guide the ethical and responsible use of AI tools by Staff as noted below, and externally by grantees, vendors, and consultants when engaging with the Foundation. The Foundation wishes to harness and responsibly use AI tools to improve creativity, efficiency, and productivity in furtherance of our mission while recognizing their limitations.

This Policy should be read in conjunction with other Foundation policies, including the Technology Policy, the Confidentiality Policy, the Code of Conduct, and other applicable policies.

In recognition of the constantly evolving nature of technology and AI and related laws and best practices, this Policy will be reviewed regularly.

The following Appendices are part of the Policy: Appendix 1, a glossary of terms and data use; Appendix 2, approved AI tools; Appendix 3, an AI Tool checklist. Additional guidance will be issued as needed.

Use of Artificial Intelligence


It is the policy of the Foundation to permit the use of authorized AI tools to benefit the Foundation in its work through efficiencies, innovation, discovery, and productivity when it can do so consistent with the principles below and in accordance with the procedures herein.

Overarching Approach and Philosophy

Given the evolving and fast-moving developments in the AI field and the integration of AI tools across sectors, the Foundation wishes to engage with the opportunities of AI through a principled experimentation approach consistent with this Policy as follows:

  • The Foundation will iterate as necessary to respond to technological, legal, and other developments to ensure that the use of AI tools is consistent with the Foundation’s values.
  • The use of any AI Tool will be evaluated periodically to assess its intended benefits and identify any issues that may warrant continued use, modification of its use, or cessation of its use.
  • The Foundation will be alert to opportunities to allow the use of AI tools to increase efficiencies and productivity throughout its operations and to reduce burdens to grantees in the grant process, including applications and reporting.

Oversight

An AI Tool Recommendation and Use Committee (AIRUC) with members appointed by the President is hereby established with the following guidance:

  • AIRUC shall be responsible for reviewing and approving proposed uses of AI tools.
  • In considering its recommendation, AIRUC shall weigh the benefits provided by the proposed use, including productivity enhancements, innovation, and efficiencies, in light of potential risks and the principles described in this Policy.
  • AIRUC will maintain a use library of AI tools and learnings.
  • AIRUC will develop a charter to describe more fully its purposes and procedures.

Principles for Internal Use of AI

The following principles and required steps will guide the use of AI Tools at the Foundation:

Use Only Permitted AI Tools

Authorized AI Tools

Appendix 2 contains the List of Authorized AI Tools, which will be reviewed regularly by AIRUC and updated as necessary. Any program or department wishing to use an AI tool that has not previously been approved may propose the tool to AIRUC together with an analysis based on the AI Tool checklist in Appendix 3. AIRUC will consider such approval based on the principles and provisions of this Policy and, if the tool is approved, it will be added to Appendix 2.

Review of Terms of Service

Many applications pose intellectual property, privacy, and security risks. Terms of service must be reviewed by one of the lawyers in the Legal Department and a member of the IT Department.

When to Download

Approved tools may be downloaded for use, provided you are utilizing a Foundation-provided license with multi-factor authentication enabled.

Registering for Tools

Register for (if applicable) and use tools only under your Foundation or business email address (i.e., do not conduct Foundation business using a personal email address or account).

Embedded AI Tools

AI tools embedded in productivity programs or platforms in use, or put into use, Foundation-wide (such as Zoom, Teams, Workday, or similar technology platforms) that require a user to elect to trigger the use (such as AI notetaking and similar tools) should not be used unless the embedded tool has been approved by the IT Department in consultation with the Legal Department and the requirements for such use have been developed. AI functionality embedded in approved platforms (e.g., Zoom, Microsoft Teams) may be permitted for notetaking, transcription, and summary purposes only under the following conditions:

  • The host of an internal MacArthur meeting who wishes to use the functionality must follow the procedures approved by the Foundation for the recording of Zoom or Teams calls. These require, among other things, advance notice that such tool(s) will be used for the meeting and a reminder at the meeting that such embedded tool will be used, allowing in both cases for participants to decline to participate in the meeting or to request that the tool not be used.
  • The approved embedded tools, including notetaking, recording, or transcription, should not be used in meetings (including Zoom or Teams calls) in which it is anticipated that Legal Counsel will be providing legal advice.
  • Notes, transcripts, and summaries will not be distributed outside the Foundation and will be stored only temporarily in accordance with the Foundation’s Record Information Management Policy.

External Meetings Hosted by MacArthur

Staff should not trigger the use of embedded AI tools such as notetaking, summaries, or transcripts without notice to, and the consent of, the other participants in the meeting. If consent is unclear, the tool should not be used except in consultation with the Legal Department.

External Meetings Hosted by Third Parties

Staff should be aware of instances in which a third-party host has triggered the use of embedded AI tools for a meeting in which MacArthur Staff are participating. Depending on the nature and sensitivity of the meeting (and other circumstances, such as the number of attendees), Staff should ask the host to disable the tool using the following or similar words: “Under MacArthur’s Policy on the use of Artificial Intelligence in meetings, I request that the [AI summary, note taker] be disabled.” This is especially the case where the subject matter may be sensitive, involve confidential information, or where it is not clear how the resulting product may be used. If the host will not abide by the request, the Staff member should consider leaving the meeting.

Protect Confidential Information and Maintain and Uphold Security and Privacy

Do not include (or upload) any confidential, proprietary, personal, or sensitive information of the Foundation, grantees, vendors, investees, or others with whom we engage in any queries, prompts, or inputs to AI tools except as permitted by this Policy.

Do not download any AI technology to a Foundation device (laptop, phone) or the Foundation’s network unless it is an approved AI tool or you have the documented permission of the IT Department.

Appendix 1 describes categories of data based on their sensitivity and confidentiality and the permitted use of each category with the AI tools indicated.

Approved Tools

If using an approved tool, follow all security and privacy safeguards required by the Foundation.

Prohibited Tools

Refrain from using these tools in the course of Foundation-related work, regardless of whether data is public, non-sensitive, or confidential.

Public Tools (neither prohibited, nor approved for enterprise use)

Only publicly available data may be used in these tools unless AIRUC has reviewed and authorized specific tools as identified in Appendix 2.

Be Transparent and Disclose Use

If you use an AI tool for Foundation work, you should disclose that the work product was based in part on information generated by an AI tool, identify the tool, and explain how the information was generated.

AI-generated content does not need to be disclosed for everyday minor tasks (e.g., grammar suggestions, brainstorming) when using tools that are approved for use at the Foundation.

Disclosure is required when:

  • AI is a primary contributor to the final written work product or responsible for a majority of the final content.
  • AI is used to generate media, including images, audio, or video.

How to disclose:

  • If AI was a primary contributor to written material, include a general statement in the document, e.g.: “This [product, report] was generated using [AI Tool] and reviewed for accuracy and completeness.”
  • In the case of AI-generated media, a watermark must be added to indicate, “Generated by AI.”

Understand Your Responsibility

You are responsible for the accuracy of content generated by AI and should use it in a manner consistent with our values.

Understand:

  • Information from AI tools may not be accurate, may be offensive, and may reflect racial, gender, or other biases, depending on the sources from which the underlying information is drawn, and may not be consistent with our values.
  • Use caution when using AI tools that might limit fairness and inclusion, such as in recruiting processes or in selecting vendors or investment managers.
  • Ensure AI usage serves essential functions or provides tangible benefits.
  • Be aware that AI tools can result in a significant carbon footprint connected to the electricity and computing resources needed to run the servers powering AI modeling tools.

Be Responsible:

  • Do not use AI-generated content verbatim, and do not rely exclusively on an AI-generated product when producing a written document for the Foundation or for public consumption; check your work and sources.
  • You are responsible for reviewing, fact-checking, and ensuring accuracy before using AI-generated outputs in Foundation work.
  • AI should assist, not replace, human authorship. Avoid relying upon AI-generated content without review or edits.
  • When using AI, cross-check outputs with trusted sources before distributing any information internally or externally.

Respect Copyrights and Other Intellectual Property

Do not use AI tools for generic outputs that might result in raising questions of copyright because they are similar to or based on known works of art or other copyrightable materials (e.g., do not ask the AI tools to make a modification of a known work, author, or character, real or imaginary, such as make a picture of Barbie as an astronaut).

Any questions regarding the use of AI tools that might raise copyright issues should be directed to the Legal Department.

Use Approved AI Tools in Ways that Increase Productivity

AI can be a helpful tool when used appropriately. AI use cases for the Foundation include:

  • Creating initial drafts of emails, reports, or summaries that are carefully reviewed and deeply edited.
  • Using AI to generate ideas, outlines, or refine existing concepts.
  • AI may assist in identifying trends or summarizing large datasets (adhering to the guidance on personal/sensitive data in Section III(B), “Protect Confidential Information and Maintain and Uphold Security and Privacy”).
  • AI translation tools may be used for internal purposes only (not for legal documents or contracts).
  • AI tools embedded in approved platforms (e.g., Zoom, Teams) may be used for notetaking and summaries only as permitted by this Policy.

The following are NOT acceptable uses of AI:

  • Generating final work products without human review.
  • Using AI for hiring decisions or personnel evaluations.
  • Using AI for grantmaking decisions or evaluation.
  • Relying on AI for legal or financial guidance.
  • Modifying copyrighted materials, such as a known work, author, or character, real or imaginary.

Staff Engagement

Staff input and engagement are encouraged, and staff should share best practices and results. Staff are encouraged to participate in discussions regarding possible AI tools and to provide feedback and ideas on how AI tools can be leveraged to improve our work and the organization.

If you find (or imagine) a use of AI tools that you think would be particularly useful for the Foundation in our work, please let AIRUC know with a brief explanation of the use and its benefits.

Applications to Consultants, Vendors, and Grantees

The use of AI tools by third parties engaged by the Foundation as part of our business operations, including consultants, vendors, and grantees, for Foundation work can have implications for, and create potential liability for, the Foundation. To balance the business needs of third parties with protecting the Foundation’s interests, agreements with third parties (except for general operating support grants) should include the following terms, as applicable to the circumstances and identity of the third party:

  • Disclosure to the Foundation of AI tools when AI is a primary contributor to the final work product provided to the Foundation or produced with funding provided by the Foundation for the specific work product and made publicly available or when AI is used to generate media, including images, audio, or video.
  • Representations that work product produced under the applicable agreement through an AI tool and provided to the Foundation or made publicly available does not violate copyright or other intellectual property rights.
  • To the extent the Foundation may in the future make grants to support the production of an AI tool, the grant agreement will include specific provisions consistent with the principles of this Policy.

Duty to Monitor and Report Issues with AI Tools

If you notice unexpected behavior, bias, or ethical concerns with an AI tool, report them to AIRUC.

Appendix 1 - Glossary of Terms

AI (or Artificial Intelligence)

Refers to a constellation of computational technologies (e.g., machine learning, natural language processing, and deep learning) that make predictions based on data inputs and computing power. While often ascribed human-like characteristics, AI, as defined, does not have awareness or consciousness as humans do.

Generative AI

A type of artificial intelligence that enables machines/software to generate text, images, audio, code, or other responses to prompts.

Sensitive Information

Foundation information that should not be accessed broadly internally (e.g., performance reviews) and/or released externally (e.g., grantee financial information). Below are examples of sensitive information.

Personally Identifiable Information (“PII”)

Information that can be used to identify an individual, including Foundation applicants, employees, grantees, and contractors.

Types of Data

The Foundation classifies data into three categories for AI usage. This appendix is not shared publicly. Parties may contact Josh Mintz for additional information ([email protected]).

Public Data

Permitted. Can be used for AI development and analysis. Adhere to usage terms and assess ethical implications.

Possibly Sensitive Data

Permitted. Usable for approved AI applications that have an Enterprise license. Use audits or data loss monitoring and prevention software.

Sensitive or Confidential Data

Prohibited without pre-approval by the General Counsel.

Appendix 2 - List of Authorized AI Tools

Given the speed of AI tools being developed and updated, the authorized tools may be updated frequently.

Contact AIRUC to request a new tool for review and approval according to our Tools Evaluation Checklist.

AIRUC will evaluate AI use by considering factors including but not limited to:

  • Does vendor commit not to use Foundation data to train its models?
  • Does vendor stand behind the accuracy of its deliverables?
  • Does vendor indemnify Foundation for IP/copyright infringement?
  • Does vendor commit to appropriate security standards (ISRs)?
  • Is Foundation able to control access?
  • Are logs auditable?
  • Will Foundation own outputs?
  • Are there any use restrictions applicable to the Foundation’s intended use?
  • Are there any usage policies that need to be read by users?

Details on permitted tools are not public. For more information contact Josh Mintz ([email protected]).

Approved Foundation AI Tools

May utilize public data or possibly sensitive data. Use of sensitive and/or confidential data is prohibited.

Public Tools Reviewed by AIRUC/IT

May utilize public data. Consult with IT before utilizing possibly sensitive data. Use of sensitive and/or confidential data is prohibited.

Prohibited Tools

The use of public data, possibly sensitive data, or sensitive and/or confidential data is not permitted.

Appendix 3 - AI Checklist

Does the proposed AI Tool provide additional benefits or opportunities not provided by existing approved AI Tools?

Does the proposed AI Tool have a policy to protect confidential and sensitive information/data that has been reviewed by the Foundation’s legal department?

Has the Foundation’s IT department reviewed the technological aspects of the AI Tool?

Does the proposed AI Tool have a history of generating useful results and of not producing biased or inaccurate information (understanding that Staff are responsible for the accuracy of any results)?

Will the use of the AI Tool be inconsistent with the Foundation’s values (for example, because the Tool is owned by companies or persons whose public statements and purposes are at odds with the Foundation’s, or because the Tool benefits only one company or person rather than a broader public)?
