Our new Technology in the Public Interest strategy advances innovation in the governance of AI with evaluation, auditing, and accountability.
For the better part of a decade, the MacArthur Foundation has been investing in organizations and networks seeking to understand and address the social impacts of artificial intelligence (AI). When the Technology in the Public Interest (TPI) program established its first field support strategy in 2020, data showed that the individuals and organizations doing this work did not see themselves as part of a shared field. That has changed significantly.
Today we see an emerging, if fragile, ecosystem of actors working to ensure that public interest considerations are centered in the design, use, and governance of AI. The complexity and urgency of ensuring that AI technologies benefit people and the planet have only grown since we started this work.
Yet right now, the vision for our future with AI is largely driven by a handful of powerful technology companies. Leaders of these companies make bold claims and promises about AI, often swinging between extremes: AI as a panacea for today's social and planetary challenges, or AI as a threat to the future of humanity. We are told that countries are in a supposed "race" to build the most advanced AI and that it will define geopolitical "winners" and "losers." This is a race with no apparent end point, but one that does serve to grow company bottom lines.
Auditing and Evaluations to Build Trust and Mitigate Harms
We need not cede our future with AI to this vision; yet we should understand the motives of AI and technology companies through the lens of power. By treating their vision of AI as inevitable, leading AI companies are amassing power at a scale rarely seen in history. It is power that must be checked, even as we build alternative pathways for AI that center humanity.
There will not be a turnkey solution to governing AI, but we do believe that there are basic steps needed to ensure that AI systems are widely beneficial and that harm is mitigated. Crucially, we cannot effectively govern AI if there are not shared approaches to evaluating and auditing AI systems that center public interest considerations.
AI evaluation and auditing matter if we want AI systems that are accurate, trustworthy, safe, rights-respecting, and secure—necessary qualities for technology impacting sectors that affect our daily lives such as healthcare, education, finance, and national security.
AI evaluations broadly assess a system's performance for safety and risk, while AI audits rigorously review systems, usually against defined standards or regulations, to verify compliance and accountability. AI companies must play a role in evaluations and audits. However, given their incentives and behavior to date, we cannot rely on them alone to establish and maintain AI evaluation and auditing approaches.
A Technology in the Public Interest Strategy for Accountability
These insights underpin the overarching goal of TPI’s new five-year strategy that will be the core of our work:
Strengthen democratic oversight and innovation in the governance of AI through the development, adoption, and enforcement of evaluation, auditing, and accountability mechanisms that center public interest considerations.
Our overarching goal informs three primary approaches to our grantmaking:
- Strengthen the research base for public interest-focused AI evaluations, auditing, and accountability in ways that inform policy, practice, and public knowledge.
- Advance the development, implementation, and enforcement of AI laws, policies, and regulations that center public interest considerations.
- Support networks that ensure AI deployment in high-stakes sectors prioritizes public interest considerations.
Our grantmaking will support efforts that focus on AI evaluation and auditing that seek to prevent and mitigate real-world harms. We will also prioritize developing strong methods and approaches to AI evaluation and auditing that center communities, keeping MacArthur’s values at the forefront. We believe that this will help ensure that the benefits of AI are more equitably shared.
The rapid development and rollout of new and emerging technology is impacting society, the economy, our politics, and international affairs—offering both opportunities and challenges. Mindful of this, in addition to our core grantmaking, TPI will engage in exploratory work. Areas of initial focus include the intersection of AI, national security, and geopolitics, as well as work focused on AI and linguistic diversity. Finally, across our grantmaking, we will contribute to the Foundation’s commitment to Humanity AI and Current AI.
We think that AI can help us solve challenges and benefit society, but only if strong guardrails and democratic oversight ensure that its development and use are aligned with the public interest.
