Our Strategy
The overarching goal of Technology in the Public Interest is to strengthen democratic oversight and innovation in the governance of artificial intelligence (AI) through the development, adoption, and enforcement of evaluation, auditing, and accountability mechanisms that center public interest considerations.
Our strategy seeks to ensure that AI systems are governed in ways that uphold democratic values, such as transparency, accountability, and public engagement. We emphasize the need for meaningful oversight that measures how AI systems impact people, communities, and society. To that end, we believe that accountability mechanisms should be grounded in the experiences of communities most vulnerable to AI harms. By grounding our work in those experiences, we can help ensure the benefits of AI are widely shared.
This goal is advanced through three interrelated grantmaking approaches:
- Strengthen the research base for public interest-focused AI evaluations, auditing, and accountability in ways that inform policy, practice, and public knowledge. AI evaluations broadly assess a system’s performance, examining factors such as societal impact, safety, and risk. AI audits, by contrast, rigorously review systems, usually against defined standards or regulations, to verify compliance and accountability. This includes support for organizations that create and implement evaluation and audit methods and advance other types of accountability measures for AI systems.
- Advance the development, implementation, and enforcement of AI laws, policies, and regulations that center public interest considerations. This includes the development of policy and regulatory frameworks and analysis, capacity building, education, and new ideas for governing AI in the public interest.
- Support networks that ensure AI deployment in high-stakes sectors prioritizes public interest considerations. AI tools are being rapidly deployed across socially consequential domains, from healthcare to education and finance. We support field practitioners who work together to embed responsible practices, elevate security considerations, and center human rights at every stage of implementation.
Because technology is not static, Technology in the Public Interest will also explore new and emerging issue areas. Grantmaking will contribute to the Foundation’s commitment to Humanity AI and Current AI.
TechEquity raises awareness about economic inequality resulting from the technology industry's products and practices, and advocates for change that ensures technology's evolution benefits everyone. Credit: TechEquity Collaborative
Why We Support This Work
We live in a world increasingly mediated by a range of technologies and computational systems that impact nearly every facet of life. This is poised to accelerate as AI systems and technologies are further integrated into society.
While AI promises to bring numerous benefits to people and society, it is also introducing a range of risks and harms. AI systems deployed across sectors such as healthcare, finance, and education are often not community-centered and may exacerbate existing inequities rather than address them. Moreover, the race among a handful of powerful companies and governments to develop and deploy ever more powerful AI has introduced a major national security and foreign policy issue. This race for “AI dominance” risks eroding efforts to ensure AI systems are accurate, reliable, trustworthy, safe, secure, and respectful of human rights.
Beneath the veneer of new and emerging technologies are old stories about power and how it operates. The drive to integrate AI everywhere is amassing unprecedented power in the hands of a few companies that control the most advanced AI models. These companies are ultimately beholden to their investors and shareholders, not to the public. This means companies too often have an incentive to prioritize profit and growth over the public’s interest.
All consequential technologies need strong oversight. Given AI’s potential to transform society, it demands equally rigorous governance. The ability to evaluate and audit AI is a core element of strong AI governance. These practices build trust, ensure safety and security, and help guarantee human rights protections. That is why Technology in the Public Interest is deepening our research investments in this area, while also supporting policy analysts, advocates, and others working to ensure AI governance serves the public interest.
The focus on advancing democratic oversight and governance of AI is also foundational to MacArthur’s Big Bet, AI Opportunity, which seeks to expand who creates, uses, and benefits from AI.
Expected Outcomes
Technology in the Public Interest expects to demonstrate contributions to the following outcomes through our grantmaking:
- Researchers, advocates, and others advance shared AI evaluation and auditing practices that center public interest considerations, informing law, policy, regulation, and practice.
- AI governance is guided by laws, regulations, standards, and practices that safeguard the public interest, shaped through strategic and coordinated contributions from civil society.
- Expert networks focused on responsible AI adoption in consequential sectors, such as healthcare and education, are expanded and strengthened, advancing AI policies and practices that serve the public interest.
- Increased research and journalism on AI’s impacts both educate and inform the public and help guide policy, regulation, and practice.
Funding Priorities
Whenever possible, Technology in the Public Interest provides general operating support, or the closest equivalent, to organizations that typically contribute to more than one of our grantmaking approaches. We also maintain funding to explore emerging areas of importance at the intersection of technology and society. This includes advancing public interest considerations at the intersection of AI, national security, and geopolitics, and expanding linguistic diversity in AI so the benefits of AI can be widely shared.
In addition, Technology in the Public Interest collaborates with other MacArthur programs and philanthropic partners to align resources and efforts.
While we are not currently accepting unsolicited proposals, we welcome hearing about new ideas and perspectives.
Evaluation for Learning
Evaluation of our work is a critical tool for informing our decision making, leading to better results and more effective stewardship of resources. The Foundation develops customized evaluation designs for each of our programs based on the context, problem, opportunity, and approach to the work. Evaluation is not a one-time event. It is an ongoing process of collecting feedback and using that information to support our grantees and adjust our strategy.
Findings and analyses from evaluation activities are posted publicly as they become available.