Why is MacArthur running a competition for a single $100 million grant?
We set out to do something bold and different. Most foundation grants are closer to $100,000 than $100 million. By funding 100&Change at a level far above what is typical in philanthropy, we sought to address problems and support solutions that are radically different in scale, scope, and complexity.
We believe $100 million can enable real progress toward a meaningful and lasting solution to a critical problem of our time.
Increasingly, MacArthur is focused on “big bet” initiatives that strive for transformative change in areas of profound concern, such as climate change and criminal justice reform. But we do not know it all, and there are other significant issues. 100&Change is a way to encourage and support ideas from any field.
What makes 100&Change different from other philanthropic competitions?
What is unique about 100&Change is its focus on problems and their solutions, and the requirement that proposals address both. It is also unique in that no single field or problem area was designated, unlike some prizes and challenges, and proposals from all sectors were encouraged. The openness and transparency of the application process is also distinctive.
Applicants know exactly what they are being scored on, and every applicant will receive meaningful feedback on their proposal from the judges. The process will provide vital feedback, as well as useful public exposure, to applicants even if they do not ultimately receive the grant.
How did MacArthur choose judges?
We considered three different models.
The first was a crowdsourcing model. We liked the idea of people proposing which problems to solve and having a crowd vote whether a proposal is meaningful or compelling. But we did not want 100&Change to turn into a popularity contest.
The second approach was the specialists’ panel model, where we would define a field of work and then identify experts to evaluate applications. There was a sense, however, that experts in a given field tend to struggle with new ideas that come from outside their discipline.
We realized that crowds provide a way to take more risks and innovate, while the wisdom of experts remains important. So we decided to create a crowd of wise experts, which we referred to as our “panel of wise heads.”
We ended up with an evaluation panel of 413 judges: thinkers, visionaries, and experts in fields including education, public health, impact investing, technology, the sciences, the arts, and human rights.
What criteria did the judges consider when evaluating proposals?
Rather than having our judges review submissions based on their field of expertise, we randomly assigned proposals and asked them to determine whether projects were meaningful, verifiable, feasible, and durable based on their broad knowledge. Each application was judged by a panel of five experts.
The first, meaningful, reflects the goal of the competition: tackle a significant problem that really matters.
The second was verifiable. We wanted to know: will the solution work? We wanted to mitigate the risk of picking a proposal that was completely untested or untried. We perceived a gap in the philanthropic field, a need for funding to take tested ideas to scale. Having evidence that a proposal had worked at least once, somewhere, on some scale, was important to us.
The third was feasible. When it comes to feasibility, the kinds of questions we wanted judges to consider were: Does the team have the right expertise, capacity, and skills to deliver the proposed solution? Do the budget and project plan line up with realistic costs and tasks?
The last criterion, durable, is the one that sets 100&Change apart. Because we were focused on solving a problem, we did not want the solution to be temporary or transitory. We wanted whatever we chose to have a long-term impact.
How did the judges score proposals?
The judges scored each criterion on a 1–5 scale. We did not want to disadvantage a proposal assigned to a judge who tended to give low scores, or to tip the scale in favor of a proposal whose judge tended to score high. Judges’ scores were statistically normalized to ensure that, no matter which judges were assigned to an applicant, each proposal received equal consideration.
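MacArthur has not published the exact statistical method it used, but a common way to normalize rater scores is per-judge z-scoring: rescaling each judge’s raw 1–5 scores so that every judge’s distribution has the same mean and spread. The sketch below is a hypothetical illustration of that general technique, not MacArthur’s actual procedure; the function name and data are invented for the example.

```python
# Hypothetical sketch of per-judge score normalization (z-scoring).
# Each judge's raw scores are rescaled to mean 0 and standard
# deviation 1, so a "harsh" judge and a "lenient" judge can be
# compared on the same footing.
from statistics import mean, stdev


def normalize_judge_scores(raw_scores):
    """raw_scores: dict mapping judge name -> list of raw 1-5 scores.

    Returns a dict mapping judge name -> list of z-scored values.
    """
    normalized = {}
    for judge, scores in raw_scores.items():
        mu = mean(scores)
        # Guard against a judge with one score or identical scores.
        sigma = stdev(scores) if len(scores) > 1 else 1.0
        if sigma == 0:
            sigma = 1.0
        normalized[judge] = [(s - mu) / sigma for s in scores]
    return normalized


# A harsh judge (judge_a) and a lenient judge (judge_b): after
# normalization, each judge's scores are centered on zero.
raw = {"judge_a": [1, 2, 2, 3], "judge_b": [4, 5, 5, 3]}
z = normalize_judge_scores(raw)
```

After normalization, a proposal’s standing depends on how it ranked within each judge’s own distribution rather than on the judge’s absolute generosity, which is the fairness property the passage above describes.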
How were semi-finalists chosen?
The proposals were evaluated and scored by an expert panel of judges and considered by MacArthur’s Board of Directors, which chose the semi-finalists.
What happens to all the good proposals that MacArthur does not fund?
Applicants will learn how their proposal was evaluated and will receive comments and feedback from our expert panel of judges. That feedback might help strengthen proposals for future funding requests or even the next cycle of 100&Change. A public, searchable database of all the proposals will also be posted online later this year. That exposure could lead to other funding opportunities. MacArthur hopes this process will also engage the public and the philanthropic sector in a discussion about the best ways to bring about meaningful and measurable change to some of society’s biggest problems.