At MacArthur, we use evaluation to help maximize the results of our grantmaking. Evaluations help us make informed decisions, improve our work, and lead to more effective use of our resources. For 100&Change, we are evaluating the application and selection process to understand what worked well and what did not, so we can improve the process for the next cycle of the competition. We have engaged an outside evaluator to conduct interviews with organizations that registered their interest in the competition, applicants who submitted proposals, semi-finalists, and finalists to learn about their experience.
We want to learn how individuals heard about the competition and the perceived benefits and challenges of participating in 100&Change. We also want to better understand the amount of effort it took for organizations to apply.
As we are evaluating our efforts for 100&Change, it is also our expectation that the winner of the competition will document what it learns as the project evolves, accounting for the context and environment in which it works. In addition, the winner is expected to evaluate progress toward the implementation of the proposed solution and its intended and unintended impacts.
As part of the competition, applicants were asked to briefly describe how they planned to evaluate their projects. Applicants who made it to the semi-finalist round were then asked to build on their initial submissions by providing more detail. Based on our review of proposals, semi-finalists thoughtfully described how they would continuously monitor progress, systematically collect and analyze data, and receive feedback from external advisers. In addition, teams planned to build on existing knowledge and resources to adapt and improve their projects over time.
Yet, as we reflect on the process, we are not completely satisfied with how our evaluation questions were incorporated in the application. We thought having a stand-alone section would eliminate redundancy; however, we now believe it created a disconnect between the proposed solution and strong evaluation and learning. This was a missed opportunity to have semi-finalists demonstrate how evaluation is integrated into their projects. For example, semi-finalists were asked to list the tasks needed to implement their project, including their work plan and timeline. This simple exercise demonstrated, to them and to us, the need to tie evaluation closely to project implementation.
In the next iteration of the competition, we will consider incorporating evaluation questions into the project planning section of the application, where we ask what data are critical to demonstrating the success of proposed solutions and how applicants will collect and use those data. Our hope is that this improvement will help responses read less like descriptions of a static, independent process and more like ones as dynamic and cohesive as the proposals that were submitted to make real, measurable progress toward solving a critical problem of our time.