Chantell Johnson, Managing Director of Evaluation, discusses how we evaluate the big picture of our strategies, rather than individual grants or grantees.


 

While each area of work may be at a different stage of development—from newly articulating programmatic goals to grantmaking that has taken place for several years—there are no exceptions to our practice of evaluating all programs. But what does that mean in practice?

When we talk about informed decision making, we are most often talking about decision making by the Board on Foundation priorities or by team leaders about the strategic direction of programs. We need information of varying rigor to inform what we know, to challenge what we think, and to hold us accountable as good stewards of the Foundation's resources. Moreover, we aim to place the evaluative spotlight on our strategic choices, recognizing that there is much we need to learn along the way and believing that evaluation can be a mechanism for surfacing information that broadens our perspectives.

We care deeply about grantees' success and closely follow and support their work as it unfolds through grantee reports, phone calls, and sometimes site visits. However, we do not place individual grantees or grants at the center of our evaluative activities. That is, when we are evaluating our programs, we do not evaluate the work of an individual grantee or grant. We trust that they are doing what they said they would do, that they can do it well, and that it will yield results.

Rather, we focus our evaluation resources on big-picture questions that span portfolios of grantees and grants, guide our strategic direction, and help us manage our resources at the highest levels.

 

We designed our approach to evaluation to focus first and foremost on the range of questions that program team leaders have as they make choices about how best to deploy the Foundation's resources. Questions like:

  • Landscape: What is the landscape for my program area? How is it changing?
  • Feedback: What are we doing well? Where can we improve?
  • Outcomes: What progress is evident? With what consequences?
  • Impact: Are we contributing to meaningful impact? If so, how?

Our focus in our evaluative work is on learning about the landscape, learning how we can do our work better, and learning from across portfolios of grants over time. This evaluation practice ensures we are looking at the forest, rather than the trees, and supports the dynamism our strategies need to succeed.

 

