Critical Friends: Using External Partners for Evaluation

September 23, 2020 | Perspectives

Joanna Cohen, Former Senior Evaluation Officer
Elizabeth Oo, Senior Evaluation Officer

Joanna Cohen, Senior Evaluation Officer, and Elizabeth Oo, Evaluation Officer, share why and how we work with external partners to evaluate and learn from our work.



Seeking: Critical friend. Must love learning.

At its simplest, our approach to evaluation recognizes that diversity of thought, shared knowledge, and the inclusion of many voices are essential to making informed decisions about how to advance our mission and values. In practice, we enact that approach by commissioning an evaluation and learning partner (ELP) to support each of our program areas. Each ELP is a team of external professionals who serve as our “critical friend,” independent from the Foundation but invested in helping us implement our strategies effectively.

From the outset of a strategy, ELPs collect and synthesize information from a diversity of perspectives, using myriad methods and a range of sources, to help us challenge our assumptions; accumulate evidence to understand whether our resources are being deployed effectively; and strengthen relationships with grantees and other partners. While ELPs bring a fresh perspective, they must embody the same commitment to diversity, equity, and inclusion that we bring to our own work.

[Illustration: a person with a speech bubble, a pencil, and a measuring triangle]

Selecting an Evaluation and Learning Partner


When we select ELPs, we consider their expertise with multiple methods and data types. They must have cultural competency and a compulsion to learn, probe, and dive deep into the programmatic topic (an issue, a place, etc.), with sensitivity to the specific needs, conditions, and nuances of the subject or community. ELP teams should bring a diverse range of backgrounds, experiences, and perspectives and be willing to embrace feedback from the program team, the Office of Evaluation, grantees, and other stakeholders. ELPs must also be expert facilitators, capable of communicating nuanced findings, including those that are not positive, so programs can think critically about “what now?” based on the evidence.

When we began working with ELPs five years ago, we designed a selection process that we hope is fair, transparent, informed, and efficient. We collaborate with the program team, grantees, and other colleagues throughout the process to develop a comprehensive and diverse initial bidders list. We do not issue open calls, simply because developing a proposal to evaluate an ambitious, dynamic strategy takes significant resources, and not every evaluator, however skilled, will have the capacity (staffing, time, etc.).

A preliminary interview then helps us gauge whether the initial bidders have the skills and interest to move forward. After the list is narrowed, we typically share the names of candidates with the pool of bidders so that they can opt to join forces, complementing one another's skills and building more capacity. We then collaborate with program teams on asks and expectations; give bidders more than six weeks to develop proposals; and finish with final interviews with the program teams.


Working with Evaluation and Learning Partners


Once an ELP is selected, we expect the team to do its work with rigor, candor, and intellectual honesty. In doing so, we aim to build a comprehensive understanding of what is working, what is not, and what we should consider adjusting in our programmatic strategies. Over the course of a strategy, ELPs:

  • Document, refine, and test assumptions underpinning a program’s theory of change;
  • Track enabling and inhibiting factors within the landscape of a strategy;
  • Use a range of methods and draw on a variety of sources to measure progress, and then evaluate the extent to which our resources have contributed to the changes observed;
  • Solicit feedback on the implementation of a strategy from grantees, partners, and other stakeholders; and
  • Help us make sense of all the information they collect and analyze so that we can use it to inform critical management decisions.

As they do their work, ELPs prioritize collecting input and feedback from individuals and groups outside of our grantee circle. We want to ensure our learning partners are not just gathering information from people and organizations within our networks, who might have similar perceptions or might be incentivized to affirm positive stories about our work.

We acknowledge that ELPs, like all of us, carry implicit biases and operate within power dynamics that could unintentionally skew findings. So we work with ELPs to actively mitigate those limitations and increase the credibility of their findings. For instance, as part of completing an evaluation report on the On Nigeria Big Bet strategy, we held validation workshops with grantees to ensure that the findings reflected the lived experiences and perspectives of those closest to the work.

Our approach to working with ELPs is still evolving; like our strategies, it is a relationship that we continue to adapt based on what we learn along the way. We regularly solicit feedback from staff, grantees, and the ELPs about how this model works and have been encouraged by what has surfaced. We know there is more we can do, and we welcome ideas that can enhance the use of evaluation to drive positive, meaningful, and long-lasting change.
