
Evaluation and learning: A clash of cultures or logical companions?

Jo Scott (Ipsos UK) and Diane Redfern-Tofts (The Health Foundation)


For the many charities seeking to support and develop new ideas, solutions, and ways of working to address long-standing and complex social challenges, there is a need to create a “learning-positive” culture. This creates a space where teams can be agile, adapt to new information, and evolve their strategies. But what happens when you introduce “evaluation”, which is about making an assessment and judgement? Can a culture that embraces learning also be evaluation-positive? Or are these two ideas fundamentally at odds?


This was the central question we explored at this year’s ChEW Festival of Impact and Evaluation.  


As the table below highlights, there are features that could be difficult to manage when evaluation and learning cultures collide.

Table 1: Features of ‘evaluation’ and ‘learning’ that are potentially opposing.

Evaluation

  • Seeks information on performance, using set methods and often valuing empirical evidence to make an assessment of success.

  • Produces detailed evaluation reports, often at the end of projects (arguably closer to when outcomes are likely to be realised), which can act as a barrier to providing timely and actionable insights to project teams.

  • By making an assessment of success, can place accountability for strategic choices made in programme design.

Learning

  • Prioritises creating safe spaces for open and honest reflections, being flexible in how learning is generated, and putting ‘unweighted’ value on subjective experiences.

  • Focuses on opportunities to make iterative changes, translating findings into actionable improvements.

  • Seeks to cultivate a culture of innovation and experimentation, providing insights to enable people to respond to strategic shifts and personnel changes.


It’s a challenge we’ve been navigating in our evaluation of the Health Foundation’s Tech for Better Care programme, an innovation funding programme exploring how technology can foster better, more proactive, and relationship-focused care at home and in the community. It quickly became clear that this was unlike any programme we had delivered before at the Foundation because it was so agile in its approach.

We needed an evaluation that was as flexible and adaptive as the programme itself. This led us to developmental evaluation, an approach designed to support innovation in real time by providing timely feedback that can be used to adapt and improve. And that was the right choice.


As the programme has progressed, learning objectives have emerged that weren’t priorities for exploration at the outset, and we’ve had to reconsider if and how our evaluation activities could be applied to support them.


The ‘false dichotomy’ between ‘robust evaluation’ and ‘trust-based learning’ 

We believe you can have both ‘robust evaluation’ and ‘trust-based learning’. Kamna Muralidharan’s keynote speech at the Festival powerfully addressed this ‘false dichotomy’ and this theme was revisited throughout the day. It certainly resonated with our experience. The key is to reframe evaluation not as a judgment, but as a structured way of learning.  

To do this, we’ve embedded two key features into our evaluation design: 

  1. Using Contribution Analysis to frame the developmental evaluation. This helped us to be transparent about our focus and manage expectations about the evaluation. We designed our approach to be highly collaborative, with opportunities for teams to: 

    1. Co-create metrics to assess progress.  

    2. Participate in collaborative analysis and sensemaking sessions to reflect on emerging findings. 

  2. Adapting our evaluation framework to be explicit about the learning questions we want to explore. This has allowed us to be more intentional about whether, when and how our evaluation activities can be used to generate insights relating to these questions.  


What have we learned about fostering a dual culture? 


So, how do you create a culture that is both evaluation-positive and learning-positive? We’ve learned a few key lessons:  

  • It’s all about relationships: The collaboration between the external and internal evaluation teams and programme teams is crucial. As Kamna Muralidharan so aptly put it, we “built relationships before building reports.”

  • Dedicated time and resources are non-negotiable: It’s a common challenge that programme plans are relatively fixed and programme management teams have little capacity to deviate. For an evaluation-positive and learning-positive culture to thrive, this needs to change. The Tech for Better Care programme team at the Health Foundation has prioritised time to actively participate in the evaluation process and act on the findings. This commitment has been essential to sustaining an approach that is both evaluation-positive and learning-positive.

  • The power of accessible language: The words we use to explain our methods and evaluative judgements matter. For instance, our contribution analysis uses process tracing to assess causal links. Instead of getting bogged down in technical jargon, we’ve found it helpful to use Collier’s metaphors (e.g. the ‘straw in the wind’ test, which offers suggestive but not decisive evidence for a claim) to explain our approach, making the concepts more intuitive and engaging for the programme team. As Fraser Battye argued in his keynote speech, we need to “propagate ways of thinking and infect decision makers so they are acting like evaluators.” Cutting the jargon is a critical step in this direction.


Logical companions 

Those features of evaluation and learning that appear opposing in our table above now seem mutually reinforcing. Perhaps that’s not surprising when we think about what evaluation is at its core: learning about whether, how and why change is happening. 

We know that getting the balance right between learning-positive and evaluation-positive cultures is an ongoing challenge for many involved in ChEW, and this is a great network in which to share our different approaches and learning.


Let us know how you navigate evaluation and learning, and the tactics that have worked for you. 
