Better data, stronger buy-in: The case for bottom-up evaluation
- Nina Vafea & Corina Fung
“I don’t have all of the right information to plan the most effective evaluation.”
It’s a phrase familiar to almost every evaluator. But why do even specialists in programme evaluation, with deep knowledge of methods, tips, and tricks, sometimes struggle to design successful evaluations?
The root cause may lie in our hesitation to design evaluations from the bottom up.
As an example, let's take evaluating educational programmes:
- Perhaps we simply assumed our partners would have sufficient capacity to distribute surveys, and now we haven’t collected enough data.
- Maybe we didn’t involve the individuals who directly deliver the programme in the creation of the Theory of Change, so the insights are not truly relevant to their work.
- Or it could be that we didn’t involve the direct beneficiaries in the evaluation, leading to a lack of awareness or prioritisation on their part.
At ImpactEd Evaluation, our experience shows that bringing these voices into the conversation early on improves both the quality of the evaluation and the culture around it.
But what exactly is a bottom-up evaluation?
Bottom-up evaluation is built on input from people “on the ground” – frontline workers, teachers, and users – before (or alongside) involving managers, executives, or policymakers. It’s about designing an evaluation around the invaluable experience and insight of those closest to the real action. This resonates with the principles of participatory evaluation, a well-established methodology that champions inclusive stakeholder involvement in evaluation design and implementation.
Case Study 1 – Our partnership with SAFE Taskforce – Southwark
The Southwark SAFE Taskforce was set up by the Department for Education (DfE) in 2021 to reduce youth violence in Southwark. Between 2021 and 2025, the Taskforce commissioned six intervention providers working across 13 secondary schools and focusing on mentoring, Cognitive Behavioural Therapy (CBT), and extra-curricular activities.
When we started working on their evaluation, we knew we would be collecting significant data: pupil attendance at school, exclusions and behaviour points, pupil baseline and endline surveys, interview and focus group data, as well as implementation data. One of our key priorities was therefore minimising the time teachers spent on the project.
In the first year, this meant automating much of the data collection through schools’ management information systems (MIS). However, when we presented the first year’s report, it became clear that exclusions were recorded differently across schools. Teachers focused on comparing the report’s findings with their own experiences, and where they noticed inconsistencies, it undermined their trust in the overall findings and drew attention away from the wider evidence base.
This experience became a valuable turning point. In the following year, we shifted our approach to actively involve teachers in the quality assurance process. While this required more of their time, it significantly improved confidence in the data, particularly in how exclusions were represented. The data aligned more closely with teachers’ expectations and experiences, and they felt greater ownership over the process.
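To make this concrete, a first-pass consistency check can surface the figures worth raising with teachers before a report is finalised. The sketch below assumes a pandas workflow; the column names, data, and threshold are illustrative assumptions, not ImpactEd’s actual pipeline:

```python
import pandas as pd

# Hypothetical MIS export: one row per school per term, with an
# exclusions count as each school recorded it. Column names and
# figures are illustrative only.
mis_data = pd.DataFrame({
    "school": ["A", "A", "B", "B", "C", "C"],
    "term": ["Autumn", "Spring"] * 3,
    "exclusions": [4, 6, 0, 0, 19, 23],
})

# Aggregate per school so the figures can be shared with teachers for review.
per_school = mis_data.groupby("school")["exclusions"].sum()

# Flag schools whose totals deviate sharply from the cohort median --
# not as errors, but as prompts for a conversation with the staff who
# know how their school actually records exclusions.
median = per_school.median()
flagged = per_school[(per_school - median).abs() > 2 * median]

for school, total in flagged.items():
    print(f"School {school}: {total} exclusions (cohort median {median:.0f}) - query with school staff")
```

The threshold itself matters less than what happens to each flag: it becomes a conversation with school staff rather than a silent correction.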
As a result, teachers valued the emphasis on data accuracy and felt more directly involved in the evaluation. They began to see it as a genuine collaboration rather than a favour to us, and became increasingly interested in the findings and how their efforts contributed. This experience underscored the importance of not only involving but actively engaging those delivering the intervention – throughout design, analysis, and interpretation – to ensure both report accuracy and meaningful buy-in.
Case Study 2 – SHiFTing Towards a Bottom-Up Approach
SHiFT is an organisation dedicated to systemic change: breaking the cycle for children and young people caught up in, or at risk of, crime. We worked with them to redefine their shortlisting process – the crucial step where young people are selected to receive SHiFT’s support. Getting this right is fundamental to their mission.
Our goal was to standardise this shortlisting process across SHiFT’s regional practices. The idea was to create a consistent, reliable tool that all teams could use. However, we faced a challenge: how can we standardise something as nuanced as identifying young people who need support, without losing the invaluable judgement and insights that practitioners and professionals bring to the table? We didn’t want to create a rigid system that overlooked the very human element of their work.
This is where the power of a bottom-up approach truly shone. Instead of making assumptions about what would work, we made sure to include voices from the “field” – the very people working one-on-one with children and delivering the SHiFT programme. These are the individuals who would be using the tool we were designing, and their practical experience was indispensable.
SHiFT was instrumental in this. From the beginning, they agreed on the importance of involving the Lead Guides and Operational Leads in the conversation. The Director of Practice and Learning played a key role, acting as a link between our external evaluation team and the team delivering SHiFT in the local area.
Without these insights, we could have designed a “great” tool from a technical point of view, but it would likely have fallen short of being useful for the organisation. A tool designed in a silo would have stripped away the vital nuance needed for meaningful work with children and families – a nuance that is, in fact, a defining characteristic of SHiFT’s goal of systemic change. By building from the bottom up, we created a tool that enhances the shortlisting process and helps ensure the right children are selected, contributing to more effective and impactful work.
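As a loose illustration of that design principle, here is a minimal sketch of what such a record might look like in code, assuming a simple additive scoring model. The criteria, field names, and override mechanism are our illustrative assumptions, not SHiFT’s actual tool:

```python
from dataclasses import dataclass, field

@dataclass
class ShortlistingRecord:
    """One young person's entry in a hypothetical standardised shortlisting tool."""

    referral_id: str
    # Standardised criteria scored consistently across regional practices.
    criteria_scores: dict = field(default_factory=dict)
    # Structured space for the judgement that standardisation must not erase:
    # the practitioner's rationale is recorded alongside the scores, not replaced by them.
    practitioner_rationale: str = ""
    practitioner_override: bool = False  # lets the practitioner overrule the score

    def priority_score(self) -> int:
        """A simple additive score; the override keeps human judgement decisive."""
        return sum(self.criteria_scores.values())


record = ShortlistingRecord(
    referral_id="YP-001",
    criteria_scores={"school_engagement": 2, "known_to_services": 3},
    practitioner_rationale="Recent family disruption not captured by the criteria.",
    practitioner_override=True,
)
print(record.priority_score(), record.practitioner_override)  # 5 True
```

The point of the structure is that the practitioner’s rationale and override live inside the standardised record itself, so consistency across regional practices does not come at the cost of professional judgement.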
How can we include all the voices needed?
A bottom-up approach to evaluation can be powerful. It’s about ensuring our evaluations are not only technically sound, but also practical, relevant, and truly reflective of the on-the-ground reality. So, when you sit down to design an evaluation, ask yourself: Is everyone who should be at the table actually at the table? Do we have all the voices needed to make this evaluation possible?
Embracing this approach also means shifting our mindset. As Kamna Muralidharan highlighted in her keynote speech at the ChEW festival, “evaluation is learning together”. When we adopt this collaborative perspective, it builds trust and fosters genuine teamwork with everyone involved.
Building an evaluation with a bottom-up approach doesn't just lead to more effective outputs, as seen in the SHiFT case study; it also cultivates a stronger culture of learning and evaluation within an organisation, much like the example of SAFE Southwark.
Nina Vafea & Corina Fung. ImpactEd Group supports education and purpose-driven organisations to maximise their potential. We do this by helping our partners to be consistently impactful and operationally sustainable.