Shifting power in evaluation
- Kamna Muralidharan
- Jul 1
- 7 min read
In this powerful keynote-turned-blog, Kamna Muralidharan, keynote speaker at ChEW's Festival of Impact and Evaluation, invites us to rethink what evaluation is, and more importantly, what it could be. She explores how evaluation can shift from a tool of compliance to a practice of collective learning and trust...
Good morning, everyone. It’s an honour to speak with you today about a question that’s both technical and profoundly political:
“How can we make the most of evaluation?”
We like to think of evaluation as neutral and objective, but in practice it often mirrors the inequalities in our society. Who designs the evaluations, who asks the questions, who gets heard – these are shaped by structural biases. Too often, our processes inadvertently favour the comfortable and the privileged, while sidelining others.
We have built an evaluation culture that privileges certain types of knowledge over others – where sense-making is too often shaped by norms that are ableist, classist, sexist, and racist.
That which doesn’t fit is ‘othered’ – not just ignored but actively erased. It shows up in subtle ways: in the language we use, the data we consider valid, and the norms of who is considered an “expert.”
And as evaluators, funders, commissioners, we have a responsibility to reframe how we work. The first step is acknowledging that what we call rigorous might actually be riddled with blind spots. It’s time to own up to the fact that the way we’ve always done evaluation is not serving everyone equally – and it never really has.
Because when we talk about “evaluating change,” we often mean “measuring conformity.” Measuring how well someone else's lived experience fits a worldview built without them in mind.
That’s not just ineffective. It’s unjust.
A Flawed Choice: Rigour vs Trust
Too often we’re told there’s a trade-off: You can either do robust evaluation, or you can embrace trust-based learning. You can have credible evidence, or you can build equitable relationships. That’s a false dichotomy.
One of the biggest challenges we face is the dominant framework of “impact” and “evidence”. For decades, we have privileged quantitative, positivist data – the numbers, the statistical significance, the randomized controlled trials – as the highest form of truth.
Don’t get me wrong, numbers have their place. But somewhere along the line, we started behaving as if what can’t be counted doesn’t count. We’ve developed what someone much smarter than me called the “plague of metric monomania” – an obsession with easily measurable results. We count how many people showed up, how many outcomes ticked the box, and we call that success. We embrace this approach for the perceived safety and certainty that numbers provide. But these rigid metrics are often extraction devices: they strip away context, they ignore depth, and they miss nuance. In focusing on what’s easy to measure, we risk losing sight of what’s truly meaningful.
If traditional evaluation is one-way and retrospective (like an audit or a post-mortem), what would a better approach look like? I believe it looks like trust-based learning. This means shifting from evaluation as policing or proving, to evaluation as learning together. It means replacing extraction with dialogue.
Trust-based learning flips the script. Instead of starting from suspicion, we start from trust – trust that communities and grantees are doing their best and have valuable knowledge to share. The role of evaluation then shifts to creating a space for reflection and sense-making.
What does this look like in practice? It looks like collaborative framing of success from the very beginning. Traditionally, success metrics are often decided in a boardroom, far away from the people on the ground. Under a trust-based model, we would co-create those definitions. Imagine asking a group of service users or community members at the start of a project, “What would success look like for you? What changes would actually improve your life or community?” – and then letting those answers shape the evaluation criteria.
That’s powerful. It means success is not measured as progress towards ‘conformity’; rather, it is something meaningful to the people intended to benefit. And it makes space for complexity and mess, because no social change is linear and neat.
It also looks like joint sense-making. Instead of the evaluator disappearing to write the report in isolation, they convene conversations: What are we seeing in the data? Does this resonate with your experience? Why might we be observing these outcomes? – asked collaboratively with those who were part of the program. It’s sitting around the table and interpreting results together. When you do this, something magical happens: evaluation stops feeling like a verdict and starts feeling like a shared journey. People see their input reflected; they feel heard; trust builds.
Finally, trust-based learning is forward-looking. Current evaluation practice tends to ask retrospective questions: “What happened?” But we rarely pause to ask: “So what? Now what?” That shift – from documentation to direction, from measurement to meaning – is where the future lies.
Rather than only asking “Did it work?” at the end, we’re constantly asking, “What are we learning and how will we apply it moving forward?” The goal isn’t to pass a test, it’s to improve together. This approach treats evaluation as part of the ongoing work, not a post-project afterthought. And it treats the relationship between funder and community as a partnership of equals in learning. When we operate with trust, we allow for vulnerability – people can admit what didn’t go well without fear, because the point is to solve problems, not to assign blame.
Can you imagine how much more candid and useful our evaluations would be if people weren’t afraid of funding cuts for being honest?
IVAR’s Evaluation Roundtable reminded us this year that trust-based learning and robust evaluation aren’t opposites—they are partners in service of change. But that requires a shift in what we’re even trying to do with evaluation in the first place.
Let me give you a real-world example of what trust-based learning looks like. Over the last few years, in a previous role working to address children’s mental health inequities, we realised the programme couldn’t achieve its goals if it didn’t include the voices of parents living in the communities it was trying to serve. So we co-designed a ‘Parent Panel’ – not a token group for feedback, but a group of ten parents who actively helped shape decisions and learning.
We paid the parents for their time and offered childcare and translation—because equity starts with practicalities. We discussed power explicitly. We ate meals together. We built relationships before building reports. And we didn’t just ask parents to respond—we asked them to decide. The result was better decisions – the trust built led to richer insights into their lived realities and more impactful choices. Our take-away: the real harm isn’t in trying and failing to share power. The harm is in doing nothing differently at all.
That, to me, is what trust-based evaluation should feel like: mutual, messy, relational—and deeply valuable.
This example brings the abstract into the tangible: it’s about who decides, who defines success, and how we share power. It also models practices like co-design, fair pay, accessibility, and relational trust as part of the evaluation process.
Investing in Infrastructure That Reflects Our Values
So, how do we truly shift power in evaluation and make it more meaningful? Three suggestions:
First, put money where it matters: directly into evaluation infrastructure that empowers. This means funding the people, tools, and structures for truly equitable assessment. At IVAR, our annual Jane Hatfield Award exemplifies this. This £5,500 grant, in partnership with the Ubele Initiative, backs young Black and minoritized researchers to lead studies on community and social justice. Why is this crucial? Because it cultivates the next generation of evaluators and researchers from communities who have too often been the subjects of research rather than the leaders of it. While a modest sum, this investment profoundly strengthens the knowledge infrastructure, ultimately building a richer, more pluralistic evaluative field.
Second: earmark funds for co-evaluation processes. Traditional evaluations often budget for an external consultant to come in. What if part of that budget was instead used to pay community members or programme participants as co-evaluators? We saw this with the Parent Panel – we budgeted to pay parents for their time and expertise. That is not only fair (people should be paid for their labour); it also sends a message: your knowledge is valued just like any consultant’s.
And thirdly, powerful evaluation approaches already exist in the Global South – we must learn from them. Community Scorecards are a prime example: a participatory, community-led monitoring and evaluation tool that enables citizens to directly assess the quality of public services (e.g. health centres, schools, public transport). By informing communities of their entitlements, then enabling them to score service performance and engage providers in dialogue, this approach empowers citizens to hold institutions to account in a very local, immediate way.
Internal Culture: Where Does Power Lie?
Yet, even with new tools, new frameworks, and new investments, nothing will change if the culture within our organizations doesn’t change.
Three Provocations to Leave You With
So, in closing, I want to leave you with three provocations – three questions to carry back to your work, your teams, your boards. These are about purpose, power, and knowledge, the themes we’ve explored:
Purpose: What is the real purpose of our evaluation efforts? Are we doing it to prove something, or to improve something?
Power: Who has the power in our evaluation processes, and who should have it? Take a hard look at who frames the questions and who gets to interpret the answers. If it’s always folks like us (the professionals, the funders, the external experts), ask yourself why. Power won’t rebalance itself – we have to be intentional in handing over the mic.
Knowledge: Whose knowledge do we value and validate? When we say “evidence,” what images pop up – is it a graph, a report, an academic study? Can we expand that picture to include a story, a community meeting, a lived reality shared in someone’s own words? Are we listening as hard to the whisper of the grassroots as we do to the loudspeaker of data? The provocation here is to broaden our definition of expertise. How will we change our systems to elevate those other forms of knowledge?
These are not easy questions. But I believe they are liberating ones. Because if we ask them earnestly, we open the door to a richer, more meaningful practice of evaluation – one that is about dialogue, not just data. We can move from a culture of scrutiny and scepticism to a culture of trust and learning.
So I leave you with a message of possibility: that evaluation, done differently, can actually empower and unite. It can be a force that shifts power to those whose voices haven’t been heard.
Thank you for listening, and for caring about doing this work better. The invitation today is to turn data into dialogue, and in doing so, turn evaluation into a tool for equity and transformation. The power is in our hands – let’s share it wisely.
Thank you.
Kamna Muralidharan, Director of Grants and Development, Buttle UK.