Commissioning Evaluation for Innovation, Impact, and Continuous Learning
- Rupal Anand & Jessica Cox
From inspiration to intention
In 2024, we (GambleAware) attended ChEW’s first Impact and Evaluation festival as commissioners keen to deepen our understanding of how evaluations can be made more equitable and relevant. We left inspired by the festival’s focus on equity and the brilliant work shared by colleagues across the sector.
That inspiration reaffirmed our journey towards making commissioning, evaluation, and learning more inclusive, accessible, and ethical. Last year, we returned not just as attendees, but also as presenters to share how our practice has evolved, and to keep reflecting on where we need to go next.
At the heart of our work is our commitment to equity, not just in the outcomes we seek, but also in how we get there. And just as our work has grown, so have our questions, challenges, and learning.
Embedding equity at every level
For us, this journey has meant embedding equity from the ground up, at every level of the process. It started with working with GambleAware’s Lived Experience Council at a strategic level, but has since expanded to other mechanisms, such as lived experience representation on funding decision panels and direct participation in scoring funding applications.
Contributions from people with lived experience, through co-designing and sense-checking learning frameworks and emerging findings, have been invaluable in grounding our evaluations in the needs of the communities we serve.
Learning through grant-making
This shift also came through one of our grant-making programmes, the Community Resilience Fund (CRF), which aimed to raise awareness and alleviate the impact of gambling harm among communities in the context of the cost-of-living crisis.
The fund wasn’t just about service delivery; it gave us the space to explore new ways of learning and evaluating. We used it as a launchpad for the Improving Outcomes Fund (IOF) – which focuses on reducing inequality in gambling harm amongst women and minority communities. Here, we’re now asking:
How do we move from raising awareness of gambling-related harms to co-developing equitable support systems that actually work for people who are disproportionately affected by gambling harm?
The IOF wasn’t designed in isolation. It was grounded in evidence, including key research on gambling harms affecting women and racially minoritised communities, and shaped through an outcomes workshop that brought together people with lived experience, community groups, commissioners, and researchers.
When we issued the tender for an external evaluation and learning partner, we were clear about what mattered: real experience working with women and minority communities, and evaluation approaches where equity wasn’t a buzzword but a foundational principle.
Importantly, the scoring panel included a person with lived experience of gambling harm. Their involvement helped us stay grounded in the realities we wanted the evaluation to centre.
Trust, relationships, and real-time learning
One of the most powerful enablers of this shift has been our move toward trust-based grant-making. Since commissioning the evaluation, we’ve been building more collaborative, trust-based relationships with grantees and partners. That means regular touchpoints, not just for monitoring but for joint reflection to surface what’s working, what’s shifting, and what still needs to be figured out.
We’ve created space for honest conversations and learning to flow both ways. We’re also getting more comfortable letting go of predefined metrics when they no longer serve the work. Instead of asking only whether outcomes are being met, we’re asking whether they’re still the right outcomes to measure.
This means that learning has been an ongoing dialogue, not a static report at the end. We’re adapting in real time, using feedback to course-correct and evolve the programme.
Wrestling with power and positionality and competing expectations
All of this has also raised questions about power. Many of the systems we work within, including evaluation, are still shaped by dominant norms around what counts as credible evidence, who gets to speak on behalf of communities, and who holds the pen when writing up findings. Even when minoritised voices are invited in, they’re often expected to conform to norms that weren’t built with them in mind.
We’ve been reflecting on how we as funders and commissioners can challenge this, rather than reinforce it, and what it looks like to share power meaningfully at each stage of the evaluation process.
A key challenge for us continues to be managing the needs and expectations of people with lived experience, community organisations, treatment providers, researchers, funders, and policymakers. Each has a valid perspective, but also different timelines, priorities and definitions of success. Balancing these expectations can be difficult, and part of our work has been to sit with that discomfort rather than rush to resolve it. We’re learning that navigating this process is about holding space for difference and staying rooted in the principles we’re trying to uphold. That means going beyond inviting participation to actively shifting who defines success and who leads learning.
Reflections from the day
These are the kinds of challenges and questions we brought to the festival last year, and Kamna’s keynote helped crystallise some of them. She named with clarity how evaluation itself can mirror broader social inequalities, often shaped by norms that are, as she put it, ‘ableist, classist, sexist, and racist’. Her provocation around who holds power in evaluation, and who should, has stayed with us. It reminded us that even well-intentioned approaches can reproduce harm if we don’t interrogate who sets the agenda and how learning is interpreted and acted on.
We also found resonance in Girlguiding’s work on participatory theory of change. Their creative, visual methods stood out not just for their accessibility but also for how they facilitated genuine ownership. It prompted many of us to reflect on how our own work still relies on text-heavy, linear frameworks, and how often jargon and formality, whether in language, format, or process, unintentionally exclude the people we most want to hear from. We continue to think about how we can shift towards tools and processes that feel intuitive and empowering for communities, and not just for funders.
The keynote on wellbeing sparked further reflection. Its challenge to the sector to centre wellbeing felt especially relevant, and made us question the point of ticking KPIs if people are left feeling more isolated, anxious, or disempowered. It affirmed our growing sense that equitable evaluation isn’t just about who we listen to, but also about what we value. If we want to support long-term change, then we need to recognise the emotional forms of impact.
The festival also challenged us to reflect on the role of funders not just in commissioning evaluation, but in amplifying and translating findings over time. We have a responsibility to make sense of emerging evidence and to ensure it informs future policy, funding and strategy decisions. We have to become active amplifiers of evidence, shaping policies in response to what we learn. We also heard a clear call to rethink the length and format of reporting. Sometimes, the most meaningful findings don’t need to fill 120 pages. They can emerge through well-timed conversations, visual tools, or short memos.
Takeaways: What we’re still learning (and unlearning)
So, what are the key takeaways that we would like you to think about?
1. Normalise and embed lived experience into processes, not just projects.
2. Utilise the assets and insights already present in communities. Use them in evaluation, service design, communications, and governance.
3. Time, funding and trust are essential. We can’t ask communities to lead without resourcing them properly.
4. Foster and nurture ecosystems of change and create safe spaces for learning, feedback and iteration.
5. Integrate the work across systems. Gambling harm, like many complex issues, doesn’t exist in isolation - it intersects with mental health, domestic abuse, poverty and homelessness, migration, and much more. Evaluation must reflect this complexity.
About the authors:
Rupal Anand is the Senior Evaluation and Evidence Officer at GambleAware
Jessica Cox is the Innovation and Development Manager at GambleAware


