Break the Rules, Drive the Change
- Dr Hannah Griffin-James
- Oct 28
- 8 min read
For years, evaluation has been guided by a set of unwritten rules - rules designed for a different era, for different goals. But what if these very rules are holding us back from achieving the deep, transformative impact we truly seek in charity work? What if it's time to redefine what effective evaluation truly means?
This isn't a call for anarchy or a dismissal of rigour. It's a somewhat provocative push to rethink our role as evaluators, challenge our assumptions, and embrace a more impactful way of working.
For new freelancers and those transitioning into charity evaluation, understanding these nuances is key to driving change with evaluation. The world's complexity demands equally adaptive evaluation methods.
Here are 3 “rules” of evaluation that could be doing more harm than good, and how we can break them to accelerate meaningful progress.
Rule 1: Lengthy Reports Equal Impact. (Spoiler: They Don't)
The Old Rule: The measure of a successful evaluation is a comprehensive, data-rich, and often door-stoppingly thick report.
The Reality: These tomes, filled with academic jargon and dense figures, often end up as unread shelf-ornaments. They fulfil a contractual obligation but rarely start the conversations that lead to genuine change.
Impact isn't measured in page count; it's measured in influence. And influence comes from buy-in, especially from leadership and key decision-makers.
It’s crucial to acknowledge that the tradition of comprehensive reports was born from a laudable desire for thoroughness and accountability, ensuring every detail was documented and transparent.
Break the Rule: Engage Decision-Makers Directly to Address Their Key Challenges.
Instead of pouring all our energy into a polished document, let's focus on creating moments of connection and conversation. Transform the post-evaluation debrief from a formal presentation into a live, interactive showcase of key findings.
This is where the magic happens - where questions can be asked, assumptions challenged, and immediate next steps discussed.
Struggling to get them to commit? Make it clear why their presence is crucial. Highlight the credibility they bring to the space and the powerful signal their attendance sends to the rest of the organisation.
Give them a meaningful role, like co-presenting a key finding or hosting a strategy review. If you need to, send them a compelling two-minute video ahead of time showing what the programme is and what it has achieved to pique their interest.
When they attend the event, give them a concise one-pager - sent via email or handed to them in print. Don't just rehash the main headlines. Share the powerful, intriguing nuggets that hint at a deeper story, the "there's something going on here" findings. This clues them in, makes them feel like insiders, and empowers them to champion your findings and magnify their influence.
Pro Tip: Flip the Message for Different Audiences.
The showcase is for decision-makers, but what about the service users themselves? Don't just disseminate findings; turn them into a valuable resource. If a programme supports mental health, don't lead with a headline like "Rates of depression were reduced by 30%."
Instead, flip the message. Create a resource that starts with "Here are 5 ways to access support when you're feeling low," and then use the evidence to show why these methods are effective. It reframes the evaluation from a judgment into a tool for continued support, empowering those who directly benefit from the programme.
Real-World Application:
Recently I worked with a UK charity to understand how faith-based groups in their area supported their local community. Instead of a lengthy report, we opted for a concise, interactive session.
The majority of the time was dedicated to a Q&A, allowing the charity staff to directly ask questions and receive immediate, evidence-based feedback. This direct engagement led to immediate changes and inspired additional focused research projects.
Similarly, last week I delivered an interim report as a live, interactive session to an NGO whose mental health programme I'm evaluating. They were grappling with specific programme elements they felt weren't working.
They were already exploring changes, so rather than waiting until the evaluation's end (a year away!), this early, direct feedback allowed them to take immediate, evidence-based action. What's the point of waiting to learn a service isn't working just because the evaluation timeline dictates a final report much later? Real change means timely intervention.
Rule 2: Independence at All Costs.
The Old Rule: To ensure objectivity and validity, the evaluator must remain a detached and independent observer.
The History: This rule is rooted in a positivist tradition that views the evaluator as a neutral scientist. While the intent to minimise bias is laudable, a rigid adherence to this principle can limit our potential for impact. It can turn evaluation into an extractive process, taking information from a community without giving anything back.
The emphasis on evaluator independence was rightly established to safeguard objectivity, minimise bias, and uphold scientific rigour.
Break the Rule: Be a Catalyst, Not Just an Observer.
We can maintain our analytical rigour while actively contributing to the change we are documenting. Evaluation can and should be a force for good, not just a record of it. Your role, especially as a freelancer, can extend far beyond delivering a report.
Firstly, consider how we can turn evaluation tools into learning moments. Surveys and interviews don't have to be one-way streets. Reflective diaries, for instance, can offer service users a structured space for self-reflection while simultaneously collecting rich, narrative data. This approach naturally shifts the dynamic from "data collection" to "co-learning."
Beyond the tools themselves, we have the opportunity to share our social capital. Evaluators frequently find themselves in rooms that service users typically cannot access. Rather than simply holding a showcase for your findings, transform it into a direct opportunity for those you serve.
If service users can build a direct relationship with a CEO or funder in that very room, it moves beyond evaluation merely driving impact; it becomes about extending crucial social capital to those who need it most.
Finally, a truly catalytic approach involves actively building a talent pipeline. This means offering paid evaluation work experience directly to service users. Such initiatives cultivate a pool of evaluators with invaluable lived experience, whose insights will inherently be sharper and whose recommendations more relevant.
This strategy not only enhances individual capabilities but also strengthens the entire sector by genuinely diversifying voices and perspectives within the evaluation community.
By breaking this rule, we extend our impact to enhance their impact. We become not just evaluators of change, but agents of it.
Real-World Application:
I facilitated a participatory evaluation session with community leaders for a mental health programme. When they shared the challenges of supporting individuals with dementia in their day jobs, I drew on my background as a psychology lecturer and shared recent research on emotion and dementia.
This wasn't about me 'collecting data'; it was about moving from extraction to sharing and collaborating. That simple act transformed the atmosphere from an extractive process into a collaborative learning journey.
As I wrapped up an interview recently, I asked my standard, 'Is there anything else you think I should be aware of, or anything you want to share?' The participant then unexpectedly asked, 'You've now heard all about my organisation. What can you see in terms of where we could look for funding? What opportunities are we missing?'
This took me by surprise initially, but it highlighted a crucial point: if welcomed, an evaluator can be an amazing resource for charities, beyond just assessing outcomes. We become an additional 'brain' in the room, helping organisations maximise their potential.
Rule 3: Stick to the Null Hypothesis.
The Old Rule: In quantitative evaluation, we must start from the assumption that the programme has no effect until we can prove otherwise.
The Reality: While the null hypothesis is a cornerstone of the scientific method, clinging to it too rigidly in social programme evaluation can lead us to miss what truly matters. It can narrow our focus to a limited set of predetermined outcomes, potentially overlooking unexpected but significant changes.
Relying on the null hypothesis in quantitative evaluation stems from a core scientific principle designed to prevent false positives and ensure statistical integrity in our conclusions.
Break the Rule: Mock Up Potential Results and Validate Their Value with Stakeholders.
This may sound counterintuitive to the traditional scientific method, but it is a powerful way to ensure you are measuring what matters and framing it for maximum impact. Before finalising your data collection tools, learn about your clients' strategic goals. What funding are they targeting next? What evidence will that funder adore? What is the buzzword or theme of the moment in their sector? Who sits on that funder's board?
Use this knowledge to create visual mock-ups of potential findings, or even just headline statements of statistical results. What might be immediately apparent to an experienced evaluator, such as the implications of a particular method, may not be to all stakeholders.
For example, if your initial thought is to report '70% of participants feel less isolated,' run that specific framing past your stakeholders. Ask:
"If this were the message from our statistics, how would it be interpreted by you, stakeholders, funders and your service users?"
"Is 'feeling less isolated' the most useful finding for your strategic needs right now, or are you more interested in, say, participants being less anxious or more capable of asking for support?"
"Would this specific piece of evidence help you make a compelling case to our next potential funder?"
"Are there any gaps or blind spots here that this framing might miss?"
This collaborative approach is particularly useful in managing a common tension: stakeholders want comprehensive data (and often try to add more and more questions!) while also wanting data collection to stay concise. This process aligns the evaluation with the organisation's real-world needs for both impact and sustainability.
It gives programme staff a genuine say in what gets measured and how it's presented, working with them to shape the evaluation into a tool that serves them long after the report is filed. It ensures that the 'so what?' of your findings is addressed before you collect the data.
Navigating Systemic Barriers: Beyond the Rules
I can't write this without acknowledging systemic barriers. Chief among them, pervasive short-term funding cycles stifle the long-term, complex evaluations required to demonstrate deep-seated social impact, forcing a focus on immediate, often superficial outcomes rather than sustained change (although this is slowly improving in the UK).
This is compounded by limited evaluation budgets within charities themselves, which often lack the data infrastructure, staff capacity, and time needed for rigorous measurement of nuanced, intangible societal shifts.
Furthermore, the inherent political sensitivity of social issues can create a reluctance to embrace truly objective findings, particularly if they challenge established practices or funding narratives. This leads to a focus on proving rather than improving, and ultimately hinders the sector's collective ability to learn and adapt effectively.
The Way Forward: Brave, Bold, and Service User-Centred
Breaking these rules isn't about cutting corners. It's about being smarter and more strategic with our limited resources, equipping you, the evaluator, with the tools to truly thrive and make a difference. What I've found is that adopting these practices doesn't just refine your craft; it sets you apart.
You'll become the kind of strategic partner charities actively seek - the one who delivers tangible, measurable change, not just another report. This is how I distinguish myself, and how you can too.
Breaking Rule 1 saves time and energy by focusing on key takeaways and getting them into the hands of the right people to drive change, transforming reports from artefacts to activators.
Breaking Rule 2 builds stronger, more reciprocal relationships and fosters a more robust and insightful evaluation sector by integrating evaluators as catalysts for change.
Breaking Rule 3 makes it easier for organisations to define what is truly needed from an evaluation, leading to more relevant and useful findings that drive both impact and sustainability.
Let's challenge the old orthodoxies. Let's push the boundaries of our profession. If we have the courage to break the rules, we won't just evaluate change; we will be at the very heart of making it happen, solidifying our place as indispensable partners in social transformation.
Ultimately, your ability to break these rules will elevate your standing as an evaluator. It's not just about doing good work; it's about being known for driving real change.
My challenge to you is this: Pick one 'rule' to break in your upcoming project. What direct, tangible benefit do you predict this shift will bring to your client, and to your own professional growth? I'm keen to hear your results.
About the Author
Dr Hannah Griffin-James is a seasoned evaluation professional dedicated to fostering genuine social impact through innovative and inclusive approaches. With a PhD in Inclusive Science Education, she brings a unique understanding of how scientific knowledge is generated and interpreted, directly informing the development of robust, equitable, and impactful evaluation methods. Learn more about Dr Hannah's work and connect at https://www.linkedin.com/in/dr-griffin-james/
