Georgia Leith

Defining 'evidence' and its role in decision making

Updated: May 7

How do we understand evidence and use it effectively? In the second blog of the series, CHEW Trustee and Director of Salisbury Kuczkowska Consulting David Salisbury shares his thoughts.


Evidence is a key word in the third sector. It’s a widely used term and much has been written about it. But I’ve often found when talking to colleagues in different roles or from different specialities that their perception of what “evidence” means is different from my own. Those from research, data science, or insight backgrounds might have slightly different takes, variations on a theme. Equally, those from community engagement, professional engagement, front line delivery and senior leadership each might have their own view.


The potential for misunderstanding is huge and I’ve lost count of the number of times I’ve heard research and evaluation colleagues, talking in private, use some variant of the phrase “they just don’t really get it” or “they say they are supportive but they don’t realise the implications for the organisation.”


This is the first of what will be a few blogs linking back mostly to the “language” element of culture, which I mentioned in my last post. Here I cover my definition of evidence, its role in decision making, where its value lies and a broad rule of thumb about how hungry we should be for it.


It’s important to be clear about your definition of evidence


As mentioned above, the potential for misunderstanding of the term “evidence” is huge and something that I’ve run into within the sector several times. I define evidence as a general term to cover the information, facts and insights that are drawn from research, data analysis and evaluation.


Evidence can help us make better decisions but it’s not the only factor


The diagram below is derived from NESTA’s Using research evidence: a practice guide*. It sets out four factors that can influence the making of decisions.






“Context and circumstances” might include political, economic or cultural factors, or the practicality and feasibility of moving something forward. The positions of our stakeholders and our relationships with them may also influence the decisions we take. Our past experiences and personal judgement can also be important when making effective decisions.


“Evidence” is one of the four components here (and it’s an element of decision making that has been brought into sharper focus by the current COVID-19 crisis). In my view the best decisions consider all four factors, balancing the various perspectives. You might have great evidence for something, but if the stakeholders involved won’t get behind that decision, for whatever reason, then it’s going to be very, very hard to make that decision work. A programme can be really well evaluated, but there might be good reasons why it won’t work in your context or - perhaps more pertinent in the times we are in - a major event comes along and you have to deal with the fallout of that first.


Evidence improves our confidence about the extent to which something is true or not


There is a quote, attributed to Aristotle, that a colleague and friend of mine used to use a lot. It goes:


“It is the mark of an educated mind to rest satisfied with the degree of precision which the nature of the subject admits and not to seek exactness where only an approximation is possible.”


Or in other words, as my colleague would say: “It is better to be roughly right than precisely wrong.”


In my working life I have found it very rare for evidence to provide absolute certainty about something. Those working in evidence (or making decisions with it) should not get lost in some quest for absolutes. Instead, we should aim to improve confidence. By establishing how confident we can be that a statement or proposition is true, evidence provides decision-makers with a clearer idea of the possible impact and risks of their decisions and can support or negate a rationale for action. Decision-makers must look at the evidence available and ask the question: “in this instance, does the evidence available provide sufficient confidence to move forward?”


This might cause some discomfort for those of us who are big fans of evidence! It might mean that decisions are made on the back of evidence we are less confident in, because a decision is necessary and the other factors push you or your organisation in a particular direction. However, if we can at least be clear on the factors that have influenced a decision, and on the confidence we have in the evidence base, then I think in many cases that is a big step forward for organisations.


Our individual biases can cloud our decision making, so evidence should be used to check against them


There are many types of bias, both individual and collective, and each can mislead us. The table below sets out a variety of different biases that we can be subject to and again comes from NESTA’s guide.


A common problem I’ve often encountered is the “cherry picking” of evidence to support a bias. For evidence to have its maximum impact, we need to be aware of our biases, examine them ruthlessly, and be prepared to move beyond them or our fixed positions.


Setting out a clear hypothesis (or series of hypotheses) and then examining the extent to which the evidence supports or negates it (or them) is often an effective way of doing this (and takes some courage).







The level of evidence required should be proportionate to the decision being made


This might differ depending on the specific situation. The potential for harm and the cost of doing something are two factors to pay close attention to when considering if the evidence available provides enough confidence to take a decision.


Where there’s a high risk of harm if things don’t go as we expect, then decision-makers need much greater confidence in the evidence before taking a decision. For example, when making changes to the care people receive, there is a risk that the new way is less effective than the old and could even produce unexpected negative outcomes. In order to move forward with such changes, it would be appropriate to have evidence that provided a thorough understanding of current approaches, their shortcomings and the root causes of those issues. This would support an informed design of a new way of delivering care that could be tested. We would then need to be confident that the change did result in more positive outcomes before making any further decisions about the use of the new approach.


The same can also be said of cost. If I’m spending 50p on a chocolate bar then I don’t mind taking a risk with a new flavour. If I’m spending all the money I’ve ever saved in my life on a new house** then I want to know everything about the place. The same should be true of decisions within organisations. The larger the cost, the higher the level of confidence in the evidence base required.


Sometimes, things need to be tried on a small scale in order to build an evidence base, so it’s important that evidence isn’t used as a blocker that stifles innovation. In my time working in and with the public and third sectors, I’ve known of £5k decisions where decision makers were very demanding of the evidence base, requiring the highest rigour before investing a relatively small sum in a low-risk, low-cost intervention. I have also known of multi-million pound decisions being taken before an evaluation has even been commissioned. I’m sure colleagues working across the public and third sectors have witnessed this too. This is of course the wrong way round and we should recognise where we are with regard to the confidence required before taking a decision. The two by two diagram below provides a suggestion of the confidence that should be required of the evidence base, using the “potential for harm / cost” rule of thumb.
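The “potential for harm / cost” rule of thumb above can be sketched as a tiny decision aid. This is purely illustrative: the function name, inputs and wording of the outputs are my own invention, not taken from NESTA’s guide or the diagram.

```python
def required_confidence(potential_for_harm: str, cost: str) -> str:
    """Suggest the confidence needed in the evidence base, using the
    'potential for harm / cost' two-by-two rule of thumb.

    Both inputs are "low" or "high" (a hypothetical simplification).
    """
    if potential_for_harm == "high" or cost == "high":
        # High harm or high cost: demand strong confidence in the evidence
        # before committing (e.g. changes to the care people receive).
        return "high confidence required"
    # Low harm and low cost: room to try things on a small scale
    # and build the evidence base as you go.
    return "lower confidence acceptable - try, learn and evaluate"


# The 50p chocolate bar vs. buying a house, in these terms:
print(required_confidence("low", "low"))
print(required_confidence("high", "high"))
```

The point of the sketch is the asymmetry: a £5k, low-risk intervention sits in the bottom-left quadrant and shouldn’t be held to the same evidential standard as a multi-million pound programme.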




So to sum up:

  • In my experience there are differing interpretations of what the word “evidence” means. It’s often not until you have a conversation with colleagues about the definition of the word that you find out your understandings differ.

  • I define evidence as a general term to cover the information, facts and insights that are drawn from research, data analysis and evaluation.

  • Evidence is useful because (or perhaps more appropriately, if) it helps us make better decisions.

  • Evidence is just one component of good decisions; other factors are valuable too, and we should not forget this.

  • We should not seek absolutes where only approximate answers can be obtained. Absolute certainty from evidence is rare; we need to ask if we have sufficient evidence to move forward with our next decision.

  • Evidence should be used to check against biases, not simply to confirm them. Using hypotheses to drive your research, analysis and evaluation is a really helpful way to do this.

  • The level of confidence in the evidence required for a decision should be proportionate to the decision being made. Sweating the small stuff can be a barrier to innovation – sometimes you need to do something in order to gather the evidence about it.


In my next blog I’ll go on to talk about the types of decisions we need evidence for before I move on later in the series to talk about what gives us confidence in the evidence base for those different decision types. In the meantime, let me know what you think of the above!


* It’s worth noting here that I think in general this guide is pretty good. It talks about horses for courses and choosing the right approach. However, it then pushes you to the fairly rigid, hierarchy type approach to standards of evidence – which I don’t completely agree with. In future blogs within this series I’ll set out why and I’ll also suggest an alternative.

** Alas, I’m still saving…


This blog series has been reproduced with permission from Dave Salisbury's personal blog. You can follow Dave for further updates here:

  • Twitter

©2019 by Charity Evaluation Working Group. The Charity Evaluation Working Group is a CIO registered in England and Wales, Charity Number 1184808