If evidence-informed decision-making works in theory, why don’t we see institutionalisation of evidence use in practice?

By Laurenz Langer[1]

The theory of evidence-informed decision-making (EIDM) is beautiful, elegant, and convincing. But, while really helpful to guide thinking on EIDM, so far it has proven rather unhelpful to guide the practice of using evidence, in particular its institutionalisation. I will offer here three key issues with the current theory of EIDM that I believe challenge its translation into practice.

In a nutshell, the theoretical key pillars of EIDM are fairly linear and well-grounded. The theory starts from the assumption that there is an ample amount of evidence (academic research, grey literature, data, etc.) relevant to policy decision-making. It then assumes that, if decision-makers were to have access to this evidence, they would use it during policy design and implementation. Such use of evidence will then make policy design and implementation more effective, and we all—citizens, communities, society—benefit.

Evidence-informed policies are assumed to be more effective and beneficial for four key reasons, which justify the striving for EIDM. First, we don’t want to experiment on citizens, exposing them to the risks of policies whose effects we don’t know. Evidence helps here by clarifying the potential harms and benefits of policies. Second, we don’t want to waste money on policies that turn out to be ineffective and fail to reach their intended outcomes. Evidence, in particular evaluation evidence, helps here by teasing out which policies and programmes work and why. Third, we don’t want to waste public money on funding research that is not used to benefit the public through better-informed policies. Fourth, we want policy-making to be as transparent and accountable as possible, and using evidence can help that process.

In short, EIDM theory urges us to use all the available knowledge to inform public policies—a no-brainer indeed.

Alas, numerous studies (e.g. here and here) and case experience after experience (see here and here) show that it is really, really difficult to achieve this no-brainer in practice! I believe this breakdown in translating theory into practice arises from three key flaws in EIDM theory:

 

EIDM theory doesn’t pay enough attention to politics[2]

Any decision-making in the public policy sector is inherently political. There are many competing factors already in the policy arena (interest groups, party manifestos, ideology, etc.). Evidence will have to compete with all of these for attention, and if it doesn’t compete it is likely to be outsmarted and ignored. For EIDM practice, this means moving away from the theoretical ideal of evidence-based policy-making. We need to move towards a more realistic and messy version in which evidence informs some decisions and not others, and in which it is one of many factors influencing policy. Let’s give evidence a seat at the decision-making table and let it embrace the politics.

 

EIDM theory doesn’t get beyond relationships and interactions ‘matter’

EIDM theory and much research show how important strong relationships, trust, and ongoing interactions between researchers and decision-makers are in fostering evidence use (see here and here). But, for some reason, we are still stuck seeing relationships as a mere barrier to be overcome when supporting evidence use. So, we organise our once-off seminars and annual conferences to get people into the same room once and assume we all become friends and happily use evidence ever after.

This seems a bit naïve, and EIDM advocates need to become more scientific about building relationships and networks. Funding goes into large-scale capacity-building projects and new data visualisation approaches, and these activities get more innovative and effective by the month. Yet we don’t invest equally in deeply and thoroughly understanding how to build effective relationships and networks and how to make these sustainable. We rely on personalities to match and to get along. We rely on evidence champions to emerge. In essence, we rely on luck and chance to build evidence networks and relationships. I don’t think this is good enough if we are striving for an institutionalisation of evidence use. Rather, we need to develop a theory of change and design thinking around evidence networks and integrate relationships and trust fully into EIDM—not positioning them as something fluffy and intangible that can be left to chance and personality.

 

EIDM theory doesn’t start with the users of evidence

The biggest flaw of EIDM theory, in my mind, is that it doesn’t start with the users of evidence. It is astonishing that, as a research profession, we talk about decision-making and use of evidence, but very rarely start with the user herself. Most EIDM theory is drawn up by researchers and, unsurprisingly, designed from an outsider’s perspective. Research models of EIDM talk about push and pull of evidence, supply and demand, but much more is written and debated about the push and supply side[3]. This seems to me the wrong place to start. Any theory of EIDM needs to focus on the demand and the pull for evidence first—the supply will then follow. After 200 years of systematic research production, the idea that pushing and supplying more evidence is an effective approach to fostering evidence use seems a rather evidence-uninformed one.

As a community of researchers and practitioners, we need to understand EIDM as being, at its core, about the behaviour change of decision-makers: the new behaviour being the use of evidence. This implies that we need to look much more closely into behavioural science and the incentives needed for such behaviour change. There is a vast body of knowledge around changing behaviour, including that of decision-makers, the UK’s Behavioural Insights Team arguably being the most famous example. More thinking is required to better understand public sector management and governance processes and structures, and what makes these receptive to evidence use or not. Currently, decision-makers are largely developing these structures and processes by themselves, learning from one another rather than involving researchers. This seems a missed opportunity, and EIDM theory should engage more closely with these diverse bodies of knowledge.

To be clear, we have many examples of successful and effective EIDM in practice, in particular here in South Africa. But, in order to achieve large-scale and institutionalised evidence use across contexts, I believe the above three key flaws—neglecting politics, not going deep enough into evidence networks, and starting at the wrong end of EIDM—need to be addressed if EIDM is not to remain a pipe dream.

 

[1] This blog is based on a talk prepared for the panel ‘Evidence-informed implementation of the SDGs: challenges and opportunities’, Seedbeds of Transformation Conference, 9–11 May 2018, Port Elizabeth.

[2] Similar arguments are being made by Parkhurst, Carney, and Shaxson, for example.

[3] With some notable exceptions, e.g. Lavis and Newman.
