By Ruth Stewart, Director of the Africa Centre for Evidence

Introduction – Two challenges

For those of us who are working to ensure that research evidence is both useful and used, there are many challenges that we face. I want to highlight what I consider to be two of the key ones.

The first I like to describe as ‘the danger of a single study’.[1] We know that individual research studies have the potential to do harm if considered alone. This potential for harm generally arises from flaws in study design which mean that either the study simply does not find what it claims to have found (a lack of internal validity) or does not transfer to the wider world in the way we hope (a lack of external validity).

There are unfortunately too many stories of how flawed individual studies have made it to publication, and into the media and our daily experience, causing harm to those who embrace the claims within them. One of the most famous is the research by the British doctor Andrew Wakefield on the (thoroughly unfounded) link between the combined measles, mumps and rubella vaccination for young babies (known as MMR) and the onset of autism in young children.[2] Despite Wakefield being struck off the medical register for his fraudulent 1998 research paper, and several systematic reviews of multiple studies showing there is no evidence for his claims,[3] nearly 20 years later, as a new mother in South Africa, I was still warned by my midwife that there are concerns about this vaccine and that I might not want my baby to receive it. Perhaps more alarming are the probable measles outbreak at my older son’s school, and the global increase in child deaths from the disease due to dropping vaccination rates, driven by fears amongst parents and health professionals that perhaps this single research study might have been right. (It was not!)

Another highly contested piece of research, published in the 1990s by Pitt and Khandker, claimed to show that small loans to poor women in Bangladesh could significantly reduce poverty levels.[4] In part on the basis of this single study, which has been heavily critiqued by mathematicians who assert that it does not show that microcredit reduces poverty,[5] the microfinance industry has grown, gaining global recognition with the award of the 2006 Nobel Peace Prize to one of its greatest advocates, Muhammad Yunus. Despite systematic reviews that warn of the potential for harm caused by encouraging poor people to take on debt,[6] the industry continues to boom. Those of us who live in the global south are all too familiar with the crippling levels of debt from multiple small loans that so many of those on the breadline struggle to balance. My personal experience, and the large body of evidence on microfinance, are far from glowing.

These examples illustrate the dangers of flawed research, and the significant impact such studies can have if they are accepted without sufficient critique. There are even more examples of studies that fail to communicate the full picture of what they find. We know that there is a strong tendency amongst researchers to report only favourable results, and to fail to tell us the fullness of what they have learnt with regard to whether or not something works, is acceptable, affordable or relevant. We also know that research finding that something works is much more likely to be published than research finding that it makes no difference or does harm, a phenomenon known as publication bias.

This danger of a single study leads me to highlight two needs. First, we need a culture, systems, skills and tools for appraising individual studies: examining their design with a critical eye, interrogating their methods and questioning their analyses and conclusions. Second, we need processes for considering the whole available evidence base on a particular issue, and not just the single study that is easiest to access, shouted most loudly from the rooftops, or that best supports our view of the world.

The second big challenge, I believe, when trying to ensure that research evidence is both useful and used, is the tendency to isolate ourselves, and indeed worse than that, to build professional silos, walls, barriers and clubs, that prevent us from engaging in true systems thinking.[7] If we acknowledge that there are a large number of people with wide ranging expertise all of whom have roles to play in the generation and use of research knowledge (increasingly referred to as an ecosystem), then we need to work together. If different people within the evidence ecosystem do not talk to each other, or even know of each other, then how can the system work as one? By carving up the world into separate areas of knowledge and practice, we tend to build up our own importance, without realising the limitations we are setting for others and ourselves.

Two camps are often portrayed – the evidence-producers and the evidence-users. These are generally assumed to be university-based researchers and government-based decision-makers, but of course all of these labels carry assumptions and overly generalise (the danger of a single story emerging again). And then there are the ‘knowledge brokers’ or ‘translators’ who, in theory, improve communication across the ‘gap’.

But these are not the only divisions – there are geographical groups, too often with the Global North attempting to ‘teach’ the Global South what to do. There are sector differences – the government, the private, and the academic sectors rarely cross one another’s paths. And we have the usual disciplines or fields that rarely intersect except with the citizen for whom most of these barriers make little sense. We can go on.

Barriers are created by the assumptions we all make about one another, the judgements we bring to bear about who is more ‘right’, our patronising language about who needs their capacity ‘built’, and the deficit models too often imposed by those with power upon those with less power. This is not to say that there are not many well-meaning people working to tackle inequality and poverty through an increase in the use of evidence in decision-making. But the barriers that exist amongst these players can and do render many of their efforts impotent. I believe that it is communities that hold the shared knowledge and power to enable change, and that unless we can learn to work together, we will not make the positive change that we seek.

Two potential solutions

These two challenges, the dangers of a single study and of isolation, have prompted me to spend much of my life working towards two solutions. The first solution aims to address, in ways that are both rigorous and pragmatic, the need for transparent, replicable and structured approaches to identifying, selecting, appraising and synthesising evidence, in particular research evidence. This solution builds on the systematic review methodology that has significantly increased the use of evidence in health. Most importantly, it looks for answers across a body of critically appraised research evidence, avoiding the danger of a single study. The second solution aims to span boundaries in the evidence ecosystem, avoiding the danger of isolation by breaking down barriers between people. Indeed, if we acknowledge that the evidence ecosystem is complex, with multiple role-players, then only by engaging with one another can we enable that system to work. This relationship-centred solution builds on movements for user involvement in research, shared decision-making, and greater policy dialogue.

The broader context of evidence-informed decision-making

These two solutions have their origins in the broader context of evidence-informed decision-making, which in turn has its origins in evidence-based health care. Evidence-based medicine was advocated by clinicians who began to realise how little research underlay much of modern medicine. Iain Chalmers famously declared that doctors’ medical training and manuals were so out of date that they were ‘killing people’. In 1993 he led the founding of the Cochrane Collaboration, which sought to collate, critique and synthesise the research base on medical interventions to provide useful and rigorous summaries of the best available evidence on what works. Its initial work was disseminated on a CD containing a ‘library of evidence on maternal and child health’. The Collaboration has since grown steadily to include 37,000 people from 130 countries around the world, who now contribute systematic reviews to this ever-expanding library of evidence. The use of this evidence in clinical practice became known as evidence-based medicine: ‘the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients’.[8] In the 1990s, systematic review methodology, and calls for the use of this evidence in decision-making, were extended beyond health, first to health promotion and then to education, by the pioneering EPPI-Centre at the UCL Institute of Education. As more groups started to produce these forms of evidence synthesis for decision-making, the approach spread to other sectors, including social work, crime prevention, and international development (I pick up on this expansion further in my next blog).

The spread of this methodology for drawing out lessons from the body of available research has been accompanied by an expansion in approaches to promote the use of this evidence in decision-making. This was initially referred to as evidence-based decision-making; then, with increasing recognition that decision-making is a complex process with many influences, the language shifted to evidence-informed decision-making. The promotion of evidence in decision-making has now moved beyond evidence synthesis methodology, and we see activities promoting the use of less systematic overviews of evidence, and indeed of individual studies (a worrying trend). We see a range of activities, the majority of which focus on producing research that is relevant to decision-making and disseminating it in briefer and clearer ways. We are also now seeing a small but growing number of initiatives to engage more closely with decision-makers themselves: training decision-makers to make sense of research, supporting the commissioning of research by decision-makers, starting to shape decision-making systems to demand more research, and facilitating policy dialogues in which researchers and decision-makers can begin to communicate with one another.


Having experienced the full range of initiatives to increase the use of evidence in decision-making mentioned above, I have seen again and again that relationships are always at the centre of change. Connections between people are what make a difference. Respect and trust are needed within this evidence ecosystem, a shift in power and understanding is needed, and new relationship-centred activities are required that build on the baby steps we have already taken in this direction, go beyond current activities and increase our potential for impact. It is communities, not individuals, that hold shared knowledge and power and have the scope to enable change.

I am increasingly convinced that for evidence-informed decision-making to genuinely contribute to the reduction of poverty and inequality, we need to combine the two solutions I laid out above: systematic and critical consideration of the full body of available evidence to avoid the danger of a single study, and the prioritisation of relationship-building and community-creating as the centre of efforts to support the use of this evidence to avoid the danger of isolation.

This blog is the first in a series that expands on the challenges we face to make evidence-informed decision-making a reality, exploring why new solutions are needed. It lays out the changes that are required, and last but not least, outlines what difference these can make. I hope you will indulge me with your readership as I unpack and explore these issues that are so close to my heart.

[1] If you haven’t yet watched the moving and influential TED talk by Nigerian author Chimamanda Ngozi Adichie about stereotypes and racism, then do pause now and watch her ‘danger of a single story’ here. She delivers her message with much more eloquence than I do, and you will get the gist of my message if you listen to her and apply just a little of what she says to research.

[2] The paper has been retracted by the journal:

[3] See Demicheli and colleagues’ 2012 Cochrane Review here:

And Taylor and colleagues’ 2014 meta-analysis here:

[4] The original paper is available here:

[5] See an overview of this microcredit debate here:

[6] These include two reviews that I led, as well as reviews by others:

[7] Duncan Green’s book ‘How change happens’ talks in more depth on systems thinking:

[8] David Sackett (1996) ‘Evidence based medicine: what it is and what it isn’t’, BMJ, 312: 71-72.