Networks Series 2: Production of useful evidence: the rapid expansion of evidence synthesis

By Ruth Stewart, Director of the Africa Centre for Evidence

Systematic reviews and the production of useful evidence

I am often overwhelmed by the range of programmes and activities aiming to improve the quality of evidence and ensure it is summarised in useful ways to inform decisions. In my first blog in this series I laid out two dangers that I believe befall all of us who would like to see an increase in the use of evidence for decision-making: the danger of a single study and the danger of isolation. Over the course of this series, I will question whether current activities to increase the use of rigorous evidence in decision-making address these dangers or fall short, and will make a case for a different (some might say new) way of thinking to ensure that evidence is both useful and used.

Whilst I intend through this blog series to build my case for this alternative way of thinking, and to propose mechanisms that might make a difference, I do not want to underplay the huge strengths of current work. This blog acknowledges the range of activities for improving the quality of evidence, with a particular focus on systematic reviews. It explores how the principles of systematic reviewing are being applied to increase the use of evidence in decision-making.[1]

An important methodology

I firmly believe that the most significant shift in the production of useful evidence in my lifetime has been the development and expansion of systematic review methodology. Systematic reviews collate, critique and synthesise evidence from rigorous studies to provide transparent summaries of the evidence base. In most cases, although not all, they answer impact questions along the lines of ‘What is the impact of one intervention compared to another on a specified outcome in a particular population?’ From the initial Cochrane systematic reviews in the early 1990s, we now have access to a library of over 5,000 health reviews, updated routinely, with 37,000 reviewers from 130 countries contributing to their production. This single initiative has shaped health-care policy worldwide, and continues to develop its methods and improve the communication of its findings.

New organisations and collaborations

In 2000 the Campbell Collaboration was formed, in many ways a sister organisation to Cochrane, although with significant differences. Campbell is a smaller organisation and yet covers broad swathes of social policy, including crime, education and international development. In the environmental sciences, we have the Collaboration for Environmental Evidence (CEE), the newest of these three collaborations, focusing on biodiversity and environmental sustainability. In many ways these initiatives have built on one another, moving forwards with each new creation.

Smaller organisations with particular specialisms have also developed. These include the EPPI-Centre, a leader in methodologies ranging from service-user engagement in reviews to the use of artificial intelligence for screening and coding studies. The EPPI-Centre was one of the first organisations to take systematic reviews outside medicine and into health promotion and education, and has been at the forefront of almost every significant shift into new fields and new methodological directions. Others, like the International Initiative for Impact Evaluation (3ie), have driven the use of systematic reviews in international development, leading the creation of the Campbell Collaboration’s International Development Coordinating Group and producing the most comprehensive databases available of both impact evaluations and systematic reviews in development. The Joanna Briggs Institute produces reviews in health care. The Alliance for Health Policy and Systems Research focuses on broader health-systems issues. I could go on.

A growing community

As the systematic review approach has expanded in terms of the number of reviews, fields and even methodologies, so has the range of people involved, including reviewers, funders and potential users. I have had the privilege of being part of this expanded group of people, observing and even contributing to the growth of evidence synthesis as a field. I sat in on the EPPI-Centre’s first meetings about doing reviews in education. I completed DFID’s first tendered systematic review, on microfinance in sub-Saharan Africa. I have been part of the development of evidence-mapping methodology through projects with 3ie, the Africa Centre for Evidence and the Department for Planning, Monitoring and Evaluation. I studied under, and have worked for 20 years with, Prof Sandy Oliver, a key leader in ensuring reviews are informed by service-users, and have been advised for many years by Beryl Leach, who works to ensure systematic reviews and evidence maps meet the needs of review users, including the decision-makers who shape those same services. I have the privilege of leading Africa’s only Centre of the Collaboration for Environmental Evidence, CEE Joburg, and of being an editor for the Cochrane Consumers and Communication Review Group.

Innovation to ensure systematically reviewed evidence is ‘useful’

Systematic review methodology has both adapted to new fields and provided the underlying principles for new methods. Contrary to the assumptions of some, systematic reviews no longer always answer impact (‘what works?’) questions, nor do they necessarily require quantitative data or meta-analysis. They increasingly incorporate non-randomised studies, answer questions about citizens’ experiences, and include reports of any kind (not only those published in peer-reviewed journals). These things are often misunderstood.

Over time, technology has increasingly been used to enable international collaboration, speed up review processes, reduce the potential for human error, and minimise duplication of effort. There’s a great summary of Cochrane’s work in this area here.

All of the systematic reviews that I have been part of have been guided by multi-disciplinary advisory groups established to share their expertise in the methods and the topic area. As reviews have been conducted in new fields, the membership of these advisory groups has expanded. The geographical reach of these groups has also broadened: no longer are advisory group meetings held in London with perhaps one or two members travelling from another city in England; now our meetings take place online with international contributors. Greater engagement with a greater range of stakeholders has also contributed to earlier and more extensive communication about reviews: their existence, their methods and their findings. This has been driven in many ways by funders, who increasingly require review teams to think from the outset about how they will engage with their policy audiences.

Addressing different questions, involving different people, and reaching further around the world have all contributed to the generation of more ‘policy-relevant’ systematic reviews. As my colleagues from the EPPI-Centre observe in their recent paper, it is ‘the mutual engagement across the research-policy interface that made reviews policy-relevant’.

As reviewers (within my own team and others) have started to work more closely with decision-makers, we have increasingly built on systematic review principles to produce other summaries of available evidence that are systematic, transparent and rigorous. These are tailored to address the particular needs of decision-makers and include faster, more focused reviews (sometimes addressing narrower questions or smaller geographical scopes) and evidence maps. My colleague Laurenz Langer has written more about these maps here.

Concluding thoughts

This blog is in many ways a tribute to the introduction of systematic review methodology and to the significant shifts the field has made over the last twenty years to ensure the production of useful evidence for decision-making. Whilst the world does not sit still, and new developments continue to emerge, it is a credit to the tens of thousands of researchers worldwide that we have growing libraries of ‘public good’ systematic reviews on an increasing range of issues to inform decisions. I am privileged to have been witness to, and even, in very small ways, to have contributed to, these developments.

I nonetheless recall how in 2010 I had the opportunity to move from academia and evidence production one step closer to government, through a placement with the UK’s National Audit Office. I spent 12 months as a ‘visitor’, and then a couple more years as an employee with their value-for-money methods team, and my understanding of the world was turned on its head. It turned out that, even with my thorough grounding in stakeholder engagement, research communication, what constitutes rigour in research, and even the use of evidence in policy development, I knew nothing of government priorities or ways of working, the importance of costs and value, or how to communicate with civil servants. I had to rethink how I dressed, how I wrote and spoke, and what kinds of things counted as useful evidence.

That year I attended the combined Cochrane and Campbell Colloquium in Keystone, Colorado, and every time people spoke about the use of systematic reviews by decision-makers I hung my head. It seemed to me that this community dedicated to the production of useful evidence knew too little of the use of evidence, of decision-makers themselves, or of decision-making processes. Since then I have come to understand better what these extraordinary evidence organisations have achieved and continue to achieve, yet that nagging feeling still arises whenever evidence producers talk assertively about how evidence should be used. Something is not quite right…

In my next blog, I explore in more detail initiatives to increase the use of evidence in decision-making, including worthy activities conducted by both Cochrane and the Campbell Collaboration, as well as researcher-into-government placements like my own, and consider how this range of initiatives contributes to that goal.

[1] Whilst not the focus of this blog, I want to acknowledge that there are very impressive drives to produce better single studies, with improved systems for the production of monitoring and evaluation data, the conduct of evaluations (including the use of rigorous impact methodologies), and the generation and use of data and statistics. Without good primary data and studies, systematic reviews, evidence syntheses and maps would be impossible.