By Ruth Stewart, Director of the Africa Centre for Evidence
I started this blog series making the case for why we need to make relationships central to efforts to increase the use of evidence in decision-making. I made a case for evidence-informed decision-making (Blog 1), and explained the importance of evidence synthesis within this (Blog 2). I then went on to summarise what we know about how to facilitate evidence use (Blog 3). In the second part of this series, I detailed three mechanisms through which relationships can be central (the how): better understanding (Blog 4), better capacity (Blog 5) and better networks (Blog 6). Next in this series, I turn to the question of 'so what?' and explore the difference a focus on relationships makes, both to the practice of evidence synthesis (this blog) and to evidence-informed decision-making itself (still coming).
So how is evidence synthesis shifting as people start to engage with one another and work in closer collaboration? As noted previously, evidence synthesis has expanded beyond health into a broad range of fields including education, crime and justice, and international development. My team and I have recently supported colleagues working on taxation, water scarcity, and the management of endangered species in conducting evidence maps and syntheses. Here I draw on my own experiences, and those of my immediate colleagues, to describe how we have seen evidence synthesis methodology shift as we have started to work more closely with others.
First, we have seen a shift in how we understand what counts as useful evidence. Where previously we prioritised methodological rigour, and saw clear communication of findings as our main obligation towards promoting use of that rigorously assembled evidence, we now aim to let the decision-makers with whom we work tell us what would be useful. This shift in power has come about as a result of a) recognising that we know very little about decision-makers, their processes, priorities or potential, and b) learning just a little more about how decisions are made, whether in the planning, implementation or evaluation stages of the policy cycle. As our investment in building both networks and relationships has led to better-informed and more trusting collaborations, so we have been able to engage with decision-makers to understand the frameworks within which they are working, what questions they need answered, their timeframes, and, perhaps most importantly, what they consider to be trustworthy evidence. We have written a paper which lays out how stakeholder engagement in evidence synthesis has led us to reconsider what constitutes ‘rigour’. You can read more here. Our argument has not always been popular, but part of what we assert is that even the most beautifully put together piece of evidence synthesis is not useful unless it is timely. Furthermore, when setting inclusion criteria for a recent evidence map, we learnt that our academic logic around which geographical zones to include was not consistent with our government colleagues’ priorities; for them, particular countries, and even particular reports, needed to be included to give the evidence map political expediency and public legitimacy. Over time, and through the establishment of strong relationships, we have reached the conclusion that only the users of research can tell us what is useful evidence.
Second, the more we have taken time to understand from others the processes by which evidence becomes part of decisions, the more we have shifted how we put that evidence together. For example, if the global Sustainable Development Goals form the macro-structure for decision-making with regard to policies and budgets across governments in Africa, then we need to reflect these Goals in the research frameworks that we use to structure our evidence synthesis. This is particularly true when developing coding and analysis frameworks. As we have started to learn who is involved in decisions and at what levels, we have become better at working out who we should communicate with, when and how. It was only after a number of blunders that we realised that protocol is paramount when approaching senior colleagues in government, a lesson which no doubt transfers to any sector with which you are not already familiar. And then of course there are issues of planning cycles and the timeliness of research evidence. Being able to deliver findings according to decision-making cycles is paramount in ensuring that the research is used.
Third, we have seen shifts in the ownership of evidence. As we have developed stronger relationships with a wide range of public servants in South Africa and across the continent, we have found that we increasingly share ownership of the evidence generated with them. This shared ownership ranges from work conducted solely by us, with advice from decision-makers, to co-produced evidence syntheses with most of the ownership in the hands of the decision-makers themselves. In all cases, the networks and relationships that we have built have enabled us, first of all, to acknowledge fully that we are not the owners of the decisions. I sometimes come across academics who have conducted research with clear-cut findings and who seem to feel that they have a right to tell decision-makers what the ‘right’ course of action is. They want to own both the research and the decision. We have also come to understand, more and more, that we are often not the owners of the research either. Instead, through partnership and the sharing of power in relation to the research itself, we have found that government colleagues have taken ownership of the findings and their communication. I would never be invited to Cabinet to discuss the findings of my research, but my government partners have been invited to share the findings of our research at the highest levels. Through partnership, the potential for change is greatly increased.
The examples above from our direct experience are echoed in some of the methodological developments in the field more generally, even if to a gentler extent. We have seen the inclusion of a wider range of evidence within evidence maps and syntheses. There is greater variety in the critical appraisal tools that are required, developed and tested in the field, a direct result of the inclusion of research with more varied methodologies that need appraising. The expectation for stakeholder engagement has grown, and extended to include new groups of stakeholders, with greater emphasis on decision-makers at different levels. We have also seen greater value placed on evidence maps, once seen simply as an intermediate phase in the systematic review process and now promoted as valuable products in and of themselves. Last but not least, as the production of evidence has shifted, so has its communication. As more people are involved in the production of evidence, so communication styles and channels have broadened. Academic papers are, in these contexts, secondary products and not considered primary outputs by any means. As the portrayal of both the processes and products of research shifts in these ways, so new groups of people become involved. By focusing on networks and relationships, evidence synthesis itself changes, with knock-on effects on the people, processes and products of research.