Evidence-Informed Guessing and Evidence-Informed Marketing

Spending time as a policymaker altered my view on how research can inform policy and, in particular, gave me an insight into the importance of guessing and marketing.

Authors

Kirsty Newman

Consultant

In around 2015, I was Head of the Evidence Into Action Team in DFID. I had spent years of my life focusing on evidence-informed policymaking: I had worked in the UK Parliament, established and run a capacity-building programme for the use of evidence in African parliaments, and then, in DFID, led an incredible team of around 20 staff to support the use of evidence for better policymaking through programming and technical support. I maintained a widely read blog and spoke at conferences on the topic of evidence and policymaking in development. And if you had looked at my CV at the time, you would have seen that I described myself as a thought leader on the topic.

So, you can imagine my surprise when I switched jobs to become an actual policymaker and discovered that I knew nothing at all!

Okay, that is not quite true. Of course I knew stuff, and in fact I still agree with most of what I said and wrote on evidence and policymaking. But my understanding lacked the depth that only comes from actually being a policymaker. I should add, of course, that my experience—of being a civil servant in DFID’s policy division—itself gave me only one narrow view of policymaking. I subsequently moved to work in a multilateral organisation and once again had to reconfigure my understanding. And I am now working for a large research programme which aims to influence policy, and this has given me yet another viewpoint.

So, what have I learnt that I did not know before? I have learnt about the reality of policy development and, in particular, about the key role that guessing and marketing play.

Evidence-informed guessing

When I worked on evidence-informed policymaking, I was preoccupied with synthesising what existing evidence could tell us and indicating to policymakers how much confidence we had in different findings. So, for example, members of my team wrote papers summarising the current state of the evidence on key topics such as corruption, economic growth, and civil service reform. They summarised findings that were well supported by the evidence as well as questions on which research remained inconclusive. My team also thought a lot about ‘the demand side’—supporting policymakers to build their capacity to access, appraise, and use evidence—but this work still focused on what actual evidence existed to inform their decisions.

What I focused on less were the many decisions for which no informative body of evidence exists. Once I became a policymaker, I became acutely aware that policymaking does not wait for the evidence to emerge: if no evidence is available, policies will simply be based on guesswork. Given this, I have come to the conclusion that sometimes the most valuable thing a researcher or research intermediary can offer a policymaker is an evidence-informed ‘guess’. For example, they may be able to offer predictions about how political economy factors might affect an attempt to scale up an intervention; ‘back of the envelope’ calculations about how effect sizes might translate into outcomes in concrete terms; or suggestions for important factors to consider in education sector reform, extrapolated from health systems research.

A colleague of mine at OPM, Craig Bardsley, gave me a helpful way of thinking about this. He said that when you are trying to draw insights from diverse sources of research and work out their implications for complex topics (for example, education systems change), you need to use the most sophisticated synthesis tool available to mankind: the human brain. I am reminded that when the UK Parliamentary Office of Science and Technology (POST) develops its evidence synthesis products, it does not start by gathering up all the relevant research outputs and trying to synthesise them. Instead, it identifies the top researchers working in the field and asks them to summarise it. POST recognises that these experts will have already done the synthesis in their heads and, where there is insufficient evidence to say something definitively, they may still be able to provide a best guess which is more likely to be accurate than a guess from a (much less informed) policymaker. In fact, to take this even further, I suspect that a network of human brains (individuals who are steeped in the available evidence and able to deliberate on its implications) might be the optimal synthesis tool. I see this process happening on a day-to-day basis amongst members of the RISE family, and it is really quite remarkable to see how new insights crystallise through discussion.

Of course, I am not suggesting that researchers should start going around and offering up their opinions pretending that they are proven fact. But I do think that offering evidence-informed guesses (which are clearly indicated as such) and communicating insights distilled from deliberation amongst experts can be incredibly valuable to policymakers whose only alternative is to make their own guesses based on far less background knowledge.

Evidence-informed marketing

Before becoming a policymaker, I was unaware of the importance of ‘marketing’ when developing policy. Officials developing new policy positions are highly motivated by how easily they can ‘sell’ their work to seniors. They are constantly looking for ‘hooks’ that will get decision makers interested in their topic and stories that will make proposed policy positions easy to understand. Similarly, senior policymakers want to have a steady stream of ‘announcables’—new programming commitments linked to defined outcomes which they can drop into speeches to demonstrate that they are taking action.

This type of marketing horrifies many researchers, who want to present facts in an objective manner. But the problem, once again, is that if people who understand the research do not think about these matters, then uninformed people will drive decision-making based purely on what they can sell.

A good example of this is the huge focus on girls’ education that you see in our sector. Helping vulnerable girls go to school is a great hook that policymakers across the political spectrum can get behind. The fact that girls are mainly in school but not learning sometimes seems to be an inconvenient truth for policymakers who like a clean narrative. Similarly, the framing of EdTech as a silver bullet for education problems is a hook that is easy to sell. Policymakers like the idea that they can offer something that is modern, high-tech, and less politically tricky than systems reform. Again, these policymakers are sometimes reluctant to hear evidence suggesting that EdTech is not the answer to poor-quality teaching.

I think that researchers and research intermediaries can play a really important role by developing hooks and stories, and shaping announcables, that are genuinely informed by evidence. In the education space, this will mean finding ways to ‘sell’ a focus on foundational learning. We will need slogans, infographics, and stories that policymakers can easily grasp. We will also need to offer advice on announcable programming that can actually deliver learning.

Conclusion

I have long advocated for the role of research intermediaries in bridging the gap between research and policy. These intermediaries can in some cases be researchers themselves but may also be allied staff in academic institutions or individuals in civil society organisations. However, over my time working in policymaking, my view on what these intermediaries should do has changed. I still think that work to synthesise and communicate bodies of evidence is important. But I think that going further to engage in evidence-informed guessing and evidence-informed marketing will make it more likely that the insights from research are actually transformed into information and outputs that can be used by policymakers.

RISE blog posts and podcasts reflect the views of the authors and do not necessarily represent the views of the organisation or our funders.