Evaluation and research in ‘real world’ policy settings

Professor Elliot Stern, Visiting Professor in the School for Policy Studies

In a policy environment besotted with ‘evidence-based policy making’, evaluation is in vogue.  The promise of objective ‘facts, truth, and precision’ sounds like music to a policy maker’s ear. But it is a false promise. The multiple forces at play in the ‘real world’, the multiple lenses through which we can each see that world, and the multiple truths that come to bear on the creation, success and consequences of a given policy must all be constantly borne in mind. There are no easy answers.

At a recent event at the University of Bristol, Elliot Stern, editor of the academic journal ‘Evaluation’, and Matt Baumann, Principal Research Officer at the Department of Energy and Climate Change, explored the value of evaluation in the face of these many challenges.

This blog is a summary of their presentations and the discussions that followed.

Despite its many imperfections, evaluation is probably the best tool we have for assessing the impact of a policy intervention in the ‘real world’.

What is evaluation?

Evaluation is the process of trying to understand the results, effects, and intended and unintended consequences of a specific action or policy intervention. It is a commonly used tool in policy making for assessing whether a certain policy has ‘worked’ – or at least done what it set out to do.

The role of evaluation

The role of evaluations is not clear-cut. Evaluators often see their role as ‘challenger’ – providing a constructive critique of policy implementation – and ‘improver’ – reflecting on what went well and what could be improved going forward. Those who commission evaluations, on the other hand, may regard them as ‘accountability mechanisms’ that hold policy makers to account, or as ‘additionality measures’ that assess the precise benefit of public spending or of a particular government policy.

Why does it matter?

Using methods often borrowed from the social sciences, evaluations can provide valuable insights into policy interventions: what impact they made; whether they met their stated aims; and how future interventions could improve. There is much debate about the strengths and weaknesses of different methods and approaches in evaluation.

We live in a complex world in which multiple forces shape outcomes and multiple stakeholders have a say in shaping policy. It is almost impossible to establish a direct causal link between a policy intervention and an observed outcome. Evaluation is perhaps the best tool available to come close to measuring the impact of policy X on outcome Y. The case study below shows how.
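To make the attribution problem concrete, here is a purely illustrative sketch of a difference-in-differences comparison – one common technique evaluators use to approximate a policy’s effect when the causal link cannot be observed directly. It was not part of the presentations, and all figures and variable names are invented:

    # Hypothetical difference-in-differences sketch: estimating the effect of an
    # energy policy on household energy use. All numbers are invented.
    treated_before, treated_after = 100.0, 88.0   # policy area, mean kWh per week
    control_before, control_after = 100.0, 96.0   # comparison area, mean kWh per week

    # The comparison area's change estimates what would have happened anyway;
    # whatever difference remains is (cautiously) attributed to the policy.
    background_change = control_after - control_before                    # -4.0
    policy_effect = (treated_after - treated_before) - background_change  # -8.0
    print(f"Estimated policy effect: {policy_effect:+.1f} kWh per week")

The key assumption – that the comparison area shows what would have happened in the policy area without the intervention – is exactly the kind of judgement that makes ‘real world’ evaluation contestable.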

Case Study: Evaluation in the ‘Real World’

The Department of Energy and Climate Change (DECC) is responsible for ‘promot(ing) economic growth by delivering affordable, sustainable and secure energy to the UK, while driving ambitious action on climate change internationally’. This demands a range of complex, interrelated and simultaneous policy interventions.

There is only a relatively small existing evidence base upon which to draw when determining what works in this field, and considerable uncertainty about how to overcome barriers, so evaluation has a big role to play. According to Matt Baumann, Principal Research Officer, Strategic Analysis at DECC, ‘Evaluation is becoming a key part of project and programme delivery, helping to de-risk projects and provide the evidence to support realisation of the anticipated benefits. It also provides crucial evidence of “what works” to inform future policy direction.’ Through this, Matt expects evaluation to make a significant contribution to DECC achieving its mission and meeting its ambitious targets.

Evaluation processes and evidence can be used to support policy appraisal (the assessment of policies before they are implemented), to provide ongoing formative evidence that supports policy adaptation and refinement during implementation, and to deliver summative evaluation – a comprehensive assessment of a policy’s processes and impacts. In each case, evaluation provides evidence about whether and how obstacles to successful policy impact are being overcome, and what can be done to ensure success.

DECC is therefore developing a strong culture of evaluation across the department, and is actively planning or delivering (through commissioning external research) evaluations of many of its major policies. But evaluation is not without its challenges. There is often no scope for large-scale policy pilots: many of the Department’s policies require wholesale change to existing infrastructure, so pilots are rarely feasible. Many policy interventions are being launched simultaneously, which makes it difficult to isolate the impact of each one. And evaluation has to walk a tightrope between the demands of longer-term decision making and institutional knowledge (calling for rigour and comprehensiveness) and shorter-term decision making (calling for fleet-footed responsiveness in evaluation practice).

Matt is part of a small team at the centre of the organisation that works with policy teams and embedded analysts across DECC who are responsible for delivering evaluations. The team’s focus is on:

  • Making evaluations happen

Embedding evaluation planning in the policy making process, providing colleagues with support, training and assistance to scope, plan and budget for evaluations. Despite a strong case for evaluation, there are considerable calls on policy teams’ time and resources, and budgets are not committed without careful scrutiny.

  • Making evaluations good

Ensuring evaluations match the complexity of the policies they seek to evaluate. Many use theory-based approaches, involving mixed methods and multiple projects that investigate different stakeholders’ responses and policy processes over time. Engaging the many stakeholders involved (including all the relevant policy teams and analytical disciplines) and starting conversations about evaluation early – before the policy has been put in place – can really help to ensure that evaluations are well designed and of high quality.

  • Making evaluations count

Developing tools and strategies to support the use of evaluation in the design of policies, in policy appraisals and in wider decision-making. As the department’s policy evaluations begin to report, the individual and cross-cutting findings need to be synthesised and channelled, in highly accessible formats, to the right people in the right parts of the organisation at the right time.

You can view Matt Baumann’s presentation here: Evaluation in DECC.

This event was organised by PolicyBristol and the South West Doctoral Training Centre.