Conducting evaluation during times of change: Lessons from policy and community responses to the pandemic in Scotland

This report reviews existing evidence from evaluations of Scottish Government COVID-19 measures, in order to develop key principles for embedding evaluation more systematically into policymaking during times of rapid change or disruption.


1. Executive Summary

Introduction

In November 2021 the Deputy First Minister took the decision to convene a Coronavirus (COVID-19) Learning and Evaluation Oversight Group to inform Scotland's recovery from COVID-19. The group ran from February 2022 to February 2024. It was chaired by Professor Linda Bauld, Chief Social Policy Adviser, and included several Scottish Government Directors and senior partners from a wide range of public, third sector and research organisations. The group's aims included synthesising evaluation evidence across policy and community responses to the pandemic (without a specific focus on NHS interventions or clinical studies) to distil learning for approaches to recovery. It also sought to identify key evidence gaps and advise on how these might be addressed. As part of this work, a subgroup was convened to review evidence from evaluations of measures introduced in response to COVID-19, to develop key principles for embedding evaluation more systematically into policymaking during times of rapid change or disruption.

This report looks at four key themes:

1) Timing: Evaluations were undertaken at different time points relative to the course of the pandemic. Some were concurrent with the intervention they examined, while others took a retrospective look.

2) Equalities: The pandemic is known to have increased inequality for many marginalised groups, and there was scope to learn from studies that considered equality of outcomes or experiences.

3) Emergency Support Measures: The pandemic saw an unusually swift mobilisation of resources in response to emergent needs, creating an opportunity to learn how rapid policy development and deployment can be evaluated.

4) Evaluation Design: This theme sought to better understand the types of evaluation that were commissioned during the pandemic, why some types of evaluation were more common than others, and whether there were gaps in relation to particular types of evaluation.

Context

In reflecting on evaluation during the COVID-19 pandemic, it is important to recognise that COVID interventions were often introduced at extreme pace and under intense pressure. Given the policy context, there were some impressive early examples of evaluation which informed subsequent policy delivery. However, this work has also identified a number of ways in which the Scottish Government's approach to evaluation can be further strengthened and improved. Further work will be required to turn the findings within this report into a plan for improvement.

Findings

  • Rapid process evaluations carried out early in the pandemic were designed to be responsive to policy or priority changes and were used to actively inform the delivery of interventions (such as the shielding programme).
  • However, not all evaluations were able to take account of shifting priorities and circumstances, and given the speed at which early evaluations were undertaken it was not always possible to successfully share or link data, or conduct detailed in-depth analysis.
  • Evaluations already planned or underway prior to the pandemic were able to 'hit the ground running' and re-focus research questions to consider how processes or impacts associated with interventions changed during the pandemic.
  • Evaluations from later in the pandemic found that data from different time points were not always compatible, and people's recollections of experiences may have been shaped or become harder to access with the passage of time.
  • Evaluations rarely included detailed quantitative analysis for equality groups; where breakdowns were provided, they were based only on area deprivation (SIMD), sex or age. However, some qualitative work provided access to the views and experiences of equality groups in contrast with wider populations.
  • Reasons given for not including equality analysis were insufficient responses, lack of resource, or project constraints such as reliance on available administrative data or reporting time frames.
  • Evaluations of emergency funding measures were largely process evaluations relying on 'light touch' approaches that supported rapid delivery by partners but did not support impact evaluation.
  • Outcomes data were obtained in some evaluations by linking or sharing existing datasets, or by implementing multiple engagement strategies.
  • Qualitative research was able to capture perceived impacts of emergency funding measures to some extent.
  • The evaluations reviewed were predominantly mixed methods and generally focused on process.
  • There was a significant lack of economic or value for money analysis in the evaluations identified by the review.
  • Evaluation design choices were often seen to be constrained by practical barriers, by a reliance on administrative data not designed for evaluation, or by the limited resources available.
  • Design choices often seemed pragmatic, based on what was seen to be readily accessible or scoped according to limited resources.

Implications

In times of stability:

  • plan how evaluations can be flexibly structured to meet different trajectories in any future disruptions.
  • invest in the collection of high quality administrative data to support disaggregation, building on the work already underway on the cross-public sector Equality Data Improvement Programme.
  • plan and resource evaluation strategies in partnership with key organisations for marginalised groups.
  • consider what proportion of spend in an emergency is appropriate to allocate for evaluation.
  • develop practical approaches for capturing impact and value for money in periods of rapid change, for instance a toolkit that can be readily integrated into delivery.

At the onset of rapid change or disruption such as the COVID-19 pandemic:

  • articulate the expected outcomes of response measures and the knowledge that policy leads will require (e.g. around process/impact/value for money) as they are being developed, to help tailor evaluation approaches.
  • formulate and resource diverse strategies for evaluation to capture process, impact and cost benefit.
  • develop and share design guidelines to support data sharing across evaluation activity.
  • review how existing evaluations might be extended, redeployed or repurposed, or how evaluations of multiple measures might be integrated.
  • build a simple impact evaluation approach into 'light touch' monitoring from the start of policy delivery.
  • identify equality groups most likely to be negatively affected by each intervention; commission and resource evaluations to provide learning on ways to improve the design and delivery of interventions for these groups.
  • support delivery organisations that do not have systems to monitor impacts, for example through data sharing/linking, shared learning or partnership input.

Later in a crisis, or afterwards:

  • consider commissioning post-hoc qualitative research to maximise understanding of outcomes.
  • conduct a thematic analysis drawing out key themes and learning from across multiple evaluations to understand broader lessons relating to policy design and delivery.

Contact

Email: OCSPA@gov.scot
