Conducting evaluation during times of change: Lessons from policy and community responses to the pandemic in Scotland

This report reviews existing evidence from evaluations of Scottish Government COVID-19 measures, in order to develop key principles for embedding evaluation more systematically into policymaking during times of rapid change or disruption.


3. Theme 1 - Timing

Evaluations commenced at different time points relative to the course of the pandemic (see Fig. 1). Some were concurrent with the intervention they examined, while others took a retrospective look. This theme considers when evaluations were undertaken relative to the stages of the pandemic and to decisions on the measures they assess, and what difference this made to each evaluation's process and outputs.

3.1 Key findings

Rapid process evaluations carried out early in the pandemic were responsive to policy changes and were used to actively inform the delivery of interventions. For example, completion of the evaluation of the COVID-19 Shielding Programme (Scotland)[3] within the first year of the pandemic meant that the programme's continued development was informed by findings on its rapid setup, accurate targeting and resultant behaviour change, but also on the challenges posed for some of those shielding.

Rapid evaluation mobilisation was also facilitated where research was already planned or underway and thus able to 'hit the ground running', as in the evaluations of the Universal Health Visiting Pathway (UHVP)[4] or the Extended Distress Brief Intervention Programme (EDBI)[5]. For instance, the evaluation during the pandemic of the Near Me video consulting service in Scotland[6] was able to draw on an established cohort of interviewees to gather further views under pandemic conditions, with ethical opinions already substantively in place, and had some pre-COVID data for comparison.

Rapid early evaluation nevertheless imposed some challenges. In the Shielding Programme study[3], a framework of aims and questions for different programme aspects could not be treated as fixed from the outset of evaluation, since the relative importance of different questions was likely to change as the pandemic unfolded. Any conclusions reached during an ongoing pandemic represented a snapshot of a still-moving picture: the timing of a study of the impacts of COVID-19 on volunteering[7], for instance, meant it could not take account of the subsequent impacts of the Omicron variant, while the evaluation of the EDBI programme noted that the changing nature of restrictions and concerns meant respondent views could have varied significantly between August and December 2020.

Having a short time-frame for evaluation limited the scope for in-depth analysis or data sharing/linkage and meant that some research questions went unanswered (e.g. around impact). The report on the targeted community testing programme[8] recognised that it had not been possible to achieve learning against some objectives within the period of the evaluation. A report on perceived barriers to adherence to COVID-19 restrictions noted that the opportunity for more involved comparative analysis would have helped in understanding differing impacts for different groups.[9]

Other evaluations began at a relatively late stage, taking a retrospective look at measures and their impact. Practical barriers necessitated retrospective evaluation in some cases; for instance, repeated closures and re-openings delayed researchers' access to care homes for the evaluation of the programme to connect residents.[10]

Coming later to evaluation sometimes offered benefits in terms of greater availability of, and access to, data, as highlighted in the evaluation of the COVID-19 vaccine programme.[11] The review of the impact of COVID-19 on volunteering found that a later-stage evaluation enabled a focus on those datasets that were more robust.

However, delaying evaluation meant that accounts of experiences were shaped by a longer process of recollection. And while the state of flux during the pandemic was seen as problematic for early evaluations, a retrospective study of low-income households carried out from mid-2021 still found that families' circumstances were changing rapidly.[12]

Extended evaluations had the potential to combine the advantages of early and retrospective evaluations. For instance, the two phases of the evaluation of business support measures meant continuation decisions in July 2020 were informed by interim findings on delivery and perceptions of schemes; while the final report two years later took advantage of secondary sources, published by then, to assess the impact on Scotland's economy.[13] The Scottish Welfare Fund review, while it covered the period of COVID-19, was also able to consider impact in the context of pre- and post-pandemic trends.[14]

Longer time frames also gave some scope to consider how process or impact changed during the pandemic. A study of compliance with self-isolation[15], carried out in waves, was able to consider how behaviours altered in response to changes in the intervention landscape over a period of time (albeit short). The evaluation of perinatal experiences during COVID-19 was able to examine these across an entire year of the pandemic.[16]

However, longer evaluation timeframes demand greater resources. Reviews offered a pragmatic means to synthesise findings from successive stages of a pandemic, but comparability of data from different time points could be problematic, as for instance reported in the review of impacts on volunteering. The rate of changes to measures or their delivery during the pandemic could create challenges for comparative or overarching analysis covering extended periods; for instance COVID restrictions changed during the period of evaluation of response measures in the civil courts.[17]

3.2 Implications for evaluating in times of rapid change

  • Rapid response evaluation is valuable in steering a course through a crisis. Advance planning of how to structure evaluation flexibly in response to an unstable or less clearly defined problem or context, for instance by setting criteria for changing the research questions, will support our capacity for rapid response.
  • Following up initial or rapid evaluations with linked studies has proved valuable in addressing gaps in learning. Where an early evaluation is designed, consideration should be given to how it might be augmented over a longer term, for instance in terms of whether any groundwork for recruitment or data gathering can be laid for this at the earlier stage.
  • Where a short overall evaluation timeframe is required, there may not be enough time to address all questions of value. Building in an extended strand to conduct fuller analyses of data could address additional questions at a later stage.
  • The evaluation of the Shielding Programme[3] makes the policy recommendation of thinking through different scenarios, in advance and in detail, for how at-risk groups could be supported in future pandemic situations. Similarly, giving more consideration at the onset of a crisis to different scenarios for how it might develop will help in planning diverse evaluation strategies to meet changing circumstances.
  • Recent or ongoing evaluations were found to be a valuable resource in overcoming the challenges to evaluation in the transformed circumstances of COVID-19. At the onset of a crisis, it will be valuable to survey what research is in place or prepared, to see how it might be extended, redeployed or repurposed to meet emergent evaluation needs.
  • Some studies found challenges in synthesising evidence from separate studies that took place at distinct stages of the pandemic, measured outcomes differently or adopted bespoke materials and datasets. At an early stage of rapid change, guidelines could be developed and shared across government and partners, to ensure that data from different evaluations carried out across that period are as compatible and comparable as possible and strengthen the potential for learning.

Contact

Email: OCSPA@gov.scot
