Early Adopter Communities: Evaluability Assessment

This report presents the findings of an evaluability assessment for the school age childcare Early Adopter Communities. This includes considerations and recommendations for process, impact, and economic evaluations.


3. Impact evaluation

This chapter explores how the priority outcomes could be measured and appraises evaluation designs for assessing the impact of EACs. It sets out both optimal and more cost-effective approaches, and the reasoning behind these assessments. The assessment drew together information from a review of policy- and project-level documentation, the theory of change workshops, and interviews and discussions with EAC leads and the Scottish Government.

Impact evaluation aims

The overarching aim for an impact evaluation is to explore the extent to which the EACs have resulted in their intended outcomes. Given the number of outcomes identified in the theories of change, and the fact that these occur at different levels (system, community, parent, and child), it is recommended that the impact evaluation focus predominantly on the key causal pathways included in Chapter 2 and the priority outcomes in Table 3.1.

Proposed evaluation questions

The key questions an impact evaluation should seek to answer are set out below. These were developed with the causal pathways and priority outcomes in mind, as well as the research questions included in the Child Poverty – Monitoring and Evaluation: Policy Evaluation Framework. For example, they include questions exploring system-level outcomes and the impact on child poverty drivers. An important step prior to commissioning will be ensuring consensus on the questions.

1. Has the investment in EACs contributed to an improved system of school age childcare in these local areas?

a. Has it contributed to an improved understanding of how to deliver high quality childcare and improved childcare policy?

b. Has it contributed to an enhanced childcare workforce, in terms of capacity, diversity and skills?

c. Has it contributed to better partnership working in the childcare sector?

2. Has the investment in EACs contributed to more families (particularly in the six priority groups) accessing school age childcare?

a. To what extent did new provision benefit existing childcare users rather than families not previously using childcare?

b. What features of the childcare offer influence this (e.g. affordability, flexibility, place-based and/or person-centred elements)?

3. Has accessing school age childcare contributed to tackling child poverty drivers for parents in EACs?

a. Has it contributed to reduced cost of living for families?

b. Has it contributed to progression in employment and increased income (i.e. more parents prepare for, start, sustain or increase hours of work)?

4. Has accessing school age childcare contributed to improved outcomes for children in terms of their physical, mental and social wellbeing?

5. Has accessing school age childcare contributed to improved parental wellbeing?

6. In what ways do outcomes vary depending on contextual factors, including location, provision type, and family characteristics?

7. Have the EACs contributed to any negative consequences or outcomes (for systems, communities, or families)?

Impact evaluability assessment

To examine how best a future evaluation can answer these questions, the impact evaluability assessment involved three stages: 1) confirming that EACs met preliminary thresholds for future impact evaluation; 2) reviewing data sources for measuring priority outcomes; and 3) assessing the feasibility of impact evaluation methods and designs, including experimental and quasi-experimental designs (QEDs) and theory-based methods.

Stage 1: Preliminary considerations

Drawing on the checklists set out in Peermans et al. (2015), we concluded that the EACs met preliminary thresholds that warrant a future impact evaluation if feasible (discussed further in stages 2 and 3). This was for the following reasons:

  • There is a clear rationale for the EACs based on evidence of a) negative outcomes associated with child poverty and b) positive outcomes associated with families accessing school age childcare.
  • There is a defined target population based within specific local areas and EACs report applying this with good levels of consistency.
  • A detailed theory of change has been developed, as well as key causal linkages leading to priority outcomes, which would benefit from being tested. Some further work on determining thresholds for success within specific timeframes is required, but this does not preclude an impact evaluation.
  • The EACs are using the investment in different ways, which introduces challenges around implementation fidelity, i.e. the extent to which the programme is delivered as intended, and whether this should be assessed at a local or national level. However, while it is not appropriate to assess fidelity to a specific implementation model, it could be assessed in relation to delivering core components.
  • There are existing and potential data sources for measuring outcomes. While there are some issues with this data (discussed in stage 2 below), this does not preclude an impact evaluation. However, it does inform the feasibility of different approaches discussed later in this chapter. Overall, there are some promising opportunities to improve the monitoring and evaluation data collected by EACs and to add further primary data via an external evaluator.

There are also several overarching considerations discussed below that need to be taken into account when planning an impact evaluation of the EACs.

  • Attributing impact in a complex environment: EACs are only one element of broader efforts to tackle child poverty. For example, there are Child Poverty Pathfinder projects in Dundee and Glasgow and a whole family wellbeing approach supported by the Social Innovation Partnership in Clackmannanshire. It is likely that these projects are reaching some of the same families as the EACs. This introduces challenges for attributing any outcomes observed as part of an impact evaluation (e.g. reduced poverty) specifically to EACs. In other words, it will be difficult to design an impact evaluation able to isolate the effects of the investment in EACs. For example, designing an evaluation that actively sought to prevent families from accessing other interventions would raise ethical concerns.
  • Level of evaluation: There are two potential options for conducting an impact evaluation of the EACs. One approach would be to evaluate all four (or more) EACs together. Alternatively, each EAC could be evaluated separately. The latter would allow more tailoring of the evaluation approach, and it would still be possible to apply synthesis methods to bring the findings together at an overall level. Local evaluations could also help understand how different approaches might impact outcomes locally. However, a key drawback to this approach is that it could be more challenging to estimate the impact of EACs as a whole if projects are evaluated in notably different ways. Furthermore, evaluating the EACs separately reduces sample sizes, which is a key determinant of evaluation feasibility. Overall, we would recommend evaluating the EACs together. As well as the practical factors mentioned, this approach would be capable of evaluating Scottish Government investment as a whole, rather than evaluating a specific model of childcare. As all the EACs follow a similar theory of change, this would allow for exploration and comparison of what works best in different contexts.
  • Level of outcome measurement: Some EAC outcomes can be measured at either the household/individual level or the area level. Both have challenges. Household/individual-level data is difficult to gather directly from families, and obtaining the data sharing permissions required to use secondary datasets is also challenging. Area-level data is often only available at local authority level and is therefore unlikely to be capable of detecting an effect within specific EAC areas. Furthermore, area-level data would be more affected by the attribution issues noted above.
  • Timeframes for evaluation: At present, four EACs are already established and two further EACs in Fife and Shetland will also be funded until 2026. Beyond this, it is not currently known at what pace or scale EACs will be rolled out, or over what specific timeframe. The current evaluability assessment is therefore focused on conducting an impact evaluation of the next phase of funding of the EACs, within the next two years. This has implications for the potential options for an impact evaluation because the scale of delivery means smaller sample sizes. It is recommended that future monitoring and evaluation make use of the opportunities presented by two new EACs, including potential to capture baseline data, set up key indicators, and increase the overall sample sizes. Looking further into the future (beyond two years), there may be more options for impact evaluation, especially if there are additional EACs rolled out, which is discussed in stage 3. As evaluation of the next phase of funding of the EACs will not be able to capture longer term impacts, additional scoping may be required for any further evaluation once there is more information about plans for EACs in the longer term.

Stage 2: Assessment of data sources

In order to focus future evaluation, and to minimise the burden placed on EAC staff (to collect data) and families (to take part via surveys/interviews), we worked with the Scottish Government to select priority outcomes. This was informed by: the Scottish Government’s prioritisation of the three high-level outcomes in the School Age Childcare Delivery Framework and the short/medium term EAC outcomes considered most aligned to these; the development of causal pathways illustrating the mechanisms by which EAC inputs and activities are expected to lead to these outcomes; and the availability/feasibility of data collection.

Priority short/medium term outcomes are shown in Table 3.1 along with potential data sources for their measurement, which are discussed below. Non-priority outcomes, and those pertaining to specific EACs, are not covered in this chapter.

Table 3.1: Mapping of outcomes to data sources
Outcome category | Priority outcome | EACs to collect [current data] | Secondary data [current data] | Family surveys [potential data source] | Parent/child interviews [potential data source] | Stakeholder interviews [potential data source]
System | Improved understanding of how to deliver high quality school age childcare and continued improvement of EAC policy
Family (parent) | Parents have more time to prepare for/start/sustain/increase hours of work or study
Family (parent) | Reduced cost of living
Family (parent) | Income from employment maximised
Family (parent) | Reduced financial pressure
Family (parent) | More families engaged with support services
Family (parent) | Increased parental respite
Family (parent) | Improved parental wellbeing (fewer crisis points)
Family (child) | Children have increased social connections
Family (child) | Children have increased opportunities for learning and new experiences
Family (child) | Children have more opportunities to be physically active
Family (child) | Children have access to nutritious food
Family (child) | Improved child social, emotional and behavioural development
Family (child) | Improved child health and wellbeing

Current and potential outcome data collected by EACs

Application data

The current EACs collect some information on the financial circumstances of families, including employment status, income, and receipt of benefits, primarily via applications. This has the potential to be used for outcome measurement if application/registration processes can be streamlined so that EACs collect the same data, including baseline measures which are updated periodically (e.g. by enhancing existing feedback mechanisms). However, based on discussions with EAC staff and families, this approach poses several practical challenges:

  • There are multiple providers using different registration approaches in Clackmannanshire and Glasgow.
  • For parents not currently in employment, it can be difficult to observe and measure progression towards employment, which may be a longer-term outcome.
  • There are sensitivities around asking families for detailed income information as part of the application process (or soon after) when the relationships for engaging families in this way are at an early stage.
  • Families are not always able to provide accurate assessments of their income or may not feel comfortable doing so for fear of being ineligible for the provision or losing benefits.
  • Families may not always be referred directly from the EAC to employability programmes, making it difficult to track. EACs would be unlikely to follow families’ journeys through services. There was also a concern raised by an ACF project that it is intrusive to enquire if families take up offers of support while they are in the middle of a difficult situation.
  • EACs would also need to establish a mechanism for updating this data at regular intervals, e.g. every six months. At present, this information is sometimes collected at one timepoint only (i.e. application).

There are also some limitations and gaps in the data. In particular, data is not consistently updated over time and income is only collected in categories, which vary across EACs. For example, in Dundee, parents are asked when applying if their income is below £26,000, whereas in Inverclyde they are asked if their earnings are below £24,421 (for one child), below £26,884 (for two or more children), or more than this. In Clackmannanshire, parents are asked in a feedback survey if their household income is below £450 per week, or below £550 per week.

Feedback data

The current EACs also request feedback forms from families on an ongoing basis, which sometimes incorporate questions that relate to processes and outcomes (see Appendix C). However, multiple issues limit the usefulness of this data and are not easily resolved. These include:

  • outcomes covered (not all priority outcomes are collected by all EACs)
  • question wording (different questions are used to measure outcomes, including the use of open-ended outcome questions only in some areas)
  • timing and frequency of surveys (currently not consistently collected when families first start accessing provision or at regular intervals)
  • response rates (feedback forms are not consistently completed by families, limiting opportunities to track outcomes over time)
  • anonymity (feedback forms are often anonymous and are intended primarily to inform service design and not to track outcomes at the family level)

These issues are interlinked. For instance, since most EACs do not collect baseline measures for their feedback forms or track feedback from specific families, questions typically ask families to retrospectively record progress against outcomes. This is not a robust measure as it relies heavily on recall and self-reporting.

These issues limit the usefulness of this data as a way of measuring outcomes. However, there is potential for feedback processes to be improved. Firstly, the questions could be streamlined and made more consistent across areas. To facilitate alignment with other similar initiatives, indicators from the Scottish Government Child Poverty monitoring and evaluation framework could be used (for example, collecting income in a consistent way). EACs demonstrated a willingness to include standardised measures (such as the Warwick-Edinburgh Mental Wellbeing Scale, which would detect any changes in parental wellbeing over time) if there was clear value in doing so. Additionally, it would be relatively straightforward for an external evaluator to design a set of questions which accurately measure priority outcomes and which all EACs could include in their own individual surveys.

Secondly, baseline measures would need to be established for families: there is little value in using standardised measures, or indeed any questions asking about parents’ current status in relation to a particular outcome at a single point in time (e.g. post-intervention only), unless a baseline or a comparison group is available.

Data collection challenges

Considerations for data collection were explored in interviews conducted in December 2023 and January 2024 to inform the M&E framework, with EAC project leads and other EAC staff responsible for data collection, as well as professionals involved in running ACF projects or establishing future EACs in Fife and Shetland. Experiences of data collection were also touched on during interviews with families, EAC staff and partners in March 2024, as part of early process evaluation fieldwork. Various challenges and enablers to ongoing data collection by EACs were identified. These factors are important for understanding the feasibility of future data collection and are detailed below.

Data quality and gaps/reliance on partner providers

Due to differences in the setup of each EAC, there are some data inconsistencies and gaps. Where established partner providers have their own approaches to data collection, this has led to some gaps or data that cannot be combined. This emerged as a particular issue where EACs are working with a high number of providers. In Glasgow, existing approaches were mentioned as a possible barrier to collecting any more outcome/monitoring data than is currently collected. Some evidence from process evaluation interviews with providers in Glasgow backed this up, and there was a view that being part of the EAC had already created additional work. In addition, Glasgow have had to ask permission from providers to visit services to engage with families directly.

Changing this would require bringing partners together to agree a new process (although EAC leads expected that this could be challenging). For future partners, clear data collection requirements will need to be agreed from the outset. Since EAC services were often reliant on partners and providers to supply outcome/monitoring data, taking time to develop strong relationships with these organisations is important. There was a sense that more co-design with partners could also help, as well as well-timed data requests which gave partners enough time to plan and manage their workload. In future, Glasgow plan to incorporate permission for EAC staff to visit services for evaluation purposes into service-level agreements.

Manual data collection processes

EACs have been recording outcome/monitoring data manually, typically in Microsoft Excel. This process was thought to be time-consuming and could make it onerous to meet Scottish Government requirements. There was a view that manual data collection would not be feasible if EAC services were to significantly scale up.

There was some interest among project leads in using specialist data collection tools/systems in the future, such as Outcome Star tools which are used by some other local services in Inverclyde and Dundee.

Evaluation skills

There was some uncertainty within EACs about how to conduct certain elements of self-evaluation. In particular, how to measure system-change activities was mentioned, as well as how best to ask sensitive questions (e.g. those relating to financial circumstances), in order not to put parents/carers off answering questions or filling out the forms.

While some projects were working with external organisations providing support and guidance on this, EAC staff (and potentially partners) may benefit from additional guidance or training on data collection and evaluation. In particular, questionnaire design support may help to make EAC data collection forms more methodologically robust. Some techniques that projects had already found to be effective included “drip feeding” data collection (e.g. having the project lead visit services once or twice a month to talk to the children and parents) and depersonalised questions (e.g. using case studies: “Claire is a lone parent, what does she need?”).

Data sharing

There had been some challenges with data sharing, for example understanding the GDPR requirements when sharing data with partners (and vice versa). Some partners had been particularly hesitant to share data, or to sign data sharing agreements, due to fears of the legal implications of making a mistake. Inverclyde anticipated that getting the information that they would like to request from schools (e.g. learning needs) would be difficult due to issues around data ownership.

In Glasgow, providers’ concerns around data sharing agreements were alleviated following extensive communication and reassurance by the project lead.

Sensitivity around engaging families

There were concerns about asking families for sensitive information either as part of application processes or feedback forms. Financial information could cause particular anxiety among parents, due to potential implications for benefit entitlements. Furthermore, it was noted that families can find it difficult to provide an accurate assessment of their income.

There was also a general concern about making families feel like research participants, and about making families feel like the offer of support is conditional on providing sensitive information or taking part in feedback or co-design activities. This may lead to parents feeling obliged to share information that they would prefer not to. Indeed, for family data such as income and employment status to be used in evaluating the impact of the EACs, it would need to be updated at regular intervals, adding to this concern.

Professionals interviewed as part of this evaluation thought that building strong, trusting relationships with families helped them to feel comfortable sharing personal information with EAC services. This was noted as taking time to develop, although in some areas this was already seen as a strength, with professionals noting that families had generally been very willing to share information. Families who took part in the process evaluation were also generally content with what they had been asked to provide. However, to increase the usefulness of the data collected, as discussed above, more detailed information on income, for example, would need to be collected, and collected at the outset, before relationships have had time to develop. Providing a clear explanation of why EACs are collecting this information, and that providing it is voluntary, may also alleviate some of these issues. Engaging with specific concerns (e.g. impact on benefit entitlements) is also advisable.

Burden on families

Burden was a concern across areas. Projects were mindful of both the potential number of questions required to capture outcomes as well as the frequency with which they are issued to families. Clackmannanshire EAC noted a low response rate (below 20%) to their current feedback surveys. Some parents who took part in the process evaluation said they preferred to give feedback informally when there was a need. There was a sense that willingness to provide future feedback would depend on the length of time required and number of times asked. In Shetland, there was a worry that families in some areas have been subject to “over-consultation” in general.

Having short, focused forms should help to alleviate some of these challenges. It may also be beneficial to clearly explain to parents why regular information is needed, as some parents may not see the value in answering the same tracking questions if nothing has changed and they typically only feed back when they see a need. The mode of feedback forms may also affect how easily families can engage: the response rate to Clackmannanshire’s feedback survey increased slightly when paper completion was offered alongside online.

Surveys by an external evaluator

A survey led by an external evaluator (with support from EACs) could also contribute to the measurement of priority outcomes and address some of the challenges identified above, such as consistency across areas, lack of evaluation skills among EACs and reducing burden for EAC staff.

The optimum approach is to undertake a baseline survey, followed by repeat surveys at regular (e.g. six monthly) intervals. Ideally, surveys would contain a unique identifier, enabling the progress of individual families to be tracked. However, linking surveys to individual families using unique links creates some practical challenges. While external evaluators could create unique links, they would only be able to send them to families if the families had agreed to their contact details being shared for this purpose. Otherwise, EACs would have to administer the survey, which would be burdensome and would require them to have the software to support it. It would be especially challenging in areas with multiple providers, where EACs would have to work with each provider to create spreadsheets containing each family and their unique link before emailing links to each family.

Alternatively, families could be asked to share identifying information within open link surveys to enable matching of future survey responses. However, this is less reliable than unique links. Where there are errors in responses, it would be more challenging to match the data, and there is a higher risk of missing data.

In either case, there may also be some concerns among families about doing a non-anonymous survey, particularly given the evaluation would be on behalf of the Scottish Government. This challenge can often be mitigated through the provision of clear information and reassurances, but mitigation strategies would depend on whether the survey is administered by EACs or an external evaluator. In the latter case, evaluators can ensure participant confidentiality by reiterating that they are independent from EACs and the Scottish Government and reassuring participants that no identifiable information will be shared outside the evaluation team, and that questionnaire responses will be stored separately to personal data.

There is also the option of a survey that does not rely on baseline measures and instead asks parents for their reflections on the intended outcomes. This could either be done using a one-off survey designed and delivered by an external evaluator, as part of a formal evaluation, or on a more frequent basis, delivered by EACs (similar to how they administer their own feedback forms). However, given the limitations of the data that would be collected from this type of survey, we do not feel that the benefits gained would outweigh the cost and burden involved in delivering it. This is further supported by the likely small sample size that would be achieved due to the relatively small number of families engaged and the current low response rates to surveys (such as in Clackmannanshire).

For these reasons, the introduction of surveys administered by an external evaluator cannot be straightforwardly recommended: decisions about the content, timing, and approach to administration all require further consultation and buy-in among EACs and providers. In particular, the feasibility of data sharing to enable direct contact from evaluators (using unique links) and the willingness of EACs to change their registration/feedback processes warrant further discussion.

We would also advise against any surveys of children. The issues noted above in relation to parent surveys stand, and are amplified for children, who would be more burdensome to survey because EAC staff would need to be more heavily involved in administration. Staff involvement also raises concerns about honesty of responses, as children may feel they have to respond positively. The burden on children themselves also needs to be considered.

A further drawback of surveys more generally relates to the challenge in attributing any improvements in outcomes to the EACs specifically. As discussed in more detail in stage 3, attribution of impact is a key consideration, particularly when families may be involved with multiple services.

Qualitative data collection by an external evaluator

Qualitative interviews provide an alternative means of measuring progress on the priority outcomes, and we recommend asking families (both parents and children) and professionals directly for their views and experiences. This will supplement the monitoring data by enabling a deeper understanding of the impact of EACs on outcomes. Furthermore, in-depth interviews also allow for a degree of attribution, as researchers can probe whether and how participants feel any changes in priority outcomes have been brought about by engagement with EACs.

Qualitative interviews would be best undertaken by an external evaluator with expertise in this method and independence from participants, as was the case in the process evaluation qualitative fieldwork conducted as part of this work. Indeed, our recommended approach would mirror that fieldwork, with the exception that the interviews would focus on outcomes rather than process.

Interviews would be structured using a discussion guide designed by an external evaluator in collaboration with the Scottish Government. Parents would be asked about outcomes relating to themselves as well as their child/ren while interviews with professionals would explore their views on outcomes predominantly at the system level. Professionals can also provide valuable insights about family outcomes given they can take into account multiple families, though this comes with the caveat that their views are indirect accounts and could be prone to bias.

Family interviews – We recommend longitudinal face-to-face interviews with a sample of families from each EAC, including, as far as possible, a spread in terms of eligibility criteria. Conducting interviews at two points in time (ideally including at the start of their engagement with EAC services) facilitates an understanding of what has changed over time, specifically priority outcomes such as progression towards employment, parental wellbeing, and child outcomes.

As in the early process evaluation, we would suggest that interviews are conducted at EAC settings (or another convenient location), with parents and children interviewed separately for convenience and to enable them to talk openly. The face-to-face approach is helpful in establishing a rapport, particularly with children.

In line with general concerns about placing burden on families as part of evaluation activities, it would be important to take a flexible approach to interviews in terms of mode, timing and location; for example, offering to split interviews into shorter conversations and meeting families at a time and location convenient for them.

During the early process evaluation, families were recruited via EAC providers as there was no permission in place to share family contact details for this purpose. This approach adds to the burden on providers as well as having the potential for selection bias. It is worth exploring the possibility of including consent for contact for evaluation as part of application forms, allowing an external evaluator to make contact with families directly. This would also contribute to alleviating concerns about burden, by avoiding contacting families who are not interested in taking part. However, there may be families who would feel more comfortable participating if approached by EAC staff, with whom they have an existing relationship. As such, the preferred approach would be for EACs to collect consent for contact at the application stage and share key information with an evaluator. In turn, the evaluator would select the sample and ask EAC staff to make the first contact to introduce the evaluation before handing over to the evaluator. If it is not possible to gain consent for contact during application, an alternative could involve EACs providing anonymised information with key characteristics that an evaluator could use to select a diverse sample.

As noted above, there is the potential for secondary data to be used to assess the extent to which EACs are reaching eligible families in their area. If application forms requested consent for evaluation as described, there is the further option for an external evaluator to make contact with families who applied or registered but chose not to take up the provision. Alternatively, evaluators could work with EACs and other local organisations to identify eligible families who have not used EAC services. Following consent procedures, evaluators could invite families to take part in interviews to explore reasons why they have chosen not to take up the service.

Professional interviews – While fewer issues are anticipated in relation to interviewing professionals, it will be important to minimise burden, as EAC staff and the wider sector are likely to be particularly busy with other commitments. Therefore, as in the process evaluation, face-to-face and virtual options would be required, with flexibility offered. Again, the preferred option would be for EAC leads to facilitate recruitment. However, to minimise selection bias, external evaluators could ask EAC leads for a list of all partners and their roles, with sample selection then undertaken by the external evaluator.

Secondary administrative datasets

In addition to data collected by EACs or by an external evaluator, existing administrative datasets (secondary data) have the potential to contribute to the measurement of outcomes. Such data could also help EACs to understand the extent to which they are reaching eligible families. The viability of potential datasets has been explored, as discussed below.

Poverty datasets

Given the policy focus on child poverty, a number of datasets are available for reporting, such as the Family Resources Survey. However, their usefulness for evaluating EACs is limited by the level at which they report. Most only go down to local authority level, which is too large an area from which to draw any inference about the impact of EACs, which operate in small areas within local authorities. Examples of such datasets include the Households Below Average Income and Children in Low Income Families publications, and Scottish Household Survey data.

Administrative data on benefits and employment income

A further option is to submit data access requests to HMRC, Social Security Scotland and DWP for administrative data. This would facilitate analysis of any changes in families’ income from employment and benefits from before to after their engagement with the EAC. This would have a number of advantages:

  • providing a more accurate assessment of income/benefit receipt than would be possible through self-reporting
  • minimising the burden on EAC/provider staff and families by not having to collect it
  • positively impacting family engagement in the EAC by removing the need to ask for sensitive financial information
  • facilitating the possibility of a QED, whereby outcomes for EAC families are compared with outcomes for comparison families elsewhere in Scotland.

However, this depends on permission for access to this data being granted. Access requests are time-consuming and would require significant input from EAC leads, the Scottish Government and the contracted evaluator. There is no guarantee the application would be successful and, even if so, the timelines involved may be prohibitive in relation to evaluation timelines.

Another option for accessing datasets may be to use the Low Income Family Tracker (LIFT) Platform, which was noted in discussion with the Fife EAC team. LIFT allows local authorities to combine administrative datasets to track outcomes, including at the individual household level. This may offer a useful tool for EACs, though further scoping is required to understand whether and how an evaluator could access the datasets.

Administrative datasets, if accessible, could also be used by EACs to compare their reach with the number and profile of eligible families in their area.

Hospital admissions and GP contacts

As noted above, increased wellbeing is a priority outcome. With this in mind, we have considered the potential usefulness of administrative health data to measure this, for example data on hospital admissions or GP contacts as evidence of potential positive impacts on participants’ health. However, it is unclear how any impact of EACs on wellbeing would present in such data: increased GP use may be a positive sign of people feeling more confident in accessing support, while decreased GP use could mean their wellbeing has improved and they have less need to see their GP.

The requirement for NHS ethical approval is a further challenge. The application process is extensive and there is no guarantee of it being approved. Given these considerations, we do not recommend the use of secondary health data.

Wellbeing datasets

There are also datasets that capture wellbeing indicators, for example the OECD Regional Well-Being dataset and the Annual Population Survey. However, the use of such datasets is once again limited by the fact that the data is not granular enough to be analysed at a local EAC level.

Stage 3: Assessment of impact evaluation designs

The HM Treasury Magenta Book (2020) provides a comprehensive overview of evaluation. In line with its categorisation, impact evaluation can broadly be split into 1) theory-based impact evaluation methods and 2) quantitative impact evaluation methods, such as experimental or quasi-experimental designs.

Overall, the findings of this assessment suggest that a theory-based impact evaluation is most feasible and appropriate for shorter-term evaluation (up to two years). Table 4.1 provides a summary of this assessment and further detail is included in the following sections.

Other evaluation approaches could also be suitable in the short-term, as new EACs are identified, though these would not assess impact and are therefore not discussed in detail here. For example, developmental evaluation could inform how local EAC projects are developed. This could provide insights where areas are trying different approaches and there is uncertainty about what works best. This would also provide an opportunity to test if the local-level EAC theory of change is suitable for new areas, or to co-design monitoring and evaluation metrics and tools.

Table 4.1: Summary assessment of impact evaluation approaches
Approach: Theory-based impact evaluation approaches
Description: These approaches focus on explaining how an intervention leads to its intended outcomes, examining causal mechanisms and alternative explanations to strengthen understanding of the intervention's impact and its generalisability to other contexts.
Challenges and limitations:
  • Unable to provide a quantitative causal effect
  • Limited in ability to estimate the magnitude of impact, i.e. the extent to which the programme contributed to the outcomes
  • Can often rely more heavily on data prone to bias
  • Can be less insightful if causal pathways are highly non-linear
Assessment: Recommended; suitable for both short- and longer-term evaluation

Approach: Experimental randomised controlled trial (RCT)
Description: RCT designs randomly allocate an intervention across the target population, i.e. eligible families or areas randomised to receive EAC investment. If appropriately conducted, RCTs remove bias due to selection on unobservable characteristics and can estimate a quantitative causal effect.
Challenges and limitations:
  • Programme already rolled out in areas, precluding randomisation for existing EACs
  • Ethical concerns given the strong evidence base on the benefits of childcare
  • Needs consistency in the type of intervention/population and in outcome metrics/data collection tools
  • Sample sizes unlikely to be sufficient to detect an effect
Assessment: Discounted

Approach: Quasi-experimental designs (QED)
Description: QEDs can also estimate a quantitative causal effect, applying statistical techniques to enable comparison between a treatment group and a comparison group.
Challenges and limitations:
  • Needs a high degree of consistency across outcome metrics and data collection tools
  • Current sample sizes unlikely to be sufficient to detect an effect
  • Challenges limit the use of monitoring and secondary datasets (e.g. data sharing), though these might be feasible to overcome over time
  • Likely to face significant challenges in identifying a suitable comparator, due to how EACs are selected (i.e. areas of particularly high deprivation) and the potential for further roll-out
Assessment: Discounted in the short term; potentially feasible in the longer term (beyond 2 years)

Theory-based impact approaches

Theory-based impact evaluation approaches focus on understanding and explaining how an intervention leads to its intended outcomes. They rely heavily on the theory of change that outlines the causal chain from the intervention’s activities to its desired effects, taking into account the underlying assumptions and contextual factors. These approaches aim to go beyond simply measuring whether an intervention worked, to explaining why and how it worked. They involve testing the linkages and assumptions of the theory of change using various data collection and analysis methods, identifying where the theory of change is supported by evidence and whether alternative explanations can be ruled out.

Theory-based approaches tend to work well for handling complex programmes within complex contexts. They provide a framework and often a systematic approach to test the theory of change. They look at whether outcomes are observed, to what extent the programme contributed to this change and the relative contribution of other programmes that influence similar outcomes. Theory-based methods are also well-suited to examining how this varies across contexts and people i.e. what works, for whom, and under what circumstances? Another feature is that a theory-based approach can be complementary to a quantitative counterfactual impact evaluation, should this be feasible in the longer term.

Multiple theory-based impact methods were considered, yielding two key recommended approaches.

1. Realist evaluation: Realist evaluation seeks to understand how an intervention works by identifying the underlying causal mechanisms and how they operate in different contexts. This approach goes beyond simply measuring outcomes and aims to understand "What works for whom, in what circumstances, in what respects, and how?" This approach would seek to gain a deeper understanding of the complex interplay between context, mechanisms, and outcomes (CMO) for EACs. Realist evaluation is designed to test these CMO configurations.

2. Contribution analysis: Contribution analysis follows a step-by-step process that seeks to develop and test the theory of change. It does this by articulating a set of contribution claims, which could be developed using the causal pathways included in Chapter 2 as a starting point. The approach also has a specific focus on identifying and testing whether any other factors are influencing the same outcomes of interest, called alternative explanations. As evidence is gathered, it can be mapped as either supporting or refuting the theory of change – specifically the contribution claims – or alternative explanations.

An added benefit of taking a contribution analysis approach is that a similar design is being applied to the Child Poverty Pathfinders. This presents a potential opportunity for synthesis of the evaluation findings, which can be more challenging where different (theory-based) methods are employed.

Data collection

Either approach would need to draw on both quantitative and qualitative data and evidence, which would be triangulated, including:

  • Monitoring data collected by EACs: This is key to track both the levels and types of childcare and family support provided as well as take-up, which provides critical information about whether there are changes in the number of target families accessing school age childcare.

Ahead of an impact evaluation, it is strongly recommended that a core set of questions is developed and asked of families when they start accessing childcare, building on and refining the application forms already in use. EACs should be supported to improve their monitoring and evaluation processes so that they are better able to support an impact evaluation. This should include consistent ways of gathering:

  • Family and child characteristics e.g. child age, disability and additional support needs
  • Parental education, training, or employment status, including differentiating part-time and full-time work
  • Parental/household income (a banded categorisation is preferable to not collecting this at all)
  • Hours of childcare and/or types of family support accessed
  • Other support being accessed (where feasible).

  • Outcome measurement: EACs can provide vital information about the provision and take-up of childcare, giving insights into whether there has been a change in access to childcare over time within their local community and among families.

As discussed in stage 2, there will likely be challenges for EACs in collecting other outcome data. However, where possible, it would be ideal for EACs to record any changes to the information above at regular intervals, e.g. every 6 to 12 months.

Other outcomes, such as parental and child wellbeing, are unlikely to be captured through EACs. A future evaluation could consider primary data collection including a survey of families, but there are a number of risks associated with this – namely reliance on EACs as gatekeepers (which increases burden on staff) and low response rates.

  • Longitudinal interviews / focus groups with EAC and partner staff: These would actively explore the causal pathways / contribution claims and other contextual factors influencing outcomes. A minimum of five interviews at two timepoints per EAC is recommended.
  • Longitudinal case studies with families: Longitudinal data collection is particularly valuable to understand change over time. Case studies that follow families over an extended period of time (e.g. 12 months) would provide insights about how they are using childcare and family support, and whether this has supported any changes for their family e.g. parental employment, financial pressure, parental and child wellbeing.

Ethical considerations

Compared with some other impact evaluation approaches (e.g. RCTs), theory-based evaluation methods tend to have fewer ethical concerns. Standard considerations will apply, such as ensuring informed consent, avoiding harm for participants and researchers, minimising burden, and ensuring confidentiality.

Limitations and risks

There are a number of limitations to theory-based evaluation. First, unlike RCTs and QEDs, it cannot provide a quantitative causal effect, meaning it is often considered a less robust impact methodology. Instead, it enables the development of an evidenced and logical line of reasoning, which gives some level of confidence in the contribution of EACs to observed outcomes. Second, it can be limited in its ability to estimate the magnitude of a programme’s impact on outcomes due to the lack of a counterfactual; in more complex systems, it becomes increasingly difficult to unpick the role of the programme relative to other programmes. Third, it can be heavily reliant on qualitative data. While this provides rich data that can also capture unintended consequences, qualitative data can be subject to higher levels of bias. Finally, some theory-based methods may lack insight when causal pathways are highly non-linear. For example, contribution analysis assumes a degree of linearity, with activities activating mechanisms which then result in outcomes.

Quantitative counterfactual impact designs

A counterfactual impact evaluation would seek to establish the causal effect of the programme relative to a scenario where the investment in EACs did not happen. To proxy the counterfactual scenario, a comparison group of areas would need to be identified that can be considered equivalent in relevant respects to the EAC areas.

Randomised controlled trials

An RCT was discounted for the following reasons:

  • The programme has already been rolled out so it would not be feasible to randomise the existing EACs.
  • While it might be feasible to randomise new areas eligible to be selected as EACs (i.e. some would become EACs and some would not), this raises ethical concerns given a strong evidence base on the benefits of childcare.
  • Furthermore, randomising areas would constitute a cluster RCT. The ability to detect an effect is related to the number of clusters. As such, the programme would need to launch a significant number of EACs to avoid the trial being underpowered.
  • While randomising at the family level would help with sample and power calculations, it would raise more significant ethical concerns given that families in need of childcare, which would be locally available, would be prevented from accessing it.
  • RCTs require more standardised implementation and outcome measurement than has been deemed feasible here.
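
The point above that the ability to detect an effect depends on the number of clusters can be illustrated with the standard design-effect formula for cluster-randomised trials. The following sketch is illustrative only: the cluster size of 50 families and intra-cluster correlation (ICC) of 0.05 are assumed values, not programme figures.

```python
def design_effect(cluster_size: int, icc: float) -> float:
    """Design effect (DEFF) for a cluster-randomised trial:
    DEFF = 1 + (m - 1) * ICC, where m is the average cluster size."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(total_n: int, cluster_size: int, icc: float) -> float:
    """Number of independent observations the clustered sample is worth."""
    return total_n / design_effect(cluster_size, icc)

# Illustrative assumptions: 20 areas (clusters) of 50 families each, ICC = 0.05.
deff = design_effect(50, 0.05)                      # 1 + 49 * 0.05 = 3.45
n_eff = effective_sample_size(20 * 50, 50, 0.05)
print(f"DEFF = {deff:.2f}, effective n = {n_eff:.0f}")  # DEFF = 3.45, effective n = 290
```

Under these assumptions, 1,000 families spread across 20 clusters provide the statistical information of only around 290 independent families, which is why a trial randomising a small number of areas would be underpowered.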

Quasi-experimental designs

A QED is not deemed feasible in the short term for the following reasons:

  • Primary data collection to measure outcomes (e.g. surveys) with families in EACs as well as families in non-EAC areas would not be feasible. This means a QED would rely on secondary administrative datasets.
  • Area-level secondary datasets (e.g. child poverty levels) are not available at a sufficiently local level (e.g. covering the whole of Dundee city as opposed to the Linlathen area). Using higher-level data would dilute the effect of the programme, making it more difficult to detect.
  • Individual-level secondary datasets (e.g. employment, income and benefits) are difficult to access and require lengthy data sharing processes. As part of a QED feasibility study for the Child Poverty Pathfinders, based on conversations with the SG UK Data Sharing team, the average expected timeline to receive any data is around 18 months. Furthermore, it would be necessary to identify families who are accessing EAC-funded childcare to define the treatment group. This would require EACs to share personal data with an evaluator, who in turn would seek to link it to secondary datasets. This raises further challenges, for example ensuring monitoring data is of sufficient quality to enable linkage, and obtaining consent for this.
  • As of March 2024, the four EACs supported a total of 386 families, encompassing 514 children. Assuming similar numbers going forward, the sample size is unlikely to be sufficiently large to detect an effect, especially given that outcome data will likely not be available for all families. To estimate sample sizes for a well-powered study, power calculations require the size of the population of interest and the size of the effect expected for the primary outcomes (e.g. employment). There are challenges in estimating this at present due to missing information on the number of eligible families/parents. However, taking employment as an illustrative example, a sample size of at least 800 would be required to detect an effect of 0.1, which represents a relatively large effect. A more modest effect size of 0.05 would require a sample size of around 3,000. Another option would be to conduct a pilot study to test the QED approach, which could provide valuable insights to inform a larger QED study; however, a pilot would not usually be expected to find statistically significant results.
  • Identifying an appropriate comparison group could also prove challenging, for example if the programme is rolled out nationally while the evaluation is ongoing. Two options include:
    • eligible families within EAC areas that do not use EAC provision, though there may be unobservable differences that drive them not to use EAC provision.
    • eligible families in non-EAC areas that have similar characteristics to EACs (i.e. deprivation levels, existing school aged childcare provision), though it would be important that these areas are not selected as EAC areas during the evaluation timeframes.
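
The sample sizes quoted above are consistent with a standard two-proportion power calculation. The following sketch assumes a 50% baseline employment rate, a 5% two-sided significance level and 80% power; these are illustrative assumptions rather than figures from the report.

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group for a two-sided test of two proportions,
    using the standard normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    pbar = (p1 + p2) / 2                           # pooled proportion
    numerator = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a 0.1 effect (10 percentage point change) from an assumed 50% baseline:
print(2 * n_per_group(0.50, 0.60))  # 776 in total, roughly the 800 quoted
# Detecting a more modest 0.05 effect (5 percentage point change):
print(2 * n_per_group(0.50, 0.55))  # 3130 in total, roughly the 3,000 quoted
```

This illustrates why, with only 386 participating families to date, a QED would be substantially underpowered for effects of this size.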

A QED may be feasible in the longer term if significant efforts were made to:

  • Secure access to secondary datasets (including building this into evaluation timeframes).
  • Improve the quality of monitoring within EACs and ensure data sharing is feasible to link EAC monitoring data and secondary datasets.
  • Increase the sample sizes by scaling up the programme.
  • Further scope the feasibility of identifying an appropriate counterfactual.

Contact

Email: socialresearch@gov.scot
