Early Adopter Communities: evaluability assessment
This report presents the findings of an evaluability assessment for the school age childcare Early Adopter Communities. This includes considerations and recommendations for process, impact, and economic evaluations.
4. Process evaluation
Process evaluation is critical when evaluating complex interventions because it helps assess how an intervention is implemented in practice and explain why an intervention works (or not). It should therefore be included in any future evaluation and build on the process evaluation undertaken as part of this early evaluation work. This chapter outlines the key factors that should be considered by a future process evaluation.
Process evaluation aims
Future process evaluation would build on the process evaluation undertaken as part of this early evaluation work. This drew on the theories of change discussed in Chapter 2 and included in Appendices A and B, focusing on the ‘activities’ in particular. The process evaluation covered the following key topics: design and set-up of provision, direct delivery for families, and systems-level delivery.
While future process evaluation work would place more emphasis on exploring system change outcomes, it may also be worth retaining a focus on direct delivery, particularly if new EACs (Fife and Shetland) are included. This will also mean that the impact evaluation is accompanied by up-to-date process data. Evaluators should ensure that this data builds on and updates, rather than repeats, the data gathered in the early process evaluation.
A future process evaluation should also focus on capturing learning; for example, the EAC projects can provide learning on how to incentivise the market to set up school age childcare services.
Proposed evaluation questions
System change
To align with other initiatives to improve child poverty outcomes, process evaluation questions should correspond, as far as possible, with those included in the Scottish Government Child Poverty Monitoring and Evaluation Framework. For example, a future evaluation should look to answer:
1. How successfully have activities been implemented to achieve system change?
a. What are the enablers that have facilitated system change and the challenges that have been experienced, and how have challenges been overcome?
b. What contextual factors have influenced the implementation and how?
c. What are the key differences across EACs?
d. What lessons for successful implementation can be drawn for other areas and for wider policy development?
2. To what extent and how have system-level outcomes been achieved?
a. What contextual factors have influenced this?
b. What has prevented system outcomes from being achieved?
3. How have service users, service providers and wider stakeholders experienced system change? How has system change influenced outcomes (as measured by the impact evaluation)?
Direct delivery
Beyond looking at system change, a future evaluation should also include process evaluation questions that examine direct delivery with families. This includes:
4. How are EACs implementing activities for direct delivery to families?
a. What kind of/how much childcare is provided?
b. What kind of/how much wider family support is provided?
c. To what extent are direct delivery activities being implemented as intended?
d. Were these delivered on budget?
e. To what extent were EACs set up and delivering in the targeted timeframes?
5. How effective are EAC processes in reaching and engaging the intended priority groups?
a. Have target levels of take-up been met?
b. How is provision promoted and communicated?
c. How are families referred into provision?
d. How accessible is provision for target families (including application/registration processes, locations of childcare settings)?
e. To what extent is there drop-off between sign-up and attendance, and what factors influence this?
f. How has this differed across the six priority family groups at risk of child poverty?
6. To what extent does provision meet the needs of local families?
a. How has provision been co-designed with families and/or partners/stakeholders?
b. What are the views and experiences of families using the provision (e.g. levels of satisfaction)?
7. What are the enablers that have facilitated direct delivery and the challenges that have been experienced, and how have challenges been overcome?
a. What contextual factors have influenced implementation and how?
8. What are the key differences across EACs?
Key areas for exploration
In line with guidance on process evaluations[1], the process evaluation of EACs should aim to cover the following topics alongside the theory-based evaluation.
Implementation: what is implemented, and how?
Process evaluations explore whether the programme’s activities have been implemented as intended and have resulted in the desired outputs. Understanding whether a programme has been delivered as intended is critical to interpreting impact evaluation findings. For example, if the process evaluation establishes that the programme was implemented with fidelity (as discussed in Chapter 3), then we can be more confident in attributing the observed outcomes, or their absence, to the programme and its underpinning theory of change. Alternatively, if the programme was not delivered as intended, it is not feasible to draw firm conclusions about the programme theory of change.
Mechanisms of impact: how does the intervention produce change?
It is also important for process evaluations to explore the mechanisms through which EACs bring about change. This is necessary to understand how outcomes were achieved and how this might be replicated in similar programmes. In doing so, the process evaluation – along with the theory-based evaluation – would test the assumptions made in the causal pathways.
Context: how does context affect implementation and outcomes?
Process evaluations also need to consider the context in which the programme is delivered, both at the community level and in individual family contexts. This includes any external factors that may act as a barrier or facilitator to the implementation of the programme. As noted above, the EACs are all operating in different, complex systems, with implementation varying across areas. Understanding the context (via interviews with professionals and families and a review of project documentation) is therefore vital in interpreting the findings and making any generalisations about how EACs may work beyond this context.
Timeframes
Although an early process evaluation has recently been conducted, there is scope to conduct further process evaluation work in conjunction with the impact evaluation. This would go beyond the early process work, which focused on design, set-up and implementation, and further consider how, and to what extent, EACs are embedding within systems. This would also support further impact evaluation on the extent to which system-level outcomes are being achieved.
Data collection methods
While the process evaluation questions would have a different focus to the impact evaluation questions, the methods used would be broadly similar to those in a theory-based impact evaluation. They are likely to include:
- analysis of monitoring data and documentation.
- qualitative interviews with professionals: staff involved in the delivery of EACs and wider stakeholders, including Scottish Government representatives.
- qualitative interviews with families using EAC services.
The combination of the views of professionals and families will provide a more rounded perspective of how processes are working.
EAC monitoring data
Approaches to monitoring varied between EAC areas depending on: the format of provision; the number of partner providers involved in delivery; time and resource available among EAC staff and providers; and input from stakeholders or partners (for example, Inverclyde did initial outcomes mapping work with Public Health Scotland, which informed their data monitoring strategy). It is also worth noting that the data collection challenges outlined in Chapter 3, in Table 3.2, all apply to collecting monitoring data as well as outcomes data.
In addition to meeting Scottish Government requirements, there were also cases where data from the EACs was being used to inform wider strategies, plans, and understanding of their local context. For example:
- In Inverclyde, EAC data feeds into their local child poverty reports, local employability partnership, financial inclusion strategy, community learning and development three-year plan, and their community plan.
- In Clackmannanshire, data feeds into the local authority KPIs which track progress on their Children’s Services Plan, and reporting on the Family Wellbeing Partnership.
- In Dundee, data is fed into various reports (such as child poverty reports), and they are working with the primary school in which the after school club operates to look at school attendance and other KPIs.
Assessment of monitoring data
EAC monitoring data can be split into three broad categories: family data; engagement and reach data; and feedback data. This data is not collected consistently across the EACs, and some gaps have been identified (see Table 4.1).
Table 4.1: Monitoring data collected by each EAC

| Data type | Clackmannanshire | Dundee | Glasgow | Inverclyde |
|---|---|---|---|---|
| Family data | ✓ | ✓ | ✓ | ✓ |
| Engagement/reach data | ✓ (some gaps) | ✓ | ✓ (some gaps) | ✓ |
| Feedback data | ✓ | ✓ | ✓ | ✓ |
More detail on the data collected is summarised below. For a breakdown of data collected in each EAC, see Appendix D. The forms used to collect these types of data can be found in Appendices C and E.
Family data
Demographic, characteristic and financial family data is collected by EACs, primarily via application forms. This is also how EACs collect information on the number of families that fall into the priority groups at risk of child poverty.
It is worth noting that detailed demographic data required to identify marginalised groups, such as those relating to specific ethnic groups, gender, or sexuality, is not typically or consistently collected across the EACs. These types of questions could be added to application forms, but EACs were mindful of not making forms too long. The sensitive nature of these questions may also be off-putting for parents. Therefore, it would be important to emphasise that providing these details would be optional.
Currently, there are limitations in linking demographic data on families with outcomes data because outcomes data is often collected via anonymous feedback forms. This limits opportunities for exploring how the achievement of outcomes varies depending on family characteristics. In future, it is recommended that evaluators work with EACs to link family data and outcomes data.
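If a pseudonymous family identifier were recorded on both application forms and feedback forms, this linkage would reduce to a simple dataset join. The sketch below, written in Python with pandas, illustrates the idea under that assumption; the field names (family_id, priority_group, wellbeing_score) and all values are hypothetical, not drawn from the EACs' actual forms.

```python
import pandas as pd

# Hypothetical application-form extract: one row per family,
# keyed by a pseudonymous ID rather than any identifying detail.
family = pd.DataFrame({
    "family_id": ["F001", "F002", "F003"],
    "priority_group": ["lone parent", "larger family", "disabled household member"],
})

# Hypothetical feedback/outcomes extract carrying the same pseudonymous ID.
outcomes = pd.DataFrame({
    "family_id": ["F001", "F001", "F003"],
    "wellbeing_score": [3, 4, 5],
})

# Link outcomes to family characteristics; a left join keeps families
# with no feedback visible, which also surfaces response gaps.
linked = family.merge(outcomes, on="family_id", how="left")

# Example subgroup analysis: average outcome score by priority group.
print(linked.groupby("priority_group")["wellbeing_score"].mean())
```

A pseudonymous ID of this kind would need to be assigned at application and carried through to feedback forms, replacing fully anonymous collection with pseudonymised collection; this has data protection implications that EACs would need to work through.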
Engagement and reach data
Data collected includes attendance (including breakdowns by provider and by priority family group), number of hours of provision delivered, number of families included in initial scoping/co-design, number of EAC applications received, number of referrals to family support/other services, and capacity of providers. Frequency of collecting these measures varies and is detailed in Appendix D. Once more, there are inconsistencies across EACs in the collection of this data, with the main gaps described below. In addition to the routine collection of employment status data, collecting data on referrals to employability or family support services could also feed into the measurement of related outcomes.
The following gaps exist within the collection of engagement and reach data:
- Attendance data is a key measure (part of three out of four identified causal pathways) and notably, all EACs are able to provide at least an indication of how many children are attending EAC-funded childcare provision. However, due to the number of EAC providers, and in an effort to avoid additional burden for providers, Glasgow does not collect comprehensive attendance figures for children using an EAC-funded place. They collect a weekly figure from each provider showing the total number of children using an EAC-funded place each week, but this cannot be broken down further (e.g. by priority group).
The most robust approach to monitoring would be to introduce a process in Glasgow whereby providers begin to share attendance, or at least share a breakdown of attendance by child poverty target group (to align with the other EACs). However, considering potential burden and practical issues in doing this, an alternative approach is to continue to use EAC application data as a proxy measure to indicate the different types of families benefiting from and using EAC services. Glasgow providers generally felt attendance was high (and families who do not regularly attend will lose their funded place). Therefore, application data should be a reasonable indication of families attending EAC services.
- The number of hours of childcare used by families is not consistently collected by all EACs at present. Ideally, this would be collected for future monitoring and evaluation to explore how the amount of childcare (i.e. dosage) influences the achievement of outcomes; a simple dosage analysis of this kind is sketched after this list.
- Application and referral processes are slightly different in each EAC, which can make it difficult to collect comparable information. For example, in Clackmannanshire, families are not always required to complete a formal application process. Some providers rely on families or referrer organisations declaring eligibility. In these cases, they do not fully capture family profile information when families first start using services, although this can often be backfilled using information shared via routine family forms.
When EAC staff receive referrals directly (e.g. in Dundee) this information is recorded. However, when there are a large number of providers (e.g. in Clackmannanshire), differing processes and systems make this a challenge to aggregate. Changing this would require getting all partners to come together and agree a new process, which could be difficult and time-consuming.
When parents self-refer (the main mode of application in Glasgow and Inverclyde), it is not currently known if they have been supported or signposted to do so by another organisation. To try and better understand this, Glasgow were considering adding a question to the EAC application form about how families heard about the project. This could be considered more generally across the EACs, although it should be noted that this would be based on self-declaration and would also not capture unsuccessful referrals.
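To make the dosage point above concrete, the following sketch shows how per-family hours could be totalled and set against an outcome measure, assuming attendance records and outcomes can be linked by the same kind of pseudonymous ID discussed earlier. All field names and figures are hypothetical.

```python
import pandas as pd

# Hypothetical attendance log: one row per session attended.
attendance = pd.DataFrame({
    "family_id": ["F001", "F001", "F002", "F003", "F003", "F003"],
    "hours": [4, 4, 2, 4, 4, 4],
})

# Hypothetical outcome scores collected at follow-up.
outcomes = pd.DataFrame({
    "family_id": ["F001", "F002", "F003"],
    "outcome_score": [3, 2, 5],
})

# Total hours of childcare used per family (the "dosage").
dosage = attendance.groupby("family_id", as_index=False)["hours"].sum()

# Join dosage to outcomes and inspect the association.
merged = dosage.merge(outcomes, on="family_id")
print(merged[["hours", "outcome_score"]].corr())
```

With real data, a correlation alone would not establish that dosage drives outcomes, but it would flag whether the pattern is consistent with the causal pathways in the theory of change.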
Feedback data collected from families
EACs currently ask families for feedback on their provision, covering aspects such as what they like/dislike about it and how it could be improved. Feedback is collected via parents’ surveys and discussions with children. Frequency of data collection varies by EAC.
The following gaps exist in relation to feedback data collection:
- Feedback data from families on project processes is not collected in a consistent way across projects, either in terms of questions asked or timings. In Glasgow, where there are many different service providers involved in the EAC, processes differ between them, and routine service-level feedback is not currently shared with the EAC team directly.
Adding standardised process questions to feedback forms that are already in use could be considered, as could standardising when this feedback is collected. However, the additional burden on providers would need to be accounted for.
Systems-level outputs
These outputs do not tend to be actively monitored by EACs. However, discussions with EAC project leads suggest this information (e.g. the number of partners they work with) would be accessible for reporting as part of an evaluation. Equally, some systems-level processes (e.g. aligning EAC provision with other local initiatives) are better suited to reflective, qualitative discussion (discussed below).
Cost data
Available service cost data includes information on childcare providers (names, area and local authority), term-time and holiday-time capacity by provider, total project funding allocated by provider (term time and holiday time), and a breakdown of hours provided per day. While the approach to gathering cost data was not explored with EACs, it is recommended that cost data is routinely collected. In addition, a breakdown of spend by each childcare provider is recommended, for example covering spend on physical activities/opportunities for children, meals, or specialist support.
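Fields like these would support simple unit-cost measures for the economic evaluation. The sketch below derives an illustrative cost per childcare hour by provider; the provider labels, figures and field names are hypothetical, not actual EAC data.

```python
import pandas as pd

# Hypothetical provider-level cost data; all names and figures are illustrative.
providers = pd.DataFrame({
    "provider": ["Provider A", "Provider B"],
    "funding_allocated": [50_000, 30_000],  # total EAC funding (GBP)
    "hours_delivered": [2_500, 1_200],      # childcare hours delivered
})

# A basic unit cost: funding per childcare hour delivered, by provider.
providers["cost_per_hour"] = (
    providers["funding_allocated"] / providers["hours_delivered"]
)
print(providers)
```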
The evaluability assessment has also identified gaps in programme costs collection. In the absence of a business case or programme documentation, it has not been possible to understand overall programme costs (such as staff time spent on developing the programme and the application process), administrative costs, or implementation costs (such as promotional materials).
Contact
Email: socialresearch@gov.scot