Building standards - verification service: customer experience evaluation - future model
This research identified and proposed a preferred model which the Scottish Government (Building Standards Division) could use to deliver the national customer survey for building standards.
3. Options for a future model
3.1 Taking into account the considerations detailed in the previous chapter and the findings from the first phase of the research, three potential future models for measuring the customer experience were developed for onward discussion and refinement. This included the possibility of mixing and matching elements from these models to form an alternative variant, and the outline models were intended to be viewed as preliminary and very much open to refinement throughout the second phase of the research. The three models identified can be summarised as:
1. Continuing with a similar approach to previous years with minor adjustments
2. Quarterly monitoring with automated data analysis and reporting
3. Rolling survey (open year-round) with star ratings
Option 1: Continuing with a similar approach to previous years with minor adjustments
Overview of approach
Proposal
This option is to continue with the national customer satisfaction survey in a similar way to previous years, but with some minor alterations to address the concerns and issues raised. The suggested changes for future editions are noted herein (see Appendix).
Frequency
Annually, but in order to reduce the time lag seen in previous years, it is suggested that evaluation of the customer experience begins as soon as possible after 31st March, the cut-off point for customers to be in scope of the survey.
Question set
The proposed national survey questionnaire has been shortened from previous years and will allow historic benchmarking and comparisons. Questions perceived as no longer relevant have been tentatively removed and options around virtual inspections and meetings have been inserted.
Contacting customers
Local authority verifiers would collate customer contact details for the financial year in scope, as before. The survey link would be issued by email (with reminders) and the exercise promoted widely on social media and through other appropriate channels.
Reporting and presentation
The main, national report would retain its core structure and layout with all key findings as before. At a local level, 32 local authority verifier reports would be produced, and these would be significantly streamlined to a shorter (2-3 page) document providing “at-a-glance” results of key findings and metrics only. Open-ended comments would still be provided in Excel format. Consortium reports would be discontinued.
Administration
It is proposed that this option is administered overall by an independent third party, working closely with the Building Standards Division.
Timelines
The research should start earlier in the year, ideally running from April to August rather than August to December as before (see ’Frequency’ above). The period for which the survey is open to responses can remain at one month.
Views from the BSD Review Group and stakeholders
Strengths of Option 1
This option largely maintains the status quo of how the survey has been undertaken in previous years, and stakeholders note this is a “tried and tested process”. Furthermore, local authority verifiers foresee no major work being required to implement this option as all systems are already set up and all those involved are familiar with the processes.
Participants note that retaining the same data collection method and most of the same core questions means that year-on-year comparisons of findings can continue.
Shifting the survey four months earlier than in previous waves reduces the time lag between customers’ interactions and their opportunity to provide feedback, making the exercise more relevant to customers. Participants also recognise that the earlier timing means their rating will be less out of date than in previous survey waves.
A slightly shorter question set compared to previous years could boost survey engagement, thus potentially improving response volumes and the response rate (%).
By continuing to undertake the customer satisfaction survey on an annual basis, there can be a focused promotional push to raise awareness of the survey among potential customers, for instance a dedicated drive on social media.
The removal of consortium reports (compared to previous survey waves) will reduce the time and resource required at analysis and reporting stage.
Customers will continue to be reassured that the survey is run by an independent third party and that their responses will be treated impartially.
Limitations of Option 1
Whilst the time lag would be reduced compared to previous waves (as noted above), there is still no opportunity for customers to provide immediate feedback – indeed, some customers might have interacted with their verifier up to 12 months previously under this option – meaning the data gathered may be out of date, vague and/or inaccurate.
There is concern that this option (and the other options presented) could continue to duplicate existing local approaches to measuring the customer experience. Two sets of questions in two separate surveys may confuse or over-burden customers, which could adversely affect response rates.
As an annual exercise, local authority verifiers would have to use the rating arising from this survey for 12 months and would not be able to update their performance rating any more frequently.
Stakeholders highlighted that the question set is still reasonably long, which may deter potential respondents from completing it.
Estimated cost implications of option 1
3.2 It is envisaged that the level of resource required to undertake this option would be broadly similar to previous years, that is, around £15,000 per annum (exc. VAT). Improvements to survey questions, design, look and feel, and accessibility (e.g. suitability for mobile devices) would cost approximately £2,000 in the first year only.
Option 2: Quarterly monitoring with automated data analysis and reporting
Overview of approach
Proposal
The national customer satisfaction survey would operate in a similar way to previous years through the set-up and fieldwork phases. The major changes in this approach are that the exercise would be undertaken quarterly and that data analysis and reporting would be automated.
Frequency
Quarterly (every three months).
Question set
The proposed national survey questionnaire is identical to that proposed in option 1, i.e. a slightly shortened question set that retains the core structure from previous years, allowing historic benchmarking and comparisons (see Appendix).
Contacting customers
Local authority verifiers collate customer contact details every three months. The survey link is issued by email (with reminders).
Reporting and presentation
Reports in Word format would no longer be produced. Instead, results would be presented visually, in graphical format, on a dedicated webpage with pages for the national-level findings and for each local authority. Charts would demonstrate quarter-on-quarter trends. Excel workbooks with open-ended comments would be provided by the third party administering the survey (see ‘Administration’ below) to each local authority verifier.
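To illustrate the kind of automated processing this reporting model implies, the sketch below aggregates a handful of hypothetical survey responses into the quarter-on-quarter figures that national and per-verifier dashboard pages might plot. It is a minimal, illustrative example only: the column names, the 1 to 5 satisfaction scale, the sample data and the use of pandas are assumptions made for this sketch, not part of the proposed model.

```python
# Illustrative sketch only: turns hypothetical raw survey responses into the
# quarter-on-quarter summary figures a dashboard page might plot.
import pandas as pd

# Hypothetical export from the survey platform (column names assumed)
responses = pd.DataFrame({
    "local_authority": ["Aberdeen City", "Aberdeen City", "Glasgow City", "Glasgow City"],
    "quarter": ["2024 Q1", "2024 Q2", "2024 Q1", "2024 Q2"],
    "overall_satisfaction": [4, 5, 3, 4],  # assumed 1 to 5 rating scale
})

# Per-verifier averages and response counts, one row per quarter
by_verifier = (
    responses
    .groupby(["local_authority", "quarter"])["overall_satisfaction"]
    .agg(mean_score="mean", responses="count")
    .reset_index()
)

# National-level averages per quarter
national = (
    responses
    .groupby("quarter")["overall_satisfaction"]
    .agg(mean_score="mean", responses="count")
    .reset_index()
)

print(by_verifier)  # would feed the per-verifier quarter-on-quarter charts
print(national)     # would feed the national-level page
```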
Administration
It is proposed that this option is administered overall by an independent third party, working closely with the BSD.
Timelines
Collating contact details and set up would take a month, the survey would run for two to three weeks, and analysis and reporting would take two weeks at most, meaning outputs could be provided just over two months after each quarter ends.
Views from the BSD Review Group and stakeholders
Strengths of Option 2
The Review Group and stakeholders embraced the type of reporting output in this option, supporting the shift in the presentation of findings from a Word output (as in previous years) to a much more visual output using an online dashboard. In addition, local authority verifiers commented that they support retaining the Excel workbooks from the 2020 model as an output, as the open-ended comments in these workbooks allow verifiers to build a fuller picture of their service.
Moving from an annual to a quarterly exercise means that local authority verifiers could update their rating more regularly. It would also enable them to act faster and on the basis of more up-to-date information.
Retaining the same data collection method and the same core question set as previous years means that year-on-year (and, going forward, quarter-on-quarter) comparisons of findings can continue.
Customers can continue to be reassured that the survey is run by an independent third party and that their responses will be treated impartially.
Limitations of Option 2
There was concern that undertaking this exercise on a quarterly basis may quickly result in survey fatigue, particularly as the proposed question set is 24 questions long, and that this would have an adverse impact on overall response rates in the medium to long term.
Local authority verifiers raised concerns that this option would create more work for them, as collating customer contact details four times a year would be an additional administrative burden.
Local authority verifiers also raised concerns about the time and resource required on their part each quarter to review the data arising from a longer survey, digest the analysis, and feed this into their performance reports.
Estimated cost implications of option 2
3.3 Costs for the initial phase each quarter, i.e. gathering contact details and undertaking the fieldwork, would be similar to those associated with the survey in previous years. Multiplied by four, this works out at around £16,000 per annum. There would also be increased resource implications for local authority verifiers, who would be obliged to provide contact details each quarter.
3.4 Analysis and reporting stages may involve a significant amount of set-up and testing, likely requiring an experienced IT consultant with specialist knowledge to set up “behind the scenes” processes so that survey findings can be instantly transformed into reporting outputs. Specific conversations are required to determine the precise cost, but an approximate figure for set-up and testing might be £25,000.
3.5 Once set up, a small amount of time would be required each quarter to run analysis and reporting and to check that automated processes are running as intended. The approximate annual cost to undertake this exercise four times per year is £5,000. An annual subscription fee of approximately £2,500 would be required to present reporting outputs visually via an interface such as Displayr (although less costly examples are available, as noted in Chapter 2).
3.6 Therefore, this option would incur a cost of £48,500 in year 1, and £23,500 in years thereafter, not adjusting for inflation. Note: all costs included here exclude VAT.
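As a quick check on how these figures combine, the short sketch below reproduces the year 1 and subsequent-year totals from the components set out in paragraphs 3.3 to 3.5. The variable names are illustrative only, and all figures exclude VAT.

```python
# Option 2 cost components as stated above (all figures exclude VAT)
quarterly_fieldwork_per_year = 16_000    # contact collation and fieldwork across 4 quarters
automation_setup = 25_000                # one-off "behind the scenes" set-up and testing
analysis_and_reporting_per_year = 5_000  # running and checking automated processes
dashboard_subscription_per_year = 2_500  # e.g. a Displayr-style interface

year_1 = (quarterly_fieldwork_per_year + automation_setup
          + analysis_and_reporting_per_year + dashboard_subscription_per_year)
subsequent_years = (quarterly_fieldwork_per_year
                    + analysis_and_reporting_per_year + dashboard_subscription_per_year)

print(year_1, subsequent_years)  # 48500 23500
```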
Option 3: Rolling survey (open year-round) with star ratings
Overview of approach
Proposal
A short, sharp survey for customers to complete at one of two stages: approval of the building warrant, or acceptance of the completion certificate.
Frequency
Rolling/ongoing survey (open all the time) with quarterly reporting outputs.
Question set
A short question set (format and questions to be agreed) of potentially six to ten Likert style questions followed by a final open text box for wider comments.
Contacting customers
Customers would have the chance to provide feedback either following building warrant approval, or acceptance of the completion certificate. The link to the feedback survey would be contained in the confirmation email to customers.
Reporting and presentation
Reports in Word format would no longer be produced. Instead, results would be presented visually, in graphical format, on a dedicated webpage with pages for the national-level findings and for each local authority. Charts would demonstrate quarter-on-quarter trends. Consortium analysis and reporting would be discontinued. Excel workbooks with open-ended comments would be provided by the third party administering the survey (see ‘Administration’ below) to each local authority verifier.
Administration
It is proposed that this option is administered by an independent third party, which would have ownership of the survey questions and oversee the data analysis and reporting, working closely with the Building Standards Division. As noted, customers would receive the feedback link automatically through the building standards systems on building warrant approval or on acceptance of the completion certificate.
Timelines
Responses would be gathered on a ‘rolling’ basis, meaning data from one quarter could be analysed immediately at quarter end. With automated processes, analysis and reporting would take two weeks at most, meaning outputs could be provided around a fortnight after each quarter ends.
Views from the BSD Review Group and stakeholders
Strengths of Option 3
Workshop participants broadly believe this option to be “workable” and consider that it would be reasonably straightforward to tie it into existing systems and processes.
Workshop attendees and stakeholders largely agree that a much shorter question set would make the survey more attractive to respondents – especially repeat respondents such as agents – which could minimise survey fatigue and potentially mitigate any adverse impact on the response rate arising from gathering feedback more regularly.
Sending out the survey link in the email accompanying building warrant approval or acceptance of the completion certificate gives customers the opportunity to provide feedback immediately after a part of the process, while their thoughts are still fresh in their mind, meaning the feedback is highly relevant. Providing two points of feedback (building warrant and completion certificate) will also mean that analysis of the process is more granular.
The Review Group and stakeholders embraced the type of reporting output in this option, supporting the shift in the presentation of the findings from a Word output (as in previous years) to a much more visual output on an online dashboard. Stakeholders comment that a contemporary dashboard or visual output is now expected by readers in preference to a Word report for this type of exercise, and note that, if respondents can see how their local authority verifier is performing year-on-year and compared to other verifiers, this easy view of results will enhance their buy-in and motivation to participate.
In addition, local authority verifiers commented that they also support retaining the Excel workbooks from the 2020 model as an output, as the open-ended comments in these workbooks allow verifiers to build a picture of their service.
Moving from an annual to a quarterly exercise means that local authority verifiers can update their rating more regularly. It will also enable them to act faster and on the basis of more up-to-date information.
Workshop attendees commented that sending out the survey link to customers in the same email as the building warrant approval or acceptance of the completion certificate would mean the survey link can be tied into existing processes. Further, they note that this would reduce the administrative burden on local authority verifiers of collating and sharing customer contact details, as they have been obliged to do in previous years.
Participants discussed that this option allows customers to provide feedback on each individual experience or interaction, rather than at a single point each year, potentially allowing verifiers to unpick experiences in greater detail.
Customers can continue to be reassured that the survey is run by an independent third party and that their responses will be treated impartially.
Limitations of Option 3
Workshop participants raised concerns about the suggestion in this option to gather feedback at two stages and were unsure whether it was indeed necessary to survey customers at both approval of building warrant and acceptance of completion certificate. These participants noted that customers can have less interaction with building standards at completion certificate stage, and that customers may believe they are providing feedback related to planning, rather than building standards, potentially leading to irrelevant survey data. (Note: this is a risk within all options presented, and an existing risk within the current model, with terms used interchangeably by some customers.) Other participants noted that gathering feedback at two stages may actually bring benefits as the customer is not necessarily the same at the two points.
Changing the question set would result in some loss of comparability to previous years’ performance data gathered through earlier waves of the customer satisfaction survey. However, while this historic comparability may be lost, local authority verifiers would still be able to benchmark themselves against one another using any new question set.
Participants discussed the survey response rate at length. Several verifiers noted that they currently use an “exit” survey approach to gather feedback (i.e. including a link in an email at, say, building warrant approval stage) but that the response rates are typically low (below 10%). There was concern that, if a survey link is provided as a standard part of the warrant approval email, the email may be lost, or that agents dealing with multiple applications per year may ignore it and not respond at all, thus adversely affecting response rates. Furthermore, local authority verifiers raised concern that their performance rating for a given quarter may end up being based on a very small number of responses.
The annual approach which has been used in previous years to gather customer feedback provides a single focus point in the year where the exercise can be promoted. Moving to gathering feedback on an ongoing basis may make it more difficult to undertake promotion to raise awareness.
Estimated cost implications of option 3
3.7 An initial set-up cost will be required to develop the new question set, and to input this into survey software and test, at a cost of approximately £2,500. This question set could be reviewed and, if necessary, altered on a regular basis at a small additional cost.
3.8 Analysis and reporting stages may involve a significant amount of set-up and testing, likely requiring an experienced IT consultant with specialist knowledge to set up “behind the scenes” processes so that survey findings can be instantly transformed into reporting outputs. Specific conversations are required to determine the precise cost, but an approximate figure for set-up and testing is £25,000.
3.9 Once set up, a small amount of time would be required each quarter to run analysis and reporting and to check that automated processes are running as intended. The approximate annual cost to undertake this exercise four times per year is £5,000. An annual subscription fee of approximately £2,500 would be required to present reporting outputs visually via an interface such as Displayr (although less costly examples are available, as noted in Chapter 2).
3.10 Therefore, this option would incur a cost of £35,000 in year 1, and £7,500 in years thereafter, not adjusting for inflation. Note: all costs included here exclude VAT.
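As with option 2, the short sketch below reproduces the year 1 and subsequent-year totals from the components set out in paragraphs 3.7 to 3.9. The variable names are illustrative only, and all figures exclude VAT.

```python
# Option 3 cost components as stated above (all figures exclude VAT)
question_set_setup = 2_500               # one-off question set development and testing
automation_setup = 25_000                # one-off analysis/reporting set-up and testing
analysis_and_reporting_per_year = 5_000  # running and checking automated processes
dashboard_subscription_per_year = 2_500  # e.g. a Displayr-style interface

year_1 = (question_set_setup + automation_setup
          + analysis_and_reporting_per_year + dashboard_subscription_per_year)
subsequent_years = analysis_and_reporting_per_year + dashboard_subscription_per_year

print(year_1, subsequent_years)  # 35000 7500
```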
Wider considerations when selecting a preferred option – views from the BSD Review Group
3.11 Workshop participants discussed the frequency of the exercise (for example, customer satisfaction is evaluated annually in option 1), noting that gathering feedback more regularly could be overly burdensome for customers. In this regard, there was discussion around whether differing approaches could be taken to gather feedback from agents and from direct applicants. One suggestion was that applicants’ thoughts could be gathered immediately after their building warrant approval or completion certificate acceptance, that agents could also be contacted directly after each of these points, and that (say) an annual push or promotion, directly targeted at agents, could be undertaken to gather feedback from this cohort. Such a change to (say) the preliminary option 3 model would have financial and resourcing implications for the tentative costing set out above, and may also disadvantage any agents who wished to provide feedback immediately.
3.12 There was discussion around which part of the customer experience should be evaluated. Some workshop participants noted that they saw the “core” of the interaction occurring up to the point of building warrant approval, rather than in the subsequent period between warrant approval and acceptance of the completion certificate, despite site inspections being a key function.
3.13 Some workshop participants noted that the presented options might continue to duplicate existing local survey activity and that, if customers received two sets of questions in two separate surveys, this could add to their burden and reduce response rates.
3.14 The annual approach which has been used in previous years to gather customer feedback provides a single focus point in the year where the exercise can be promoted. Moving to gathering feedback each quarter could reduce the positive impact that any awareness-raising campaign might have on the response rate. Workshop participants also noted that promoting the survey on a quarterly basis might result in overlap and/or duplication with other strands of verifiers’ communications strategies, potentially leading to customer confusion.
3.15 Regarding questionnaire design, and while noting that the shorter question set in option 3 would make it more attractive to customers providing feedback, the Review Group commented that including an open-ended question is extremely helpful for gathering wider comments to aid continuous improvement of their services. Furthermore, one participant noted that any rating scale questions should be designed as “forced” Likert-style questions with no middle-ground option.
Towards a future model
3.16 Having discussed the relative strengths, limitations, and considerations of each option in turn, the Review Group and stakeholders were asked which of the three options would be the most appropriate to take forward as a future model. On the basis of the respective merits and drawbacks, there was unanimous agreement that option 3 would be the best of the three.
Contact
Email: simon.moore@gov.scot