Scottish Household Survey: response rates, reissuing and survey quality

This paper assesses the impact of reissuing on survey estimates using data from the Scottish Household Survey, 2014 and 2016.


5 Conclusions

5.1 Five broad conclusions can be drawn from the results.

5.2 First, those who responded at first issue were broadly similar to those who responded at the reissue stage. There were relatively small differences between the two samples before any weighting was applied. Moreover, most of the characteristics where there were notable differences between the unweighted samples (for example age, sex and region) are characteristics that form part of the weighting approach. This means that the impact of these differences on weighted estimates may be less marked.

5.3 Second, after weighting, the impact on national estimates of increasing the response rate through reissuing was relatively small. A decrease in the response rate of around 10-11%, through excluding reissue interviews, resulted in an average absolute change of less than half of one percentage point for the twelve key national estimates examined. The largest impact was 1.13 percentage points, for the estimate of volunteering in 2016. Adjusted to take account of sample sizes and prevalence levels, the average change was equivalent to around three-quarters of one standard error. Overall, only 3 of the 24 measures had a standardised difference of more than 1.5 standard errors, and the maximum standardised difference found was 2.07 standard errors. Therefore, for most estimates, the impact was small and unlikely to affect conclusions drawn from the data.
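As an illustration of how such standardised differences can be read, the sketch below expresses a change in a weighted proportion in units of its standard error. The figures and the simple binomial standard error are illustrative assumptions only; the survey's own analysis uses design-adjusted standard errors and the published estimates, not these values.

    import math

    def standardised_difference(p_full, p_first_issue_only, n):
        """Express the change in an estimated proportion in units of its
        standard error. A simple random sample binomial SE is assumed here
        purely for illustration."""
        se = math.sqrt(p_full * (1 - p_full) / n)   # approximate SE of the full-sample estimate
        return abs(p_full - p_first_issue_only) / se

    # Illustrative (not actual SHS) figures: a 30% estimate from 10,000 interviews
    # that shifts by 0.5 percentage points when reissue interviews are excluded.
    print(round(standardised_difference(0.30, 0.295, 10_000), 2))  # about 1.09 standard errors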

5.4 Third, for estimates among key sub-groups, the impact was also small in relative terms. (The impact in absolute terms was larger than for national estimates. However, this is primarily because these estimates are themselves less precise, being based on smaller sample sizes.) The impact was less than half of one standard error for the majority of estimates and was greater than 1.5 times the standard error for fewer than 3% of the 704 sub-group estimates examined. Again, this means that most (but not all) of these differences are unlikely to have a meaningful impact in practice.
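The following sketch, again using assumed figures and a simple binomial standard error, illustrates why a given absolute change counts for less in standard-error terms at national level than it would suggest for a smaller sub-group: the standard error of a proportion grows as the sample size falls, so the same absolute change represents fewer standard errors for the larger sample.

    import math

    def binomial_se(p, n):
        """Approximate standard error of a proportion under simple random sampling."""
        return math.sqrt(p * (1 - p) / n)

    # Illustrative figures only: the same 0.5 percentage point change assessed
    # against a national sample of 10,000 and a sub-group sample of 1,000.
    change = 0.005
    for label, n in (("national", 10_000), ("sub-group", 1_000)):
        se = binomial_se(0.30, n)
        print(f"{label}: SE = {se * 100:.2f} pp, standardised difference = {change / se:.2f}")
    # national:  SE = 0.46 pp, standardised difference = 1.09
    # sub-group: SE = 1.45 pp, standardised difference = 0.35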

5.5 Fourth, the scale of the relative impact was similar across the two waves. Average differences were similar across the two waves on both the absolute and the standardised measures, and the only exceptions were confined to a very small number of outlier values.

5.6 Fifth, the analysis does suggest that the relative impact may be greater for some measures than others. Estimates of the proportion of people making one or more visits to the outdoors per week were more affected by reissuing than the other measures. This might be partly because significantly fewer reissue interviews were conducted in rural areas. Similarly, the analysis suggests that the relative impact may be greater for some sub-groups than others, namely single adult households. Again, this appears to be driven, at least in part, by the proportion of reissue interviews undertaken with particular sub-groups.

5.7 Overall, these findings echo previous findings that the link between response rate and non-response bias is weak. As such, response rates are not a good indicator of the quality of survey estimates and should not be used as the sole proxy for survey quality. Further consideration could be given to the drivers of survey quality and to whether a lower response rate target, combined with a more targeted approach to reissuing, would be beneficial in the future.

Contact

Email: shs@gov.scot
