Scottish Health and Care Experience Survey 2013/14 - Technical Report

Scottish Health and Care Experience Survey 2013/14. This is a postal survey which was sent to a random sample of patients who were registered with a GP in Scotland in October 2013. This report contains details of the survey design and development.


9 Analysis and Reporting

Introduction to analysis

9.1 The survey data collected and coded by Picker Institute Europe and Ciconi Ltd were securely transferred to ISD Scotland, where the information was analysed using the statistical software package SPSS version 21.0.

Reporting Patient Gender

9.2 Analysis of survey response rates by gender was done using the gender of the sampled patients, according to their CHI record.

9.3 For all other analyses by gender, where survey respondents had reported a valid gender in response to question 47, this information has been used in reporting. Where the respondents did not answer the question or gave an invalid response, gender information from the sampled patient's CHI record was used.

9.4 Self-reported gender was used where possible as in a small proportion of responses the reported information and the information on CHI differed. The most likely reason for this is that the questionnaire was sent to one patient but was completed by or on behalf of another one registered to the same practice (e.g. a recipient passing their questionnaire to a spouse).

9.5 In total, 111,682 responders (98.1%) provided a valid response to the question on gender (question 47). Of these, there was a difference between self-reported gender of the respondent and the gender of the originally sampled patient in 1,167 cases (1.0%). Amongst this group it was more frequently the case that a survey questionnaire originally sent to a male was responded to by a female (n = 671), than it was that a questionnaire sent to a female was answered by a male (n = 491). As practice contact rates are generally higher in females than males, one possible reason for this is that some male survey recipients may not have been to their practice in the past 12 months and passed their questionnaire to a female member of their household.

Reporting patient age

9.6 Analysis of survey response rates by age was done using the age of the sampled patients, according to their CHI record at the time of data extraction (22 October 2013).

9.7 For all other analyses by age where survey respondents had reported a valid age in response to question 48, this information has been used in reporting. Where the respondents did not answer the question or gave an invalid response, age information from the sampled patient's CHI record (as at 22nd October 2013) was used.

9.8 Valid age was taken to be anything between 17 and 108 years. A small proportion of cases where age was reported as less than 17 were treated as invalid responses to the question, although it is likely that in at least some of these instances the respondents were giving their feedback about their experience at the practice when making an appointment for their child, and in doing so reported the child's age rather than their own.
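The rule described in sections 9.7 and 9.8 (prefer a valid self-reported age, fall back to the CHI record otherwise) can be sketched as follows. This is an illustrative sketch only, not the actual ISD processing code; the function name and values are invented.

```python
def resolve_age(self_reported, chi_age):
    """Return the age used for analysis: the self-reported age if it is
    valid (17-108 inclusive), otherwise the CHI-derived age."""
    if self_reported is not None and 17 <= self_reported <= 108:
        return self_reported   # valid self-report takes precedence
    return chi_age             # unanswered or invalid -> fall back to CHI

print(resolve_age(42, 40))    # valid self-report used -> 42
print(resolve_age(8, 40))     # under 17 treated as invalid -> CHI age, 40
print(resolve_age(None, 40))  # question unanswered -> CHI age, 40
```

The same precedence rule applies to gender (sections 9.2-9.3), with CHI used only where question 47 was unanswered or invalid.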

9.9 Self-reported age was used where possible in preference to age derived from the CHI record as in a proportion of responses the reported information and the information on CHI differed. Reasons for this include the questionnaire being sent to one patient but being completed by or on behalf of another one registered to the same practice (e.g. a recipient passing their questionnaire to a family member or spouse). In some of these instances, where the survey recipient and another member of their household had the same name (e.g. a father and son), the questionnaire may have been answered by the namesake of the individual sent the questionnaire.

9.10 In total, 110,736 responders (98%) provided a valid response to the question on age at last birthday (question 48). Of these, the self-reported age and the age calculated from the CHI record differed by two or more years in 2,001 cases (1.8%). In a further 17,722 cases (16%) there was a difference of one year. This is not unexpected, however, as many recipients would have had a birthday between 22nd October 2013 and the date they responded to their questionnaire (November 2013 - March 2014).

9.11 In many instances where the age calculated from the CHI record differed from the age reported by the survey respondents, the associated age group used in the national report remained the same, whether based on CHI or based on the survey response. In 1,875 cases, however, the record was counted under a different age group for response rate analysis than for all other analyses. Of these, 1,461 (77.9%) were in an older group for the main analysis of results than for analysis of response rates. Some of this relates to individual recipients having a birthday and "moving up" by a single age group. In other instances this reflects the respondent being a different individual to the person sent the questionnaire, and likely to be somewhat older than the originally sampled patient; older people were more likely to respond to the survey than younger people.

Table 11 Where reported age and CHI age groups are different

Rows: age group derived from survey responses (Oct 2013 - Apr 2014)
Columns: age group derived from CHI records as at 22nd October 2013

                 17 - 34   35 - 49   50 - 64   65 and over   Total
  17 - 34              0        63        31            32     126
  35 - 49            216         0        84            61     361
  50 - 64            182       361         0           143     686
  65 and over         57       127       518             0     702
  Total              455       551       633           236   1,875

Reporting deprivation and urban/rural status

9.12 Patient postcodes were used to match records to deprivation and urban/rural status information as defined by the Scottish Government. The versions used were:-

9.13 A small minority of records were not matched to deprivation or urban/rural information, for example because the postcodes were not valid or recognised by the reference files used in the matching. Table 12 below shows the numbers and percentages of records that were not assigned to a deprivation or urban/rural category.

Table 12 Patients that could not be assigned urban/rural or deprivation categories

                                                  n of all     % of all     n of sampled   % of sampled
                                                  responders   responders   patients       patients
  Patient not assigned to a
  classification or quintile                      322          0.3          2,170          0.4

Number of responses analysed

9.14 The number of responses that have been analysed for each question is often lower than the total number of responses received. This is because not all of the questionnaires that were returned could be included in the calculation of results for every individual question. In each case this was for one of the following reasons:-

  • The specific question did not apply to the respondent and so they did not answer it. For example, respondents who did not see a nurse in the previous 12 months did not answer the questions about their experience with the practice nurse(s)
  • The respondent did not answer the question for another reason (e.g. refused). Patients were advised that if they did not want to answer a specific question they should leave it blank
  • The respondent answered that they did not know or could not remember the answer to a particular question
  • The respondent gave an invalid response to the question, for example they ticked more than one box where only one answer could be accepted.

9.15 The number of responses that have been analysed nationally for each of the percent positive questions is shown in Annex A.

Weighting

9.16 Results at Scotland, NHS Board and CHP level are weighted. Weighted results were calculated by first weighting each GP Practice result for each question by the relative practice size. The weighted practice results were then added together to give an overall weighted percentage at Scotland, NHS Board and CHP level. The weight for each practice is calculated as the practice patient list size (patients aged 17 or over and therefore eligible for inclusion in the survey) as a proportion of the entire population (Scotland, NHS Board or CHP) of patients eligible for inclusion in the survey.

Many of the questions in the survey relate to the specific practice that the patient attended during 2013/14. Weighting the results in this way therefore provides results more representative of the population (at Scotland, NHS Board or CHP level) than would be the case if all practices (small and large) were given equal weighting in the calculation of aggregated results.
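The weighting described in section 9.16 can be sketched as below. The practice list sizes and percentages are invented for illustration; each practice's result is weighted by its share of the eligible population before aggregation.

```python
# Each tuple: (eligible list size of patients aged 17+, % positive for
# some question at that practice). Figures are hypothetical.
practices = [
    (8000, 90.0),  # a large practice
    (2000, 70.0),  # a small practice
]

# Weight = practice list size as a proportion of the total eligible
# population (here, the two practices combined).
total_list = sum(size for size, _ in practices)
weighted_pct = sum(pct * size / total_list for size, pct in practices)

print(round(weighted_pct, 1))  # 86.0 - the larger practice dominates
```

An unweighted average of the two practices would give 80.0%, so the weighting pulls the aggregate towards the result experienced by the larger share of patients.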

Percentage positive and negative

9.17 Percent or percentage positive is frequently used in the reporting. This means the percentage of people who answered in a positive way. For example, when patients were asked how helpful the receptionists are, if patients answered "Very helpful" or "Fairly helpful", these have been counted as positive answers. Similarly those patients who said they found the receptionist "Not very helpful" or "Not at all helpful" have been counted as negative. Annex A details which answers have been classed as positive and negative for each question.
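The percent positive calculation can be sketched using the receptionist-helpfulness example above. This is an illustrative sketch; the response labels follow the text, and "Don't know" and invalid responses are excluded from the denominator in line with section 9.14.

```python
POSITIVE = {"Very helpful", "Fairly helpful"}
NEGATIVE = {"Not very helpful", "Not at all helpful"}

responses = ["Very helpful", "Fairly helpful", "Not very helpful",
             "Very helpful", "Don't know"]

# Only responses classed as positive or negative enter the calculation.
analysed = [r for r in responses if r in POSITIVE | NEGATIVE]
pct_positive = 100 * sum(r in POSITIVE for r in analysed) / len(analysed)

print(round(pct_positive, 1))  # 75.0 (3 positive out of 4 analysed)
```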

9.18 Percentage positive is mainly used to allow easier comparison, rather than reporting results on the five-point scale that patients used to answer the questions. There is also a belief that differences between answers on a five-point scale may be subjective. For example, there may be little or no difference between a person who "strongly agrees" and one who "agrees" with a statement; indeed, some people may never strongly agree or strongly disagree with any statement.

Quality of these statistics - Sources of bias and other errors

Sampling error

9.19 It should be kept in mind that because the results are based on a survey of sampled patients and not the complete population of Scotland, the results are affected by sampling error. More information on sampling can be found in chapter 4. However due to the large sample size the effect of sampling error is very small for the national estimates. Confidence intervals (95%) for the percentage of patients responding positively to a particular statement are generally less than +/- 1%.

9.20 When comparisons have been made, the effects of sampling error are taken into account by the tests for statistical significance. Only differences that are statistically significant, that is, unlikely to have occurred through random variation, are reported as differences.

Non-response bias

9.21 The greatest source of bias in the survey estimates is due to non-response. Non-response bias will affect the estimates if the experiences of respondents differ from those of non-respondents.

9.22 We know that some groups (e.g. men and younger people) are less likely to respond to the survey. This is partly explained by the fact that men and younger people are less likely to visit their GP practice. We also believe that there are differences in the reported experiences of different groups (e.g. from the 2011/12 Patient Experience Survey of GP and Local NHS Services we found that younger people tend to be less positive about their experiences and women tend to be less positive[9]). An example of the effects of this type of bias is that with more older people responding, who are generally more positive, the estimates of the percentage of patients answering positively will be slightly biased upwards.

9.23 The comparisons between different years of the survey should not be greatly affected by non-response bias as the characteristics of the sample are reasonably similar for each year.

9.24 Some non-response bias is adjusted for by weighting the results. Response rates differ between GP practices, but weighting the results by patient numbers means that GP practices with lower response rates are not under-represented in the national results. The results could additionally have been weighted by patient factors such as age and gender.

Other sources of bias

9.25 There are potential differences in the expectations and perceptions of patients with different characteristics. Patients with higher expectations will likely give less positive responses. Similarly patients will perceive things in different ways which may make them more or less likely to respond positively. When making comparisons between NHS Boards it should be remembered that these may be affected by differences in patient characteristics. This should not affect comparisons between years.

Sample design

9.26 The survey used a stratified sample design rather than a simple random sample approach. In a simple random sample, individuals are chosen at random so that each member of the population has an equal probability of selection. Simple random samples can be highly effective if all subjects return a survey, giving precise estimates and low variability. However, they are expensive and cannot guarantee that all groups are represented proportionally in the sample.

9.27 Stratified sampling involves separating the eligible population into groups (i.e. strata) and then assigning an appropriate sample size to each group to ensure that a representative sample size is taken. This survey was stratified by GP Practice and was based on a disproportionate stratified sample design because the sampling fraction was not the same within each of the practices. Some practices were over-sampled relative to others (i.e. had a higher proportion of their patients included in the sample) in order to achieve the minimum number of responses required for analysis (please see Chapter 4 for more information on the sample size).

Design factor

9.28 Results at National, NHS Board and CHP level were weighted by relative size of each practice (stratum). One of the effects of using stratification and weighting is that this can result in standard errors for survey estimates being generally higher than the standard errors that would be derived from an unweighted simple random sample of the same size.

9.29 Features of using a disproportionate stratified sampling design can affect the standard errors that are used to calculate confidence intervals and test statistics. Calculating the design factor (Deft) can tell us by how much the standard error is increased or decreased compared to a simple random sample design, given the design that we have used. The design factor has been incorporated into the confidence interval calculations at National, NHS Board and CHP level (please see Annex D for more information).

Design effect

9.30 The design effect (Deff) is the square of the design factor and tells us how much information we have gained or lost by using a complex survey design rather than a simple random sample.[10] For example, a design effect of two would mean that the complex survey would need to be twice the size of a simple random sample to obtain the same volume of information and precision. A design effect of 0.5 would mean that the complex design achieves the same precision as a simple random sample of twice its size. The design effect has been incorporated into the test statistic calculations at National, NHS Board and CHP level.
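The relationship between the design factor and design effect in sections 9.28-9.30 can be illustrated as follows. The proportion, sample size and design factor here are invented; the actual Deft values used are described in Annex D.

```python
import math

p, n = 0.85, 1000   # illustrative estimated proportion and sample size
deft = 1.2          # assumed design factor for this illustration

se_srs = math.sqrt(p * (1 - p) / n)  # standard error under a simple random sample
se_design = deft * se_srs            # standard error adjusted for the complex design
deff = deft ** 2                     # design effect is the square of the design factor

print(round(se_srs, 4), round(se_design, 4), round(deff, 2))
```

With a Deff of 1.44, the complex design would need a sample roughly 44% larger than a simple random sample to achieve the same precision.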

Confidence Intervals

9.31 The reported results for the percentages of patients answering positively have been calculated from the patients sampled. As with any sample, if we had asked a different group of patients, we could have ended up with different results.

9.32 Confidence intervals provide a way of quantifying this sampling uncertainty. A 95% confidence interval gives a range that we can be 95% sure contains the "true" result i.e. the results we would have obtained had we asked the same question to all of the practices' patients.

9.33 If, for example, the percentage positive result for a particular question is 80% and the confidence interval is +/-5%, this means we are 95% sure that the result should be between 75% and 85%.
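The 80% +/- 5% example above can be reproduced with the standard normal-approximation formula (1.96 x standard error x design factor). The sample size and design factor below are invented and chosen so the margin comes out at roughly 5%; the actual calculation is set out in Annex D.

```python
import math

p, n, deft = 0.80, 250, 1.0  # illustrative values only

se = deft * math.sqrt(p * (1 - p) / n)
margin = 1.96 * se

lower = round(100 * (p - margin), 1)
upper = round(100 * (p + margin), 1)
print(lower, upper)  # 75.0 85.0
```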

9.34 Confidence intervals have been calculated for the percent positive results of question 27 (the overall rating of care provided by the GP surgery) by NHS Board in Table 13 below. This provides an example of the accuracy of the estimates provided in Table 14 of the national report. More details on this calculation are available in Annex D.

Table 13 Percentage of patients rating the overall care provided by their GP surgery as excellent or good, by NHS Board, with 95% confidence intervals

                                            95% confidence interval
  NHS Board                    Percentage   Lower Limit   Upper Limit
  Ayrshire and Arran           86.8         85.8          87.7
  Borders                      89.8         88.4          91.2
  Dumfries and Galloway        89.9         88.7          91.0
  Fife                         86.0         85.0          87.0
  Forth Valley                 87.3         86.3          88.3
  Grampian                     85.4         84.4          86.3
  Greater Glasgow and Clyde    88.7         88.2          89.1
  Highland                     88.7         87.9          89.6
  Lanarkshire                  83.1         82.3          83.9
  Lothian                      85.1         84.4          85.8
  Orkney                       97.3         95.9          98.8
  Shetland                     82.2         78.8          85.6
  Tayside                      88.7         87.9          89.5
  Western Isles                89.7         87.3          92.2
  SCOTLAND                     86.8         86.5          87.0

Tests for Statistical Significance

9.35 A result can be described as statistically significant if it is unlikely to have occurred by random variation. Testing for statistical significance allows us to assess whether there have been significant increases or decreases in performance between one time period and another. Similarly it can allow us to assess whether a result for an NHS Board or CHP is significantly higher or lower than the result for Scotland as a whole. The effects of sampling error (please see section 9.19) are taken into account by the tests for statistical significance.

9.36 Where possible, comparisons with percent positive results from the 2011/12 GP patient experience survey have been made at NHS Board, CHP and practice level within individual reports. Scores which have significantly improved since the 2011/12 survey have been reported as plus. Scores which have significantly worsened since the 2011/12 survey have been reported as minus.

9.37 Comparisons with the 2011/12 percentage positive results at national level are discussed within the national report on the basis that differences are statistically significant.

9.38 Comparisons with the 2011/12 national (i.e. Scotland) percent positive results have also been made at NHS Board, CHP and practice level and can be found within the individual reports. Differences which are statistically significant are shown as plus where the percent positive score is significantly higher than the national average; and minus where the percent positive score is significantly lower than the national average.

9.39 All significance testing was carried out at the 5% level, using the normal approximation to the binomial distribution. This approach is equivalent to constructing a 95% confidence interval for the difference between the results.

9.40 As discussed in section 9.29, when calculating the test statistics at national, NHS Board and CHP level, the standard error has been multiplied by the design factor (Deft).
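The test in sections 9.39-9.40 can be sketched as a two-proportion z-test using the normal approximation, with the standard error scaled by the design factor. This is an illustrative sketch with invented figures, not the actual ISD code; full details are in Annex E.

```python
import math

def z_test(p1, n1, p2, n2, deft=1.0):
    """Return True if the two proportions differ significantly at the
    5% level (two-sided), using the normal approximation with the
    standard error inflated by the design factor."""
    se = deft * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2) / se
    return abs(z) > 1.96  # critical value for a two-sided 5% test

# A 2 percentage point difference is significant with large samples
# but not with small ones.
print(z_test(0.87, 5000, 0.85, 5000))  # True
print(z_test(0.87, 100, 0.85, 100))    # False
```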

9.41 More details on tests for statistical significance are available in Annex E.

Outcomes of NHS treatment indicator

9.42 The Quality Strategy emphasises the importance of measurement, and a Quality Measurement Framework has been developed[11] in order to provide a structure for describing and aligning the wide range of measurement work with the Quality Ambitions and Outcomes. As part of this framework, 12 national Quality Outcome Indicators have been identified, which are intended to show national progress towards achievement of the Quality Ambitions.

9.43 One of these twelve Quality Outcome Indicators relates to Patient Reported Outcomes. This is reported in chapter 11 of the national report.

9.44 An average score is calculated for each respondent based on the outcomes questions they have answered. (Patients answering none of the 3 questions are not included.) These average scores are weighted by the number of patients registered at each GP practice to give scores for NHS Boards and Scotland.

9.45 The three outcomes questions and how the responses were scored are presented below.

  • In the last 12 months, have you received NHS treatment or advice because of something that was affecting your ability to do your usual activities? …how would you describe the effect of the treatment on your ability to do your usual activities?

Table 14 Scores for outcomes for something affecting ability to undertake usual activities

  Response                                                        Score
  I was able to go back to most of my usual activities            100
  There was no change in my ability to do my usual activities     50
  I was less able to do my usual activities                       0
  It is too soon to say                                           Don't include

  • In the last 12 months, have you received NHS treatment or advice because of something that was causing you pain or discomfort?

Table 15 Scores for outcomes for something causing pain or discomfort

  Response                                                        Score
  It was better than before                                       100
  It was about the same as before                                 50
  It was worse than before                                        0
  It is too soon to say                                           Don't include

  • In the last 12 months, have you received NHS treatment or advice because of something that was making you feel depressed or anxious?

Table 16 Scores for outcomes for something making patients feel depressed or anxious

  Response                                                        Score
  I felt less depressed or anxious than before                    100
  I felt about the same as before                                 50
  I felt more depressed or anxious than before                    0
  It is too soon to say                                           Don't include

Quality assurance of the national report

9.46 A small group of Scottish Government policy leads and practitioners were sent a draft version of the national report for quality assurance. Feedback included suggestions on ways in which to report data as well as comments about the context for the survey. These were taken into account in finalising the national report. In addition ISD Scotland carried out quality checks of all figures used in the report.

Contact

Email: Andrew Paterson
