The Analysis & Insight team is pioneering innovative techniques that have the potential to deliver cost savings and lead the way in Government’s ‘digital by default’ approach. As part of this, we have been evaluating whether data from the Community Life Survey, which currently uses a face-to-face interview methodology, could instead be collected via an online survey.
What did we test?
The Community Life Survey data forms Official Statistics, so it is important that the data are robust, representative and reliable. Since 2012, we have been designing experiments with the current contractor, TNS BMRB, to answer three main questions:
- Do people respond to an online survey?
- What is the profile of respondents (are there any demographic biases)?
- Do results differ compared to the face-to-face survey?
What was the response profile?
With incentives, around 25% of those asked to participate did so. Although this is lower than the 60% response rate for the face-to-face survey, it shows that it is possible to recruit people to an online version of this survey. As with any survey, some demographics were underrepresented online, such as young people, ethnic minorities and religious groups, but to a much smaller extent than expected.
One of the main challenges was recreating the random sampling approach in an online survey. As a first approach, invitations were sent to a demographically representative selection of households and we asked the person with either the next or the last birthday to complete the survey. Although this was successful in the vast majority of cases, there was some concern over self-selection, so as a next iteration we asked all adults in selected households to complete the questionnaire.
This approach did not affect the proportion of people likely to respond. However, the demographic profile of respondents was closer to that of the overall population and was as representative as the face-to-face survey, suggesting this approach should be adopted in future online surveys.
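To make the within-household selection above concrete, here is a minimal sketch of the ‘last birthday’ method: the adult whose birthday fell most recently before the contact date is asked to respond, which gives an approximately random pick without the interviewer choosing. The names, dates and function are hypothetical illustrations, not the survey’s actual fieldwork rules.

```python
from datetime import date

# Hypothetical household: adults with their birthdays as (month, day).
household = [
    {"name": "A", "birthday": (3, 14)},
    {"name": "B", "birthday": (9, 2)},
    {"name": "C", "birthday": (11, 30)},
]

def last_birthday_selection(adults, today=date(2014, 6, 1)):
    """Select the adult whose birthday fell most recently before `today`."""
    def days_since(bday):
        month, day = bday
        this_year = date(today.year, month, day)
        if this_year > today:
            # Birthday hasn't happened yet this year; use last year's.
            this_year = date(today.year - 1, month, day)
        return (today - this_year).days
    return min(adults, key=lambda a: days_since(a["birthday"]))

selected = last_birthday_selection(household)
```

Because the choice depends only on birthdays relative to the contact date, each adult is (roughly) equally likely to be picked, but the method still relies on the household honestly reporting who qualifies, which is the self-selection concern mentioned above.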
Did the survey measures differ?
The results from both experiments revealed that over two thirds of measures were statistically significantly different, although the majority of these differences were small (under 5 percentage points). However, they were not due to differences in the demographics of the people responding: we asked people who had previously completed the face-to-face survey to complete the online survey, and even though the demographics were identical, the measures still differed.
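As an illustration of how a difference can be statistically significant yet small, the sketch below applies a standard two-proportion z-test to hypothetical figures: a 4 percentage point gap between modes with plausible sample sizes. This is the textbook test, not the actual analysis used for the survey, and all numbers are made up.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 62% agree face-to-face (n=2,000) vs 58% online (n=1,500).
z, p = two_proportion_z(0.62, 2000, 0.58, 1500)
```

With samples of this size, even a 4 percentage point gap comes out significant at the 5% level, which is why many small mode differences can still register as statistically significant.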
This raises an interesting question: which results are more accurate, face-to-face or online? When being interviewed, people may present themselves in a way they believe will be viewed more favourably, for example overstating how generous they are; this is known as social desirability bias. Alternatively, people may place less importance on the accuracy of their answers when responding online. It would be interesting to try to answer this in future experiments.
Overall, these results are encouraging: it is possible to achieve a representative sample with an online survey, and any remaining demographic differences have very limited impact on the results. However, some issues remain, in particular the differences in measures, which mean that if we were to adopt this methodology there would be a break in the time series (which dates back to 2001). Ahead of making a decision about the future approach, we plan to publish these results in full. Are you also considering running an online survey? If so, it would be great to hear from you!