Like many large employers, the Civil Service runs an annual survey to gauge the attitudes of its employees; our eighth survey will take place in October. We publish the results every year, and you can find the 2015 results here.
We survey because:
- Our people are our most important asset – without them we can’t deliver the work of government;
- It gives everyone (whatever their grade) an equal voice; and
- By being confidential, it lets staff be completely honest.
Why a cross-Civil Service survey?
The survey takes a census approach, with all staff in the roughly 100 participating organisations invited to take part. Since 2009 we have run a single cross-Civil Service survey rather than a separate survey for each organisation. Taking this approach has helped us to:
- Realise economies of scale – a single contract delivers substantial savings;
- Gather consistent and comparable measures across the whole Civil Service by asking the same questions at the same time; and finally
- Enable cross-Civil Service learning and consistent accountability for senior leaders and managers.
In 2015 the census approach allowed us to produce over 10,000 reports for managers and teams, meaning that action based on the survey could be taken at every level, from the Civil Service as a whole down to individual teams.
Adapting to learn and to improve
All employee surveys should adapt to reflect shifting attitudes and organisational change over time, and ours has adapted several times since a single questionnaire was designed, piloted and launched in 2009. For example, since 2012 we’ve included questions on subjective wellbeing; the results help us to recognise the signs and drivers of low wellbeing and to provide targeted support to improve health, engagement, productivity and performance.
In 2014 we initiated a review of the questionnaire to ensure that it remains an up-to-date and relevant source of data. The review, carried out in four stages, continued throughout 2015.
Stage 1: Written consultation
We invited survey managers and other stakeholders to share their views, in writing, on the existing question set. The written consultation was designed to identify strengths and weaknesses of the existing questions as well as potential new questions for testing.
Stage 2: Question testing
In response to the feedback we received, the second stage involved a programme of online question testing. After completing the 2015 survey, a random sample of respondents was invited to answer a small number of additional test questions.
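To illustrate the kind of design this implies (a hypothetical sketch only, not the survey team’s actual sampling code; the respondent IDs and question names are invented), drawing a simple random sample and spreading a handful of test items across it might look like this in Python:

```python
import random

# Hypothetical respondent IDs and candidate test questions; the real
# survey's sampling and routing were handled within the survey platform.
respondent_ids = [f"R{i:05d}" for i in range(1, 10001)]
test_questions = [
    "T01_new_wellbeing_item",
    "T02_new_leadership_item",
    "T03_new_inclusion_item",
]

random.seed(42)  # fixed seed so the illustration is reproducible
sample = random.sample(respondent_ids, k=500)  # simple random sample, no replacement

# Each sampled respondent sees only a couple of the test items,
# keeping the extra burden on any one person small.
assignments = {rid: random.sample(test_questions, k=2) for rid in sample}
print(assignments[sample[0]])
```

Spreading the items across the sample keeps individual questionnaires short while still gathering enough responses per test question to judge how each one performs.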
Stage 3: Desk review
The desk review used established research techniques (including factor analysis, examination of face validity and correlation analysis) to evaluate each existing question.
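As a rough illustration of two of these techniques (a minimal sketch on synthetic Likert-scale data, with invented question names; pandas and scikit-learn are assumptions rather than the tools the review actually used):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Synthetic 1-5 Likert responses for a few hypothetical questions.
rng = np.random.default_rng(0)
questions = [
    "B01_interesting_work",
    "B02_clear_objectives",
    "E01_proud_to_work_here",
    "E02_would_recommend",
]
responses = pd.DataFrame(
    rng.integers(1, 6, size=(500, len(questions))), columns=questions
)

# Correlation analysis: two items that correlate very highly may be
# redundant; an item that correlates with nothing may not be working.
print(responses.corr().round(2))

# Factor analysis: check whether items load onto the underlying
# themes (factors) the questionnaire was designed around.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)
loadings = pd.DataFrame(
    fa.components_.T, index=questions, columns=["factor_1", "factor_2"]
)
print(loadings.round(2))
```

Face validity, the third technique mentioned, is a judgement about whether a question appears to measure what it claims to, so it sits with human reviewers rather than in code.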
Stage 4: Consultation in person
This stage involved interviews with survey respondents, workshops, and meetings with stakeholders to discuss our findings. Participants were also invited to identify which new questions they would like included in future questionnaires.
2016 and beyond
The review has led to some initial changes to the survey this year: the overall questionnaire will be shorter, while additional questions will give stakeholders further insight into bullying and harassment.
We’ve just started to compare our survey to those used in other countries (including Australia, Canada and the US) to see what we might learn from or share with them.
We’re looking forward to analysing and sharing organisations' survey results at the end of the year, and we will continue to review the questionnaire to ensure it aligns with the longer-term vision for the Service. Before making significant changes, however, we are weighing the disruption to the time series against the need to capture emerging priorities.