Surveys, Survey Fatigue and Getting Feedback

Abstract

Getting feedback can be difficult in a business environment, and a survey seems like a good way to do it – but you have to be careful and be clear about your own expectations.

Be clear on what you want from a survey – surveys are not good voting systems, and if you don’t circle back to people afterwards, they will stop responding.

Too many surveys can easily lead to survey fatigue and missed opportunities. There’s also what I call ‘survey blindness’, where people see so many surveys that they lose track of which ones they’ve responded to, and miss some for that reason.

There are times when it’s quicker and simply better to talk to your audience in person, individually or in small groups.

We found that people perceive surveys more positively when they can see what changes those surveys drove.

Introduction

“Are we gaining wisdom from our surveys?”

We started asking that question within our team several months ago. As a business we use surveys a lot – every All Hands and other large gathering or event has had its own survey.

Even in our relatively small team, we were responsible for a lot of surveys. Sometimes we called them ‘feedback forms’, but they looked and were basically surveys too.

Sometimes we treated the number of returned surveys as a popularity indicator for the event. We learned that was often a mistake too – the number of responses frequently has no correlation with the quality of the event, for better or worse.

As a brief exercise, I asked the team to take a hard look at our surveys: their history, how they had been designed, what their goals were, and what we did with the information gathered from them.

We had a lot of different findings.

Firstly, I’m not saying all surveys are bad – I’m saying that in our situation, they’d become a bit of a shiny hammer, and many things requiring interaction or ‘feedback’ had become nails.

How did I come to question this? Simply from the response I saw when the word ‘survey’ was mentioned in a conversation – rolled eyes and a drop in enthusiasm were common. That, and response rates for some larger events fell into single-digit percentages, with no responses at all in the open text fields of some of the larger surveys. This also reinforced our point that surveys weren’t effective voting systems.

These are fairly obvious signs of misuse or overuse.

I don’t know which came first – the nail or the hammer. Perhaps someone found Google Forms and was asked to find out what people thought of an onboarding session, so a survey was created and then reused monthly – who knows.

Recently, of course, COVID and two-plus years of working from home for most people meant that casually getting someone’s opinion in person, or taking the temperature of a room, suddenly became a lot more difficult. A simple questionnaire to see whether a meeting or event was working, and how it could be improved, therefore seemed like a good idea.

We’d also started using them as a KPI (Key Performance Indicator) to infer the popularity of, or happiness with, an event. This is also very close to treating the number of survey forms submitted as a kind of voting system.

Design

I’d sat through enough of these presentations, and clicked through enough of these surveys, to think to myself: ‘Where does this go?’, ‘Who owns it?’, ‘Why do they need this information?’ and ‘Where did my comments go?’

None of this was immediately clear to me – even for the surveys my team owned. I wasn’t sure how we could act on the feedback, or how to communicate any actions back to the people who had submitted the surveys in the first place.

We’d ask how useful the event was, whether it was too long or too short, and whether the subjects were interesting.

Often we’d let people rate this on a scale from 1 to 5, where 1 was ‘useless’ and 5 was practically prescient. Like so many rating systems, it tended to skew high; especially when the respondent’s name was on the front of the survey, a four or five rating was pretty common.

Often, when the design included text fields, they were left empty, so we didn’t have a ‘why’ behind much of the feedback we did get.

Also, all these similar forms blended together after a while, as the questions were quite generic.

The design of some forms software naturally lends itself to this style, since stars or scales from ‘really useful’ to ‘very unhelpful’ are built into templates, and sometimes we assume that more questions will yield more granular information. That’s not always true.

There are countless reasons to want a survey or feedback on a myriad of topics, so we knew there was no single right or wrong answer for design – but we knew things could be better.

Despite how it looks, we realised that a survey is a two-way tool – if we wanted good information in, we needed to give information out, to make the whole process better.

To do better, at a basic level we needed to make sure the recipient understood:

  • Why we’re asking them to fill in the form.
  • Who the form is for.
  • What will happen with the information.
  • When and how they can expect feedback on actions from the survey.

We started by looking at the value of the data we wanted, and how we planned to act on it.

For example, for a training seminar, instead of asking ‘Was it too long?’, better questions might be:
  • ‘Was enough time given to explaining the system so that you could easily understand it?’
  • ‘Did the instructor spend too little or too much time on [microservices], [cloud integrations], or [security aspects]?’

Instead of asking generic questions, we should take the time to tailor the questions more closely to the event, and catch outliers with text fields.

This might seem like extra work for regular events, but why not do it to get better data? There is a caveat here – if you want comparable data from recurring events, perhaps over the course of a year, you may need a set of core questions alongside session-specific ones.

Talking to people

Around this time, as part of our developer experience work, we decided to sit down with every single one of our Engineering Managers (EMs) and have a fairly unstructured 30-minute conversation, which we called our ‘Outreach and Visibility’ project.

We weren’t far through this process when we found that surveys had been on the EMs’ minds too.

‘Why so many surveys?’ was a common question, but the most common theme was: what changes had these surveys actually driven?

This partly reflected our own feelings – we had plenty of surveys, but what about communicating change? We actually do a lot of kaizen on our internal services – updating, revamping, and sometimes killing a service when it’s no longer needed – so where was the disconnect? Why were we keen to send surveys out, analyse the results, and even publish those results, yet not tell people what we’d changed because of them?

In many cases it was that simple – we hadn’t told people why we’d made a change, especially when it came from a feedback form or survey filled in by those very people! We needed to change that – we needed to talk about it more when wrapping up presentations, and give examples in Slack channels: ‘Thanks for the feedback. Based on it, we changed how we present updates to the Engineering Ladder, making them clearer.’

People also liked these regular Outreach and Visibility conversations, so we’re continuing to offer them as another way to gather feedback and hear concerns, to report back on what we’ve done with that feedback, and to illustrate improvements. At the same time, we’ve removed some feedback forms and encouraged direct feedback in Slack instead.

Conclusion

Getting information from the business is vital in a fast-moving industry. Surveys are valid tools for doing this, but like any tool, they need to be designed and used correctly for each task, since a ‘one-size-fits-all’ approach can fatigue end users and degrade the quality of any feedback.

Ultimately, there is always the option of talking to people. In this hybrid business world, that might be in a video conference or in the office, and it might be one-to-one or with a small group, but it should always be available.

Whichever way we choose, it’s vital that we close the circle by explaining to participants what we changed because of what they said.
