I rang one of my favourite companies on Friday – they are our national airline, and I am a very loyal customer. So much so that I would (and often do) pay extra to fly internationally with them. And I unreservedly recommend them to all my friends and family. So. At the beginning of the call I was asked if I would be happy to complete an automated telephone survey when the call was over. My answer? Absolutely!!!
The reason for my call was not straightforward: I was booking a ticket for my mother to visit us from the UK, and I had lots of questions! The call went well – the call centre agent was fantastic – and when I hung up, sure enough, my phone rang. I eagerly answered, looking forward to telling this company how awesome they are and, particularly on this occasion, how awesome the specific call centre agent had been.
The instructions told me the call would take just two minutes. I was asked four questions, and for each one I was asked to give a rating between 1 and 5.
How did I rate the agent overall? (5)
How did I rate the agent for their willingness to help and their authenticity? (5)
How did I rate the agent for their ability to understand my needs? (5)
How did I rate the agent for being knowledgeable and skilled in dealing with my enquiries? (5)
And then the closing statement: “Thank you, your feedback is important to us and will help us provide valuable insight and coaching to our call centre team.”
Really?? I was given no opportunity to leave a comment. I was given no open-ended questions. In actual fact, it was impossible for me to give them any “valuable insight”. How could I possibly, based on four scores of 5 out of 5?
But let’s start at the beginning, because I have several problems with this survey, and they stem from common mistakes that many, many companies make when surveying customers.

Firstly, there is the problem of tone: the survey was very bland. The tone was business-like, repetitive, and formal – completely unsuited to this airline’s brand, which is renowned for being friendly, a little irreverent and innovative. Their in-flight safety videos get millions of hits on YouTube, so the tone of the survey really didn’t fit with the overall style of their communications.
Secondly, there is the issue of semantics, and I could write a whole other blog post on just this (in fact, I probably will). But it is so, so important to have clarity in the language you use in your survey. “How did you rate the agent for their willingness to help and their authenticity?” What exactly does this company expect their agents to do in order to demonstrate authenticity, and how do they expect their customers to assess it? What does authenticity even mean in this context? Unless the company clearly state what they mean, it’s impossible to draw conclusions from answers to this question. Each customer will give a rating based on their own personal interpretation of ‘authenticity’.
Another common mistake companies make is inviting customers to decide whether or not to leave feedback before they have had the interaction. How do I know that I’ll have a noteworthy interaction, or that I’ll have anything insightful to convey afterwards? If you really want to get rich insight from customers, it’s much better to eliminate the “noise” and target your feedback requests at customers who are more likely to have something to tell you!
But finally (at least for now), the whole idea that this airline are able to provide coaching and insight based on numerical scores just seems ridiculous. What meaning and significance can possibly be derived from scores out of 5? How does the conversation go with the agent who consistently scores 1/5 for authenticity? How can an agent become “more authentic” if you haven’t asked your customers what they think you mean by that?
If you really want to use customer feedback to drive improvement, you have to make sure that the quality and integrity of the data you are collecting are very, very good. How can you involve customers in your programme of improvement if you don’t give them an opportunity to speak freely and tell you what they really think?