written by Keith Phillips
I recently had my car serviced. As I was leaving the dealership, the receptionist handed me a card. The card explained that the auto company might contact me to take a survey about my experience with the service center. If the service was satisfactory, I should answer the question as “excellent,” because “excellent” means that they did their job. By defining “excellent” this way, was this information being given to produce a consistent response from customers? “Hmmmmm.” I didn’t know that “doing your job” meant you did an “excellent job,” I thought to myself as the receptionist explained what was already written on the card. She didn’t realize she was talking to a research methodologist, so I was all ears on this one. She continued by explaining that if I was dissatisfied with anything, I should give them an opportunity to sort it out before taking the survey.
Immediately, I thought of the other dealerships being measured in this customer satisfaction survey. Were they using the same techniques? Certainly, the dealer’s efforts were aimed at increasing its satisfaction score, but I also had to consider: was it fair? If the purpose of the survey is to measure satisfaction, in order to understand where people are dissatisfied and then address those areas to improve satisfaction, what better way to improve satisfaction than having an immediate opportunity to address any concerns with a branch manager? After all, even though it was a survey, was it a sample, or was it something everyone got to fill out?
The entire situation bothered me from a methodological point of view. I knew these numbers were being aggregated and compared somewhere. The dealerships that did not behave the same way would score lower. Of course, if I talked to the branch manager about it, he might say that they all “do it.”
I did some digging online and found bloggers and customers complaining about dealers begging them for high marks, practically filling out the questionnaire for them. My understanding is that the satisfaction scores are tied to some type of “reward” or “incentive” the dealer receives. Now it was coming full circle. Giving incentives or rewards can lead to problems with data quality. We have seen it with participants who are motivated only by the incentive. In the case of the car survey, the rewards are biasing the people who administer the survey. You have to wonder whether the system works for the car companies, when the desired end is, in fact, increased satisfaction.
Still, a line is being crossed. If someone is pressured into giving a higher score when they are truly less satisfied, that is just bad business for everyone involved. At the end of the day, the incentives and rewards given to the people at the dealership who administer the survey are creating a bad survey practice. This may ultimately come back to hurt some of these dealerships, because the unbiased information from the survey itself may be more valuable than the incentive or reward they are receiving. They just might not realize it.