Are retail surveys hopelessly flawed?

A study by Interaction Metrics has concluded that most major retailers are horrible at conducting surveys.

The study examined customer satisfaction surveys of 51 top U.S. retailers (excluding grocers and warehouse clubs). Based on an objective evaluation of 15 elements, the average survey scored a 43 out of 100 points, an F grade.

Retailers were found to be worst at “Information Accuracy,” with an average score of 27 percent. Measuring the neutrality or bias of the data collected, Information Accuracy was worth 50 percent of the overall survey score. Infractions in this category included leading questions, biased language, double-barreled questions, non-neutral titles, irrelevant questions and faulty scales.

According to the study, about a third of all questions led customers to give answers that companies wanted to hear. The authors presented an example of a leading question from Ace Hardware: “How satisfied were you with the speed of our checkout?” An example of forced wording from Gap asked customers whether they agreed with the statement, “The look and feel of the store environment was very appealing.”

In an example of a faulty scale from Dollar General, the chain asked about customer service levels and provided five answer choices: extremely satisfied, very satisfied, somewhat satisfied, somewhat unsatisfied, very unsatisfied. With three of the five options positive, the midpoint (“somewhat satisfied”) winds up positively skewed.

Retailers scored a 57 in “Customer Engagement,” which was worth 35 percent of the survey score. This category measured such things as thoughtful welcomes in surveys, the use of jargon, survey length, progress transparency and customization. With 23 questions on average, the surveys were found to be excessively long. A Nordstrom survey advertised that it would take two minutes to complete, but its 25 questions took four to five minutes.

Smaller aspects of the evaluation included:

  • “Branding Cues” (worth 10 percent of the score) graded style, spelling and grammar. The average score was 67.
  • “Ease of Access” (worth 5 percent) graded how easy it is to locate and take the survey. The average score was 69. Examples of infractions included asking introductory questions irrelevant to the customer experience and requiring receipt codes.
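
Taken together, the category weights and average scores reported above roughly reproduce the study's overall result. A minimal sanity-check sketch in Python (the study's exact per-retailer aggregation may differ):

```python
# Category weights and average scores as reported above. The arithmetic is a
# back-of-the-envelope check only, not the study's exact per-retailer method.
categories = {
    "Information Accuracy": (0.50, 27),
    "Customer Engagement": (0.35, 57),
    "Branding Cues": (0.10, 67),
    "Ease of Access": (0.05, 69),
}

overall = sum(weight * score for weight, score in categories.values())
print(f"Weighted average: {overall:.1f} out of 100")  # 43.6, in line with the reported 43 (an F)
```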

BrainTrust

"There are enough companies that can help with these efforts, so retailers need to take this work seriously enough to invest in their assistance."

Ralph Jacobson

Global Retail & CPG Sales Strategist, IBM


"...we needed to follow the “marketing guide to research” which states “never ask a question you don’t really want to know the answer to.”"

Ben Ball

Senior Vice President, Dechert-Hampe (retired)


"Retailers would be better off just using the resource they have at their disposal everyday — store traffic."

Dan Raftery

President, Raftery Resource Network Inc.


Discussion Questions

Do you think most retailers intentionally use leading questions and biased language in customer satisfaction surveys, or is it human nature to do so? Do you think most retailers have realistic expectations when designing surveys? How can customer satisfaction surveys be improved?

23 Comments

Tom Dougherty
Member
7 years ago

They are out of touch because their methodology is ALL wrong. The surveys are self-selecting, which means the results are NEVER projectable. Some have a margin of error of over 40 percent.

Research is a science. The questions MUST be thoughtful and not misleading. The methodology must be randomized and projectable to be of any value. Most retail surveys fail on both fronts.
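
Editor's note: For context on the margin-of-error point, the standard formula for a sampled proportion shows how quickly uncertainty balloons at small sample sizes. A rough Python sketch (the sample sizes are illustrative assumptions, and the formula presumes the random sampling the comment says is missing):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes only (assumptions, not figures from the comment).
for n in (6, 25, 100, 1000):
    print(f"n = {n:>4}: margin of error = +/-{margin_of_error(n):.0%}")

# n = 6 already yields roughly +/-40 percent, and even that assumes random
# sampling; self-selected responses violate the assumption, so real error is worse.
```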

Jasmine Glasheen
Member
7 years ago

There’s no doubt that retailers use biased language in conducting surveys. After all, they’re often gathering the statistics in hopes of collecting positive feedback from customers to be used in advertising campaigns.

Customer satisfaction surveys need to be shorter, two minutes max. To ensure balanced feedback, retailers can offer to randomly select a winner from the responses for a gift basket or store card. Customers need a small incentive (or the potential of one) in order to take the time out of their day to respond.

Ralph Jacobson
Member
7 years ago

We have found that the vast majority of customer satisfaction surveys done by retailers are not created with the help of experts in the topic. Invariably, the surveys take a biased approach. There are enough companies that can help with these efforts, so retailers need to take this work seriously enough to invest in their assistance. Bottom line, what value are survey results if you have biased questioning in the survey?

Dr. Stephen Needel
Active Member
7 years ago

Whether or not it’s human nature, all of these companies listed in the report are big enough to have research departments. In theory, there should be researchers in these departments who know how to write an unbiased survey. How to improve these — clearly, in-house researchers are either not being utilized or someone should be talking exit strategies with HR.

Sterling Hawkins
Reply to  Dr. Stephen Needel
7 years ago

There’s a new generation of survey software that uses machine learning and AI to improve messaging, correct for human biases and (maybe most importantly) suggest corrective action. It’s not a complete replacement for data scientists in its current form, but it is one more arrow in the quiver of survey tools retailers can be taking advantage of to improve quality, accuracy and completion rates.

Mark Ryski
Noble Member
7 years ago

While I believe that most retailers have good intentions, I agree that issues with survey design and data collection methodologies prevent them from gaining the true insight they seek. As noted in the article, it seems that scientific methodology has been replaced by “quick and dirty” convenience sampling and dubious, leading questions that produce misleading results. Instead of insight, these satisfaction surveys seem to be designed to give retailers a pat-on-the-back and validate what they’re already doing. I don’t believe this was always the case, but financial pressure and competing priorities have caused many retailers to reduce their research budgets, replacing robust research with marketing-driven surveys or even simple happy face/sad face capture systems.

Retailers should consider doing less of this type of faux-research and instead focus their limited research resources on conducting more formal studies conducted by research practitioners, not marketing interns.

Adrian Weidmann
Member
7 years ago

As important as measurement is in understanding your progress and efficacy, qualitative data is typically flawed. Like focus groups, surveys simply allow the respondents to think about their answers, which immediately skews the results. Observing what people do via quantitative measurement is real and unbiased. While retailers don’t intentionally use leading questions, human nature will dictate the tone and intention of the questions. Also, respondents are typically polite and not brutally honest. Compounding the challenge is the power of the scribe! Those who author the questions and interpret the results wield tremendous power. Words and language can be easily misunderstood but actions are defined and clear.

Having had a lot of experience with quantitative measurement of shopper behavior, I can say that the resulting numbers are most often much lower than the client expected. Measurement is a double-edged sword — be prepared to respond to the reality, as the emperor may not be wearing any clothes!

Dr. Stephen Needel
Active Member
Reply to  Adrian Weidmann
7 years ago

Adrian, I don’t see anything wrong with someone thinking about their answer to a question. Nor would I characterize survey data as qualitative, in the same category as focus groups. Behavioral data is great (it’s mostly what I do in my research), but it does not give you the “why” answer, which, in the examples in the article, the retailers seem to be working towards.

Max Goldberg
7 years ago

Retailers should keep surveys simple and fast. Consumers don’t want to spend five minutes completing a questionnaire. Ask a few simple questions. Make sure the questions are not biased. Thank customers. Simple. Fast.

David Livingston
7 years ago

Questions and language are not the issue. Many customers simply choose the first answer they see to get the survey over with. Customers have learned never to choose anything too negative or they will be prompted to provide details, further wasting their time. I don’t think retailers have realistic expectations because the people who design the surveys just want to justify their job. The best way to improve surveys is to eliminate them.

Ken Cassar
Member
7 years ago

My favorite customer satisfaction survey is Delta Air Lines’ question after you’ve used customer service, “Would you hire this customer service representative for your own company?” It is short and to the point. The mistake that retailers make is that they try to use customer satisfaction surveys to help diagnose everything under the sun. Keep it simple!

Doug Garnett
Active Member
7 years ago

I think customer survey work at retailers IS really messed up these days. But while these people critique the surveys themselves, the egregious error is in the process — both taking the surveys and using the results. The process makes most of the results entirely invalid.

I wrote a blog post about process, based on a one-time experience I had at Walgreens with survey issues. There were so many comments from retail help on that post noting the dysfunctional ways surveys are used to punish employees that I pulled their comments into a second post noting what W. Edwards Deming and Campbell’s law would tell us.

Retailers need to stop the constant dribble of poorly crafted surveys used in bad processes. Instead, execute more trustworthy periodic research that will go far deeper and reveal far more insight. What do you do in between? I recommend listening to what your employees in the stores (from managers on down) have to say. They are usually quite perceptive.

Shep Hyken
Active Member
7 years ago

Biased language is usually not purposely used in surveys. It’s due to creating bad questions. If I’m an executive analyzing information, I’m interested in the “why” behind the scores. Sure, I’m happy with great scores. I want great scores. I want to believe and know our stores are great at taking care of our customers. But it’s the average or poor scores that can give me insight into how my stores can improve. It would behoove me to have unbiased questions and answers to get me the answers I need (true data), versus what I want (good feelings from high scores).

Ben Ball
Member
7 years ago

I used to drive Ray Jones (former DHC colleague and research professional/purist) nuts by teasing that we needed to follow the “marketing guide to research” which states “never ask a question you don’t really want to know the answer to.” In my own practice, I refined that to “never ask a question you don’t already KNOW the answer to!” But it took working with a pro like Ray for me to learn that the key to knowing the answer in advance is all in how you ask the question and to whom.

Whether intentional or not, that bias — coupled with truly poor research designs — leads to our current state where even the most naive senior executive knows that, as one client put it recently, “surveys lie!”

I must give a nod to another colleague who shall remain anonymous. He walked into my office shortly after the election holding a sign that read “Polls suck!” Another very public example of how even the best techniques have either a.) a design bias (e.g. “we think more Republicans will vote this year so let’s weight our poll results accordingly”) or b.) a recency bias (e.g. “we will weight our results according to the exact profile of the 2012 electorate”) — both are destined to be wrong.

Trying to field good research is really, really hard. So let’s not throw the retailers under the bus with a smug “but our firm knows how to do it right!” quite so fast.

Dan Raftery
7 years ago

I learned how to conduct surveys from one of the masters, Bill Bishop, when he pulled me out of a supermarket in 1985. Back then we had very primitive techniques compared to what is readily available today. But we also had rigor and scientific rules, such as proving a question before using it, to name just one.

Over the years, surveys have proliferated as technology has made survey vehicles easier to use and expanded their availability. Now, anyone with a smartphone can develop and publish a list of questions. Combine that with the typical retailer trait of taking anything possible in-house and you have today’s environment — ubiquitous unscientific attempts to get inside the mind of the consumer. Retailers would be better off just using the resource they have at their disposal every day — store traffic.

Lyle Bunn (Ph.D. Hon)
7 years ago

Such surveys are too often akin to lighting a match to illuminate a situation. Is there not much more value in assessing consumer sentiment/mood through technology? A larger group can be assessed in the context of the surroundings and experience, and the insights are real.

Ron X
7 years ago

Developing and executing surveys is very, very difficult. Too many people think they can do it themselves and can properly interpret the results. To get useful insights from survey results costs more and too often retailers try to economize.

gordon arnold
7 years ago

Reaction to a bad experience is the primary motivator for survey participation. These comments and experiences have little or no relevance in the search for meaningful changes to the typical consumer’s store experience. The other inhibitor is indifference.

The present-day solution is bribery, in the form of coupons or raffle entries, which still yields diluted results.

So what is the first step to remedy the problem of poorly prepared and easily identified skewed surveys? A good start might be to keep a log of the number of people approached to participate. This should be performed at selected high-volume cashier stations staffed by qualified individuals to eliminate the “no time for this” responses. A limit of three questions should seriously be considered. Following up with an offer for all customers to speak with a manager right away, if desired, will allow for the unveiling of predominant irritations, some of which may be previously undiscovered.

Attacking a problem at the starting point is always a good first step. Continuing to do so is good practice.

Herb Sorensen
7 years ago

The purpose of the surveys is NOT to learn actionable insights to improve, but to find grist for the PR mill. But there is a much deeper problem. This derives from the fact that most of life is under subconscious control, most reliably learned about in others, from their BEHAVIOR, not their words. This is what Neale Martin is talking about in “Habit: The 95% of Behavior Marketers Ignore.” It reminds me of Eliza Doolittle’s plaint, “Words! Words! I’m so sick of words! … Don’t waste my time, Show me!” (Don’t Talk At All! — Or, Eliza Goes Shopping!) I know that measuring behavior is 100X more difficult than asking about it. But it is 1000X more reliable — and insightful!

At the same time, there are objective facts that can be reliably learned by asking, like “How many children do you have?” or maybe “What is your zip code?” But most action is driven by subconscious HABIT, or feelings that the shopper may be poorly qualified to explain. So the blind lead the blind — and there is a great tumult in the ditch! 😉

Marge Laney
7 years ago

To keep surveys short, retailers often make questions too general, which leaves them open to interpretation by the customer.

For example, “My fitting room experience was just right” was a statement used by an apparel retailer to find out how well customers liked their fitting room experience. The customer was asked to rate their experience from 1 to 5, with 1 being the worst and 5 being the best. No other questions regarding the fitting room were asked.

When the responses were tallied the average for the question was, you guessed it, 3. What did that tell the retailer about their customer’s fitting room experience? Nothing.

Kenneth Leung
Active Member
7 years ago

The technology and methodology is there, but retailers don’t want to hear the truth in many instances. As Jasmine said, the survey is there not for improvement, but to generate marketing messages.

Martin Mehalchin
7 years ago

The rise of Uber and App Store ratings on a five-point scale has completely changed consumer expectations around surveys. They expect surveys to be quick, one-click affairs, but they also appreciate the option to give free-form feedback (which can then be mined using text analytics tools).
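
Editor's note: As a rough illustration of the kind of mining such tools perform, here is a minimal keyword-tally sketch in Python; the responses and theme keywords are made-up assumptions, not the method or output of any particular text analytics product:

```python
from collections import Counter

# Hypothetical responses and theme keywords, for illustration only.
responses = [
    "Checkout line was slow but the staff were friendly.",
    "Couldn't find anyone to open a fitting room.",
    "Friendly cashier, quick checkout, will return.",
]
themes = {
    "checkout": ("checkout", "line", "cashier"),
    "staff": ("staff", "friendly", "associate"),
    "fitting room": ("fitting room",),
}

# Count how many responses touch each theme.
counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} of {len(responses)} responses")
```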

Retailers (and others) should not use consumer surveys as a “one size fits all” solution. They should instead employ a variety of techniques to achieve different aims.

Short, transactional consumer surveys are useful for a quick pulse check; they are best for comparing the relative performance of different stores or teams and for closed-loop approaches to intervene and “make it right” with detractors. To gain a deeper understanding of consumer behavior or business performance, retailers need to put in the work to conduct more extensive and scientific market research, whether that be via a professionally designed survey, structured ethnographic research or other established techniques.

Dave Nixon
7 years ago

Many retailers I have worked with for market research or validation demand to create, refine and even dictate the questions being asked, which creates a self-serving feedback loop, and simply tells a story that they want to hear. Companies need to build a culture of “safe” authenticity where honest feedback is rewarded and not feared. That culture would translate to more accurate research methods and content.