How Should Online Customer Service Be Measured?

While measuring customer service at brick & mortar stores appears second nature to many, doing so online is less straightforward. The STELLAService rating system measures online customer service across four service areas: phone support, e-mail support, shipping performance and returns/refunds performance. In July, JCrew.com ranked first.

The following is a brief description of the defining metrics of the four service areas and JCrew.com’s July rankings (a simplified scoring sketch follows the list):

  • Phone support: The service area grade is based on 12 distinct metrics such as speed of answer, product knowledge and professionalism, including whether representatives were engaged, interested, respectful and courteous. JCrew.com ranked first across all 100 retailer websites STELLAService tracks, with particularly strong scores for issue resolution and professionalism. In the Apparel/Accessories sub-category, LLBean.com came in second overall and was the fastest of all retailers at connecting customers to a live agent.
  • E-mail support: The grade is based on 15 metrics such as response time, grammar and spelling, and first e-mail resolution. It also analyzes how well representatives answered a question, and whether their replies were personalized and expressed empathy. In the Apparel/Accessories sub-category, JCrew.com came in second behind LLBean.com, which answered e-mails on average 10 minutes faster than any other retailer tracked.
  • Shipping performance: Measures include delivery time, package fit and product accuracy. Points are added for overnight shipping, free shipping and in-store pick-up. JCrew.com tied for third in its sub-category with Gap.com, benefiting in particular from cutting its average package delivery time by two full days versus June.
  • Returns/refunds performance: Return notification time, processing time and e-mail communication are among the metrics measured, as well as the availability of free return shipping, a pre-paid return label in the package, and in-store returns. In the Apparel/Accessories sub-category, JCrew.com tied for second alongside LLBean.com and EddieBauer.com. Net-A-Porter.com ranked first because it processed refunds more than a day faster than its closest competitor in the vertical.
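STELLAService does not publish its exact formula, but conceptually each service-area grade rolls individual metric scores up into a single weighted number. A minimal sketch in Python, with hypothetical metric names and weights chosen purely for illustration:

```python
# Hypothetical sketch of a weighted service-area grade. The metric names
# and weights are illustrative assumptions, not STELLAService's actual formula.

def area_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-metric scores (0-100) into a single weighted area grade."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total_weight

# Example: three of the 12 phone-support metrics, arbitrarily weighted.
phone_metrics = {"speed_of_answer": 92.0, "product_knowledge": 85.0, "professionalism": 97.0}
phone_weights = {"speed_of_answer": 0.3, "product_knowledge": 0.3, "professionalism": 0.4}

print(f"Phone support grade: {area_score(phone_metrics, phone_weights):.1f}")  # 91.9
```

The choice of weights is the metric designer's judgment call, which is the central caveat of any aggregate ranking built this way, a point raised in the comments below.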

Across all 100 retailers STELLAService tracks in its survey, the top ten for July were:

  1. JCrew.com
  2. LLBean.com
  3. Net-A-Porter.com
  4. Nordstrom.com
  5. SaksFifthAvenue.com
  6. Sephora.com
  7. Shop.lululemon.com
  8. Store.Apple.com
  9. Wayfair.com
  10. Zappos.com

High online customer service levels, however, don’t appear to be a huge driver of consumer loyalty. Of STELLAService’s top ten performers, only three made STORES Magazine’s just-released annual Favorite 50 Online Retailers survey: LLBean.com (14), Zappos (20) and Nordstrom (24).

STORES’ top-10 favorite retailers, based on a survey of 5,600 consumers, were: Amazon.com, Walmart.com, eBay.com, Kohls.com, BestBuy.com, Target.com, JCPenney.com, Macys.com, Sears.com and OldNavy.com.

Discussion Questions

What metrics should be used to measure online customer service? Of the four used by STELLAService (phone, e-mail, shipping, returns/refunds), which should receive the most and least weighting?

Comments
Jason Goldberg
10 years ago

The STELLAService ratings (now owned by eBay, BTW) are helpful, but they are focused on granular sub-components of the overall experience. So to get an aggregate score, they rely on the metric designer to decide how influential each component of the experience is to the shopper (i.e. the weighting).

The granular metrics are very useful for helping individual stakeholders improve their areas of responsibility, but aren’t as useful for defining how each sub-component contributes to overall customer perception.

I prefer metrics that rely on the shopper to decide what elements of the experience are most important, like the American Customer Satisfaction Index, NRF Fav 50, and Net Promoter Scores (or the Foresee WOMI variant).
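Of the shopper-driven metrics mentioned above, Net Promoter Score is the one that reduces to simple arithmetic on customers' own answers: the percentage of promoters (9-10 on a 0-10 "would you recommend" scale) minus the percentage of detractors (0-6). A minimal sketch with made-up responses:

```python
# Standard Net Promoter Score arithmetic: % promoters (9-10) minus
# % detractors (0-6); 7s and 8s are passives and only dilute the score.

def net_promoter_score(ratings: list[int]) -> float:
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Ten hypothetical "would you recommend us?" responses.
print(net_promoter_score([10, 9, 9, 8, 7, 7, 6, 5, 9, 10]))  # -> 30.0
```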

Cathy Hotka
10 years ago

All these metrics make sense, but I’d like to add another. It’s incredibly annoying to visit a site, then be served ads about it for the next month while on other sites... agreed?

Gordon Arnold
10 years ago

The omission or poor performance of any part of the customer service areas in this discussion will reflect on the company as a whole and be seen as silent confirmation of derogatory market comments. Customer service must be maintained in total to be successful; any weakness will make all other efforts moot in the consumer’s report back to the market. Keeping customer service streamlined, simple and fully documented will demonstrate to the public that a company is honest and fair to the consumer. This direction, with an open book policy, will also serve as a public defense against the professional complainers now found throughout the retail markets.

Ed Rosenbaum
10 years ago

They all make sense to me. But I was amused to read how high Walmart is ranked for its online customer service. It is certainly different from its brick & mortar performance.

Jordy Leiser
10 years ago

We’re excited to have STELLAService included in the discussion. Thanks, RetailWire!

First, I want to clarify a few points. STELLAService remains an independent business—we are not owned by eBay.

Next, the methodology outlined here is applied to our STELLA Monthly Benchmarks report, which uses a basket of metrics we analyze to inform the rankings across those four categories. However, our daily measurements of online retailers include other service channels and many other valuable metrics that businesses use to assess their own performance and benchmark against key competitors.

And, Jason is correct—our data reflects business performance, not consumer emotion. It shows how Walmart stacks up against Target, how Macy’s stacks up against Nordstrom and how they all compare to perennial top performers like Zappos and L.L.Bean.

The trouble with user-driven survey data is that it typically only reflects the extremes—the best and worst experiences. It’s about an individual’s feelings, not necessarily whether a business hit the mark operationally. Two individuals could have the same experience yet describe and “measure” the quality very differently. We closely emulate the path of the consumer to objectively collect our data, but we’re ultimately after a true apples-to-apples performance comparison, remaining completely unbiased and objective through the process (our processes are even audited by KPMG).
