Customer Experience Metrics
In this section, we show you the key customer service metrics that give you a clear picture of how well your team is serving your customers. A good support operation resolves customer challenges efficiently and makes intentional choices about what constitutes acceptable wait times and overall customer satisfaction targets.
Service Level, whether via telephone, chat, or email, measures how quickly customers get a response once they’re in queue. When establishing this metric, make sure to specify exactly when a customer enters the queue and the clock starts ticking.
Typically, Service Level is expressed as a pair of numbers: the percentage of interactions that are responded to within a given time. For example, an 80/20 Service Level means 80% of tickets get a response within 20 seconds.
Service Level = (Interactions Responded to Within the Threshold ÷ Total Interactions) × 100
The higher the Service Level, the more agents you need available and the more expensive it is. A 90/10 Service Level costs more than an 80/20 Service Level, because you need enough support agents to keep calls from waiting in queue during heavy volume, and those agents may sit idle at times of low volume.
Response time SLAs for interactive engagements, such as telephone and chat, should be measured in seconds. Email or other asynchronous interactions should be measured in minutes or hours.
Most companies first decide how long a wait their customers can tolerate. Then, they compromise on the percentage of interactions that must fall within that time to control costs.
Service Level reports often break response times out by percentile. For example, your average response time may be 00:00:28, but seeing the percentiles is more informative.
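The Service Level calculation above can be sketched in a few lines. The function name and sample response times below are hypothetical, chosen so the example hits an 80/20 target.

```python
# Sketch: computing Service Level and response-time percentiles from raw
# queue wait times (in seconds). Sample data is hypothetical.
from statistics import quantiles

response_times = [3, 5, 7, 9, 11, 13, 15, 17, 25, 40]  # seconds in queue

def service_level(times, threshold_seconds):
    """Percentage of interactions responded to within the threshold."""
    within = sum(1 for t in times if t <= threshold_seconds)
    return 100 * within / len(times)

sl = service_level(response_times, 20)  # the "20" in an 80/20 target -> 80.0

# Percentile view: quantiles(..., n=100) returns the 1st..99th cut points,
# which is more informative than the average alone.
pct = quantiles(response_times, n=100)
p50, p90 = pct[49], pct[89]
```

Reporting `p50` and `p90` alongside the headline Service Level shows whether a few very slow responses are hiding behind a healthy average.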
ASA — Average Speed of Answer
ASA is how long it takes, on average, to respond to customers once they’re in queue. ASA is used in reporting Service Level adherence. There are a few things that need to be addressed when calculating ASA:
- Define how you currently measure your team
- Specify the trigger for when a customer enters the queue and the clock starts ticking
- Hours of operation can complicate things if customers can queue after hours — think email support
- Follow-up responses typically aren’t calculated in ASA
Interactive Voice Response (IVR) systems may help alleviate long response times. They allow customers to interact with a computer system via voice or keypad tones; the system processes the input and returns information or solutions to the customer. In regard to IVRs, consider:
- Whether you already have IVRs
- If not, whether you need IVRs
- If outsourcing, whether your outsourced partner will use your IVRs or whether you have to use their ticketing/chat/phone system
If possible, minimize customer service abandonment, which is when your customers start a ticket, get frustrated, give up, and leave before a support representative gets to them. It’s measured as a percentage and should be low, around 3% or less.
High abandonment could mean your Service Level is too low and customers give up waiting in queue. It could also mean your workflow frustrates customers into giving up, or that there’s a technical problem with your processing system. Take time to find the root cause of high abandonment rates.
One thing to consider: if you’re purposefully running a particularly lean operation and your customers are patient, you may find it more relevant to primarily focus on abandonment rate instead of ASA. In this case, you may have the goal of keeping abandonment rates low, instead of maintaining a fast ASA.
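The abandonment rate itself is a simple ratio. The counts below are hypothetical, chosen to land on the roughly 3% target mentioned above.

```python
# Sketch: abandonment rate as the percentage of offered interactions that
# were dropped before reaching an agent. Counts are hypothetical.
def abandonment_rate(abandoned, total_offered):
    """Percentage of started interactions abandoned before an agent responded."""
    return 100 * abandoned / total_offered

rate = abandonment_rate(abandoned=24, total_offered=800)  # 3.0%
```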
FCR — First Contact Resolution
FCR is one of the most important metrics for customer support. In most cases, FCR drives customer satisfaction more than any other customer support metric. FCR is measured as a percentage, such as 80%. That means that 80% of tickets are resolved on the first customer interaction.
One FCR formula looks like this:
FCR = (Incidents Resolved on First Contact ÷ Total Incidents) × 100
FCR sounds simple, but to make it useful, you must decide what counts as being resolved on first contact.
Think through the following questions and how your operation will handle them:
- What if the interaction is transferred, then resolved?
- What if the interaction was misrouted to an outsourcer’s queue?
- What counts as a reopened incident?
Re-opened incidents have specific challenges to consider as well:
- What prevents agents from opening a new incident to game the system and keep their FCR artificially high?
  One option is a rule that any repeat contact from the customer within 24 hours counts as a re-open, regardless of what the agent does
- What prevents a customer from re-opening an old ticket for a new problem?
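The 24-hour re-open rule discussed above can be sketched as follows. The data shape and field names are hypothetical; the point is that the re-open policy lives in the metric, not in the agent's judgment.

```python
# Sketch: FCR with a 24-hour re-open rule. A resolution only counts toward
# FCR if the customer does not contact support again within 24 hours.
# Field names and sample data are hypothetical.
from datetime import timedelta

REOPEN_WINDOW = timedelta(hours=24)

def fcr_percentage(incidents):
    """incidents: dicts with 'resolved_first_contact' (bool) and
    'next_contact_gap' (timedelta since resolution, or None if no repeat)."""
    counted = 0
    for inc in incidents:
        gap = inc["next_contact_gap"]
        reopened = gap is not None and gap < REOPEN_WINDOW
        if inc["resolved_first_contact"] and not reopened:
            counted += 1
    return 100 * counted / len(incidents)

incidents = [
    {"resolved_first_contact": True,  "next_contact_gap": None},
    {"resolved_first_contact": True,  "next_contact_gap": timedelta(hours=5)},  # re-open
    {"resolved_first_contact": True,  "next_contact_gap": timedelta(days=3)},
    {"resolved_first_contact": False, "next_contact_gap": None},
]
fcr = fcr_percentage(incidents)  # 2 of 4 incidents count -> 50.0
```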
CSAT — Customer Satisfaction
It’s important to find a good way to measure CSAT, but it’s difficult to do well. Here are some reasons why:
- Opt-in/out surveys don’t offer a good sample
- Customer service agents may find ways to steer unhappy customers away from completing surveys
- Your most valuable customers probably don’t want to be bothered with surveys
- Surveys are so ubiquitous today that many people avoid them
- Poorly worded questions may yield unreliable results
- Changing survey questions too often erodes their value over time
Spend time developing a good strategy to measure customer satisfaction. There is a lot of scholarly work on how to conduct good customer satisfaction research. Don’t take it lightly. Consider these options:
- Multiple channels for feedback (email, inbound/outbound phone calls, etc.)
- Remove biases (e.g. happy or unhappy customers are more likely to respond)
- Make sure you’re measuring what really matters:
- How satisfied are your customers with their service?
- What’s important to your customers and what can you do better?
- How loyal are your customers? (e.g. NPS)
- Consider incentives: incentives are tricky to get right without introducing bias, but should be considered
- Don’t default to a 0-10 scale for every question
- Keep it simple and don’t ask too many questions
- Measure over time
- Pay attention to the confidence interval and confidence level for your population and sample size
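The confidence-interval point above can be made concrete with the standard margin-of-error calculation for a proportion. This is a sketch using the normal approximation; 1.96 is the usual z-score for a 95% confidence level, and the sample numbers are hypothetical.

```python
# Sketch: margin of error for a CSAT proportion at a 95% confidence level,
# using the normal approximation for a sample proportion.
import math

def margin_of_error(p, n, z=1.96):
    """p: observed satisfied proportion (0-1), n: sample size."""
    return z * math.sqrt(p * (1 - p) / n)

# e.g. 85% satisfied from 400 survey responses
moe = margin_of_error(0.85, 400)  # roughly 0.035, i.e. about +/- 3.5 points
```

Reporting CSAT as "85% ± 3.5" instead of a bare "85%" makes clear whether a month-over-month change is signal or noise.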
NPS — Net Promoter Score
Have you noticed how many surveys ask:
How likely is it that you would recommend our company to a friend or colleague?
That’s the question companies use to measure NPS. NPS was developed by Fred Reichheld, Bain & Company, and Satmetrix, who hold it as a registered trademark, and was published in the December 2003 Harvard Business Review article The One Number You Need to Grow. The idea is that there are only three types of customers:
- Promoters (score of 9–10) are loyal customers who would refer you to others and help grow your customer base
- Passives (score of 7–8) are satisfied but unenthusiastic customers who may choose another provider if given the chance
- Detractors (score of 0–6) are unhappy customers who can cause others not to become your customer
NPS is scored from -100 to 100, with negative scores meaning your customers are net detractors, and positive scores meaning your customers are net promoters. The formula is:
NPS = % Promoters − % Detractors
NPS is both a good and a limited indicator: it tells you how well you’re performing at a high level over time, but it doesn’t identify specific strengths or weaknesses by itself and usually lags other indicators. That said, it’s still worth tracking NPS and tailoring it to your operation.
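The NPS calculation, using the standard bands (promoters 9–10, passives 7–8, detractors 0–6), can be sketched as follows. The sample scores are hypothetical.

```python
# Sketch: computing NPS from raw 0-10 survey scores using the standard
# promoter/passive/detractor bands. Sample scores are hypothetical.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6; 7-8 are passives
    return 100 * (promoters - detractors) / len(scores)

scores = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
score = nps(scores)  # 4 promoters, 3 detractors out of 10 -> 10.0
```

Note that passives drop out of the numerator but still count in the denominator, which is why converting passives to promoters moves the score even though passives are "satisfied."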