Fortune 300 Car Dealership Group Grows QA Team Output by 7% While Reducing Time Spent by 75%

After a sudden transition to remote work, the company needed to enable its team to evaluate a higher volume of calls with a smaller headcount while maintaining CX quality.

7.6%
increase in evaluations completed within 30 days
100%
voice calls monitored
75%
reduction in time to evaluate calls (from one hour to just 15 minutes)

Challenges

Prior to using Observe.AI, the company's QA process was slow and tedious. QA analysts spent an average of one hour per call monitoring and evaluating it, compiling their feedback, and sharing the relevant recordings. To do so, analysts had to juggle several spreadsheets and systems.

Here’s what that process looked like prior to Observe.AI:

  1. QA analysts manually searched for calls by agent name via a system called Oasys
  2. The analyst annotated the call for the agent
  3. The analyst reviewed the call recording two or more times
  4. Agent performance was scored in an Excel workbook
  5. Upon completion, the QA analyst would then deliver the call recording and Excel workbook to the agent and their team leader via email

Another component of this process was ensuring that each call fit the criteria for scoring, since only certain types of support calls were reviewed by QA analysts for quality purposes. With the process described above, it was difficult for analysts to quickly find the best calls to review and score.

In 2020, the company, like many others, underwent an unexpected workforce reduction. While it originally had a team of six quality analysts splitting an already busy workload, the team was suddenly reduced to four. That meant each analyst was responsible for completing more call reviews and evaluations, as well as providing feedback to agents serving more than 238 locations.

With additional pressure to avoid overtime costs, the company needed a better way to set its QA team up for success and run a faster, better evaluation process. Ultimately, it needed to ensure that enough detailed feedback reached its agents so they could deliver a positive CX.

Solution

Evaluating more agents in less time

With Observe.AI’s Conversation Intelligence AI platform, the company was able to evaluate more calls and agents in less time, working smarter rather than harder to hit its quota. The role that automation played cannot be overstated.

For example, the company used conversation intelligence AI to expedite the process of finding the best calls to review. It was then able to quickly surface highly accurate transcripts and the interactions within those calls that merited deeper review. Rather than juggling multiple tools, analysts could access an audio player and scorecard in a single view, and provide coaching tips to agents directly alongside a time-stamped transcript. This made the evaluation process far more relevant and contextual for agents while helping analysts show exactly why they scored a call the way they did.

Visibility promotes scoring that agents trust

With its previous scoring system, the company struggled to set an objective standard for how scores were calculated. Because calls were listened to multiple times, the previous process increased the chance of human error. With Observe.AI, the process became more accurate, data-driven, and transparent. During a period of significant transition, this enabled the company to build trust with its team while better celebrating and motivating top performers.

Better agent training with ‘moments’

The ‘Moments’ feature in Observe.AI allowed the company to pinpoint key areas of interest in conversations that warranted deeper analysis. Through its tonality-based sentiment detection, Observe.AI’s platform identified patterns such as speech volume, speech rate, and word use that can be indicative of a good or bad customer experience.

Examples of how the Company Uses Moments

  • Process adherence: Is the agent verifying customer information, such as the make and model of their car? Are they asking if they’ve used our services before?
  • Call drivers/conversions: Is the agent making at least two attempts to encourage customers to schedule an appointment for their vehicle?
  • Empathy: Are agents sounding robotic, or are they answering customer concerns with empathetic words, tones, and phrases?

Results

With Observe.AI, two of the company's analysts were able to increase their output to 197 evaluations in a single month, exceeding their monthly goal for June.

Previously, disconnections between evaluators and the team leads who delivered coaching led agents to dispute their evaluation scores 5-10 times per month, on average. Each dispute required a supervisor to listen to the entire call and re-score it, adding 10+ hours to the team's workload.

In the first month of going live with Observe.AI, the team received zero dispute requests. That's because agents were able to see their own call recordings and performance reports via Observe.AI, including how they were performing compared to teammates on a leaderboard.

OVERVIEW
The company is an international, Fortune 300 automotive retailer, owning and operating 186 automotive dealerships, 242 franchises, and 49 collision centers in the United States, United Kingdom, and Brazil.
CHALLENGES
After an unexpected workforce reduction, the company's QA team needed to find a way to review the same number of calls in the same amount of time with fewer analysts to get feedback to its hard-working agents.
SOLUTION
The company uses Observe.AI to complete more and higher-quality agent evaluations, uncover compliance gaps, and improve agent enablement.
FOUNDED
1997
HQ
Texas
