After a sudden transition to remote work, the company needed its team to evaluate a higher volume of calls with a smaller headcount while maintaining CX quality.
Prior to using Observe.AI, the company's QA process was slow and tedious. QA analysts spent an average of one hour per call monitoring and evaluating it, collecting their feedback, and sharing the relevant recordings. To do so, analysts had to juggle several spreadsheets and systems.
Another component of this process was confirming that each call met the criteria for scoring, since only certain types of support calls were reviewed by QA analysts for quality purposes. With this manual process, it was difficult for analysts to quickly find the best calls to review and score.
In 2020, the company, like many others, underwent an unexpected workforce reduction. While it originally had a team of six quality analysts splitting an already busy workload, that team was suddenly reduced to four. Each remaining analyst was then responsible for completing more call reviews and evaluations, as well as providing feedback to agents who served more than 238 locations.
With added pressure to avoid overtime costs, the company needed a better way to set its QA team up for success and run a faster, more effective evaluation process. Ultimately, it needed to ensure that agents received enough detailed feedback to deliver a positive CX.
With Observe.AI’s Conversation Intelligence platform, the company was able to evaluate more calls and agents in less time, working smarter rather than harder to hit its quota. The role that automation played cannot be overstated.
For example, the company used conversation intelligence to expedite finding the best calls to review, then quickly surface highly accurate transcripts and the specific interactions within those calls that analysts should examine more deeply. Rather than juggling multiple tools, analysts could access an audio player and scorecard in a single view and deliver coaching tips to agents directly alongside a time-stamped transcript. This made the evaluation process far more relevant and contextual for agents while helping analysts show why they scored a call the way they did.
With its previous scoring system, the company struggled to set an objective standard for how scores were calculated, and because calls were listened to multiple times, the process increased the chance of human error. With Observe.AI, scoring became more accurate, data-driven, and transparent. During a period of significant transition, this helped the company build trust with its team while better celebrating top performers to keep them motivated.
The ‘Moments’ feature in Observe.AI allowed the company to pinpoint the key areas of interest in a conversation that warranted deeper analysis. Through its tonality-based sentiment detection, Observe.AI’s platform identified patterns such as speech volume, speech rate, and word use that can indicate a positive or negative customer experience.
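To make the general idea concrete, here is a minimal, purely illustrative sketch of how tonality-style signals such as speech volume, speech rate, and word use might be combined into a single sentiment score. This is not Observe.AI's implementation; every feature name, threshold, and weight below is a hypothetical assumption used for demonstration only.

```python
# Illustrative sketch only: a toy tonality-based sentiment heuristic.
# All feature names, thresholds, and weights are hypothetical assumptions,
# not Observe.AI's actual model or API.

from dataclasses import dataclass


@dataclass
class CallFeatures:
    mean_volume_db: float       # average speech volume relative to full scale
    speech_rate_wpm: float      # words per minute across the call
    negative_word_ratio: float  # share of words matching a negative-word list


def sentiment_score(f: CallFeatures) -> float:
    """Return a score in [0, 1]; higher suggests a more positive experience."""
    score = 1.0
    # Unusually loud speech can suggest frustration (hypothetical threshold).
    if f.mean_volume_db > -10.0:
        score -= 0.3
    # Very fast speech can suggest agitation (hypothetical threshold).
    if f.speech_rate_wpm > 180:
        score -= 0.3
    # Heavier negative word usage lowers the score proportionally, capped at 0.4.
    score -= min(f.negative_word_ratio * 2.0, 0.4)
    return max(score, 0.0)


if __name__ == "__main__":
    calm_call = CallFeatures(mean_volume_db=-18.0, speech_rate_wpm=140, negative_word_ratio=0.02)
    tense_call = CallFeatures(mean_volume_db=-8.0, speech_rate_wpm=195, negative_word_ratio=0.12)
    print(f"calm call:  {sentiment_score(calm_call):.2f}")   # 0.96
    print(f"tense call: {sentiment_score(tense_call):.2f}")  # 0.16
```

In practice, low-scoring calls like the second example would be the ones surfaced as moments worth a deeper look by an analyst.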
With Observe.AI, two of the company's analysts were able to increase their output to 197 evaluations in a single month, exceeding their monthly goal for June.
Disconnects between evaluators and the team leads who delivered coaching resulted in agents disputing their evaluation scores 5-10 times per month on average. Each dispute required a supervisor to listen to the entire call and re-score it, adding 10+ hours to the team's workload.
In the first month of going live with Observe.AI, the team received zero dispute requests. That's because agents could view their own call recordings and performance reports in Observe.AI, including how they were performing compared to teammates on a leaderboard.