What process should you follow for conversation review?

Manager reviewing agents' tickets

In the most traditional approach to ticket reviews, the CS Manager or a Team Lead is responsible for reviewing agents' work and giving them feedback. In bigger teams, it might make sense to have a separate QA Agent whose job is to review and rate tickets and give agents feedback on their work.

This works well for companies with structured teams and a hierarchical setup. On the positive side, it creates a fluid workflow, and having the same people review everyone's work keeps the feedback consistent and makes performance easy to compare.

This is more difficult to make work for smaller teams, where the manager's plate is already full. From the agent's perspective, it reinforces the hierarchy, and they receive feedback from only one person: good because it's consistent, not so good because there is little variety.

 

Peer-to-peer reviews

More and more companies are going down the route of having agents review other agents' work. This can work brilliantly for smaller teams and for companies embracing an open culture. Agents learn from others by seeing their responses to the same issues they themselves handle every day, and they can share their own tips on what has worked well and what has not. Getting feedback from different people also gives the review process a whole new perspective, and having more people participate helps cover a lot more ground in terms of total tickets reviewed. The other added benefit is a more collaborative culture, where agents are not competing with each other but helping each other improve.

It's definitely more difficult to compare agent performance when a variety of people are doing the reviews, and it takes some effort to train everybody on ticket reviewing (and to track whether it's being done), but those efforts can pay off many times over.

 

Self-review

Why not have agents review their own tickets and think through their responses once more, critically evaluating different aspects of their job?

You're already spending a good amount of time on hiring and finding the right people, so it makes sense to trust them to review and critically evaluate their own work.

 

Reactive review

With a massive number of tickets running through your support team's hands, it might make sense to concentrate feedback efforts on cases where you know something has gone wrong: a poor CSAT rating, a ticket with lots of back-and-forth messages, or a very long response time. This will add a certain bias to your internal quality score, but it's probably the fastest way to see the positive effect ticket reviews can have on your bottom-line results.

It's important not to mix reactive and proactive (where tickets are chosen randomly for review) results: reactive reviews will naturally score much lower, so the two sets of scores are not comparable.

