This is Part 2 of a series – Part 1 can be found here
So…part 2 of this series will take a look at the KPIs appropriate for a Technical Support department. In order to arrive at what elements of performance are both measurable and key to the department, we have to investigate what the role of a Tech Support department is. I posit that the role of Technical Support is this:
To be the first layer of customer care, responding to and assisting the customer with issues surrounding the use of the product or service offered by the company.
What this means is that when the customer isn’t yet a customer, when they’re just a prospect, the first people they encounter are the salespeople. It is at that point up to the salesperson to put the company’s best foot forward and present the product in its best light as a solution to the appropriate needs or desires of the customer. Once they are a customer, if any problem of a technical nature is experienced with the product, TS is the first line of assistance – often the only one. It is often the only contact the customer has with the company, outside of mass-market advertising.
Knowing this about the role this department plays, let’s go back to the question of what is a KPI – a Key Performance Indicator.
What is Key about this department? What is the reason it exists? I will submit that the answer to this is that it helps customers use the company’s products successfully. As well, it maintains a healthy customer relationship. Some might argue that its cost is key – but cost is not correlated to success: cost could be high or low, but does that impact how well it accomplishes its job? Cost is important, and absolutely should be tracked and measured, but it is not key.
Let’s start with “helping a customer use the company’s products successfully,” and turn that into metrics, something we can measure. I’m going to work backwards here – “successfully” sticks out in this sentence as the major point. That tells me we need a measure of successful resolutions to customer issues. Whether by workaround or fix, as long as the customer has an answer and considers the issue resolved, that’s a successful resolution. So immediately there’s a metric:
- The ratio of successful resolutions against total issue count
I submit that that is Key, it directly reports Performance, and it is a measurable Indicator.
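To make that concrete, here is a minimal sketch of the calculation in Python, assuming an exported issue log where each record carries a status field (the record layout and field names are my own invention, not from any particular ticketing system):

```python
# Minimal sketch: successful resolution rate over a set of closed-out issues.
# The "status" values here are illustrative placeholders.
issues = [
    {"id": 101, "status": "resolved"},
    {"id": 102, "status": "resolved"},
    {"id": 103, "status": "closed-unresolved"},
    {"id": 104, "status": "resolved"},
]

resolved = sum(1 for i in issues if i["status"] == "resolved")
resolution_rate = resolved / len(issues) if issues else 0.0
print(f"Successful resolution rate: {resolution_rate:.0%}")  # 75%
```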
Next I want to tap that other role – “maintains a healthy customer relationship.” Healthy in business terms generally means that the customer will speak well of the company to others, and will be comfortable with the idea of spending money with the company when the need arises. This one is harder to measure, and some companies might not do it, but the way to measure this is to ask the customer periodically, or immediately after the resolution of the service episode. Probably a small fraction of customers will reply to a request for such an evaluation, so keep the questions short and don’t spend a lot of the customer’s time. It may also be worthwhile to offer some form of incentive to respondents (think small: a $5 coupon at Amazon, or a free song download from iTunes, something unrelated to the company’s products – you’re wanting answers from dissatisfied customers as well as satisfied ones, so they may not give a hoot about discounts on your products).
The question you’re trying to answer is: would you, the customer, be comfortable doing business with us, the company, if your need should arise again; and would you be comfortable recommending us to a friend or colleague experiencing that same need? I’ll distill this into a term almost everyone will recognize:
- The ratio of satisfied customers among total customers experiencing issues
That is a quality that is Key, it directly reports Performance, and it is a measurable Indicator.
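As a rough sketch, and assuming a simple yes/no survey question (a real survey might use a 1-to-10 scale instead), the tally reduces to a ratio in much the same way:

```python
# Minimal sketch: satisfaction ratio from post-resolution survey responses.
# The field names are illustrative, not from any real survey tool.
responses = [
    {"customer": "A", "would_recommend": True},
    {"customer": "B", "would_recommend": False},
    {"customer": "C", "would_recommend": True},
]

satisfied = sum(1 for r in responses if r["would_recommend"])
satisfaction_rate = satisfied / len(responses) if responses else 0.0
print(f"Satisfied customers among respondents: {satisfaction_rate:.0%}")
```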
I’ll hit some other points here which are also extremely important, and perhaps to some businesses can be considered key (for example, if you run a company or department that handles outsourced tech support calls and you’ve got three hundred staffers manning a phone bank, some of these will definitely be key).
- Number of Issues Awaiting Resolution
This is usually referred to as “Calls in Queue”, but I’m trying to future-proof my writing here. This represents the absolute number of issues in the queue awaiting their first contact with a member of your team. The medium could be anything – telephone, live chat, email, snail-mail, whatever. This is a measurable indicator telling you what your backlog is. It will also give you data points for the absolute count of issues per product over time (which, if the foundation of the product is relatively sound, you would expect to go down over time). As well, the ratio of this number against 1st-tier support staff will indicate whether staffing levels are appropriate for the volume and duration of issues (too many issues per staffer indicates understaffing, too few indicates overstaffing).
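A small sketch of how those readings (backlog size, backlog per product, and issues per 1st-tier staffer) might be pulled from a queue export; the product names and agent count are placeholders:

```python
from collections import Counter

# Sketch: backlog size, backlog by product, and issues-per-agent ratio.
queue = [
    {"id": 201, "product": "Widget"},
    {"id": 202, "product": "Widget"},
    {"id": 203, "product": "Gadget"},
]
first_tier_agents = 2  # illustrative staffing level

print("Issues awaiting first contact:", len(queue))
print("Backlog by product:", Counter(i["product"] for i in queue))
print("Issues per agent:", len(queue) / first_tier_agents)
```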
- Wait time in Queue
This will differ for each medium, so it’s not easily measured – it won’t be key unless your department is responsible for only one method of input. It is important though. It represents the amount of time between the arrival of the customer’s issue and the moment when it comes in contact with a 1st-tier staff member. Combined with number of issues in queue, it gives you an indicator of whether your staffing levels are appropriate for the volume of issues. It directly impacts one of the two KPIs I mentioned above, because the amount of time between arrival and contact affects the customer relationship.
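Because the numbers differ so much by medium, it usually makes sense to report this one per channel rather than as a single figure. A sketch, with made-up channel names and wait times in minutes:

```python
from statistics import median

# Sketch: wait time from arrival of an issue to first contact, per medium.
waits_minutes = {
    "phone":    [3, 5, 12],
    "email":    [240, 360, 95],
    "livechat": [1, 2, 4],
}
for channel, minutes in waits_minutes.items():
    print(f"{channel:8s} median wait: {median(minutes)} min")
```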
- Issue Duration
This represents how long it takes from either arrival of the issue or its first contact, to the point where the issue is resolved and the customer receives the resolution. (I specifically say “receives,” rather than “accepts,” because it should be recognized that some customers simply never accept resolution outside of unreasonable circumstances.) Where the number of issues and the issue wait time are something of a balancing act with staffing, with upper and lower boundaries you know should not be crossed (boundaries that will shift regularly or differ for each product), this is a one-way metric: the shorter the better. A certain level of “acceptable” should be established, which will only rarely be changed (as new techniques of resolution are adopted, for example). This measurement indicates how capable your team is at providing the service that is their charter. If things take too long, your staff need training. There isn’t any such thing as too short – though you should keep an eye on unusually short durations and verify that resolution is genuine, and not staffers closing down issues in an illegitimate manner.
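Here is a sketch of how that might be tracked, with an agreed “acceptable” ceiling and a flag for suspiciously short durations worth auditing; the threshold values are placeholders, not recommendations:

```python
from statistics import mean

# Sketch: issue duration in hours, from arrival (or first contact) to the
# customer receiving the resolution.
durations_hours = [2.5, 6.0, 0.1, 48.0, 3.0]
acceptable_max = 24.0      # the established "acceptable" level
suspiciously_short = 0.25  # worth auditing for illegitimate closes

print("Average duration:", mean(durations_hours), "hours")
print("Over the acceptable level:",
      sum(1 for d in durations_hours if d > acceptable_max))
print("Worth auditing (very short):",
      sum(1 for d in durations_hours if d < suspiciously_short))
```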
Other factors that can be measured for valuable data – but which are not necessarily indicative of departmental health – would include:
- Issue aging – addressing “cold case” issues, ones that have not had successful resolution and remain open a long time
- If issues derive directly from product defects, a really good team would tie their issues to specific feature- or change-requests in the product. When the product changes in such a way that these issues are resolved, having a message sent to those customers indicating the fix would be excellent customer service.
- Product-specific issue counts – these can be given to product management for determination of whether development time should be spent on fixes or features; they can also be used to measure the overall quality of a product.
- Ratio of escalated issues among total issue count – what percentage of issues require attention beyond the first tier or the knowledge base? This can be used to indicate whether 1st-tier staff need training, or on an individual level can indicate when a staff member is suitable for promotion. (A short sketch of this calculation, along with per-product counts, follows this list.)
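For the last two of those, here is a sketch of per-product issue counts and the escalation ratio pulled from a set of closed issues; the field names are again illustrative, to be adapted to whatever your ticketing system exports:

```python
from collections import Counter

# Sketch: per-product issue counts and the escalation ratio.
closed_issues = [
    {"product": "Widget", "escalated": False},
    {"product": "Widget", "escalated": True},
    {"product": "Gadget", "escalated": False},
    {"product": "Gadget", "escalated": False},
]

print("Issues per product:", Counter(i["product"] for i in closed_issues))
escalated = sum(1 for i in closed_issues if i["escalated"])
print(f"Escalation ratio: {escalated / len(closed_issues):.0%}")
```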
So that’s my two bits on tech support – I’ll do another article on general IT later (this time I won’t promise in a few days though, since things are a bit distracting right now).
I’d appreciate your thoughts on the matter!