
Why do customers dislike chatbots?

Written by The Clarasys Team | June 11 2020

In recent years, front-line support has slowly been replaced or supplemented by chatbots. A customer logs in to ask about a product or to raise a support request, and is greeted with an automated response from an AI-powered bot. The bot has knowledge at its imaginary fingertips, having been built specifically to resolve business-specific queries. This all sounds promising – so why did Forrester report that 54% of US online consumers expect interactions with customer service chatbots to negatively affect their quality of life?

Expectations aren’t clear

A colleague once told me that the gap between what people think AI can do and what it can actually do is widest when discussing chatbots. With all the hype around artificial intelligence, why shouldn’t a customer expect an immediate resolution, or a bot that knows exactly who they are and what they want or need? Chatbots that make it clear what they can and can’t do provide a better customer experience: a chatbot that offers a generic “How can I help?” encourages the imagination to run wild, whereas a more specific “I can help with X or Y” demonstrates the bot’s capability clearly.
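As a rough illustration of that principle, here is a minimal sketch in Python of a capability-scoped greeting. The intent names, descriptions and wording are all hypothetical – they are not taken from any particular chatbot framework – but they show how a bot can state up front exactly what it can do.

# Minimal sketch of a capability-scoped greeting. The intents and phrasing
# below are hypothetical examples, not taken from any real framework.
SUPPORTED_INTENTS = {
    "track_order": "track an existing order",
    "reset_password": "reset your password",
    "billing_question": "answer a billing question",
}

def greeting() -> str:
    """State exactly what the bot can do, rather than an open-ended prompt."""
    options = sorted(SUPPORTED_INTENTS.values())
    capabilities = ", ".join(options[:-1]) + " or " + options[-1]
    return f"Hi! I can help you {capabilities}. Which would you like to do?"

print(greeting())
# Hi! I can help you answer a billing question, reset your password or track an existing order. Which would you like to do?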

Enhance every interaction

If a bot can be clear about its capabilities, it can also be transparent when it cannot answer a query. When it cannot resolve the issue entirely, it will likely need to transfer the customer to a live agent. This is where chatbots often fall down, forcing the customer to repeat with the live agent the exact same conversation they just had with the bot. That makes the bot, and any interaction with it before the hand-off, redundant. Even when it can’t solve the query, the chatbot should enhance the interaction by passing all the relevant information to the agent in a digestible format – whether a conversation transcript or a short summary – so the customer doesn’t have to go back over the same ground. This goes hand in hand with admitting defeat early: if faced with an issue it cannot solve, the bot should deflect to an agent as soon as possible. If it keeps eliciting information, it needs to be clear with the customer that it will not resolve the issue itself, but will pass the details to a support agent when one is available.
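To make the hand-off concrete, here is a minimal Python sketch of the kind of package a bot could pass to a live agent. The structure – a short summary plus the full transcript – follows the paragraph above; the field names and the build_handoff helper are hypothetical, and a real deployment would use whatever hand-off mechanism its contact-centre platform provides.

# Minimal sketch of a bot-to-agent hand-off payload. Field names and the
# build_handoff helper are hypothetical; the point is that the agent receives
# a digestible summary plus the full transcript, so the customer never has to
# repeat themselves.
from dataclasses import dataclass, field

@dataclass
class Handoff:
    customer_id: str
    detected_intent: str                                  # the bot's best guess at what the customer wants
    summary: str                                          # one or two sentences for the agent to skim
    transcript: list[str] = field(default_factory=list)   # the full exchange so far

def build_handoff(customer_id: str, intent: str, transcript: list[str]) -> Handoff:
    """Package everything the bot has learned before transferring to an agent."""
    summary = (
        f"Customer asked about '{intent}'. "
        f"The bot could not resolve it after {len(transcript)} messages."
    )
    return Handoff(customer_id, intent, summary, transcript)

handoff = build_handoff(
    customer_id="C-1042",
    intent="refund for a damaged item",
    transcript=["Customer: My order arrived damaged.", "Bot: I'm sorry to hear that..."],
)
print(handoff.summary)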

People want human interactions

Chatbots utilise “Natural Language Processing”, a strand of AI used to understand what is truly being asked, allowing the bot to establish what support is required. What is often neglected is “Natural Language Generation”, which takes the automated response and makes it feel conversational. This is not to say a bot should pretend to be a human – it should never do that – but it doesn’t have to be so “robotic” either. Pulling a stock answer from a knowledge base does not make a customer feel valued, and a one-size-fits-all fallback response like “Sorry I can’t help with that query” doesn’t help either. A better answer might be: “Sorry I can’t help with that query <user’s name>. It sounds similar to <Option X> or <Option Y>, which I can help with. Please rephrase if you think I can help, or I can transfer you to an agent”. This is more personalised and enhances the sense of individuality the customer gets from the interaction.
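The example response above is easy to template. Below is a minimal Python sketch of a personalised fallback, assuming the bot already knows the user’s name and can get its closest supported intents from whatever classifier it uses; the fallback_response function and its inputs are hypothetical.

# Minimal sketch of a personalised fallback response. The function and its
# inputs are hypothetical; near_matches would come from whatever intent
# classifier the bot uses.
def fallback_response(user_name: str, near_matches: list[str]) -> str:
    """Name the user and point at the closest things the bot CAN do,
    instead of a one-size-fits-all apology."""
    if near_matches:
        options = " or ".join(near_matches[:2])
        return (
            f"Sorry I can't help with that query, {user_name}. "
            f"It sounds similar to {options}, which I can help with. "
            "Please rephrase if you think I can help, or I can transfer you to an agent."
        )
    return (
        f"Sorry I can't help with that query, {user_name}. "
        "Let me transfer you to an agent who can."
    )

print(fallback_response("Sam", ["tracking an order", "changing a delivery address"]))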

In summary, a customer simply wants to feel that their query is valued and to have it resolved as quickly as possible. Historically, chatbots have done the opposite, thanks to a combination of poor customer-focused planning and immature technology. New chatbot initiatives need to learn from this and put the customer at the centre of their work, rather than making other factors, like cost reduction, the focus.

Getting chatbots right is just one element of delivering great remote customer service. To read more about other factors to consider, click here.

This post was originally written by Mike Beech.