Improving Customer Service Chatbots with Attention-based Transfer Learning

24 Nov 2021 · Jordan J. Bird

With growing societal acceptance and increasing cost efficiency due to mass production, service robots are beginning to cross from the industrial to the social domain. Currently, customer service robots tend to be digital and emulate social interactions through on-screen text, but state-of-the-art research points towards physical robots soon providing customer service in person. This article explores two possibilities: first, whether transfer learning can aid in improving customer service chatbots across business domains; second, the implementation of a framework for physical robots for in-person interaction. Modelled on social interaction with Twitter customer support accounts, transformer-based chatbot models are initially assigned to learn one domain from a random weight distribution. Given a shared vocabulary, each model is then tasked with learning another domain by transferring knowledge from the previous one. Following studies on 19 different businesses, results show that the majority of models are improved when transferring weights from at least one other domain, in particular those that are more data-scarce than others. In several cases, general language transfer learning occurs alongside higher-level transfer of similar domain knowledge. The chatbots are finally implemented on Temi and Pepper robots; feasibility issues are encountered, and solutions to overcome them are proposed.
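The abstract does not fix an architecture or training library, so the following is only a minimal sketch of the cross-domain transfer step it describes: a small decoder-only transformer (here a Hugging Face GPT-2 configuration, an assumption) is trained from random initialisation on one business domain, and its weights are then copied into a fresh model that is fine-tuned on a second domain, which is possible because both domains share a vocabulary. The hyperparameters and the `domain_a_loader` / `domain_b_loader` dataloaders of tokenised support dialogues are hypothetical placeholders, not details from the paper.

```python
# Hypothetical sketch of cross-domain weight transfer for a chatbot language model.
# Architecture, hyperparameters, and dataloaders are illustrative assumptions only.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

SHARED_VOCAB_SIZE = 8000  # assumed size of the vocabulary shared across domains


def new_chatbot_model() -> GPT2LMHeadModel:
    """Small decoder-only transformer initialised from a random weight distribution."""
    config = GPT2Config(vocab_size=SHARED_VOCAB_SIZE, n_layer=4, n_head=4, n_embd=256)
    return GPT2LMHeadModel(config)


def train(model: GPT2LMHeadModel, dataloader, epochs: int = 3, lr: float = 5e-5):
    """Plain language-model training loop on one domain's support dialogues.

    Each batch is assumed to be a dict with 'input_ids' and 'attention_mask'.
    """
    optimiser = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in dataloader:
            outputs = model(**batch, labels=batch["input_ids"])
            outputs.loss.backward()
            optimiser.step()
            optimiser.zero_grad()
    return model


# Domain A (e.g. one company's Twitter support dialogues) is learned from scratch.
source_model = train(new_chatbot_model(), domain_a_loader)  # placeholder dataloader

# Domain B: instead of random initialisation, the target model starts from the
# source model's weights before fine-tuning; the shared vocabulary makes the
# embedding and output layers directly compatible.
target_model = new_chatbot_model()
target_model.load_state_dict(source_model.state_dict())
target_model = train(target_model, domain_b_loader)  # placeholder dataloader
```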
