Measuring Conversational Fluidity in Automated Dialogue Agents

We present an automated evaluation method for measuring fluidity in conversational dialogue systems. The method combines several state-of-the-art natural language processing tools into a classifier and uses human ratings of dialogues to train an automated judgment model. Our experiments show that the resulting scores improve on existing metrics for measuring fluidity.
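
A minimal sketch of the kind of pipeline the abstract describes, assuming scikit-learn is available: simple surface features stand in for the outputs of the NLP tools, and a logistic regression plays the role of the judgment model trained on human fluidity ratings. The feature extractor, feature names, and model choice are all hypothetical; the paper's actual tools and classifier are not specified here.

```python
# Hypothetical sketch of a fluidity-judgment pipeline: dialogue features
# feed a classifier trained on human ratings. Not the paper's implementation.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(dialogue: list[str]) -> list[float]:
    """Hypothetical feature extractor. In the paper's setting these values
    would come from state-of-the-art NLP tools; here we use simple
    surface statistics of the dialogue turns as placeholders."""
    turns = len(dialogue)
    avg_turn_len = sum(len(t.split()) for t in dialogue) / max(turns, 1)
    vocab_size = len({w for t in dialogue for w in t.lower().split()})
    return [float(turns), avg_turn_len, float(vocab_size)]

def train_fluidity_judge(dialogues: list[list[str]], ratings: list[int]):
    """Train an automated judgment model from binary human fluidity ratings
    (1 = fluid, 0 = not fluid) and report held-out accuracy."""
    X = [extract_features(d) for d in dialogues]
    X_train, X_test, y_train, y_test = train_test_split(
        X, ratings, test_size=0.2, random_state=0
    )
    model = LogisticRegression().fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
    return model
```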
