Joint Incremental Disfluency Detection and Dependency Parsing

TACL 2014  ·  Matthew Honnibal, Mark Johnson

We present an incremental dependency parsing model that jointly performs disfluency detection. The model handles speech repairs using a novel non-monotonic transition system, and includes several novel classes of features. For comparison, we evaluated two pipeline systems that use state-of-the-art disfluency detectors. The joint model performed better on both tasks, achieving 90.5% parse accuracy and 84.0% disfluency detection accuracy. The model runs in expected linear time and processes over 550 tokens per second.
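To make the idea of a transition system with disfluency handling concrete, here is a minimal sketch in Python. It is not the authors' transition system: it shows an arc-eager-style parser extended with a hypothetical EDIT action that pops the stack top, discards any arc it has already received (the non-monotonic part), and marks it as disfluent. All names (`State`, `apply`, the action labels) and the toy repair example are invented for illustration.

```python
# Illustrative sketch only, not the paper's actual transition system.
from dataclasses import dataclass, field

@dataclass
class State:
    buffer: list[int]                                     # token indices not yet consumed
    stack: list[int] = field(default_factory=list)
    heads: dict[int, int] = field(default_factory=dict)   # dependent index -> head index
    disfluent: set[int] = field(default_factory=set)      # indices marked as repairs/fillers

def apply(state: State, action: str) -> None:
    if action == "SHIFT":                     # move the next buffer token onto the stack
        state.stack.append(state.buffer.pop(0))
    elif action == "LEFT-ARC":                # stack top takes the buffer front as its head
        dep = state.stack.pop()
        state.heads[dep] = state.buffer[0]
    elif action == "RIGHT-ARC":               # buffer front takes the stack top as its head
        dep = state.buffer.pop(0)
        state.heads[dep] = state.stack[-1]
        state.stack.append(dep)
    elif action == "REDUCE":                  # pop a stack token that already has a head
        state.stack.pop()
    elif action == "EDIT":                    # non-monotonic repair: undo its arc, mark disfluent
        tok = state.stack.pop()
        state.heads.pop(tok, None)
        state.disfluent.add(tok)

# Toy speech repair: "to Boston uh to Denver", where "to Boston uh" is edited out.
words = ["to", "Boston", "uh", "to", "Denver"]
state = State(buffer=list(range(len(words))))
for action in ["SHIFT", "RIGHT-ARC", "EDIT", "EDIT", "SHIFT", "EDIT", "SHIFT", "RIGHT-ARC"]:
    apply(state, action)

print({words[d]: words[h] for d, h in state.heads.items()})  # {'Denver': 'to'}
print(sorted(words[i] for i in state.disfluent))             # ['Boston', 'to', 'uh']
```

In this sketch, the EDIT action is what makes the system non-monotonic: an arc built earlier (Boston attached to the first "to") is retracted once the repair is recognized, so the fluent reading "to Denver" receives the final attachment. The paper's actual transition system, oracle, and feature set are described in the full text.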
