Reinterpreting CTC training as iterative fitting

Connectionist temporal classification (CTC) enables end-to-end sequence learning by maximizing the probability of correctly recognizing sequences during training. The outputs of a CTC-trained model tend to form a series of spikes separated by strongly predicted blanks, known as the spiky problem. To explain why this happens, we reinterpret the CTC training process as an iterative fitting task based on a frame-wise cross-entropy loss. This offers an intuitive way to compare the target probabilities with the model outputs at each iteration, and to explain how the model outputs gradually turn spiky. Inspired by this, we propose two ways to modify CTC training. Experiments demonstrate that our method resolves the spiky problem and, moreover, leads to faster convergence across various training settings. Besides this, the reinterpretation of CTC, as a brand-new perspective, may be potentially useful in other situations. The code is publicly available at https://github.com/hzli-ucas/caffe/tree/ctc.
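
To make the "iterative fitting" view concrete, the sketch below (PyTorch, not the paper's Caffe code; all variable names and sizes are illustrative) recovers the implicit frame-wise target distribution from the CTC gradient. It relies on the standard CTC result that the gradient of the negative log-likelihood with respect to the logits is softmax(logits) − γ, where γ is the per-frame posterior over labels, so each CTC update is exactly a cross-entropy step toward the targets γ:

```python
# Minimal sketch: recover the implicit per-frame targets gamma from the
# CTC gradient via  dL/d(logits) = softmax(logits) - gamma.
import torch
import torch.nn.functional as F

T, N, C = 50, 1, 6                              # frames, batch, classes (0 = blank)
logits = torch.randn(T, N, C, requires_grad=True)
targets = torch.tensor([[1, 2, 3]])             # one label sequence
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.tensor([3])

log_probs = F.log_softmax(logits, dim=-1)
# reduction='sum' keeps the gradient identity exact (no length normalization)
loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                  blank=0, reduction='sum')
loss.backward()

probs = log_probs.detach().exp()
gamma = probs - logits.grad                     # implied frame-wise targets
print(gamma.sum(dim=-1))                        # each frame sums to ~1: a valid
                                                # cross-entropy target distribution
```

Tracking gamma across training iterations in this way is how one can watch the targets, and hence the model outputs, gradually turn spiky.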
