Search Results for author: Yotaro Kubo

Found 3 papers, 0 papers with code

Knowledge Transfer from Large-scale Pretrained Language Models to End-to-end Speech Recognizers

no code implementations · 16 Feb 2022 · Yotaro Kubo, Shigeki Karita, Michiel Bacchiani

Since embedding vectors can be regarded as implicit representations of linguistic information such as part-of-speech and intent, they are also expected to serve as useful modeling cues for ASR decoders.

Automatic Speech Recognition (ASR) +3
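The abstract suggests using pretrained-LM embeddings as modeling cues for the ASR decoder. A minimal sketch of one common transfer recipe, not necessarily the paper's exact method: an auxiliary MSE loss pulling decoder hidden states toward frozen LM embeddings (the toy vectors below are assumed values for illustration).

```python
# Hypothetical sketch: regress ASR decoder states toward frozen
# pretrained-LM token embeddings with an auxiliary MSE loss.
# The vectors and loss form are assumptions, not the paper's method.

def mse(a, b):
    # Mean squared error between two equal-length vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Toy frozen LM embeddings for two tokens (assumed values).
lm_embedding = {"hello": [0.9, 0.1], "world": [0.2, 0.8]}

# Toy ASR decoder hidden states for the same tokens.
decoder_state = {"hello": [0.7, 0.2], "world": [0.1, 0.9]}

# Auxiliary knowledge-transfer loss, averaged over tokens; it would be
# added to the usual ASR training loss with some weighting factor.
aux_loss = sum(
    mse(decoder_state[t], lm_embedding[t]) for t in lm_embedding
) / len(lm_embedding)
print(round(aux_loss, 4))  # → 0.0175
```

In a real system the LM embeddings would come from a large pretrained model and stay frozen, so the decoder absorbs linguistic structure without the LM being needed at inference time.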

A Comparative Study on Neural Architectures and Training Methods for Japanese Speech Recognition

no code implementations · 9 Jun 2021 · Shigeki Karita, Yotaro Kubo, Michiel Adriaan Unico Bacchiani, Llion Jones

End-to-end (E2E) modeling is advantageous for automatic speech recognition (ASR), especially for Japanese: word-based tokenization of Japanese is non-trivial, whereas E2E models can model character sequences directly.

Automatic Speech Recognition (ASR) +2
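The point about character-level modeling can be shown directly: Japanese text has no whitespace word boundaries, so word tokenization needs a segmenter, while character tokenization is trivial (the example string is illustrative, not from the paper).

```python
# Character-level tokenization needs no Japanese word segmenter:
# splitting into characters is a one-liner.
text = "音声認識"  # "speech recognition"
char_tokens = list(text)
print(char_tokens)  # → ['音', '声', '認', '識']
```

An E2E model trained on such character sequences sidesteps the segmentation ambiguity that a word-based pipeline must resolve first.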

Compacting Neural Network Classifiers via Dropout Training

no code implementations · 18 Nov 2016 · Yotaro Kubo, George Tucker, Simon Wiesler

We introduce dropout compaction, a novel method for training feed-forward neural networks which realizes the performance gains of training a large model with dropout regularization, yet extracts a compact neural network for run-time efficiency.

Speech Recognition
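A minimal sketch of the two ingredients the abstract names: dropout regularization during training, and extraction of a compact network afterwards. The pruning criterion and weight norms below are hypothetical (magnitude-based pruning); the paper's actual compaction method is not described in this listing.

```python
import random

random.seed(0)

def dropout(activations, p):
    # Inverted dropout: zero each unit with probability p and scale
    # survivors by 1/(1-p) so expected activations are unchanged.
    return [0.0 if random.random() < p else a / (1 - p)
            for a in activations]

# During training, hidden activations pass through dropout.
hidden = dropout([0.5, 1.2, 0.3, 0.8], p=0.5)

# After training, units with near-zero outgoing weight norms contribute
# little and can be pruned to get a compact run-time network
# (norms and threshold are hypothetical illustration values).
outgoing_norms = [0.01, 1.3, 0.002, 0.9]
keep = [i for i, n in enumerate(outgoing_norms) if n > 0.05]
print(keep)  # → [1, 3] : units retained in the compact model
```

The appeal of such an approach is that the compact extracted network keeps much of the large dropout-trained model's accuracy while being cheaper at inference time.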
