Search Results for author: Yuta Nishikawa

Found 3 papers, 0 papers with code

Keep Decoding Parallel with Effective Knowledge Distillation from Language Models to End-to-end Speech Recognisers

no code implementations • 22 Jan 2024 • Michael Hentschel, Yuta Nishikawa, Tatsuya Komatsu, Yusuke Fujita

This study presents a novel approach for knowledge distillation (KD) from a BERT teacher model to an automatic speech recognition (ASR) model using intermediate layers.

Automatic Speech Recognition (ASR) +4
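The intermediate-layer distillation described in the abstract can be sketched roughly as follows. This is an illustrative assumption only: the function names, the mean-squared-error objective, and the linear projection between student and teacher dimensions are not taken from the paper, which may use a different matching criterion or layer pairing.

```python
import numpy as np

def intermediate_kd_loss(student_layers, teacher_layers, proj):
    """Hypothetical sketch: distil a BERT teacher into an ASR student
    by matching intermediate hidden states.

    student_layers: list of (T, d_s) arrays from selected student layers
    teacher_layers: list of (T, d_t) arrays from paired teacher layers
    proj: (d_s, d_t) projection mapping student states into teacher space
    """
    loss = 0.0
    for s, t in zip(student_layers, teacher_layers):
        # MSE between projected student states and teacher states
        loss += np.mean((s @ proj - t) ** 2)
    # average over the distilled layer pairs
    return loss / len(student_layers)
```

In practice this auxiliary loss would be added to the main ASR objective (e.g. CTC or attention loss) with a tunable weight.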

Inter-connection: Effective Connection between Pre-trained Encoder and Decoder for Speech Translation

no code implementations • 26 May 2023 • Yuta Nishikawa, Satoshi Nakamura

In this study, we propose an inter-connection mechanism that aggregates the information from each layer of the speech pre-trained model by weighted sums and feeds the result into the decoder.

Decoder +1
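The layer aggregation described in the abstract can be sketched as a learnable weighted sum over encoder layer outputs. The function name and the softmax normalisation of the weights are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def inter_connect(layer_outputs, layer_weights):
    """Hypothetical sketch: aggregate all layers of a pre-trained
    speech encoder into one representation for the decoder.

    layer_outputs: list of L arrays, each of shape (T, d)
    layer_weights: array of L scalar weights (learnable in training)
    """
    # softmax-normalise the per-layer weights so they sum to 1
    w = np.exp(layer_weights - np.max(layer_weights))
    w = w / w.sum()
    # weighted sum over layers, keeping the (T, d) shape
    out = np.zeros_like(layer_outputs[0])
    for wi, h in zip(w, layer_outputs):
        out += wi * h
    return out
```

With all weights equal, the aggregation reduces to a simple mean over layers; training would shift the weights toward the layers most useful for translation.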
