no code implementations • 18 Jan 2024 • Yuekun Yao, Alexander Koller
Compositional generalization, the ability to predict complex meanings from training on simpler sentences, poses challenges for powerful pretrained seq2seq models.
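As a concrete illustration of this ability, consider a toy split in the spirit of such benchmarks (the sentences and meaning representations below are invented for this note, not drawn from any particular dataset): training covers every word and construction, but only the test item combines them in this particular way.

# Toy compositional generalization split (invented examples).
train = [
    ("the cat sleeps", "sleep(cat)"),
    ("the dog runs", "run(dog)"),
    ("the dog that barks runs", "run(dog & bark(dog))"),
]
test = [
    # Novel combination: a relative clause attached to "cat",
    # a pairing never observed during training.
    ("the cat that barks sleeps", "sleep(cat & bark(cat))"),
]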
no code implementations • 15 Nov 2023 • Yuekun Yao, Alexander Koller
We present a novel model that establishes upper and lower bounds on a system's accuracy on unseen data, without requiring gold labels for that data.
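A minimal sketch of how such bounds can arise without gold labels, assuming a hypothetical three-way correctness judge (the judge and this particular bound construction are illustrative assumptions, not necessarily the paper's mechanism): predictions the judge accepts give a lower bound, and additionally crediting the undecided ones gives an upper bound.

def accuracy_bounds(predictions, judge):
    # judge(pred) returns "correct", "incorrect", or "unsure" (hypothetical).
    verdicts = [judge(p) for p in predictions]
    n = len(verdicts)
    lower = verdicts.count("correct") / n
    upper = (verdicts.count("correct") + verdicts.count("unsure")) / n
    return lower, upper

If the judge never mislabels the cases it is sure about, the true accuracy lies between these two numbers.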
1 code implementation • 23 Oct 2023 • Bingzhi Li, Lucia Donatelli, Alexander Koller, Tal Linzen, Yuekun Yao, Najoung Kim
The goal of compositional generalization benchmarks is to evaluate how well models generalize to new complex linguistic expressions.
1 code implementation • 24 Oct 2022 • Yuekun Yao, Alexander Koller
Sequence-to-sequence (seq2seq) models have been successful across many NLP tasks, including ones that require predicting linguistic structure.
no code implementations • 24 Feb 2022 • Pia Weißenhorn, Yuekun Yao, Lucia Donatelli, Alexander Koller
A rapidly growing body of research on compositional generalization investigates the ability of a semantic parser to dynamically recombine linguistic elements seen in training into unseen sequences.
no code implementations • WS 2020 • Dominik Macháček, Jonáš Kratochvíl, Sangeet Sagar, Matúš Žilinec, Ondřej Bojar, Thai-Son Nguyen, Felix Schneider, Philip Williams, Yuekun Yao
This paper is an ELITR system submission for the non-native speech translation task at IWSLT 2020.
no code implementations • 30 May 2020 • Yuekun Yao, Barry Haddow
For spoken language translation (SLT) in live scenarios such as conferences, lectures, and meetings, it is desirable to show the translation to the user as quickly as possible, avoiding an annoying lag between the speaker and the translated captions.
Automatic Speech Recognition (ASR)
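One common way to trade latency against caption stability in the streaming-translation literature is re-translation with suffix masking: re-translate the growing source prefix as new words arrive, but withhold the last few target tokens, which are the most likely to change. A minimal sketch under that assumption (the translate function and the mask size k are hypothetical stand-ins; this is a generic illustration, not necessarily this paper's method):

def stream_captions(source_prefixes, translate, k=3):
    # translate maps a source string to a list of target tokens
    # (hypothetical stand-in for an MT system). Withholding the last
    # k tokens reduces "flicker", i.e. already-displayed words being
    # rewritten, at the cost of a small extra lag.
    for prefix in source_prefixes:
        tokens = translate(prefix)
        stable = tokens[:-k] if k > 0 else tokens
        yield " ".join(stable)

Larger k makes the displayed captions more stable but delays them further; k = 0 shows every hypothesis immediately and flickers the most.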