no code implementations • 15 Dec 2020 • Subendhu Rongali, Beiye Liu, Liwei Cai, Konstantine Arkoudas, Chengwei Su, Wael Hamza
Since our model can process both speech and text input sequences and learn to predict a target sequence, it also allows us to do zero-shot E2E SLU by training on only text-hypothesis data (without any speech) from a new domain.
Ranked #3 on Spoken Language Understanding on Snips-SmartLights
Automatic Speech Recognition (ASR) +4
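The key idea above is a single model whose speech and text inputs are encoded into one shared representation space, so a decoder trained only on text hypotheses from a new domain can still consume speech at test time. Below is a minimal numpy sketch of that shared-space idea; the dimensions, mean-pooling encoders, and the 5-label intent head are all hypothetical simplifications, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16  # size of the shared encoder space (hypothetical)

# Two modality-specific encoders projecting into one shared space.
W_speech = rng.normal(size=(40, D)) * 0.1   # 40-dim acoustic frames (assumption)
E_text = rng.normal(size=(100, D)) * 0.1    # 100-token text vocabulary (assumption)

def encode_speech(frames):
    """Project acoustic frames and mean-pool into a shared-space vector."""
    return (frames @ W_speech).mean(axis=0)

def encode_text(token_ids):
    """Mean-pool token embeddings into the same shared space."""
    return E_text[token_ids].mean(axis=0)

# A prediction head trained only on text-hypothesis data can still be applied
# to speech encodings, because both modalities land in the same space.
W_out = rng.normal(size=(D, 5)) * 0.1  # 5 toy intent labels (assumption)

def predict_intent(enc):
    """Return the argmax label for a shared-space encoding."""
    logits = enc @ W_out
    return int(np.argmax(logits))
```

Either `encode_text([...])` or `encode_speech(frames)` can be fed to `predict_intent`, which is the property that makes text-only (zero-shot) domain training possible in this setup.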
1 code implementation • ACL 2020 • Huiming Jin, Liwei Cai, Yihui Peng, Chen Xia, Arya D. McCarthy, Katharina Kann
We propose the task of unsupervised morphological paradigm completion.
3 code implementations • NAACL 2018 • Liwei Cai, William Yang Wang
This framework is independent of the concrete form of the generator and the discriminator, and can therefore use a wide variety of knowledge graph embedding models as its building blocks.
Ranked #24 on Link Prediction on WN18
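The adversarial framework above pairs a generator that proposes hard negative triples with a discriminator that learns from them, where each can be any embedding model. The numpy sketch below illustrates one adversarial step under stated assumptions: a TransE-style discriminator and a DistMult-style generator are just example building blocks, and the margin value, dimensions, and candidate count are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, dim = 20, 3, 8

# Discriminator embeddings: TransE-style scoring (one possible building block).
ent_D = rng.normal(size=(n_ent, dim))
rel_D = rng.normal(size=(n_rel, dim))
# Generator embeddings: DistMult-style scoring (another interchangeable block).
ent_G = rng.normal(size=(n_ent, dim))
rel_G = rng.normal(size=(n_rel, dim))

def score_transe(h, r, t):
    """Higher is more plausible: negative translation distance."""
    return -float(np.linalg.norm(ent_D[h] + rel_D[r] - ent_D[t]))

def score_distmult(h, r, t):
    """Generator's plausibility score: trilinear product."""
    return float((ent_G[h] * rel_G[r]) @ ent_G[t])

def sample_negative(h, r, t, n_cand=5):
    """Generator proposes corrupted tails and samples one from its softmax."""
    cands = rng.choice(n_ent, size=n_cand, replace=False)
    logits = np.array([score_distmult(h, r, int(c)) for c in cands])
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return int(rng.choice(cands, p=p))

# One adversarial step on a toy positive triple (h, r, t): the discriminator
# takes a margin loss against the generated negative; in the full method the
# generator would then be updated by policy gradient using the discriminator's
# score as reward (omitted here).
h, r, t = 0, 1, 2
t_neg = sample_negative(h, r, t)
margin_loss = max(0.0, 1.0 + score_transe(h, r, t_neg) - score_transe(h, r, t))
```

Because both scoring functions are plain plug-ins, swapping in another embedding model only means replacing `score_transe` or `score_distmult`, which is the model-agnosticism the abstract describes.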