1 code implementation • EACL 2021 • Lee-Hsun Hsieh, Yang-Yin Lee, Ee-Peng Lim
Pretrained on large amounts of data, autoregressive language models are able to generate high-quality sequences.
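As a brief illustration of this claim, the sketch below samples a sequence from a pretrained autoregressive model token by token. It assumes the Hugging Face `transformers` library and the public `gpt2` checkpoint; it is not the paper's own model or code.

```python
# Minimal sketch of autoregressive generation with a pretrained LM.
# Assumes the `transformers` library and the public `gpt2` checkpoint
# (illustrative only, not the model used in the paper).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Pretrained language models can"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Each new token is conditioned on all previously generated tokens
# (the autoregressive factorization of the sequence probability).
output_ids = model.generate(
    input_ids,
    max_length=40,
    do_sample=True,  # sample rather than greedy decoding
    top_p=0.9,       # nucleus sampling
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```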
no code implementations • LREC 2020 • Ting-Yu Yen, Yang-Yin Lee, Yow-Ting Shiue, Hen-Hsen Huang, Hsin-Hsi Chen
However, most of these datasets are not designed for evaluating sense embeddings.
1 code implementation • COLING 2018 • Yang-Yin Lee, Ting-Yu Yen, Hen-Hsen Huang, Yow-Ting Shiue, Hsin-Hsi Chen
In the experiments, we show that the generalized model can outperform previous approaches on three types of tasks: semantic relatedness, contextual word similarity, and semantic difference.
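For context, the sketch below shows the standard protocol behind a semantic relatedness evaluation of this kind: score word pairs by embedding cosine similarity, then rank-correlate the scores with human ratings. The embeddings, word pairs, and ratings here are illustrative placeholders, not the paper's data.

```python
# Minimal sketch of a semantic relatedness evaluation: cosine
# similarity of word embeddings vs. human ratings, compared by
# Spearman rank correlation. All values below are hypothetical.
import numpy as np
from scipy.stats import spearmanr

embeddings = {
    "car":  np.array([0.8, 0.1, 0.3]),
    "auto": np.array([0.7, 0.2, 0.4]),
    "tree": np.array([0.1, 0.9, 0.2]),
}
# (word1, word2, human relatedness rating) -- placeholder data.
pairs = [("car", "auto", 9.0), ("car", "tree", 2.5), ("auto", "tree", 2.0)]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

model_scores = [cosine(embeddings[a], embeddings[b]) for a, b, _ in pairs]
human_scores = [rating for _, _, rating in pairs]

# Rank correlation between model similarities and human judgments.
rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman correlation: {rho:.3f}")
```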