no code implementations • WS 2020 • Jindřich Libovický, Zdeněk Kasner, Jindřich Helcl, Ondřej Dušek
While the additional data and our classifier filter improved the results, the paraphrasing model produced too many invalid outputs to improve output quality further.
no code implementations • WS 2018 • Jindřich Helcl, Jindřich Libovický, Dušan Variš
For our submission, we acquired additional textual and multimodal data.
no code implementations • EMNLP 2018 • Jindřich Libovický, Jindřich Helcl
Autoregressive decoding is the only part of sequence-to-sequence models that prevents them from being massively parallelized at inference time.
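To make the contrast concrete, here is a minimal sketch of why autoregressive decoding is inherently sequential while non-autoregressive decoding parallelizes over output positions; the functions step_fn and parallel_fn are hypothetical placeholders for a trained decoder, not the paper's actual model.

```python
import torch

def autoregressive_decode(step_fn, max_len, bos_id):
    """Sequential decoding: each token is conditioned on all previously
    generated tokens, so the loop over positions cannot be parallelized."""
    tokens = [bos_id]
    for _ in range(max_len):
        logits = step_fn(torch.tensor(tokens))   # depends on prior outputs
        tokens.append(int(logits.argmax()))
    return tokens[1:]

def non_autoregressive_decode(parallel_fn, max_len):
    """Non-autoregressive decoding: all positions are predicted in one
    forward pass, so inference parallelizes over the output length."""
    logits = parallel_fn(max_len)                # (max_len, vocab_size)
    return logits.argmax(dim=-1).tolist()
```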
no code implementations • WS 2018 • Jindřich Libovický, Jindřich Helcl, David Mareček
In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways.
no code implementations • ACL 2017 • Jindřich Libovický, Jindřich Helcl
Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities.
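As an illustration of the design space, the sketch below contrasts two common ways of combining attention over multiple source encoders: a flat combination (a single softmax over the concatenated source states) and a hierarchical one (per-source attention followed by attention over the resulting source contexts). The function names and tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def flat_attention(query, source_states):
    """Flat combination: concatenate all source states and attend with a
    single softmax, so the sources compete for the same attention mass.
    query: (batch, dim); source_states: list of (batch, len_i, dim)."""
    states = torch.cat(source_states, dim=1)          # (batch, sum_len, dim)
    scores = torch.bmm(states, query.unsqueeze(2))    # (batch, sum_len, 1)
    weights = F.softmax(scores, dim=1)
    return (weights * states).sum(dim=1)              # (batch, dim)

def hierarchical_attention(query, source_states):
    """Hierarchical combination: attend within each source separately,
    then attend over the per-source context vectors."""
    contexts = []
    for states in source_states:
        scores = torch.bmm(states, query.unsqueeze(2))
        weights = F.softmax(scores, dim=1)
        contexts.append((weights * states).sum(dim=1))
    contexts = torch.stack(contexts, dim=1)           # (batch, n_src, dim)
    scores = torch.bmm(contexts, query.unsqueeze(2))
    weights = F.softmax(scores, dim=1)
    return (weights * contexts).sum(dim=1)            # (batch, dim)
```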
no code implementations • LREC 2016 • Jindřich Libovický
Continuous word representations have proven to be a useful feature in many natural language processing tasks.