no code implementations • RANLP 2021 • Satoshi Hiai, Kazutaka Shimada, Taiki Watanabe, Akiva Miura, Tomoya Iwakura
In addition, our method achieves approximately three times faster extraction than the BERT-based models on the ChemProt corpus and reduces memory usage to one sixth of theirs.
no code implementations • WS 2016 • Raphael Shu, Akiva Miura
Several straightforward ways to enhance Neural Machine Translation models can be considered, such as enlarging the hidden size of the recurrent layers and stacking multiple RNN layers.
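As a rough illustration of those two scaling options (not the method proposed in the paper), the sketch below shows a minimal PyTorch NMT encoder where the hidden size and the number of stacked recurrent layers are the knobs being enlarged; the class name, vocabulary size, and dimensions are illustrative assumptions.

```python
# Illustrative sketch only (not from the paper): two common ways to scale an
# RNN-based NMT encoder -- a larger hidden size and more stacked layers.
import torch
import torch.nn as nn

class StackedRNNEncoder(nn.Module):
    def __init__(self, vocab_size=32000, emb_dim=512,
                 hidden_size=1024,   # enlarged hidden size of the recurrent layers
                 num_layers=4):      # number of stacked RNN layers
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_size,
                           num_layers=num_layers, batch_first=True)

    def forward(self, src_tokens):
        # src_tokens: (batch, src_len) integer token IDs
        embedded = self.embed(src_tokens)
        outputs, (h_n, c_n) = self.rnn(embedded)
        return outputs, (h_n, c_n)

# Usage example: encode a dummy batch of 2 sentences, 10 tokens each.
encoder = StackedRNNEncoder()
dummy = torch.randint(0, 32000, (2, 10))
outputs, _ = encoder(dummy)
print(outputs.shape)  # torch.Size([2, 10, 1024])
```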