Search Results for author: Kei Uchiumi

Found 6 papers, 3 papers with code

Word-level Perturbation Considering Word Length and Compositional Subwords

1 code implementation · Findings (ACL) 2022 · Tatsuya Hiraoka, Sho Takase, Kei Uchiumi, Atsushi Keyaki, Naoaki Okazaki

We present two simple modifications for word-level perturbation: Word Replacement considering Length (WR-L) and Compositional Word Replacement (CWR). In conventional word replacement, a word in the input is replaced with a word sampled from the entire vocabulary, regardless of the length and context of the target word. WR-L accounts for the length of the target word by sampling replacement words whose length follows a Poisson distribution. CWR considers compositional candidates by restricting the sampling source to related words that appear in subword regularization. Experimental results showed that the combination of WR-L and CWR improved performance on text classification and machine translation.

Tasks: Machine Translation, Text Classification, +2
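As a rough illustration of the WR-L idea (not the authors' implementation), the sketch below replaces a word with one drawn from vocabulary entries whose length is sampled from a Poisson distribution centred on the original word's length; the toy vocabulary, the fallback to the closest available length, and the replacement probability are assumptions made for this example.

```python
import random
from collections import defaultdict

import numpy as np

rng = np.random.default_rng(0)
random.seed(0)

# Toy vocabulary grouped by word length (illustrative; a real setup would use
# the task's training vocabulary).
VOCAB = ["a", "an", "the", "cat", "dog", "sat", "house", "garden", "running", "beautiful"]
BY_LENGTH = defaultdict(list)
for w in VOCAB:
    BY_LENGTH[len(w)].append(w)

def wr_l_replace(word):
    """Sample a replacement word whose length follows Poisson(len(word))."""
    target_len = max(1, rng.poisson(len(word)))
    # Fall back to the closest available length if no word of that length exists.
    closest_len = min(BY_LENGTH, key=lambda l: abs(l - target_len))
    return random.choice(BY_LENGTH[closest_len])

def perturb(tokens, p=0.1):
    """Replace each token with probability p (the rate p is an assumption)."""
    return [wr_l_replace(t) if rng.random() < p else t for t in tokens]

print(perturb("the cat sat in the garden".split(), p=0.5))
```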

Joint Optimization of Tokenization and Downstream Model

2 code implementations · Findings (ACL) 2021 · Tatsuya Hiraoka, Sho Takase, Kei Uchiumi, Atsushi Keyaki, Naoaki Okazaki

Since traditional tokenizers are isolated from the downstream task and model, they cannot produce a tokenization appropriate to that task and model, even though recent studies suggest that appropriate tokenization improves performance.

Tasks: Machine Translation, Text Classification, +2
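A minimal sketch of the general idea of optimizing tokenization jointly with a downstream model, assuming a toy set of candidate tokenizations with learnable scores and a bag-of-embeddings classifier; it only illustrates weighting the downstream loss by tokenization probability and is not the paper's actual method or code.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy candidate tokenizations of one input (illustrative; a real system would
# take the N-best list of a trainable tokenizer such as a unigram LM).
candidates = [["un", "related"], ["unrelated"], ["unrel", "ated"]]
vocab = {tok: i for i, tok in enumerate(sorted({t for c in candidates for t in c}))}

emb = torch.nn.Embedding(len(vocab), 8)
clf = torch.nn.Linear(8, 2)
tok_logits = torch.nn.Parameter(torch.zeros(len(candidates)))  # stands in for tokenizer scores
opt = torch.optim.Adam(list(emb.parameters()) + list(clf.parameters()) + [tok_logits], lr=0.1)

label = torch.tensor([0])
for step in range(20):
    # Downstream loss of each candidate tokenization (mean-pooled bag of embeddings).
    losses = []
    for cand in candidates:
        ids = torch.tensor([vocab[t] for t in cand])
        sent_vec = emb(ids).mean(dim=0, keepdim=True)
        losses.append(F.cross_entropy(clf(sent_vec), label))
    # Weight candidate losses by tokenization probabilities, so gradients flow
    # into both the downstream model and the tokenizer scores.
    weights = torch.softmax(tok_logits, dim=0)
    loss = (weights * torch.stack(losses)).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("tokenization weights:", torch.softmax(tok_logits, dim=0).detach().numpy())
```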

How LSTM Encodes Syntax: Exploring Context Vectors and Semi-Quantization on Natural Text

no code implementations · COLING 2020 · Chihiro Shibata, Kei Uchiumi, Daichi Mochihashi

We show that the activations of some dimensions in the context vector are highly correlated with the depth of phrase structures such as VP and NP.

Tasks: Language Modelling, Quantization
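A minimal sketch of how one might probe such correlations (not the paper's analysis code): run an LSTM over a token sequence, collect each hidden dimension's activations, and compute their Pearson correlation with a phrase-depth annotation. The random embeddings, model sizes, and toy depth sequence below are assumptions; the paper analyses an LSTM language model trained on natural text.

```python
import numpy as np
import torch

torch.manual_seed(0)

# Toy setup: one sentence of 8 tokens with random embeddings.
seq_len, emb_dim, hidden_dim = 8, 16, 32
embeddings = torch.randn(1, seq_len, emb_dim)
lstm = torch.nn.LSTM(emb_dim, hidden_dim, batch_first=True)

with torch.no_grad():
    hidden_states, _ = lstm(embeddings)        # (1, seq_len, hidden_dim)
hidden = hidden_states.squeeze(0).numpy()      # (seq_len, hidden_dim)

# Hypothetical phrase-structure depth of each token (e.g. taken from a parse tree).
phrase_depth = np.array([1, 2, 3, 3, 2, 3, 4, 1], dtype=float)

# Pearson correlation between each hidden dimension and the depth sequence.
correlations = np.array([
    np.corrcoef(hidden[:, d], phrase_depth)[0, 1] for d in range(hidden_dim)
])
top = np.argsort(-np.abs(correlations))[:5]
for d in top:
    print(f"dim {d:3d}: r = {correlations[d]:+.2f}")
```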
