no code implementations • *SEM (NAACL) 2022 • Ryosuke Takahashi, Ryohei Sasano, Koichi Takeda
Recent research has shown that contextualized word embeddings derived from masked language models (MLMs) can give promising results for idiom token classification.
1 code implementation • 13 Apr 2024 • Hayato Tsukagoshi, Tsutomu Hirao, Makoto Morishita, Katsuki Chousa, Ryohei Sasano, Koichi Takeda
The task of Split and Rephrase, which splits a complex sentence into multiple simple sentences with the same meaning, improves readability and enhances the performance of downstream tasks in natural language processing (NLP).
no code implementations • 1 Apr 2024 • Kotaro Aono, Ryohei Sasano, Koichi Takeda
There are several linguistic claims about situations where words are more likely to be used as metaphors.
no code implementations • 23 Feb 2024 • Soma Sato, Hayato Tsukagoshi, Ryohei Sasano, Koichi Takeda
Decoder-based large language models (LLMs) have shown high performance on many tasks in natural language processing.
1 code implementation • 30 Oct 2023 • Hayato Tsukagoshi, Ryohei Sasano, Koichi Takeda
We report the development of Japanese SimCSE, Japanese sentence embedding models fine-tuned with SimCSE.
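SimCSE fine-tunes a sentence encoder with a contrastive objective: each sentence is encoded twice with different dropout masks, and the two encodings form a positive pair against in-batch negatives. The following is a minimal NumPy sketch of that in-batch contrastive loss, not the authors' implementation; the matrix shapes and the temperature value of 0.05 are illustrative assumptions.

```python
import numpy as np

def simcse_loss(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.05) -> float:
    """In-batch contrastive (InfoNCE-style) loss as used by SimCSE.

    z1[i] and z2[i] are two encodings of sentence i (e.g., produced with
    different dropout masks); every other row of z2 serves as a negative.
    """
    # L2-normalize so that the dot product equals cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Cross-entropy with the diagonal (the matching pair) as the correct class.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
loss_aligned = simcse_loss(z, z)          # matching pairs: low loss
loss_shuffled = simcse_loss(z, z[::-1])   # mismatched pairs: high loss
```

Because the diagonal pairs are maximally similar in the aligned case, the loss there is much smaller than with mismatched pairs, which is what drives the encoder toward dropout-invariant sentence representations.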
no code implementations • 25 Oct 2023 • Masashi Oshika, Kosuke Yamada, Ryohei Sasano, Koichi Takeda
Although live sports viewing with tweets is gaining popularity, it is known to be difficult to generate adequate sports updates from a vast stream of diverse live tweets.
no code implementations • 23 May 2023 • Kosuke Yamada, Ryohei Sasano, Koichi Takeda
Semantic frame induction tasks are defined as clustering words into the frames they evoke, and clustering their arguments according to the frame element roles they fill.
no code implementations • 22 May 2023 • Shohei Yoda, Hayato Tsukagoshi, Ryohei Sasano, Koichi Takeda
Recent progress in sentence embedding, which represents the meaning of a sentence as a point in a vector space, has achieved high performance on tasks such as semantic textual similarity (STS).
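STS systems typically score sentence pairs by the cosine similarity of their embeddings: a paraphrase maps to a nearby point, an unrelated sentence to a distant one. A minimal sketch with toy 4-dimensional vectors (real sentence encoders produce hundreds of dimensions; these values are illustrative, not model outputs):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two sentence embeddings, in [-1, 1]."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings standing in for encoder outputs.
emb_a = np.array([1.0, 0.5, 0.0, 0.2])    # "A man is playing a guitar."
emb_b = np.array([0.9, 0.6, 0.1, 0.1])    # paraphrase: nearby point
emb_c = np.array([-0.8, 0.1, 0.9, -0.4])  # unrelated sentence: far away

sim_ab = cosine_similarity(emb_a, emb_b)  # high
sim_ac = cosine_similarity(emb_a, emb_c)  # low
```

STS benchmarks then evaluate how well these similarity scores correlate with human judgments of semantic relatedness.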
no code implementations • 27 Apr 2023 • Kosuke Yamada, Ryohei Sasano, Koichi Takeda
Recent studies have demonstrated the usefulness of contextualized word embeddings in unsupervised semantic frame induction.
no code implementations • 14 Dec 2022 • Hongkuan Zhang, Saku Sugawara, Akiko Aizawa, Lei Zhou, Ryohei Sasano, Koichi Takeda
Moreover, higher model performance on difficult examples and unseen data also demonstrates the models' generalization ability.
no code implementations • *SEM (NAACL) 2022 • Hayato Tsukagoshi, Ryohei Sasano, Koichi Takeda
There have been many successful applications of sentence embedding methods.
1 code implementation • EMNLP 2021 • Kosuke Yamada, Yuta Hitomi, Hideaki Tamori, Ryohei Sasano, Naoaki Okazaki, Kentaro Inui, Koichi Takeda
We also consider a new headline generation strategy that takes advantage of the controllable generation order of Transformer.
no code implementations • Findings (ACL) 2021 • Kosuke Yamada, Ryohei Sasano, Koichi Takeda
Furthermore, we examine the extent to which the contextualized representation of a verb can estimate the number of frames that the verb can evoke.
no code implementations • ACL 2021 • Kosuke Yamada, Ryohei Sasano, Koichi Takeda
Recent studies on semantic frame induction show that relatively high performance has been achieved by using clustering-based methods with contextualized word embeddings.
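The clustering-based approach groups verb instances whose contextualized embeddings are similar, on the assumption that instances evoking the same frame land close together in embedding space. Below is a minimal greedy-clustering sketch over stand-in vectors; it is an illustration of the general idea, not the specific clustering algorithm or embeddings used in the paper, and the 0.8 threshold is an arbitrary assumption.

```python
import numpy as np

def greedy_cluster(embeddings: list, threshold: float = 0.8) -> list:
    """Single-pass clustering by cosine similarity to running centroids.

    Each instance joins the most similar existing cluster if its cosine
    similarity to that cluster's centroid exceeds `threshold`; otherwise
    it starts a new cluster (a stand-in for the agglomerative methods
    commonly used in frame induction).
    """
    centroids, labels = [], []
    for e in embeddings:
        e = np.asarray(e, dtype=float)
        e = e / np.linalg.norm(e)
        best, best_sim = -1, threshold
        for i, c in enumerate(centroids):
            sim = float(e @ (c / np.linalg.norm(c)))
            if sim > best_sim:
                best, best_sim = i, sim
        if best == -1:
            centroids.append(e.copy())
            labels.append(len(centroids) - 1)
        else:
            centroids[best] += e  # update the centroid incrementally
            labels.append(best)
    return labels

# Two tight groups of toy "contextualized embeddings" of verb instances.
labels = greedy_cluster([[1.0, 0.0], [0.95, 0.05], [0.0, 1.0], [0.05, 0.95]])
```

The resulting clusters are then interpreted as induced frames, with one cluster per frame that the verb instances evoke.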
1 code implementation • ACL 2021 • Hayato Tsukagoshi, Ryohei Sasano, Koichi Takeda
However, these methods are available only for a limited number of languages because they rely heavily on large NLI datasets.
no code implementations • ACL (IWSLT) 2021 • Lei Zhou, Liang Ding, Kevin Duh, Shinji Watanabe, Ryohei Sasano, Koichi Takeda
In the field of machine learning, a well-trained model is assumed to be able to recover the training labels, i.e., the synthetic labels predicted by the model should be as close to the ground-truth labels as possible.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Kosuke Yamada, Tsutomu Hirao, Ryohei Sasano, Koichi Takeda, Masaaki Nagata
Dividing biomedical abstracts into several segments with rhetorical roles is essential for supporting researchers' information access in the biomedical domain.
no code implementations • WMT (EMNLP) 2020 • Lei Zhou, Liang Ding, Koichi Takeda
In response to this issue, we propose to expose explicit cross-lingual patterns, e.g., word alignments and generation scores, to our proposed zero-shot models.
no code implementations • LREC 2020 • Hongkuan Zhang, Ryohei Sasano, Koichi Takeda, Zoie Shui-Yee Wong
In this paper, we present our annotation scheme with respect to the definition of medication entities that we take into account, the method to annotate the relations between entities, and the details of the intention and factuality annotation.
no code implementations • ACL 2019 • Kosuke Yamada, Ryohei Sasano, Koichi Takeda
Our experiments on the personality prediction of Twitter users show that the textual information of user behaviors is more useful than the co-occurrence information of the user behaviors.