Search Results for author: Masashi Toyoda

Found 20 papers, 4 papers with code

Fine-grained Typing of Emerging Entities in Microblogs

no code implementations Findings (EMNLP) 2021 Satoshi Akasaki, Naoki Yoshinaga, Masashi Toyoda

Experiments on the Twitter datasets confirm the effectiveness of our typing model and the context selector.

Entity Typing

Speculative Sampling in Variational Autoencoders for Dialogue Response Generation

1 code implementation Findings (EMNLP) 2021 Shoetsu Sato, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa

Our method chooses the most probable latent variable from redundantly sampled candidates to tie the variable to a given response.

Response Generation
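The snippet above suggests a simple selection scheme: draw several latent variables from the approximate posterior and keep the one that best explains the given response. The sketch below is my own illustration of that idea, not the authors' code; the toy scorer stands in for a decoder log-likelihood, and all names are hypothetical.

```python
import numpy as np

def speculative_sample(mu, sigma, score_fn, k=8, rng=None):
    """Draw k latents z ~ N(mu, sigma^2) and return the one with the
    highest score (a stand-in for decoder log-likelihood of the response)."""
    rng = rng or np.random.default_rng(0)
    zs = rng.normal(mu, sigma, size=(k, len(mu)))  # k redundant samples
    scores = np.array([score_fn(z) for z in zs])
    return zs[np.argmax(scores)]

# Toy scorer: prefer latents close to a "gold" direction (illustrative only).
gold = np.array([1.0, -0.5, 0.25])
score = lambda z: -np.sum((z - gold) ** 2)

z_best = speculative_sample(mu=np.zeros(3), sigma=np.ones(3),
                            score_fn=score, k=16)
```

Sampling redundantly and keeping the argmax trades extra decoder evaluations for a latent that is better matched to the target response.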

Exploratory Model Analysis Using Data-Driven Neuron Representations

no code implementations EMNLP (BlackboxNLP) 2021 Daisuke Oba, Naoki Yoshinaga, Masashi Toyoda

Probing classifiers have been extensively used to inspect whether a model component captures specific linguistic phenomena.
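For readers unfamiliar with probing, the standard recipe is to freeze a model's hidden states and fit a small linear classifier on them: high probe accuracy suggests the representation encodes the property. This is a generic toy sketch of that recipe (not the paper's setup), with synthetic "hidden states" in which the property is encoded by one unit.

```python
import numpy as np

def train_probe(X, y, lr=0.5, steps=500):
    """Logistic-regression probe trained with plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # frozen "hidden states" (synthetic)
y = (X[:, 3] > 0).astype(float)        # property linearly encoded in unit 3
w = train_probe(X, y)
acc = np.mean(((X @ w) > 0) == (y > 0.5))
```

Because the property here is linearly separable by construction, the probe recovers it with high accuracy; on real representations the accuracy gap against a control task is what carries the evidence.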

Rethinking Response Evaluation from Interlocutor's Eye for Open-Domain Dialogue Systems

no code implementations 4 Jan 2024 Yuma Tsuta, Naoki Yoshinaga, Shoetsu Sato, Masashi Toyoda

Open-domain dialogue systems have started to engage in continuous conversations with humans.

Early Discovery of Disappearing Entities in Microblogs

no code implementations 13 Oct 2022 Satoshi Akasaki, Naoki Yoshinaga, Masashi Toyoda

The major challenge is detecting uncertain contexts of disappearing entities from noisy microblog posts.

Time Series · Time Series Analysis +1

Vocabulary Adaptation for Domain Adaptation in Neural Machine Translation

1 code implementation Findings (EMNLP) 2020 Shoetsu Sato, Jin Sakuma, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa

Prior to fine-tuning, our method replaces the embedding layers of the NMT model by projecting general word embeddings induced from monolingual data in a target domain onto a source-domain embedding space.

Domain Adaptation · Machine Translation +3
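The abstract snippet describes projecting target-domain word embeddings onto the source-domain embedding space before fine-tuning. A common way to realize such a projection is a linear map fitted on words shared by the two vocabularies; the sketch below shows that generic technique on synthetic embeddings and is my own illustration, not the paper's implementation.

```python
import numpy as np

def fit_projection(tgt_shared, src_shared):
    """Least-squares W such that tgt_shared @ W ~= src_shared."""
    W, *_ = np.linalg.lstsq(tgt_shared, src_shared, rcond=None)
    return W

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 16))        # source-domain embeddings of shared words
true_map = rng.normal(size=(16, 16))
tgt = src @ np.linalg.inv(true_map)     # target-domain embeddings of the same words

W = fit_projection(tgt, src)
projected = tgt @ W                      # target vocab mapped into source space
```

Once the map is fitted on the shared vocabulary, it can be applied to every target-domain word, so the NMT model's embedding layers can be swapped wholesale before fine-tuning.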

uBLEU: Uncertainty-Aware Automatic Evaluation Method for Open-Domain Dialogue Systems

no code implementations ACL 2020 Tsuta Yuma, Naoki Yoshinaga, Masashi Toyoda

Experimental results on massive Twitter data confirmed that υBLEU is comparable to ΔBLEU in terms of correlation with human judgment, and that RUBER, a state-of-the-art automatic evaluation method, is improved by integrating υBLEU.

Vocabulary Adaptation for Distant Domain Adaptation in Neural Machine Translation

no code implementations 30 Apr 2020 Shoetsu Sato, Jin Sakuma, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa

Prior to fine-tuning, our method replaces the embedding layers of the NMT model by projecting general word embeddings induced from monolingual data in a target domain onto a source-domain embedding space.

Domain Adaptation · Machine Translation +3

Early Discovery of Emerging Entities in Microblogs

no code implementations 8 Jul 2019 Satoshi Akasaki, Naoki Yoshinaga, Masashi Toyoda

Keeping up to date on emerging entities that appear every day is indispensable for various applications, such as social-trend analysis and marketing research.

Marketing

Learning to Describe Unknown Phrases with Local and Global Contexts

no code implementations NAACL 2019 Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, Masaru Kitsuregawa

When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities.

Modeling Personal Biases in Language Use by Inducing Personalized Word Embeddings

no code implementations NAACL 2019 Daisuke Oba, Naoki Yoshinaga, Shoetsu Sato, Satoshi Akasaki, Masashi Toyoda

In this study, we propose a method of modeling such personal biases in word meanings (hereafter, semantic variations) with personalized word embeddings obtained by solving a task on subjective text while regarding words used by different individuals as different words.

Multi-class Classification · Multi-Task Learning +2
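The core idea in the snippet above, treating the same surface word as a different word for each user, can be realized by rewriting tokens before embedding training. The sketch below is a minimal, hypothetical illustration of that preprocessing step; the `user::word` naming is my own, and the paper's full setup (subjective tasks, multi-task learning) is much richer.

```python
def personalize(tokens, user_id):
    """Rewrite each token so embeddings are learned per user:
    the same word used by different users becomes a distinct token."""
    return [f"{user_id}::{tok}" for tok in tokens]

# The same utterance by two users yields disjoint vocabulary entries,
# letting an embedding model capture per-user semantic variation.
a = personalize(["this", "movie", "is", "sick"], "u1")
b = personalize(["this", "movie", "is", "sick"], "u2")
```

Feeding such rewritten corpora to any standard embedding trainer yields one vector per (user, word) pair, which is the "different individuals, different words" view described above.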

Learning to Describe Phrases with Local and Global Contexts

1 code implementation1 Nov 2018 Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, Masaru Kitsuregawa

When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities.

Reading Comprehension
