no code implementations • LANTERN (COLING) 2020 • Diana Galvan-Sosa, Jun Suzuki, Kyosuke Nishida, Koji Matsuda, Kentaro Inui
Despite recent achievements in natural language understanding, reasoning over commonsense knowledge remains a major challenge for AI systems.
no code implementations • ACL 2022 • Shumpei Miyawaki, Taku Hasegawa, Kyosuke Nishida, Takuma Kato, Jun Suzuki
We tackle the tasks of image and text retrieval using a dual-encoder model in which images and text are encoded independently.
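A minimal sketch of the dual-encoder retrieval setup described above: images and texts are embedded by independent encoders, and retrieval reduces to nearest-neighbour search between the two embedding sets. The random "encoders" here are illustrative stand-ins, not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # embedding size (assumption)

def encode_images(images):
    # Stand-in for a trained image encoder.
    return rng.normal(size=(len(images), dim))

def encode_texts(texts):
    # Stand-in for a trained text encoder; note it never sees the images.
    return rng.normal(size=(len(texts), dim))

img_emb = encode_images(range(5))
txt_emb = encode_texts(["a dog", "a cat", "a car"])

# L2-normalize so that a dot product equals cosine similarity.
img_emb /= np.linalg.norm(img_emb, axis=1, keepdims=True)
txt_emb /= np.linalg.norm(txt_emb, axis=1, keepdims=True)

sim = txt_emb @ img_emb.T             # (num_texts, num_images) similarities
best_image_per_text = sim.argmax(axis=1)
```

Because the two encoders run independently, image embeddings can be pre-computed and indexed offline, which is the main practical appeal of the dual-encoder design.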
1 code implementation • 24 Jan 2024 • Ryota Tanaka, Taichi Iki, Kyosuke Nishida, Kuniko Saito, Jun Suzuki
We study the problem of completing various visual document understanding (VDU) tasks, e.g., question answering and information extraction, on real-world documents through human-written instructions.
no code implementations • 3 Apr 2023 • Tsuyoshi Baba, Kosuke Nishida, Kyosuke Nishida
Our model represents the edit direction as a normal vector in the CLIP space, obtained by training an SVM to classify positive and negative images.
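The idea of taking an SVM's hyperplane normal as an edit direction can be sketched as follows. This is an illustrative toy, not the paper's implementation: the random vectors stand in for real CLIP image embeddings, and the step size is an arbitrary assumption.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
dim = 512  # CLIP ViT-B/32 image-embedding size

# Toy stand-ins for CLIP embeddings of positive and negative example images.
pos = rng.normal(loc=0.5, scale=1.0, size=(64, dim))
neg = rng.normal(loc=-0.5, scale=1.0, size=(64, dim))

X = np.vstack([pos, neg])
y = np.array([1] * len(pos) + [0] * len(neg))

# A linear SVM separates the two classes with a hyperplane; its weight
# vector is the normal of that hyperplane.
svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

# Editing an embedding then means stepping along the normal direction.
edited = pos[0] + 2.0 * direction  # step size 2.0 is an assumption
```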
1 code implementation • 12 Jan 2023 • Ryota Tanaka, Kyosuke Nishida, Kosuke Nishida, Taku Hasegawa, Itsumi Saito, Kuniko Saito
Visual question answering on document images that contain textual, visual, and layout information, called document VQA, has received much attention recently.
no code implementations • 14 Oct 2022 • Kosuke Nishida, Naoki Yoshinaga, Kyosuke Nishida
Although named entity recognition (NER) helps us to extract domain-specific entities from text (e.g., artists in the music domain), it is costly to create a large amount of training data or a structured knowledge base to perform accurate NER in the target domain.
no code implementations • Findings (NAACL) 2022 • Kosuke Nishida, Kyosuke Nishida, Shuichi Nishioka
Our proposed model, LIDE (Learning from Image and DEscription), has a text decoder to generate the descriptions and a text encoder to obtain the text representations of machine- or user-generated descriptions.
no code implementations • 17 Nov 2021 • Kosuke Nishida, Kyosuke Nishida, Itsumi Saito, Sen Yoshida
In this study, we define an interpretable reading comprehension (IRC) model as a pipeline model with the capability of predicting unanswerable queries.
no code implementations • Findings (ACL) 2021 • Kosuke Nishida, Kyosuke Nishida, Sen Yoshida
TAPTER runs additional pre-training that brings the static word embeddings of a PTLM close to the word embeddings obtained from the target domain with fastText.
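The core alignment idea can be sketched in a few lines: nudge the PTLM's static word embeddings toward embeddings trained on the target domain (e.g., with fastText). The plain elementwise squared-error objective and the gradient-descent loop below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 100, 32

ptlm_emb = rng.normal(size=(vocab, dim))    # PTLM static word embeddings
domain_emb = rng.normal(size=(vocab, dim))  # target-domain (fastText-like) embeddings

lr = 0.1
for _ in range(200):
    # Gradient of the elementwise squared error (ptlm_emb - domain_emb)**2.
    grad = 2.0 * (ptlm_emb - domain_emb)
    ptlm_emb -= lr * grad

# Mean squared gap between the two embedding tables after alignment.
gap = float(np.mean((ptlm_emb - domain_emb) ** 2))
```

In the real setting this pull toward domain embeddings would be combined with the LM's own pre-training loss, so the embeddings adapt to the domain without losing what the PTLM already knows.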
1 code implementation • 27 Jan 2021 • Ryota Tanaka, Kyosuke Nishida, Sen Yoshida
In this study, we introduce a new visual machine reading comprehension dataset, named VisualMRC, in which, given a question and a document image, a machine reads and comprehends the text in the image to answer the question in natural language.
no code implementations • 1 Jul 2020 • Yuma Koizumi, Ryo Masumura, Kyosuke Nishida, Masahiro Yasuda, Shoichiro Saito
TRACKE estimates keywords, which comprise a word set corresponding to audio events/scenes in the input audio, and generates the caption while referring to the estimated keywords to reduce word-selection indeterminacy.
no code implementations • 29 Mar 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Junji Tomita
Experimental results showed that most of the combination models outperformed a simply fine-tuned seq-to-seq model on both the CNN/DM and XSum datasets, even when the seq-to-seq model was pre-trained on large-scale corpora.
no code implementations • 21 Jan 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Atsushi Otsuka, Hisako Asano, Junji Tomita, Hiroyuki Shindo, Yuji Matsumoto
Unlike the previous models, our length-controllable abstractive summarization model incorporates a word-level extractive module in the encoder-decoder model instead of length embeddings.
no code implementations • LREC 2020 • Kosuke Nishida, Kyosuke Nishida, Itsumi Saito, Hisako Asano, Junji Tomita
The second one is the proposed model that uses a multi-task learning approach of LM and RC.
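A multi-task objective combining language modelling (LM) and reading comprehension (RC) can be sketched as a weighted sum of the two losses. The toy distributions, target indices, and mixing weight below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target index under a softmax output."""
    return -np.log(probs[target_idx])

# Toy model outputs (softmax distributions) for one training example.
lm_probs = np.array([0.1, 0.7, 0.2])   # next-token distribution
rc_probs = np.array([0.6, 0.3, 0.1])   # answer-span start distribution

lm_loss = cross_entropy(lm_probs, target_idx=1)
rc_loss = cross_entropy(rc_probs, target_idx=0)

lam = 0.5                              # mixing weight (assumption)
total_loss = rc_loss + lam * lm_loss   # joint objective to minimize
```

Minimizing the joint objective lets the shared encoder benefit from both signals: the LM term supplies broad linguistic knowledge while the RC term shapes the representations for answer extraction.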
no code implementations • WS 2019 • Yasuhito Ohsugi, Itsumi Saito, Kyosuke Nishida, Hisako Asano, Junji Tomita
Conversational machine comprehension (CMC) requires understanding the context of multi-turn dialogue.
no code implementations • ACL 2019 • Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita
It enables QFE to consider the dependencies among the evidence sentences and cover important information in the question sentence.
Ranked #61 on Question Answering on HotpotQA
no code implementations • ACL 2019 • Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita
Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a model to improve the NLG capability for all styles involved.
Ranked #1 on Question Answering on MS MARCO
no code implementations • CoNLL 2018 • Itsumi Saito, Kyosuke Nishida, Hisako Asano, Junji Tomita
To improve the accuracy of CKB completion and expand the size of CKBs, we formulate a new commonsense knowledge base generation task (CKB generation) and propose a joint learning method that incorporates both CKB completion and CKB generation.
no code implementations • 31 Aug 2018 • Kyosuke Nishida, Itsumi Saito, Atsushi Otsuka, Hisako Asano, Junji Tomita
Previous MRS studies, in which the IR component was trained without considering answer spans, struggled to accurately find a small number of relevant passages from a large set of passages.
no code implementations • WS 2018 • Kosuke Nishida, Kyosuke Nishida, Hisako Asano, Junji Tomita
Natural language inference (NLI) is one of the most important tasks in NLP.
no code implementations • IJCNLP 2017 • Itsumi Saito, Jun Suzuki, Kyosuke Nishida, Kugatsu Sadamitsu, Satoshi Kobashikawa, Ryo Masumura, Yuji Matsumoto, Junji Tomita
In this study, we investigated the effectiveness of augmented data for encoder-decoder-based neural normalization models.
no code implementations • IJCNLP 2017 • Ryo Masumura, Taichi Asami, Hirokazu Masataki, Kugatsu Sadamitsu, Kyosuke Nishida, Ryuichiro Higashinaka
In addition, this paper reveals relationships between hyperspherical QLMs and conventional QLMs.
no code implementations • IJCNLP 2017 • Itsumi Saito, Kyosuke Nishida, Kugatsu Sadamitsu, Kuniko Saito, Junji Tomita
Social media texts, such as tweets from Twitter, contain many types of non-standard tokens, and the number of normalization approaches for handling such noisy text has been increasing.
no code implementations • WS 2016 • Yukinori Homma, Kugatsu Sadamitsu, Kyosuke Nishida, Ryuichiro Higashinaka, Hisako Asano, Yoshihiro Matsuo
This paper describes a hierarchical neural network we propose for sentence classification to extract product information from product documents.
1 code implementation • Graduate School of Information Science and Technology, Hokkaido University 2008 • Kyosuke Nishida
When concept drift is detected, the online classifier is reinitialized to prepare for the learning of the next concept.