no code implementations • 3 Apr 2023 • Tsuyoshi Baba, Kosuke Nishida, Kyosuke Nishida
Our model represents the edit direction as a normal vector in the CLIP space, obtained by training an SVM to classify positive and negative images.
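The stated idea admits a compact sketch: fit a linear SVM on CLIP image embeddings and take its unit-normalized hyperplane normal as the edit direction. In the sketch below, random vectors stand in for real CLIP embeddings, and the 512-dimensional size and step scale are illustrative assumptions, not the paper's training recipe.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
dim = 512  # typical CLIP image-embedding size (assumption)

# Stand-ins for CLIP embeddings of positive and negative example images.
pos = rng.normal(0.5, 1.0, size=(100, dim))
neg = rng.normal(-0.5, 1.0, size=(100, dim))

X = np.vstack([pos, neg])
y = np.array([1] * len(pos) + [0] * len(neg))

svm = LinearSVC().fit(X, y)

# The separating hyperplane's normal, unit-normalized, is the edit direction.
edit_direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

# Editing an image embedding then amounts to a step along that direction.
edited = pos[0] + 2.0 * edit_direction
```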
1 code implementation • 12 Jan 2023 • Ryota Tanaka, Kyosuke Nishida, Kosuke Nishida, Taku Hasegawa, Itsumi Saito, Kuniko Saito
Visual question answering on document images that contain textual, visual, and layout information, called document VQA, has received much attention recently.
no code implementations • 14 Oct 2022 • Kosuke Nishida, Naoki Yoshinaga, Kyosuke Nishida
Although named entity recognition (NER) helps us to extract domain-specific entities from text (e.g., artists in the music domain), it is costly to create a large amount of training data or a structured knowledge base to perform accurate NER in the target domain.
no code implementations • Findings (NAACL) 2022 • Kosuke Nishida, Kyosuke Nishida, Shuichi Nishioka
Our proposed model, LIDE (Learning from Image and DEscription), has a text decoder to generate the descriptions and a text encoder to obtain the text representations of machine- or user-generated descriptions.
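A minimal PyTorch sketch of that two-branch layout follows: a text decoder generates a description, a text encoder re-encodes it, and the result is fused with image features. `LIDESketch`, the GRU modules, sizes, and the fusion layer are all illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class LIDESketch(nn.Module):
    """Toy two-branch model: decode a description, re-encode it, fuse with image."""
    def __init__(self, vocab=1000, d=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.decoder = nn.GRUCell(d, d)                 # text decoder: generates the description
        self.to_vocab = nn.Linear(d, vocab)
        self.encoder = nn.GRU(d, d, batch_first=True)   # text encoder: represents it
        self.fuse = nn.Linear(2 * d, d)

    def forward(self, image_feat, max_len=8):
        # Greedily decode a description conditioned on the image feature.
        h = image_feat
        tok = torch.zeros(image_feat.size(0), dtype=torch.long)
        tokens = []
        for _ in range(max_len):
            h = self.decoder(self.embed(tok), h)
            tok = self.to_vocab(h).argmax(-1)
            tokens.append(tok)
        desc = torch.stack(tokens, dim=1)               # (B, T) generated description
        _, text_repr = self.encoder(self.embed(desc))   # re-encode the description
        return self.fuse(torch.cat([image_feat, text_repr[0]], dim=-1))

model = LIDESketch()
fused = model(torch.randn(2, 256))   # random tensor stands in for image features
```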
no code implementations • 17 Nov 2021 • Kosuke Nishida, Kyosuke Nishida, Itsumi Saito, Sen Yoshida
In this study, we define an interpretable reading comprehension (IRC) model as a pipeline model with the capability of predicting unanswerable queries.
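As a toy illustration of a pipeline that can abstain, the sketch below scores passages against the query and returns "unanswerable" when nothing clears a threshold. The word-overlap scorer and the threshold are hypothetical stand-ins for trained components, not the IRC model itself.

```python
def answer(query: str, passages: list[str], threshold: float = 0.5) -> str:
    def relevance(q: str, p: str) -> float:
        # Placeholder overlap score; a real pipeline would use a trained reader.
        q_words, p_words = set(q.lower().split()), set(p.lower().split())
        return len(q_words & p_words) / max(len(q_words), 1)

    scored = [(relevance(query, p), p) for p in passages]
    best_score, best_passage = max(scored)
    if best_score < threshold:
        return "unanswerable"          # the pipeline can abstain on such queries
    return best_passage                # a downstream reader would extract the span

print(answer("who wrote hamlet", ["shakespeare wrote hamlet", "paris is in france"]))
```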
no code implementations • Findings (ACL) 2021 • Kosuke Nishida, Kyosuke Nishida, Sen Yoshida
TAPTER runs additional pre-training that brings the static word embeddings of a PTLM close to word embeddings obtained from the target domain with fastText.
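A rough sketch of that objective, assuming a simple mean-squared-error pull between the PTLM's input-embedding matrix and domain fastText vectors; the actual TAPTER formulation may differ, and the vectors below are random stand-ins.

```python
import torch

vocab, d = 5000, 300
plm_embeddings = torch.nn.Embedding(vocab, d)   # PTLM static (input) embeddings
fasttext_vectors = torch.randn(vocab, d)        # stand-in for domain fastText vectors

def embedding_alignment_loss(emb: torch.nn.Embedding, target: torch.Tensor) -> torch.Tensor:
    # Penalize the distance between each PTLM word vector and its fastText counterpart.
    return torch.nn.functional.mse_loss(emb.weight, target)

opt = torch.optim.Adam(plm_embeddings.parameters(), lr=1e-3)
for _ in range(100):   # would run alongside the usual pre-training objective
    opt.zero_grad()
    loss = embedding_alignment_loss(plm_embeddings, fasttext_vectors)
    loss.backward()
    opt.step()
```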
no code implementations • 29 Mar 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Junji Tomita
Experimental results showed that most of the combination models outperformed a simple fine-tuned seq-to-seq model on both the CNN/DM and XSum datasets, even when the seq-to-seq model was pre-trained on large-scale corpora.
no code implementations • 21 Jan 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Atsushi Otsuka, Hisako Asano, Junji Tomita, Hiroyuki Shindo, Yuji Matsumoto
Unlike the previous models, our length-controllable abstractive summarization model incorporates a word-level extractive module in the encoder-decoder model instead of length embeddings.
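The mechanic can be sketched as top-k word selection: score each source word and keep only as many as the length budget allows, so the decoder works from a length-controlled input. The scorer below is untrained and purely illustrative.

```python
import torch

scorer = torch.nn.Linear(256, 1)   # untrained, illustrative word-importance scorer

def extract_words(word_states: torch.Tensor, length_budget: int) -> torch.Tensor:
    # word_states: (T, d) encoder states for the source words.
    scores = scorer(word_states).squeeze(-1)                    # (T,) importance scores
    keep = torch.topk(scores, k=length_budget).indices.sort().values
    return word_states[keep]   # the decoder sees only the selected words

selected = extract_words(torch.randn(50, 256), length_budget=12)
print(selected.shape)  # torch.Size([12, 256])
```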
no code implementations • LREC 2020 • Kosuke Nishida, Kyosuke Nishida, Itsumi Saito, Hisako Asano, Junji Tomita
The second is the proposed model, which uses a multi-task learning approach combining language modeling (LM) and reading comprehension (RC).
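A hedged sketch of such a joint objective: one shared encoder feeds both an LM head and an RC head, and their losses are summed. The encoder, heads, dummy targets, and equal loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

d, vocab = 256, 1000
encoder = nn.GRU(d, d, batch_first=True)   # shared encoder for both tasks
lm_head = nn.Linear(d, vocab)              # predicts the next token (LM)
rc_head = nn.Linear(d, 2)                  # per-token answer start/end logits (RC)

tokens = torch.randn(4, 20, d)             # stand-in for embedded input text
states, _ = encoder(tokens)

lm_loss = nn.functional.cross_entropy(
    lm_head(states[:, :-1]).reshape(-1, vocab),
    torch.randint(0, vocab, (4 * 19,)))          # dummy next-token targets
start_end = rc_head(states)                      # (B, T, 2)
rc_loss = nn.functional.cross_entropy(
    start_end[..., 0], torch.randint(0, 20, (4,)))   # dummy answer-start targets

loss = lm_loss + rc_loss   # joint objective; task weights would be tuned
loss.backward()
```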
no code implementations • ACL 2019 • Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita
It enables QFE to consider the dependency among the evidence sentences and cover important information in the question sentence (a loose sketch follows this entry).
Ranked #61 on Question Answering on HotpotQA
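Loosely in that spirit (not QFE itself), the sketch below extracts evidence sentences sequentially, scoring each candidate by how much not-yet-covered question content it adds, so each selection depends on the previous ones. Vectors and the coverage update are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
question = rng.random(64)            # stand-in question vector
sentences = rng.random((10, 64))     # stand-in candidate-sentence vectors

covered = np.zeros_like(question)
selected = []
for _ in range(3):                   # extract three evidence sentences
    # Score = similarity to the question content not yet covered.
    residual = np.clip(question - covered, 0.0, None)
    scores = sentences @ residual
    scores[selected] = -np.inf       # no repeats
    best = int(np.argmax(scores))
    selected.append(best)
    covered = np.maximum(covered, sentences[best])

print(selected)
```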
no code implementations • ACL 2019 • Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita
Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a single model to improve the NLG capability for all styles involved (a conditioning sketch follows this entry).
Ranked #1 on Question Answering on MS MARCO
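One common way to realize a single multi-style generator, shown here as an assumption rather than necessarily the paper's mechanism, is to condition one decoder on a learned style embedding. Style names, sizes, and the GRU decoder below are illustrative.

```python
import torch
import torch.nn as nn

STYLES = {"extractive_span": 0, "well_formed_sentence": 1}
d, vocab = 256, 1000
style_embed = nn.Embedding(len(STYLES), d)
decoder = nn.GRU(d, d, batch_first=True)
out = nn.Linear(d, vocab)

def decode(token_embs: torch.Tensor, style: str) -> torch.Tensor:
    # Prepend the style embedding so the same decoder yields style-specific answers.
    s = style_embed(torch.tensor([STYLES[style]])).unsqueeze(1)   # (1, 1, d)
    s = s.expand(token_embs.size(0), -1, -1)
    h, _ = decoder(torch.cat([s, token_embs], dim=1))
    return out(h)

logits = decode(torch.randn(2, 5, d), "well_formed_sentence")
```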
no code implementations • WS 2018 • Kosuke Nishida, Kyosuke Nishida, Hisako Asano, Junji Tomita
Natural language inference (NLI) is one of the most important tasks in NLP.