Search Results for author: Kosuke Nishida

Found 12 papers, 1 paper with code

Robust Text-driven Image Editing Method that Adaptively Explores Directions in Latent Spaces of StyleGAN and CLIP

no code implementations • 3 Apr 2023 • Tsuyoshi Baba, Kosuke Nishida, Kyosuke Nishida

Our model represents the edit direction as a normal vector in the CLIP space, obtained by training an SVM to classify positive and negative images.
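
A minimal sketch of that direction-finding step, assuming CLIP embeddings are already computed (the arrays below are random placeholders, and this is an illustration rather than the authors' code):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Placeholder CLIP image embeddings; in practice these come from a CLIP
# image encoder applied to images with and without the target attribute.
rng = np.random.default_rng(0)
pos = rng.normal(size=(100, 512))  # "positive" images (attribute present)
neg = rng.normal(size=(100, 512))  # "negative" images (attribute absent)

X = np.vstack([pos, neg])
y = np.array([1] * len(pos) + [0] * len(neg))

# The SVM's separating hyperplane has normal vector coef_[0]; its unit
# version serves as the edit direction in CLIP space.
svm = LinearSVC(C=1.0).fit(X, y)
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])
```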

SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images

1 code implementation • 12 Jan 2023 • Ryota Tanaka, Kyosuke Nishida, Kosuke Nishida, Taku Hasegawa, Itsumi Saito, Kuniko Saito

Visual question answering on document images that contain textual, visual, and layout information, called document VQA, has received much attention recently.

Evidence Selection • Question Answering • +1

Self-Adaptive Named Entity Recognition by Retrieving Unstructured Knowledge

no code implementations • 14 Oct 2022 • Kosuke Nishida, Naoki Yoshinaga, Kyosuke Nishida

Although named entity recognition (NER) helps us to extract domain-specific entities from text (e.g., artists in the music domain), it is costly to create a large amount of training data or a structured knowledge base to perform accurate NER in the target domain.

Named Entity Recognition • +1

Improving Few-Shot Image Classification Using Machine- and User-Generated Natural Language Descriptions

no code implementations • Findings (NAACL) 2022 • Kosuke Nishida, Kyosuke Nishida, Shuichi Nishioka

Our proposed model, LIDE (Learning from Image and DEscription), has a text decoder to generate the descriptions and a text encoder to obtain the text representations of machine- or user-generated descriptions (see the sketch after this entry).

Few-Shot Image Classification
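
A schematic sketch of the fusion idea in LIDE, with assumed dimensions and module choices (the paper's architecture is more elaborate, and the description-generating decoder is omitted here):

```python
import torch
import torch.nn as nn

class LIDESketch(nn.Module):
    """Fuse an image feature with the encoding of a description."""
    def __init__(self, img_dim=512, txt_dim=512, n_classes=5):
        super().__init__()
        # Stand-in text encoder; the paper uses a stronger pre-trained one.
        self.text_encoder = nn.LSTM(txt_dim, txt_dim, batch_first=True)
        self.classifier = nn.Linear(img_dim + txt_dim, n_classes)

    def forward(self, img_feat, desc_embeds):
        # desc_embeds: embedded machine- or user-generated description,
        # shape (batch, seq_len, txt_dim); img_feat: (batch, img_dim).
        _, (h, _) = self.text_encoder(desc_embeds)
        fused = torch.cat([img_feat, h[-1]], dim=-1)  # image + text fusion
        return self.classifier(fused)
```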

Towards Interpretable and Reliable Reading Comprehension: A Pipeline Model with Unanswerability Prediction

no code implementations • 17 Nov 2021 • Kosuke Nishida, Kyosuke Nishida, Itsumi Saito, Sen Yoshida

In this study, we define an interpretable reading comprehension (IRC) model as a pipeline model with the capability of predicting unanswerable queries (a rough sketch follows this entry).

Reading Comprehension
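
A rough sketch of that pipeline under assumed component interfaces (the three callables are hypothetical stand-ins for the paper's trained modules):

```python
def interpretable_rc(question, document,
                     extract_evidence, is_answerable, read_answer):
    """Answer only from extracted evidence; abstain when unanswerable."""
    evidence = extract_evidence(question, document)   # interpretable step
    if not is_answerable(question, evidence):         # unanswerability check
        return {"answer": None, "evidence": evidence, "answerable": False}
    answer = read_answer(question, evidence)          # answer from evidence
    return {"answer": answer, "evidence": evidence, "answerable": True}
```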

Task-adaptive Pre-training of Language Models with Word Embedding Regularization

no code implementations • Findings (ACL) 2021 • Kosuke Nishida, Kyosuke Nishida, Sen Yoshida

TAPTER runs additional pre-training by making the static word embeddings of a PTLM close to the word embeddings obtained in the target domain with fastText (see the sketch below).

Domain Adaptation • Question Answering • +1
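
A minimal sketch of such a regularizer, assuming both embedding matrices are vocab × dim and the fastText side stays frozen (names and the weighting are assumptions, not the paper's exact formulation):

```python
import torch

def embedding_regularizer(ptlm_emb: torch.Tensor,
                          fasttext_emb: torch.Tensor) -> torch.Tensor:
    # Mean squared distance between the PTLM's static word embeddings and
    # domain-specific fastText embeddings; gradients flow only to the PTLM.
    return ((ptlm_emb - fasttext_emb.detach()) ** 2).mean()

# During additional pre-training (hypothetical names):
# loss = mlm_loss + reg_weight * embedding_regularizer(
#     model.get_input_embeddings().weight, fasttext_matrix)
```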

Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models

no code implementations • 29 Mar 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Junji Tomita

Experimental results showed that most of the combination models outperformed a simple fine-tuned seq-to-seq model on both the CNN/DM and XSum datasets, even when the seq-to-seq model was pre-trained on large-scale corpora.

Abstractive Text Summarization • Text Generation

Length-controllable Abstractive Summarization by Guiding with Summary Prototype

no code implementations • 21 Jan 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Atsushi Otsuka, Hisako Asano, Junji Tomita, Hiroyuki Shindo, Yuji Matsumoto

Unlike previous models, our length-controllable abstractive summarization model incorporates a word-level extractive module in the encoder-decoder model instead of length embeddings (see the sketch after this entry).

Abstractive Text Summarization
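
A hedged sketch of the prototype idea under assumed interfaces: keep the K most salient source words, in their original order, as an extractive prototype whose size sets the target length, then generate conditioned on both source and prototype:

```python
def summarize_with_length(source_tokens, saliency_scores, k, seq2seq):
    # Indices of the k most salient words, restored to document order.
    top = sorted(sorted(range(len(source_tokens)),
                        key=lambda i: saliency_scores[i],
                        reverse=True)[:k])
    prototype = [source_tokens[i] for i in top]
    # seq2seq is a stand-in for the encoder-decoder model that conditions
    # on both the source text and the word-level prototype.
    return seq2seq(source_tokens, prototype)
```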

Multi-style Generative Reading Comprehension

no code implementations • ACL 2019 • Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita

Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a single model to improve the NLG capability for all styles involved (see the sketch below).

Abstractive Text Summarization • Question Answering • +2
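
One common way to realize single-model multi-style generation, shown here as a hedged sketch rather than the paper's exact mechanism: condition the decoder on an artificial style token so one set of parameters serves every answer style:

```python
def build_decoder_input(style, question_tokens):
    # Hypothetical style inventory; the paper targets multiple answer
    # styles (e.g., concise extractive-like and well-formed NLG answers).
    style_token = {"qa": "<QA>", "nlg": "<NLG>"}[style]
    return [style_token] + question_tokens
```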
