1 code implementation • 24 Jan 2024 • Ryota Tanaka, Taichi Iki, Kyosuke Nishida, Kuniko Saito, Jun Suzuki
We study the problem of completing various visual document understanding (VDU) tasks, e.g., question answering and information extraction, on real-world documents through human-written instructions.
1 code implementation • 12 Jan 2023 • Ryota Tanaka, Kyosuke Nishida, Kosuke Nishida, Taku Hasegawa, Itsumi Saito, Kuniko Saito
Visual question answering on document images that contain textual, visual, and layout information, called document VQA, has received much attention recently.
1 code implementation • 27 Jan 2021 • Ryota Tanaka, Kyosuke Nishida, Sen Yoshida
In this study, we introduce a new visual machine reading comprehension dataset, named VisualMRC, wherein given a question and a document image, a machine reads and comprehends texts in the image to answer the question in natural language.
no code implementations • 6 May 2020 • Ryota Tanaka, Akinobu Lee
Fact-based dialogue generation is the task of generating a human-like response based on both the dialogue context and factual texts.
no code implementations • 5 Feb 2019 • Ryota Tanaka, Akihide Ozeki, Shugo Kato, Akinobu Lee
This study aims to generate responses grounded in real-world facts by conditioning on the dialogue context and external facts extracted from information websites.