Search Results for author: Zhishen Yang

Found 5 papers, 2 papers with code

SciCap+: A Knowledge Augmented Dataset to Study the Challenges of Scientific Figure Captioning

1 code implementation • 6 Jun 2023 • Zhishen Yang, Raj Dabre, Hideki Tanaka, Naoaki Okazaki

Automating figure caption generation helps move models' understanding of scientific documents beyond text, and helps authors write informative captions that facilitate communicating scientific findings.

Caption Generation • Image Captioning • +1

TextLearner at SemEval-2020 Task 10: A Contextualized Ranking System in Solving Emphasis Selection in Text

no code implementations • SemEval 2020 • Zhishen Yang, Lars Wolfsteller, Naoaki Okazaki

This paper describes the emphasis selection system of the team TextLearner for SemEval 2020 Task 10: Emphasis Selection For Written Text in Visual Media.

Language Modelling

Image Caption Generation for News Articles

1 code implementation • COLING 2020 • Zhishen Yang, Naoaki Okazaki

In this paper, we address the task of news-image captioning, which generates a description of an image given the image and its article body as input.

Caption Generation • Image Captioning

Keyframe Segmentation and Positional Encoding for Video-guided Machine Translation Challenge 2020

no code implementations • 23 Jun 2020 • Tosho Hirasawa, Zhishen Yang, Mamoru Komachi, Naoaki Okazaki

Video-guided machine translation is a multimodal neural machine translation task that aims to generate high-quality text translations by tangibly engaging both video and text.

Machine Translation • Translation • +1

TokyoTech_NLP at SemEval-2019 Task 3: Emotion-related Symbols in Emotion Detection

no code implementations • SemEval 2019 • Zhishen Yang, Sam Vijlbrief, Naoaki Okazaki

This paper presents our contextual emotion detection system for SemEval-2019 Task 3 (EmoContext: Contextual Emotion Detection in Text).
