Search Results for author: Ryo Ishii

Found 7 papers, 1 paper with code

A Comparison of Praising Skills in Face-to-Face and Remote Dialogues

no code implementations LREC 2022 Toshiki Onishi, Asahi Ogushi, Yohei Tahara, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata

In this paper, we analyze the differences between the face-to-face and remote corpora, in particular the expressions in scenes judged to contain praising in both corpora, and also evaluate praising skills.

Continual Learning for Personalized Co-speech Gesture Generation

no code implementations ICCV 2023 Chaitanya Ahuja, Pratik Joshi, Ryo Ishii, Louis-Philippe Morency

However, in practical scenarios, speaker data comes sequentially and in small amounts as the agent personalizes with more speakers, akin to a continual learning paradigm.

Continual Learning Gesture Generation

Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data

no code implementations ACL 2021 Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency

Using computational models, we find that language and multimodal representations of mobile typed text (spanning typed characters, words, keystroke timings, and app usage) are predictive of daily mood.

Privacy Preserving

Neural Dialogue Context Online End-of-Turn Detection

no code implementations WS 2018 Ryo Masumura, Tomohiro Tanaka, Atsushi Ando, Ryo Ishii, Ryuichiro Higashinaka, Yushi Aono

This paper proposes a fully neural network based dialogue-context online end-of-turn detection method that can utilize long-range interactive information extracted from both speaker's utterances and collocutor's utterances.

Action Detection Spoken Dialogue Systems
