no code implementations • LREC 2022 • Toshiki Onishi, Asahi Ogushi, Yohei Tahara, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata
In this paper, we analyze the differences between the face-to-face and remote corpora, in particular the expressions in adjudged praising scenes in both corpora, and also evaluate praising skills.
no code implementations • ICCV 2023 • Chaitanya Ahuja, Pratik Joshi, Ryo Ishii, Louis-Philippe Morency
However, in practical scenarios, speaker data arrives sequentially and in small amounts as the agent personalizes to more speakers, akin to a continual learning paradigm.
no code implementations • ACL 2021 • Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
Using computational models, we find that language and multimodal representations of mobile typed text (spanning typed characters, words, keystroke timings, and app usage) are predictive of daily mood.
no code implementations • 4 Dec 2020 • Terrance Liu, Paul Pu Liang, Michal Muszynski, Ryo Ishii, David Brent, Randy Auerbach, Nicholas Allen, Louis-Philippe Morency
Mental health conditions remain under-diagnosed even in countries with common access to advanced medical care.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Chaitanya Ahuja, Dong Won Lee, Ryo Ishii, Louis-Philippe Morency
We study relationships between spoken language and co-speech gestures in context of two key challenges.
no code implementations • WS 2018 • Ryo Masumura, Tomohiro Tanaka, Atsushi Ando, Ryo Ishii, Ryuichiro Higashinaka, Yushi Aono
This paper proposes a fully neural-network-based, dialogue-context online end-of-turn detection method that can utilize long-range interactive information extracted from both the speaker's utterances and the collocutor's utterances.