Search Results for author: Hyung-Jeong Yang

Found 14 papers, 5 papers with code

DCTM: Dilated Convolutional Transformer Model for Multimodal Engagement Estimation in Conversation

no code implementations • 31 Jul 2023 • Vu Ngoc Tu, Van Thong Huynh, Hyung-Jeong Yang, M. Zaigham Zaheer, Shah Nawaz, Karthik Nandakumar, Soo-Hyung Kim

Conversational engagement estimation is posed as a regression problem, entailing identification of the degree of attention and involvement of the participants in the conversation.

regression
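
As a rough illustration of the title's idea, a dilated-convolution front end feeding a transformer encoder can regress a scalar engagement score per clip; the feature dimension, layer counts, and fusion step below are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: dilated temporal convolutions + transformer encoder
# regressing one engagement score per sequence of fused features.
import torch
import torch.nn as nn

class DilatedConvTransformerRegressor(nn.Module):
    def __init__(self, feat_dim=256, n_heads=4, n_layers=2):
        super().__init__()
        # Dilated temporal convolutions widen the receptive field cheaply.
        self.dilated = nn.Sequential(
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1, dilation=1),
            nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, 1)  # scalar engagement score

    def forward(self, x):            # x: (batch, time, feat_dim)
        x = self.dilated(x.transpose(1, 2)).transpose(1, 2)
        x = self.encoder(x)
        return self.head(x.mean(dim=1)).squeeze(-1)

scores = DilatedConvTransformerRegressor()(torch.randn(2, 50, 256))
print(scores.shape)  # torch.Size([2])
```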

A transformer-based approach to video frame-level prediction in Affective Behaviour Analysis In-the-wild

no code implementations • 16 Mar 2023 • Dang-Khanh Nguyen, Ngoc-Huynh Ho, Sudarshan Pant, Hyung-Jeong Yang

In recent years, the transformer architecture has been a dominant paradigm in many applications, including affective computing.

Emotion Classification
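
A minimal sketch of frame-level prediction with a transformer encoder, where each frame's feature vector receives its own emotion logits; the feature extractor, dimensions, and class count are assumptions for illustration, not the paper's configuration.

```python
# Sketch: a transformer encoder contextualizes per-frame features and a
# linear head emits one set of emotion logits per video frame.
import torch
import torch.nn as nn

class FrameLevelTransformer(nn.Module):
    def __init__(self, feat_dim=512, n_classes=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, frame_feats):        # (batch, frames, feat_dim)
        ctx = self.encoder(frame_feats)    # temporal context per frame
        return self.classifier(ctx)        # (batch, frames, n_classes)

logits = FrameLevelTransformer()(torch.randn(1, 100, 512))
print(logits.shape)  # torch.Size([1, 100, 8])
```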

Generic Event Boundary Detection in Video with Pyramid Features

1 code implementation • 11 Jan 2023 • Van Thong Huynh, Hyung-Jeong Yang, Guee-Sang Lee, Soo-Hyung Kim

In this study, we present an approach that considers the correlation between neighboring frames, using pyramid feature maps in both spatial and temporal dimensions, to construct a framework for localizing generic events in video.

Boundary Detection · Generic Event Boundary Detection
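
One hedged way to picture the pyramid idea: pool frame features at several temporal scales, upsample each level back to full length, and score every frame as boundary or not. The scales and dimensions below are illustrative, not the paper's exact design.

```python
# Sketch: a temporal feature pyramid for per-frame boundary scoring.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalPyramidBoundary(nn.Module):
    def __init__(self, feat_dim=256, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.head = nn.Linear(feat_dim * len(scales), 1)

    def forward(self, x):                  # (batch, time, feat_dim)
        t = x.size(1)
        levels = []
        for s in self.scales:
            pooled = F.avg_pool1d(x.transpose(1, 2), kernel_size=s, stride=s)
            # interpolate back so every level aligns frame-by-frame
            levels.append(F.interpolate(pooled, size=t, mode="linear",
                                        align_corners=False).transpose(1, 2))
        return torch.sigmoid(self.head(torch.cat(levels, dim=-1))).squeeze(-1)

probs = TemporalPyramidBoundary()(torch.randn(2, 64, 256))
print(probs.shape)  # torch.Size([2, 64]) -- per-frame boundary probability
```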

Fine-tuning Wav2vec for Vocal-burst Emotion Recognition

no code implementations • 1 Oct 2022 • Dang-Khanh Nguyen, Sudarshan Pant, Ngoc-Huynh Ho, Guee-Sang Lee, Soo-Hyung Kim, Hyung-Jeong Yang

The ACII Affective Vocal Bursts (A-VB) competition introduces a new topic in affective computing: understanding emotional expression in the non-verbal vocalizations of humans.

Emotion Recognition
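
A minimal fine-tuning sketch using the Hugging Face wav2vec 2.0 backbone: mean-pool the frame embeddings and regress emotion intensities. The checkpoint name and the 10-dimensional output (as in the A-VB "high" task) are assumptions here, not the authors' exact setup.

```python
# Sketch: wav2vec 2.0 backbone + pooled regression head for vocal bursts.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class VocalBurstRegressor(nn.Module):
    def __init__(self, n_outputs=10):
        super().__init__()
        self.backbone = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        self.head = nn.Linear(self.backbone.config.hidden_size, n_outputs)

    def forward(self, waveform):                    # (batch, samples) @ 16 kHz
        hidden = self.backbone(waveform).last_hidden_state
        # mean-pool over time, then squash intensities into [0, 1]
        return torch.sigmoid(self.head(hidden.mean(dim=1)))

model = VocalBurstRegressor()
print(model(torch.randn(1, 16000)).shape)  # torch.Size([1, 10])
```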

An Ensemble Approach for Multiple Emotion Descriptors Estimation Using Multi-task Learning

1 code implementation • 22 Jul 2022 • Irfan Haider, Minh-Trieu Tran, Soo-Hyung Kim, Hyung-Jeong Yang, Guee-Sang Lee

This paper describes our submission to the fourth Affective Behavior Analysis in-the-Wild (ABAW) Competition.

Multi-Task Learning

Affective Behavior Analysis using Action Unit Relation Graph and Multi-task Cross Attention

no code implementations • 21 Jul 2022 • Dang-Khanh Nguyen, Sudarshan Pant, Ngoc-Huynh Ho, Guee-Sang Lee, Soo-Hyung Kim, Hyung-Jeong Yang

In this paper, we present our solution and experimental results for the Multi-Task Learning challenge of the Affective Behavior Analysis in-the-wild competition.

Action Unit Detection · Arousal Estimation · +5

Emotion Recognition with Incomplete Labels Using Modified Multi-task Learning Technique

no code implementations • 9 Jul 2021 • Phan Tran Dac Thinh, Hoang Manh Hung, Hyung-Jeong Yang, Soo-Hyung Kim, Guee-Sang Lee

In this study, we propose a method that utilizes the association between seven basic emotions and twelve action units from the Aff-Wild2 dataset.

Emotion Recognition · Multi-Task Learning
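
A sketch of multi-task training under incomplete labels: a shared trunk feeds a 7-way emotion head and a 12-unit action-unit head, and each loss term is computed only where a label is actually present. The convention of marking missing labels with -1 is an assumption for illustration, not the paper's exact technique.

```python
# Sketch: masked multi-task loss that skips missing emotion/AU labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

trunk = nn.Sequential(nn.Linear(512, 256), nn.ReLU())
emo_head, au_head = nn.Linear(256, 7), nn.Linear(256, 12)

def masked_multitask_loss(feats, emo_labels, au_labels):
    h = trunk(feats)
    loss = feats.new_zeros(())
    has_emo = emo_labels >= 0                       # -1 means "no label"
    if has_emo.any():
        loss = loss + F.cross_entropy(emo_head(h)[has_emo],
                                      emo_labels[has_emo])
    has_au = au_labels >= 0                         # per-AU presence mask
    if has_au.any():
        loss = loss + F.binary_cross_entropy_with_logits(
            au_head(h)[has_au], au_labels[has_au].float())
    return loss

feats = torch.randn(4, 512)
emo = torch.tensor([2, -1, 5, -1])                  # two samples unlabeled
au = torch.randint(-1, 2, (4, 12))                  # -1 = missing AU label
print(masked_multitask_loss(feats, emo, au))
```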

Temporal Convolution Networks with Positional Encoding for Evoked Expression Estimation

1 code implementation • 16 Jun 2021 • Van Thong Huynh, Guee-Sang Lee, Hyung-Jeong Yang, Soo-Hyung Kim

This paper presents an approach for the Evoked Expressions from Videos (EEV) challenge, which aims to predict evoked facial expressions from video.
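
An illustrative sketch of the title's recipe: sinusoidal positional encodings added to per-frame features, followed by dilated temporal convolutions and a per-frame head. The 15 outputs match the EEV expression set, but every dimension here is an assumption.

```python
# Sketch: temporal convolution network with sinusoidal positional encoding.
import math
import torch
import torch.nn as nn

def sinusoidal_encoding(t, dim):
    # classic fixed sin/cos position table of shape (t, dim)
    pos = torch.arange(t).unsqueeze(1)
    div = torch.exp(torch.arange(0, dim, 2) * (-math.log(10000.0) / dim))
    pe = torch.zeros(t, dim)
    pe[:, 0::2], pe[:, 1::2] = torch.sin(pos * div), torch.cos(pos * div)
    return pe

class TCNWithPE(nn.Module):
    def __init__(self, feat_dim=128, n_outputs=15):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, feat_dim, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, n_outputs)

    def forward(self, x):                       # (batch, time, feat_dim)
        x = x + sinusoidal_encoding(x.size(1), x.size(2))
        return self.head(self.tcn(x.transpose(1, 2)).transpose(1, 2))

print(TCNWithPE()(torch.randn(2, 30, 128)).shape)  # torch.Size([2, 30, 15])
```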

Variants of BERT, Random Forests and SVM approach for Multimodal Emotion-Target Sub-challenge

no code implementations • 28 Jul 2020 • Hoang Manh Hung, Hyung-Jeong Yang, Soo-Hyung Kim, Guee-Sang Lee

Emotion recognition has become a major problem in computer vision in recent years, prompting considerable effort by researchers to overcome its difficulties.

Classification · Emotion Recognition · +3
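
A hedged sketch of the pipeline the title suggests: sentence embeddings from a BERT variant feed classical classifiers (an SVM and a random forest). The checkpoint name and the toy texts and labels are placeholders, not the paper's data.

```python
# Sketch: BERT [CLS] embeddings as features for scikit-learn classifiers.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return bert(**batch).last_hidden_state[:, 0].numpy()  # [CLS] vectors

texts = ["I love this!", "This is awful.", "Great news!", "So sad."]
labels = [1, 0, 1, 0]                                # toy emotion labels
feats = embed(texts)
for clf in (SVC(kernel="rbf"), RandomForestClassifier(n_estimators=100)):
    clf.fit(feats, labels)
    print(type(clf).__name__, clf.predict(embed(["What a day!"])))
```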

Eye Semantic Segmentation with a Lightweight Model

1 code implementation • 4 Nov 2019 • Van Thong Huynh, Soo-Hyung Kim, Guee-Sang Lee, Hyung-Jeong Yang

In this paper, we present a multi-class eye segmentation method that can run under hardware limitations for real-time inference.

Semantic Segmentation
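
A toy sketch of a lightweight encoder-decoder for multi-class eye segmentation, using depthwise-separable convolutions to keep the parameter count small. The four classes (background, sclera, iris, pupil, as in OpenEDS-style data) and the architecture are assumptions, not the paper's model.

```python
# Sketch: tiny encoder-decoder with depthwise-separable convolutions.
import torch
import torch.nn as nn

def sep_conv(cin, cout):
    # depthwise conv + pointwise conv: far fewer weights than a full conv
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, padding=1, groups=cin),
        nn.Conv2d(cin, cout, 1), nn.BatchNorm2d(cout), nn.ReLU())

class TinyEyeSeg(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc1, self.enc2 = sep_conv(3, 16), sep_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec = sep_conv(32, 16)
        self.out = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):                       # (batch, 3, H, W)
        x = self.enc2(self.pool(self.enc1(x)))
        return self.out(self.dec(self.up(x)))   # per-pixel class logits

print(TinyEyeSeg()(torch.randn(1, 3, 64, 96)).shape)  # [1, 4, 64, 96]
```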
