1 code implementation • 19 Sep 2023 • Surbhi Madan, Rishabh Jain, Gulshan Sharma, Ramanathan Subramanian, Abhinav Dhall
Bodily behavioral language is an important social cue, and its automated analysis helps artificial intelligence systems better understand human behavior.
1 code implementation • 4 Aug 2023 • Ravikiran Parameshwara, Ibrahim Radwan, Akshay Asthana, Iman Abbasnejad, Ramanathan Subramanian, Roland Goecke
Whilst deep learning techniques have achieved excellent emotion prediction, they still require large amounts of labelled training data, which are (a) onerous and tedious to compile, and (b) prone to errors and biases.
no code implementations • 23 Jul 2023 • Monika Gahalawat, Raul Fernandez Rojas, Tanaya Guha, Ramanathan Subramanian, Roland Goecke
While depression has been studied via multimodal non-verbal behavioural cues, head motion behaviour has not received much attention as a biomarker.
no code implementations • 12 Jun 2023 • Soujanya Narayana, Ibrahim Radwan, Ravikiran Parameshwara, Iman Abbasnejad, Akshay Asthana, Ramanathan Subramanian, Roland Goecke
Whilst a majority of affective computing research focuses on inferring emotions, examining mood or understanding the \textit{mood-emotion interplay} has received significantly less attention.
no code implementations • 20 Feb 2023 • Surbhi Madan, Monika Gahalawat, Tanaya Guha, Roland Goecke, Ramanathan Subramanian
We explore the efficacy of multimodal behavioral cues for explainable prediction of personality and interview-specific traits.
no code implementations • 21 Feb 2022 • Devika K, Venkata Ramana Murthy Oruganti, Dwarikanath Mahapatra, Ramanathan Subramanian
Among other findings, the metrics employed for model training as well as for reconstruction-loss computation impact detection performance, and the coronal modality is found to best encode structural information for ASD detection.
1 code implementation • 21 Feb 2022 • Ravikiran Parameshwara, Soujanya Narayana, Murugappan Murugappan, Ramanathan Subramanian, Ibrahim Radwan, Roland Goecke
Employing traditional machine learning and deep learning methods, we explore (a) dimensional and categorical emotion recognition, and (b) PD vs HC classification from emotional EEG signals.
no code implementations • 15 Dec 2021 • Surbhi Madan, Monika Gahalawat, Tanaya Guha, Ramanathan Subramanian
We demonstrate the utility of elementary head-motion units termed kinemes for behavioral analytics to predict personality and interview traits.
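The kineme idea, learning a small codebook of elementary head-motion units and representing a video as a sequence of codebook symbols, can be sketched roughly as follows. The clustering-based quantization, the function names, and the use of a plain k-means codebook are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def learn_kinemes(pose_windows, k=4, iters=50):
    """Cluster short head-pose windows (e.g., pitch/yaw/roll over a few
    frames, flattened into vectors) into k elementary motion units
    ("kinemes") with a tiny k-means. Illustrative sketch only; the
    published method may learn its codebook differently."""
    X = np.asarray(pose_windows, dtype=float)
    # deterministic, spread-out initialization over the data
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def encode_kinemes(pose_windows, centers):
    """Map each pose window to the index of its nearest kineme,
    turning a head-motion time series into a symbol sequence."""
    X = np.asarray(pose_windows, dtype=float)
    return np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
```

The resulting symbol sequence can then feed standard sequence models or histogram-based features for trait prediction.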
no code implementations • 9 Jan 2021 • Vineet Mehta, Parul Gupta, Ramanathan Subramanian, Abhinav Dhall
This paper proposes FakeBuster, a new DeepFake detector for spotting impostors during video conferencing and manipulated faces on social media.
1 code implementation • 11 Dec 2020 • Samyak Jain, Pradeep Yarlagadda, Shreyank Jyoti, Shyamgopal Karthik, Ramanathan Subramanian, Vineet Gandhi
We also explore a variant of the ViNet architecture that augments the decoder with audio features.
no code implementations • 22 Jun 2020 • Harshit Malik, Hersh Dhillon, Roland Goecke, Ramanathan Subramanian
Modeling hirability as a discrete/continuous variable with the \emph{big-five} personality traits as predictors, we utilize (a) apparent personality annotations, and (b) personality estimates obtained via audio, visual and textual cues for hirability prediction (HP).
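The setup above, hirability as a continuous target with the big-five (OCEAN) traits as predictors, can be illustrated with a minimal least-squares sketch. The synthetic data, weights, and plain linear model here are illustrative assumptions, not the paper's actual model or results.

```python
import numpy as np

# Toy sketch: regress a continuous hirability score on big-five
# (OCEAN) personality estimates. All data here are synthetic and
# the weights are hypothetical, for illustration only.
rng = np.random.default_rng(0)
traits = rng.uniform(1, 5, size=(100, 5))        # O, C, E, A, N scores
true_w = np.array([0.2, 0.5, 0.3, 0.1, -0.4])    # hypothetical weights
hirability = traits @ true_w + rng.normal(0, 0.1, 100)

X = np.hstack([traits, np.ones((100, 1))])       # add intercept column
w, *_ = np.linalg.lstsq(X, hirability, rcond=None)
pred = X @ w                                     # fitted hirability scores
```

In practice the trait predictors would come from apparent-personality annotations or audio/visual/textual estimates, as the snippet above describes.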
no code implementations • 12 Jun 2020 • Parul Gupta, Komal Chugh, Abhinav Dhall, Ramanathan Subramanian
We present \textbf{FakeET} -- an eye-tracking database to understand human visual perception of \emph{deepfake} videos.
1 code implementation • 29 May 2020 • Komal Chugh, Parul Gupta, Abhinav Dhall, Ramanathan Subramanian
MDS is computed as an aggregate of dissimilarity scores between audio and visual segments in a video.
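The aggregation described above, scoring audio-visual mismatch per segment and pooling over the video, might be sketched as follows. The Euclidean distance between embeddings and the mean aggregation are assumptions for illustration; the paper's exact dissimilarity measure may differ.

```python
import numpy as np

def modality_dissonance_score(audio_emb, visual_emb):
    """Aggregate per-segment dissimilarity between time-aligned audio
    and visual segment embeddings. A higher score indicates greater
    audio-visual mismatch, which serves as a deepfake cue. Euclidean
    distance and mean pooling are illustrative assumptions."""
    a = np.asarray(audio_emb, dtype=float)
    v = np.asarray(visual_emb, dtype=float)
    per_segment = np.linalg.norm(a - v, axis=1)  # one score per segment
    return per_segment.mean()
```

A manipulated video whose lip motion no longer matches the audio would, under this sketch, yield a higher score than a pristine one.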
no code implementations • 3 Apr 2019 • Abhinav Shukla, Shruti Shriya Gullapuram, Harish Katti, Mohan Kankanhalli, Stefan Winkler, Ramanathan Subramanian
Advertisements (ads) often contain strong affective content to capture viewer attention and convey an effective message to the audience.
no code implementations • 14 Aug 2018 • Abhinav Shukla, Harish Katti, Mohan Kankanhalli, Ramanathan Subramanian
Contrary to the popular notion that ad affect hinges on the narrative and the clever use of linguistic and social cues, we find that actively attended objects and the coarse scene structure better encode affective information as compared to individual scene objects or conspicuous background elements.
no code implementations • 21 Dec 2017 • Viral Parekh, Pin Sym Foong, Shendong Zhao, Ramanathan Subramanian
Engagement in dementia is typically measured using behavior observational scales (BOS), which are tedious to annotate, involve intensive manual labor, and are therefore not easily scalable.
no code implementations • 7 Nov 2017 • Viral Parekh, Ramanathan Subramanian, Dipanjan Roy, C. V. Jawahar
The success of deep learning in computer vision has greatly increased the need for annotated image datasets.
no code implementations • 30 May 2016 • Ramanathan Subramanian, Romer Rosales, Glenn Fung, Jennifer Dy
Given a supervised/semi-supervised learning scenario where multiple annotators are available, we consider the problem of identification of adversarial or unreliable annotators.
no code implementations • ICCV 2015 • Elisa Ricci, Jagannadan Varadarajan, Ramanathan Subramanian, Samuel Rota Bulo, Narendra Ahuja, Oswald Lanz
We present a novel approach for jointly estimating targets' head and body orientations and conversational groups called F-formations from a distant social scene (e.g., a cocktail party captured by surveillance cameras).
no code implementations • 23 Jun 2015 • Xavier Alameda-Pineda, Jacopo Staiano, Ramanathan Subramanian, Ligia Batrinca, Elisa Ricci, Bruno Lepri, Oswald Lanz, Nicu Sebe
Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., a cocktail party) is gratifying due to the wealth of information available at the group (mining social networks) and individual (recognizing native behavioral and personality traits) levels.