Search Results for author: Anubhav Bhatti

Found 8 papers, 1 paper with code

SM70: A Large Language Model for Medical Devices

no code implementations • 12 Dec 2023 • Anubhav Bhatti, Surajsinh Parmar, San Lee

We introduce SM70, a 70-billion-parameter Large Language Model designed specifically for SpassMed's medical devices under the brand name 'JEE1' (pronounced 'G1', meaning 'Life').

Decision Making • Information Retrieval • +2

Vital Sign Forecasting for Sepsis Patients in ICUs

no code implementations • 8 Nov 2023 • Anubhav Bhatti, Yuwei Liu, Chen Dan, Bingjie Shen, San Lee, Yonghwan Kim, Jang Yong Kim

This paper introduces a multi-step forecasting system, built on state-of-the-art deep learning (DL) architectures, to predict vital signs indicative of septic shock progression in Intensive Care Units (ICUs).

Decision Making • Dynamic Time Warping
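As a rough illustration of the multi-step vital-sign forecasting setup described above (not the authors' actual architecture; model type, window length, horizon, and variable names are assumptions), a minimal PyTorch sketch that maps a window of past vital-sign readings to several future steps could look like this:

# Minimal multi-step vital-sign forecasting sketch (illustrative only;
# not the architecture used in the paper). Assumes PyTorch is installed.
import torch
import torch.nn as nn

class MultiStepForecaster(nn.Module):
    """Maps a window of past vital signs to several future time steps."""

    def __init__(self, n_signals=4, lookback=64, horizon=12, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(input_size=n_signals, hidden_size=hidden, batch_first=True)
        # One linear head predicts the whole horizon at once (direct multi-step).
        self.head = nn.Linear(hidden, horizon * n_signals)
        self.horizon, self.n_signals = horizon, n_signals

    def forward(self, x):
        # x: (batch, lookback, n_signals), e.g. heart rate, MAP, SpO2, respiration
        _, h = self.encoder(x)              # h: (1, batch, hidden)
        out = self.head(h.squeeze(0))       # (batch, horizon * n_signals)
        return out.view(-1, self.horizon, self.n_signals)

# Example: forecast 12 future steps from 64 past steps of 4 vital signs.
model = MultiStepForecaster()
past = torch.randn(8, 64, 4)    # dummy batch of 8 ICU-stay windows
future = model(past)            # shape: (8, 12, 4)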

Interpreting Forecasted Vital Signs Using N-BEATS in Sepsis Patients

no code implementations • 24 Jun 2023 • Anubhav Bhatti, Naveen Thangavelu, Marium Hassan, Choongmin Kim, San Lee, Yonghwan Kim, Jang Yong Kim

We analyze the samples where the forecasted trend does not match the actual trend and study the impact of infused drugs on changing the actual vital signs compared to the forecasted trend.

Dynamic Time Warping
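Both forecasting papers above are tagged with Dynamic Time Warping, which is commonly used to compare a forecasted trend against the observed one. A small, dependency-free sketch of the classic DTW distance (illustrative only; not the authors' evaluation code, and the example values are made up):

# Plain-Python Dynamic Time Warping distance between two 1-D series
# (illustrative sketch; not the evaluation code used in the papers).

def dtw_distance(a, b):
    """Return the DTW alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Example: compare a forecasted heart-rate trend against the observed one.
forecast = [82, 84, 88, 91, 95]
observed = [82, 83, 85, 90, 96]
print(dtw_distance(forecast, observed))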

Multimodal Brain-Computer Interface for In-Vehicle Driver Cognitive Load Measurement: Dataset and Baselines

1 code implementation • 9 Apr 2023 • Prithila Angkan, Behnam Behinaein, Zunayed Mahmud, Anubhav Bhatti, Dirk Rodenburg, Paul Hungler, Ali Etemad

In this paper, we introduce a novel driver cognitive load assessment dataset, CL-Drive, which contains Electroencephalogram (EEG) signals along with other physiological signals such as Electrocardiography (ECG) and Electrodermal Activity (EDA), as well as eye tracking data.

Brain Computer Interface • EEG • +1

AttX: Attentive Cross-Connections for Fusion of Wearable Signals in Emotion Recognition

no code implementations • 9 Jun 2022 • Anubhav Bhatti, Behnam Behinaein, Paul Hungler, Ali Etemad

We perform extensive experiments on three public multimodal wearable datasets, WESAD, SWELL-KW, and CASE, and demonstrate that our method can effectively regulate and share information between different modalities to learn better representations.

Emotion Recognition • Representation Learning
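The paper above describes sharing information between wearable modalities; purely as a generic illustration of attention-based fusion of two modality streams (this is NOT the AttX cross-connection module, and the dimensions and modality names are assumptions), a sketch might look like this:

# Generic attention-based fusion of two wearable modalities (e.g. ECG and
# EDA embeddings). Illustrative only; not the AttX module from the paper.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Lets one modality attend over another before the two are fused."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, ecg, eda):
        # ecg, eda: (batch, time, dim) per-modality feature sequences
        attended, _ = self.attn(query=ecg, key=eda, value=eda)
        return self.fuse(torch.cat([attended, eda], dim=-1))

# Example: fuse two dummy 50-step feature sequences of width 64.
fusion = CrossModalAttention()
ecg_feats, eda_feats = torch.randn(2, 50, 64), torch.randn(2, 50, 64)
fused = fusion(ecg_feats, eda_feats)    # shape: (2, 50, 64)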
