Search Results for author: Teruhisa Misu

Found 16 papers, 2 papers with code

Beyond Empirical Windowing: An Attention-Based Approach for Trust Prediction in Autonomous Vehicles

no code implementations · 15 Dec 2023 · Minxue Niu, Zhaobo Zheng, Kumar Akash, Teruhisa Misu

Humans' internal states play a key role in human-machine interaction, leading to the rise of human state estimation as a prominent field.

Autonomous Vehicles · Time Series

Identification of Adaptive Driving Style Preference through Implicit Inputs in SAE L2 Vehicles

no code implementations · 21 Sep 2022 · Zhaobo K. Zheng, Kumar Akash, Teruhisa Misu, Vidya Krishmoorthy, Miaomiao Dong, Yuni Lee, Gaojian Huang

This work proposes identifying a user's driving style preference from multimodal signals, so that the vehicle can match the user's preference continuously and automatically.

Effects of Augmented-Reality-Based Assisting Interfaces on Drivers' Object-wise Situational Awareness in Highly Autonomous Vehicles

no code implementations · 6 Jun 2022 · Xiaofeng Gao, Xingwei Wu, Samson Ho, Teruhisa Misu, Kumar Akash

To understand the effect of highlighting on drivers' SA for objects with different types and locations under various traffic densities, we conducted an in-person experiment with 20 participants on a driving simulator.

Autonomous Driving · Object

Driving Anomaly Detection Using Conditional Generative Adversarial Network

no code implementations · 15 Mar 2022 · Yuning Qiu, Teruhisa Misu, Carlos Busso

The experimental results reveal that recordings annotated with events that are likely to be anomalous, such as avoiding on-road pedestrians and traffic rule violations, have higher anomaly scores than recordings without any event annotation.

Anomaly Detection · Generative Adversarial Network

Learning Temporally and Semantically Consistent Unpaired Video-to-video Translation Through Pseudo-Supervision From Synthetic Optical Flow

1 code implementation · 15 Jan 2022 · Kaihong Wang, Kumar Akash, Teruhisa Misu

In this work, we propose a novel paradigm that regularizes spatiotemporal consistency by synthesizing motions in input videos with the generated optical flow instead of estimating them.

Motion Estimation · Optical Flow Estimation · +1

Grounding Human-to-Vehicle Advice for Self-driving Vehicles

no code implementations · CVPR 2019 · Jinkyu Kim, Teruhisa Misu, Yi-Ting Chen, Ashish Tawari, John Canny

We show that taking advice improves the performance of the end-to-end network, while the network attends to a variety of visual features that are cued by the advice.

Deep Multi-Task Learning for Anomalous Driving Detection Using CAN Bus Scalar Sensor Data

no code implementations · 28 Jun 2019 · Vidyasagar Sadhu, Teruhisa Misu, Dario Pompili

In this paper, we present a novel multi-task-learning-based approach that leverages domain knowledge (maneuver labels) for anomaly detection in driving data.

Multi-Task Learning · Semi-supervised Anomaly Detection · +1

Toward Driving Scene Understanding: A Dataset for Learning Driver Behavior and Causal Reasoning

no code implementations · CVPR 2018 · Vasili Ramanishka, Yi-Ting Chen, Teruhisa Misu, Kate Saenko

We present the Honda Research Institute Driving Dataset (HDD), a challenging dataset to enable research on learning driver behavior in real-life environments.

Scene Understanding
