Search Results for author: Deepak Gopinath

Found 8 papers, 5 papers with code

Blending Data-Driven Priors in Dynamic Games

no code implementations • 21 Feb 2024 • Justin Lidard, Haimin Hu, Asher Hancock, Zixu Zhang, Albert Gimó Contreras, Vikash Modi, Jonathan DeCastro, Deepak Gopinath, Guy Rosman, Naomi Leonard, María Santos, Jaime Fernández Fisac

As intelligent robots like autonomous vehicles become increasingly deployed in the presence of people, the extent to which these systems should leverage model-based game-theoretic planners versus data-driven policies for safe, interaction-aware motion planning remains an open question.

Autonomous Driving Motion Planning

CIRCLE: Capture In Rich Contextual Environments

1 code implementation • CVPR 2023 • Joao Pedro Araujo, Jiaman Li, Karthik Vetrivel, Rishi Agarwal, Deepak Gopinath, Jiajun Wu, Alexander Clegg, C. Karen Liu

Leveraging our dataset, the model learns to use ego-centric scene information to achieve nontrivial reaching tasks in the context of complex 3D scenes.

Leveraging Demonstrations with Latent Space Priors

1 code implementation • 26 Oct 2022 • Jonas Gehring, Deepak Gopinath, Jungdam Won, Andreas Krause, Gabriel Synnaeve, Nicolas Usunier

Starting with a learned joint latent space, we separately train a generative model of demonstration sequences and an accompanying low-level policy.

Offline RL

Transformer Inertial Poser: Real-time Human Motion Reconstruction from Sparse IMUs with Simultaneous Terrain Generation

1 code implementation • 29 Mar 2022 • Yifeng Jiang, Yuting Ye, Deepak Gopinath, Jungdam Won, Alexander W. Winkler, C. Karen Liu

Real-time human motion reconstruction from a sparse set of wearable IMUs (e.g., six) provides a non-intrusive and economical approach to motion capture.

Motion Estimation

MAAD: A Model and Dataset for "Attended Awareness" in Driving

1 code implementation • 16 Oct 2021 • Deepak Gopinath, Guy Rosman, Simon Stent, Katsuya Terahata, Luke Fletcher, Brenna Argall, John Leonard

Our model takes as input scene information in the form of a video and noisy gaze estimates, and outputs visual saliency, a refined gaze estimate, and an estimate of the person's attended awareness.

Denoising

Customized Handling of Unintended Interface Operation in Assistive Robots

2 code implementations • 4 Jul 2020 • Deepak Gopinath, Mahdieh Nejati Javaremi, Brenna D. Argall

We present an assistance system that reasons about a human's intended actions during robot teleoperation in order to provide appropriate corrections for unintended behavior.
