Search Results for author: Peter Karkus

Found 17 papers, 5 papers with code

Interactive Joint Planning for Autonomous Vehicles

no code implementations • 27 Oct 2023 • Yuxiao Chen, Sushant Veer, Peter Karkus, Marco Pavone

In particular, IJP jointly optimizes over the behavior of the ego vehicle and the surrounding agents, and leverages deep-learned prediction models as priors that the joint trajectory optimization tries to stay close to.

Tasks: Autonomous Vehicles, Model Predictive Control, +2

Partial-View Object View Synthesis via Filtered Inversion

no code implementations • 3 Apr 2023 • Fan-Yun Sun, Jonathan Tremblay, Valts Blukis, Kevin Lin, Danfei Xu, Boris Ivanovic, Peter Karkus, Stan Birchfield, Dieter Fox, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Marco Pavone, Nick Haber

At inference, given one or more views of a novel real-world object, FINV first finds a set of latent codes for the object by inverting the generative model from multiple initial seeds.

Tasks: Object

DiffStack: A Differentiable and Modular Control Stack for Autonomous Vehicles

no code implementations • 13 Dec 2022 • Peter Karkus, Boris Ivanovic, Shie Mannor, Marco Pavone

To enable the joint optimization of AV stacks while retaining modularity, we present DiffStack, a differentiable and modular stack for prediction, planning, and control.

Tasks: Autonomous Vehicles

Differentiable SLAM-net: Learning Particle SLAM for Visual Navigation

no code implementations • CVPR 2021 • Peter Karkus, Shaojun Cai, David Hsu

We introduce the Differentiable SLAM Network (SLAM-net) along with a navigation architecture to enable planar robot navigation in previously unseen indoor environments.

Tasks: Robot Navigation, Simultaneous Localization and Mapping, +1

Beyond Tabula-Rasa: a Modular Reinforcement Learning Approach for Physically Embedded 3D Sokoban

no code implementations • 3 Oct 2020 • Peter Karkus, Mehdi Mirza, Arthur Guez, Andrew Jaegle, Timothy Lillicrap, Lars Buesing, Nicolas Heess, Theophane Weber

We explore whether integrated tasks like Mujoban can be solved by composing RL modules together in a sense-plan-act hierarchy, where modules have well-defined roles similarly to classic robot architectures.

Tasks: Reinforcement Learning (RL)

Differentiable Mapping Networks: Learning Structured Map Representations for Sparse Visual Localization

no code implementations • 19 May 2020 • Peter Karkus, Anelia Angelova, Vincent Vanhoucke, Rico Jonschkowski

We address these tasks by combining spatial structure (differentiable mapping) and end-to-end learning in a novel neural network architecture: the Differentiable Mapping Network (DMN).

Tasks: Visual Localization

Discriminative Particle Filter Reinforcement Learning for Complex Partial Observations

1 code implementation • ICLR 2020 • Xiao Ma, Peter Karkus, David Hsu, Wee Sun Lee, Nan Ye

The particle filter maintains a belief using learned discriminative update, which is trained end-to-end for decision making.

Tasks: Atari Games, Decision Making, +3

Particle Filter Recurrent Neural Networks

1 code implementation • 30 May 2019 • Xiao Ma, Peter Karkus, David Hsu, Wee Sun Lee

Recurrent neural networks (RNNs) have been extraordinarily successful for prediction with sequential data.

Tasks: General Classification, Stock Price Prediction, +2

Differentiable Algorithm Networks for Composable Robot Learning

no code implementations • 28 May 2019 • Peter Karkus, Xiao Ma, David Hsu, Leslie Pack Kaelbling, Wee Sun Lee, Tomas Lozano-Perez

This paper introduces the Differentiable Algorithm Network (DAN), a composable architecture for robot learning systems.

Tasks: Navigate

Integrating Algorithmic Planning and Deep Learning for Partially Observable Navigation

no code implementations • 17 Jul 2018 • Peter Karkus, David Hsu, Wee Sun Lee

We propose a novel approach to robot system design in which each building block of a larger system is represented as a differentiable program, i.e., a deep neural network.

Tasks: Navigate, Robot Navigation

Particle Filter Networks with Application to Visual Localization

2 code implementations • 23 May 2018 • Peter Karkus, David Hsu, Wee Sun Lee

Particle filtering is a powerful approach to sequential state estimation that finds application in many domains, including robot localization and object tracking.

Tasks: Object Tracking, Visual Localization
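For context, the classical bootstrap particle filter that this line of work builds on can be sketched in a few lines. The 1-D random-walk model, noise scales, and function name below are illustrative assumptions, not the paper's learned Particle Filter Network:

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500, seed=0):
    """Toy bootstrap particle filter for a 1-D random-walk state with
    Gaussian noise (process std 0.1, observation std 0.5)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)  # initial belief
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the motion model.
        particles = particles + rng.normal(0.0, 0.1, n_particles)
        # Update: weight particles by the observation likelihood.
        weights = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
        weights /= weights.sum()
        # Resample particles in proportion to their weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        estimates.append(float(particles.mean()))
    return estimates
```

PF-net replaces the hand-specified motion and observation models above with learned, differentiable components so the whole filter can be trained end-to-end.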

QMDP-Net: Deep Learning for Planning under Partial Observability

2 code implementations • NeurIPS 2017 • Peter Karkus, David Hsu, Wee Sun Lee

It is a recurrent policy network, but it represents a policy for a parameterized set of tasks by connecting a model with a planning algorithm that solves the model, thus embedding the solution structure of planning in a network learning architecture.
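The QMDP approximation the network embeds is itself classical: value-iterate the underlying MDP as if it were fully observable, then score actions by the expected Q-value under the belief. A minimal tabular sketch (the array shapes and the `qmdp_action` name are illustrative assumptions, not the network's API):

```python
import numpy as np

def qmdp_action(belief, T, R, gamma=0.95, iters=200):
    """QMDP: value-iterate the fully observable MDP, then pick the
    action maximizing the expected Q-value under the belief.
    T[a, s, s2]: transition probabilities; R[s, a]: rewards."""
    n_states = T.shape[1]
    V = np.zeros(n_states)
    for _ in range(iters):
        # Q[s, a] = R[s, a] + gamma * sum_s2 T[a, s, s2] * V[s2]
        Q = R + gamma * np.einsum('ast,t->sa', T, V)
        V = Q.max(axis=1)
    # Expected Q-value of each action under the belief; act greedily.
    return int(np.argmax(belief @ Q))
```

QMDP-net makes the analogous computation differentiable, so the transition and reward tensors are learned rather than given.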

Factored Contextual Policy Search with Bayesian Optimization

no code implementations • 6 Dec 2016 • Peter Karkus, Andras Kupcsik, David Hsu, Wee Sun Lee

Scarce data is a major challenge to scaling robot learning to truly complex tasks, as we need to generalize locally learned policies over different "contexts".

Tasks: Active Learning, Bayesian Optimization, +2
