Search Results for author: Valts Blukis

Found 16 papers, 9 papers with code

Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects

1 code implementation · 1 Apr 2024 · Yijia Weng, Bowen Wen, Jonathan Tremblay, Valts Blukis, Dieter Fox, Leonidas Guibas, Stan Birchfield

We address the problem of building digital twins of unknown articulated objects from two RGBD scans of the object at different articulation states.

Tasks: Object

Diff-DOPE: Differentiable Deep Object Pose Estimation

no code implementations · 30 Sep 2023 · Jonathan Tremblay, Bowen Wen, Valts Blukis, Balakumar Sundaralingam, Stephen Tyree, Stan Birchfield

We introduce Diff-DOPE, a 6-DoF pose refiner that takes as input an image, a 3D textured model of an object, and an initial pose of the object.

Tasks: Object Pose Estimation +1
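The abstract describes a render-and-compare refiner that adjusts an initial pose by differentiating an image-matching loss. As a hedged, one-dimensional toy illustration of that general gradient-based refinement idea (not the paper's actual differentiable renderer, and with a finite-difference gradient standing in for autodiff):

```python
import math

def render(pose, xs):
    # Toy "renderer": a Gaussian bump centered at the pose, a 1-D
    # stand-in for rendering the object's 3D model at a candidate pose.
    return [math.exp(-(x - pose) ** 2) for x in xs]

def loss(pose, observed, xs):
    # Squared image difference between rendering and observation.
    rendered = render(pose, xs)
    return sum((r - o) ** 2 for r, o in zip(rendered, observed))

def refine_pose(observed, xs, init_pose, lr=0.01, steps=500):
    # Gradient descent on the image loss; a central finite difference
    # approximates the gradient a differentiable renderer would provide.
    pose, eps = init_pose, 1e-4
    for _ in range(steps):
        grad = (loss(pose + eps, observed, xs)
                - loss(pose - eps, observed, xs)) / (2 * eps)
        pose -= lr * grad
    return pose

xs = [i * 0.05 - 5.0 for i in range(201)]
observed = render(1.3, xs)          # "image" of the object at the true pose
refined = refine_pose(observed, xs, init_pose=0.5)
```

Starting from a coarse initial pose (0.5), the loop converges to the pose that reproduces the observed image.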

RVT: Robotic View Transformer for 3D Object Manipulation

1 code implementation · 26 Jun 2023 · Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, Dieter Fox

In simulations, we find that a single RVT model works well across 18 RLBench tasks with 249 task variations, achieving 26% higher relative success than the existing state-of-the-art method (PerAct).

Tasks: Object, Robot Manipulation

Partial-View Object View Synthesis via Filtered Inversion

no code implementations · 3 Apr 2023 · Fan-Yun Sun, Jonathan Tremblay, Valts Blukis, Kevin Lin, Danfei Xu, Boris Ivanovic, Peter Karkus, Stan Birchfield, Dieter Fox, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Marco Pavone, Nick Haber

At inference, given one or more views of a novel real-world object, FINV first finds a set of latent codes for the object by inverting the generative model from multiple initial seeds.

Tasks: Object
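The abstract's inversion step — optimizing latent codes from multiple initial seeds and keeping the best reconstruction — can be sketched with a toy generator. This is a hedged illustration of multi-seed generator inversion in general, not FINV's actual model or filtering procedure; the generator here is hypothetical:

```python
import math, random

def generator(z):
    # Hypothetical toy generator standing in for a pretrained 3D-aware
    # generative model: maps a 1-D latent code to a tiny "image".
    return [math.sin(z), math.cos(2 * z)]

def recon_loss(z, observed):
    return sum((a - b) ** 2 for a, b in zip(generator(z), observed))

def invert(observed, seeds, lr=0.05, steps=300):
    # Run gradient descent from each initial seed and keep the latent
    # code that best reconstructs the observed view.
    best_z, best_loss, eps = None, float("inf"), 1e-5
    for z in seeds:
        for _ in range(steps):
            grad = (recon_loss(z + eps, observed)
                    - recon_loss(z - eps, observed)) / (2 * eps)
            z -= lr * grad
        final = recon_loss(z, observed)
        if final < best_loss:
            best_z, best_loss = z, final
    return best_z, best_loss

random.seed(0)
observed = generator(0.9)                      # view of a "novel object"
seeds = [random.uniform(-3, 3) for _ in range(8)]
z_hat, final_loss = invert(observed, seeds)
```

Multiple seeds matter because the inversion objective is non-convex: a single unlucky initialization can stall in a poor local minimum, while the best of several seeds typically reconstructs the observation closely.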

TTA-COPE: Test-Time Adaptation for Category-Level Object Pose Estimation

no code implementations · CVPR 2023 · Taeyeop Lee, Jonathan Tremblay, Valts Blukis, Bowen Wen, Byeong-Uk Lee, Inkyu Shin, Stan Birchfield, In So Kweon, Kuk-Jin Yoon

Unlike previous unsupervised domain adaptation methods for category-level object pose estimation, our approach processes the test data in a sequential, online manner, and it does not require access to the source domain at runtime.

Tasks: Object Pose Estimation +2
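The key property the abstract highlights — adapting sequentially on the test stream with no access to source-domain data — can be illustrated with a minimal online-statistics sketch. This is a generic stand-in for test-time adaptation, not TTA-COPE's actual pose-estimation method:

```python
import random

class OnlineNormalizer:
    """Toy test-time adapter: updates normalization statistics one test
    sample at a time, never touching source-domain data."""

    def __init__(self, momentum=0.1):
        self.momentum = momentum
        self.mean = 0.0
        self.var = 1.0

    def normalize(self, x):
        # Adapt statistics on the incoming sample (online, sequential),
        # then normalize it with the updated estimates.
        m = self.momentum
        self.mean = (1 - m) * self.mean + m * x
        self.var = (1 - m) * self.var + m * (x - self.mean) ** 2
        return (x - self.mean) / (self.var ** 0.5 + 1e-8)

random.seed(1)
norm = OnlineNormalizer()
# Simulated target-domain stream with a domain shift (mean 5 vs. the
# initial mean estimate of 0).
stream = [5.0 + random.gauss(0, 1) for _ in range(500)]
outputs = [norm.normalize(x) for x in stream]
```

After processing the stream, the adapter's statistics track the shifted target domain even though it never saw the source distribution.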

BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects

1 code implementation · CVPR 2023 · Bowen Wen, Jonathan Tremblay, Valts Blukis, Stephen Tyree, Thomas Müller, Alex Evans, Dieter Fox, Jan Kautz, Stan Birchfield

We present a near real-time method for 6-DoF tracking of an unknown object from a monocular RGBD video sequence, while simultaneously performing neural 3D reconstruction of the object.

Tasks: 3D Object Tracking, 3D Reconstruction +5

One-Shot Neural Fields for 3D Object Understanding

no code implementations · 21 Oct 2022 · Valts Blukis, Taeyeop Lee, Jonathan Tremblay, Bowen Wen, In So Kweon, Kuk-Jin Yoon, Dieter Fox, Stan Birchfield

At test-time, we build the representation from a single RGB input image observing the scene from only one viewpoint.

Tasks: 3D Reconstruction, Object +2

ProgPrompt: Generating Situated Robot Task Plans using Large Language Models

no code implementations · 22 Sep 2022 · Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg

Large language models (LLMs) can be used to score potential next actions during task planning, and even to generate action sequences directly, given a natural-language instruction and no additional domain information.
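ProgPrompt's situated, programmatic prompting style can be sketched as assembling a code-like prompt that enumerates the actions and objects available in the current environment before asking an LLM to complete a plan function. A hedged sketch only: the action set, object list, and task name below are all hypothetical, and the actual ProgPrompt prompt format differs in detail:

```python
def build_prompt(actions, objects, task):
    # Assemble a programmatic, situated prompt: import-style action list,
    # scene object list, then a function stub for the LLM to complete.
    lines = [
        "from actions import " + ", ".join(actions),
        "objects = [" + ", ".join(repr(o) for o in objects) + "]",
        f"def {task}():",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    actions=["grab", "put_on", "open_door"],    # hypothetical action set
    objects=["mug", "table", "microwave"],      # hypothetical scene objects
    task="microwave_the_mug",                   # hypothetical task name
)
```

Restricting the prompt to the actions and objects actually present in the scene is what makes the generated plan "situated": the model is steered toward steps the robot can execute.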

A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution

1 code implementation · 12 Jul 2021 · Valts Blukis, Chris Paxton, Dieter Fox, Animesh Garg, Yoav Artzi

Natural language provides an accessible and expressive interface to specify long-term tasks for robotic agents.

Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following

1 code implementation · 14 Nov 2020 · Valts Blukis, Ross A. Knepper, Yoav Artzi

We study the problem of learning a robot policy to follow natural language instructions that can be easily extended to reason about new objects.

Tasks: Continuous Control, Instruction Following

Learning to Map Natural Language Instructions to Physical Quadcopter Control using Simulated Flight

1 code implementation · 21 Oct 2019 · Valts Blukis, Yannick Terme, Eyvind Niklasson, Ross A. Knepper, Yoav Artzi

Learning uses both simulation and real environments without requiring autonomous flight in the physical environment during training, and combines supervised learning for predicting positions to visit and reinforcement learning for continuous control.

Tasks: Continuous Control, Instruction Following +2

Mapping Navigation Instructions to Continuous Control Actions with Position-Visitation Prediction

1 code implementation · 10 Nov 2018 · Valts Blukis, Dipendra Misra, Ross A. Knepper, Yoav Artzi

We propose an approach for mapping natural language instructions and raw observations to continuous control of a quadcopter drone.

Tasks: Continuous Control, Imitation Learning +2
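Position-visitation prediction splits the problem in two: first predict the positions the drone should visit, then generate continuous control toward them. The second stage can be sketched with a simple proportional controller over a stubbed waypoint list. This is a hedged toy analogue, not the paper's learned model; the gains, dynamics, and waypoints are all illustrative:

```python
import math

def follow_waypoints(start, waypoints, gain=0.5, dt=0.1, tol=0.05,
                     max_steps=2000):
    # Stage two of the pipeline, toy version: given predicted positions
    # to visit (stage one, stubbed here as a fixed list), emit continuous
    # velocity commands with a proportional controller and integrate
    # simple point-mass dynamics.
    x, y = start
    commands = []
    for wx, wy in waypoints:
        for _ in range(max_steps):
            dx, dy = wx - x, wy - y
            if math.hypot(dx, dy) < tol:
                break                      # close enough; next waypoint
            vx, vy = gain * dx, gain * dy  # continuous control command
            commands.append((vx, vy))
            x, y = x + vx * dt, y + vy * dt
    return (x, y), commands

final, cmds = follow_waypoints((0.0, 0.0), [(1.0, 0.0), (1.0, 1.0)])
```

Decoupling "where to go" from "how to move" is what lets the position-prediction stage be trained with supervised learning while the control stage handles continuous actuation.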

Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning

1 code implementation · 31 May 2018 · Valts Blukis, Nataly Brukhim, Andrew Bennett, Ross A. Knepper, Yoav Artzi

We introduce a method for following high-level navigation instructions by mapping directly from images, instructions and pose estimates to continuous low-level velocity commands for real-time control.

Tasks: Imitation Learning, Instruction Following
