Search Results for author: Viktor Makoviychuk

Found 13 papers, 7 papers with code

DexPBT: Scaling up Dexterous Manipulation for Hand-Arm Systems with Population Based Training

no code implementations · 20 May 2023 · Aleksei Petrenko, Arthur Allshire, Gavriel State, Ankur Handa, Viktor Makoviychuk

In this work, we propose algorithms and methods that enable learning dexterous object manipulation using simulated one- or two-armed robots equipped with multi-fingered hand end-effectors.

Object

Accelerated Policy Learning with Parallel Differentiable Simulation

no code implementations · ICLR 2022 · Jie Xu, Viktor Makoviychuk, Yashraj Narang, Fabio Ramos, Wojciech Matusik, Animesh Garg, Miles Macklin

In this work we present a high-performance differentiable simulator and a new policy learning algorithm (SHAC) that can effectively leverage simulation gradients, even in the presence of non-smoothness.
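A rough illustration of the core idea named above, leveraging simulation gradients for policy learning: the sketch below optimises a small policy by backpropagating a rollout cost through a toy differentiable point-mass simulator in PyTorch. This is a minimal sketch under assumptions, not the paper's SHAC implementation; the dynamics, horizon, and network are all illustrative.

```python
# Minimal sketch (not the authors' SHAC code): first-order policy optimisation
# through a differentiable simulator. The point-mass dynamics, the 32-step
# horizon, and the small policy network are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def dynamics(state, action, dt=0.05):
    # Differentiable toy dynamics: state = (position, velocity).
    pos, vel = state[..., 0], state[..., 1]
    vel = vel + action.squeeze(-1) * dt
    pos = pos + vel * dt
    return torch.stack([pos, vel], dim=-1)

for it in range(200):
    state = torch.tensor([[1.0, 0.0]])   # start away from the goal at the origin
    total_cost = 0.0
    for t in range(32):                  # short-horizon rollout
        action = policy(state)
        state = dynamics(state, action)
        total_cost = total_cost + (state[..., 0] ** 2).sum() + 1e-3 * (action ** 2).sum()
    opt.zero_grad()
    total_cost.backward()                # gradients flow through the simulator steps
    opt.step()
```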

Reinforcement Learning in Factored Action Spaces using Tensor Decompositions

no code implementations · 27 Oct 2021 · Anuj Mahajan, Mikayel Samvelyan, Lei Mao, Viktor Makoviychuk, Animesh Garg, Jean Kossaifi, Shimon Whiteson, Yuke Zhu, Animashree Anandkumar

We present an extended abstract for the previously published work TESSERACT [Mahajan et al., 2021], which proposes a novel solution for Reinforcement Learning (RL) in large, factored action spaces using tensor decompositions.

Multi-agent Reinforcement Learning · reinforcement-learning +1

OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation

1 code implementation · 2 Oct 2021 · Josiah Wong, Viktor Makoviychuk, Anima Anandkumar, Yuke Zhu

Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.

Robot Manipulation

Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World TriFinger

1 code implementation · 22 Aug 2021 · Arthur Allshire, Mayank Mittal, Varun Lodaya, Viktor Makoviychuk, Denys Makoviichuk, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Ankur Handa, Animesh Garg

We present a system for learning a challenging dexterous manipulation task: moving a cube to an arbitrary 6-DoF pose with only 3 fingers, trained with NVIDIA's IsaacGym simulator.

Position

Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning

no code implementations · 31 May 2021 · Anuj Mahajan, Mikayel Samvelyan, Lei Mao, Viktor Makoviychuk, Animesh Garg, Jean Kossaifi, Shimon Whiteson, Yuke Zhu, Animashree Anandkumar

Algorithms derived from Tesseract decompose the Q-tensor across agents and utilise low-rank tensor approximations to model agent interactions relevant to the task.

Learning Theory · Multi-agent Reinforcement Learning +3
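A rough illustration of the low-rank decomposition named in the Tesseract entry above (not the published code): the sketch below evaluates a joint Q-value from per-agent CP-style factor matrices. The agent count, action count, rank, and random factors are illustrative assumptions; in Tesseract the factors would be learned and state-conditioned.

```python
# Minimal sketch (not the published Tesseract code): a rank-R CP-style
# factorisation of the joint Q-tensor over n agents with |A| actions each.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions, rank = 3, 4, 2

# One |A| x R factor matrix per agent (illustrative random placeholders).
factors = [rng.normal(size=(n_actions, rank)) for _ in range(n_agents)]

def joint_q(joint_action):
    """Q(u1,...,un) = sum_r prod_i factors[i][u_i, r]."""
    prod = np.ones(rank)
    for agent, action in enumerate(joint_action):
        prod = prod * factors[agent][action]
    return prod.sum()

# The full joint Q-tensor has |A|^n entries, but only n * |A| * R numbers are stored.
print(joint_q((0, 2, 1)))
```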

Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?

6 code implementations · 18 Nov 2020 · Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip H. S. Torr, Mingfei Sun, Shimon Whiteson

Most recently developed approaches to cooperative multi-agent reinforcement learning in the centralized training with decentralized execution setting involve estimating a centralized, joint value function.

reinforcement-learning · Reinforcement Learning (RL) +2

Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience

no code implementations · 12 Oct 2018 · Yevgen Chebotar, Ankur Handa, Viktor Makoviychuk, Miles Macklin, Jan Issac, Nathan Ratliff, Dieter Fox

In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world.
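A rough illustration of the adaptation loop described above (not the paper's implementation): the sketch below adjusts the mean and standard deviation of one randomised simulation parameter so that simulated rollouts match held-out "real" rollouts. The toy dynamics, the single friction parameter, and the cross-entropy-style update are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): adapt a simulation randomisation
# distribution by matching simulated rollouts to real-world rollouts.
import numpy as np

rng = np.random.default_rng(0)
real_friction = 0.7                       # "real world" value, unknown to the optimiser

def rollout(friction, n_steps=20):
    # Toy 1-D rollout whose trajectory depends on the friction parameter.
    x, traj = 1.0, []
    for _ in range(n_steps):
        x = x * (1.0 - 0.1 * friction)
        traj.append(x)
    return np.array(traj)

real_traj = rollout(real_friction)
mu, sigma = 0.3, 0.3                      # initial randomisation distribution

for it in range(50):
    # Sample candidate simulation parameters and score them against real data.
    samples = rng.normal(mu, sigma, size=64)
    costs = np.array([np.mean((rollout(s) - real_traj) ** 2) for s in samples])
    elite = samples[np.argsort(costs)[:8]]        # keep the best-matching samples
    mu, sigma = elite.mean(), elite.std() + 1e-3  # cross-entropy-style update

print(f"adapted friction distribution: mean={mu:.3f}, std={sigma:.3f}")
```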

GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning

no code implementations · 12 Oct 2018 · Jacky Liang, Viktor Makoviychuk, Ankur Handa, Nuttapong Chentanez, Miles Macklin, Dieter Fox

Most Deep Reinforcement Learning (Deep RL) algorithms require a prohibitively large number of training samples for learning complex tasks.

Robotics
