1 code implementation • 15 Jun 2023 • Zhehui Huang, Sumeet Batra, Tao Chen, Rahul Krupani, Tushar Kumar, Artem Molchanov, Aleksei Petrenko, James A. Preiss, Zhaojing Yang, Gaurav S. Sukhatme
In addition to speed, such simulators need to model the physics of the robots and their interaction with the environment to a level acceptable for transferring policies learned in simulation to reality.
no code implementations • 23 May 2023 • Sumeet Batra, Bryon Tjanaka, Matthew C. Fontaine, Aleksei Petrenko, Stefanos Nikolaidis, Gaurav Sukhatme
Training generally capable agents that thoroughly explore their environment and learn new and diverse skills is a long-term goal of robot learning.
no code implementations • 20 May 2023 • Aleksei Petrenko, Arthur Allshire, Gavriel State, Ankur Handa, Viktor Makoviychuk
In this work, we propose algorithms and methods that enable learning dexterous object manipulation using simulated one- or two-armed robots equipped with multi-fingered hand end-effectors.
2 code implementations • 25 Oct 2022 • Ankur Handa, Arthur Allshire, Viktor Makoviychuk, Aleksei Petrenko, Ritvik Singh, Jingzhou Liu, Denys Makoviichuk, Karl Van Wyk, Alexander Zhurkevich, Balakumar Sundaralingam, Yashraj Narang, Jean-Francois Lafleche, Dieter Fox, Gavriel State
Our policies are trained to adapt to a wide range of conditions in simulation.
1 code implementation • 17 Jul 2021 • Aleksei Petrenko, Erik Wijmans, Brennan Shacklett, Vladlen Koltun
We present Megaverse, a new 3D simulation platform for reinforcement learning and embodied AI research.
1 code implementation • 5 Jul 2021 • Shashank Hegde, Anssi Kanervisto, Aleksei Petrenko
We are currently in the process of merging the augmented simulator with the main ViZDoom code repository.
1 code implementation • ICLR 2021 • Brennan Shacklett, Erik Wijmans, Aleksei Petrenko, Manolis Savva, Dhruv Batra, Vladlen Koltun, Kayvon Fatahalian
We accelerate deep reinforcement learning-based training in visually complex 3D environments by two orders of magnitude over prior work, realizing end-to-end training speeds of over 19,000 frames of experience per second on a single GPU and up to 72,000 frames per second on a single eight-GPU machine.
4 code implementations • ICML 2020 • Aleksei Petrenko, Zhehui Huang, Tushar Kumar, Gaurav Sukhatme, Vladlen Koltun
In this work, we aim to solve this problem by optimizing the efficiency and resource utilization of reinforcement learning algorithms instead of relying on distributed computation.
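The core idea in this last entry is asynchronous experience collection: rollout workers and the learner run concurrently rather than in lockstep, keeping a single machine fully utilized. As a loose, hypothetical sketch of that producer/consumer pattern (not the paper's actual implementation, which uses shared-memory buffers and GPU-batched inference), the structure can be illustrated with threads and a shared queue:

```python
import queue
import threading

def rollout_worker(out_q, worker_id, num_steps):
    # Hypothetical toy rollout: a real worker would step an environment
    # with the current policy; here each step just yields a reward of 1.0.
    for _ in range(num_steps):
        out_q.put(("experience", worker_id, 1.0))
    out_q.put(("done", worker_id, None))

def collect(num_workers=4, steps_per_worker=8):
    # All workers push into one shared queue; the "learner" (here, the
    # main thread) consumes experience as soon as any worker produces it,
    # instead of synchronizing every worker at each step.
    q = queue.Queue()
    threads = [
        threading.Thread(target=rollout_worker, args=(q, i, steps_per_worker))
        for i in range(num_workers)
    ]
    for t in threads:
        t.start()

    total_reward, finished = 0.0, 0
    while finished < num_workers:
        tag, worker_id, reward = q.get()
        if tag == "done":
            finished += 1
        else:
            total_reward += reward

    for t in threads:
        t.join()
    return total_reward
```

Running `collect()` gathers 4 workers × 8 steps × 1.0 reward = 32.0, regardless of the order in which workers finish; that order-independence is what lets the learner avoid waiting on the slowest worker.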