no code implementations • 19 Dec 2023 • Kiran Lekkala, Henghui Bao, Sumedh Sontakke, Laurent Itti
We propose Value Explicit Pretraining (VEP), a method that learns generalizable representations for transfer reinforcement learning.
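The excerpt does not spell out the pretraining objective, so the following is only a rough sketch of what "value explicit" representation learning could look like: an encoder is trained contrastively so that observations with similar Monte Carlo returns land near each other in embedding space. The `Encoder` architecture, the threshold `tau`, the temperature `temp`, and the pairing rule are all our assumptions, not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy encoder; the architecture used in the paper is not given here."""
    def __init__(self, obs_dim=64, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim)
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def value_contrastive_loss(z, returns, tau=0.1, temp=0.5):
    """Pull together embeddings whose returns differ by less than tau."""
    sim = z @ z.t() / temp                                   # pairwise similarities
    pos = (returns[:, None] - returns[None, :]).abs() < tau  # value-similar pairs
    pos.fill_diagonal_(False)                                # ignore self-pairs
    log_p = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # row-wise log-softmax
    return -log_p[pos].mean()  # assumes the batch contains at least one pair

enc = Encoder()
obs, returns = torch.randn(16, 64), torch.rand(16)
loss = value_contrastive_loss(enc(obs), returns)
```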
no code implementations • 22 Nov 2023 • Kiran Lekkala, Eshan Bhargava, Yunhao Ge, Laurent Itti
We create a novel benchmark for evaluating a Deployable Lifelong Learning system for Visual Reinforcement Learning (RL) that is pretrained on a curated dataset, and propose a Scalable Lifelong Learning system capable of retaining knowledge from previously learned RL tasks.
no code implementations • 28 Oct 2023 • Kiran Lekkala, Chen Liu, Laurent Itti
We trained the model using data from a differential-drive robot in the CARLA simulator.
1 code implementation • 24 May 2023 • Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti
We propose a new Shared Knowledge Lifelong Learning (SKILL) challenge, which deploys a decentralized population of LL agents that each sequentially learn different tasks, with all agents operating independently and in parallel.
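As an illustration of the setting described above, here is a hypothetical sketch in which several agents learn their own task sequences independently and in parallel, and the learned per-task modules are then pooled. The function names, the use of a thread pool, and the consolidation-by-dictionary-merge step are our assumptions, not the SKILL protocol itself.

```python
from concurrent.futures import ThreadPoolExecutor

def train_on_task(task):
    """Placeholder for per-task training; returns a stand-in 'module'."""
    return f"weights_for_{task}"

def learn_sequence(tasks):
    # sequential lifelong learning inside a single agent
    return {t: train_on_task(t) for t in tasks}

def skill_round(task_splits):
    # one agent per task split, all running independently and in parallel
    with ThreadPoolExecutor(max_workers=len(task_splits)) as pool:
        per_agent = list(pool.map(learn_sequence, task_splits))
    shared = {}
    for modules in per_agent:  # knowledge sharing: pool all task modules
        shared.update(modules)
    return shared

print(skill_round([["mnist", "svhn"], ["cifar10", "stl10"]]))
```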
no code implementations • 20 Jan 2022 • Shixian Wen, Amanda Sofie Rios, Kiran Lekkala, Laurent Itti
Hence, we propose a two-stage Super-Sub framework and demonstrate that: (i) the framework improves overall classification performance by 3.3%, by first inferring a superclass using a generalist superclass-level network, and then using a specialized network for final subclass-level classification.
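Since the excerpt states the two-stage mechanism directly, a small sketch of the inference path may help: a generalist network predicts the superclass, then the sample is routed to a per-superclass specialist for the final subclass label. Class and module names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class SuperSub(nn.Module):
    def __init__(self, super_net, sub_nets):
        super().__init__()
        self.super_net = super_net               # stage 1: superclass prediction
        self.sub_nets = nn.ModuleList(sub_nets)  # stage 2: one specialist per superclass

    @torch.no_grad()
    def forward(self, x):
        sup = self.super_net(x).argmax(dim=-1)
        # route each sample to the specialist for its predicted superclass
        preds = [self.sub_nets[int(s)](xi.unsqueeze(0)).argmax(dim=-1)
                 for xi, s in zip(x, sup)]
        return sup, torch.cat(preds)

# toy usage: 2 superclasses, 5 subclasses each, linear stand-in networks
model = SuperSub(nn.Linear(8, 2), [nn.Linear(8, 5), nn.Linear(8, 5)])
print(model(torch.randn(4, 8)))
```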
no code implementations • 30 May 2021 • Kiran Lekkala, Laurent Itti
In this paper, we aim to improve exploration in black-box methods, particularly Evolution Strategies (ES), when applied to Reinforcement Learning (RL) problems where intermediate waypoints/subgoals are available.
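To ground the setting, here is a minimal vanilla-ES update paired with a subgoal-shaped fitness. The shaping rule (a fixed bonus per waypoint reached), the `rollout_fn` interface, and all hyperparameters are our assumptions; the paper's actual mechanism may differ.

```python
import numpy as np

def es_step(theta, fitness_fn, pop=32, sigma=0.1, lr=0.01, rng=None):
    """One Evolution Strategies gradient estimate and parameter update."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((pop, theta.size))
    rewards = np.array([fitness_fn(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize
    return theta + lr / (pop * sigma) * eps.T @ rewards

def shaped_fitness(params, rollout_fn, bonus=1.0):
    """Episode return plus a bonus per intermediate waypoint reached."""
    ep_return, n_waypoints_reached = rollout_fn(params)  # rollout_fn is assumed
    return ep_return + bonus * n_waypoints_reached

# toy check: maximize -||theta||^2 with a dummy rollout reporting 0 waypoints
theta = np.ones(4)
f = lambda p: shaped_fitness(p, lambda q: (-(q ** 2).sum(), 0))
for _ in range(100):
    theta = es_step(theta, f)
```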
no code implementations • 12 Jun 2020 • Kiran Lekkala, Laurent Itti
Our method improves performance on new, previously unseen environments, and is 1.5x faster than existing meta-learning methods using similar architectures.
no code implementations • 23 Nov 2019 • Kiran Lekkala, Sami Abu-El-Haija, Laurent Itti
Imitation learning has gained immense popularity because of its high sample efficiency.