no code implementations • 24 Aug 2023 • Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge".
no code implementations • 17 Apr 2023 • Rishi Hazra, Luc De Raedt
By adopting a neuro-symbolic approach, DERRL combines relational representations and constraints from symbolic planning with deep learning to extract interpretable policies.
1 code implementation • ICCV 2023 • Rishi Hazra, Brian Chen, Akshara Rai, Nitin Kamra, Ruta Desai
The goal in EgoTV is to verify the execution of tasks from egocentric videos based on the natural language description of these tasks.
1 code implementation • 11 May 2021 • Rishi Hazra, Sonu Dixit, Sayambhu Sen
Human language has been described as a system that makes "use of finite means to express an unlimited array of thoughts".
1 code implementation • 9 May 2021 • Rishi Hazra, Sonu Dixit
It comprises a 2D grid environment with a set of agents (a stationary speaker and a mobile listener connected via a communication channel) that are exposed to a continual stream of tasks in a partially observable setting.
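For context, a toy version of such a speaker-listener setup fits in a few lines. The sketch below is an illustrative reconstruction, not the paper's released code; the class name, grid size, and move set are all assumptions.

```python
# Illustrative sketch (not the authors' code): a stationary speaker sees the
# task, while a mobile listener only sees its own position and must rely on
# the speaker's message to reach the goal.
import random

GRID = 5  # grid side length (assumed, for illustration)

class SpeakerListenerEnv:
    MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

    def __init__(self, num_tasks=4):
        self.num_tasks = num_tasks
        self.reset()

    def reset(self):
        self.task = random.randrange(self.num_tasks)  # hidden from the listener
        self.goal = (random.randrange(GRID), random.randrange(GRID))
        self.listener = (0, 0)
        return self.task  # the speaker's observation

    def step(self, move):
        # The listener acts on its local view plus the message; it never
        # observes the task directly (partial observability).
        dx, dy = self.MOVES[move]
        x, y = self.listener
        self.listener = (min(max(x + dx, 0), GRID - 1),
                         min(max(y + dy, 0), GRID - 1))
        done = self.listener == self.goal
        return self.listener, (1.0 if done else 0.0), done
```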
no code implementations • NAACL 2021 • Rishi Hazra, Parag Dutta, Shubham Gupta, Mohammed Abdul Qaathir, Ambedkar Dukkipati
We empirically demonstrate that the proposed approach further reduces the data requirements of state-of-the-art AL strategies by approximately 3-25% in absolute terms on multiple NLP tasks while achieving the same performance with no additional computation overhead.
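As background on what such AL strategies look like, the sketch below implements plain pool-based active learning with least-confidence sampling, the standard baseline against which data-requirement reductions like this are measured. It is not the proposed approach; the scikit-learn classifier, batch size, and function name are illustrative choices.

```python
# Generic pool-based active learning with least-confidence sampling.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning(X_pool, y_pool, X_seed, y_seed, budget, batch=16):
    X_lab, y_lab = X_seed.copy(), y_seed.copy()
    mask = np.ones(len(X_pool), dtype=bool)  # True = still unlabeled
    model = LogisticRegression(max_iter=1000)
    for _ in range(budget // batch):
        model.fit(X_lab, y_lab)
        probs = model.predict_proba(X_pool[mask])
        scores = 1.0 - probs.max(axis=1)  # least-confidence: 1 - max prob
        # Map the top-scoring masked indices back to pool indices.
        pick = np.flatnonzero(mask)[np.argsort(scores)[-batch:]]
        X_lab = np.vstack([X_lab, X_pool[pick]])
        y_lab = np.concatenate([y_lab, y_pool[pick]])  # oracle labels
        mask[pick] = False
    return model.fit(X_lab, y_lab)
```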
1 code implementation • 9 Dec 2020 • Rishi Hazra, Sonu Dixit, Sayambhu Sen
To address this, prior work has proposed limited channel capacity as an important constraint for learning highly compositional languages.
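To make the constraint concrete: a capacity-limited channel is commonly realized as a short message over a small discrete vocabulary, trained end-to-end with the straight-through Gumbel-softmax. The PyTorch sketch below shows that generic construction under assumed sizes (VOCAB, MSG_LEN, HIDDEN); it is not the paper's implementation.

```python
# Generic capacity-limited communication channel: the speaker must compress
# its observation into MSG_LEN symbols from a small vocabulary, keeping the
# pipeline differentiable via the straight-through Gumbel-softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MSG_LEN, HIDDEN = 8, 3, 64  # small vocab = limited capacity (assumed)

class Speaker(nn.Module):
    def __init__(self, obs_dim):
        super().__init__()
        self.enc = nn.Linear(obs_dim, HIDDEN)
        self.out = nn.Linear(HIDDEN, MSG_LEN * VOCAB)

    def forward(self, obs, tau=1.0):
        logits = self.out(torch.relu(self.enc(obs))).view(-1, MSG_LEN, VOCAB)
        # Discrete one-hot symbols in the forward pass, soft gradients backward.
        return F.gumbel_softmax(logits, tau=tau, hard=True)
```

Shrinking VOCAB or MSG_LEN tightens the bottleneck, which is the sense in which channel capacity acts as a constraint on the emergent language.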
no code implementations • 6 Apr 2020 • Shubham Gupta, Rishi Hazra, Ambedkar Dukkipati
One way to coordinate is by learning to communicate with each other.
1 code implementation • 1 Nov 2019 • Rishi Hazra, Parag Dutta, Shubham Gupta, Mohammed Abdul Qaathir, Ambedkar Dukkipati
We empirically demonstrate that the proposed approach further reduces the data requirements of state-of-the-art AL strategies by approximately 3-25% in absolute terms on multiple NLP tasks while achieving the same performance with virtually no additional computation overhead.