Search Results for author: M Ganesh Kumar

Found 4 papers, 4 papers with code

Compositional Learning of Visually-Grounded Concepts Using Reinforcement

1 code implementation • 8 Sep 2023 • Zijun Lin, Haidi Azaman, M Ganesh Kumar, Cheston Tan

Overall, our results are the first to demonstrate that RL agents can be trained to implicitly learn concepts and compositionality, and to solve more complex environments in a zero-shot fashion.

Navigate • reinforcement-learning • +2

DetermiNet: A Large-Scale Diagnostic Dataset for Complex Visually-Grounded Referencing using Determiners

1 code implementation • ICCV 2023 • Clarence Lee, M Ganesh Kumar, Cheston Tan

We find that current state-of-the-art visual grounding models do not perform well on the dataset, highlighting the limitations of existing models on reference and quantification tasks.

Visual Grounding

A nonlinear hidden layer enables actor-critic agents to learn multiple paired association navigation

1 code implementation • 25 Jun 2021 • M Ganesh Kumar, Cheston Tan, Camilo Libedinsky, Shih-Cheng Yen, Andrew Yong-Yi Tan

Biologically plausible classic actor-critic agents have been shown to learn to navigate to single reward locations, but which biologically plausible agents are able to learn multiple cue-reward location tasks has remained unclear.

Navigate

One-shot learning of paired association navigation with biologically plausible schemas

2 code implementations • 7 Jun 2021 • M Ganesh Kumar, Cheston Tan, Camilo Libedinsky, Shih-Cheng Yen, Andrew Yong-Yi Tan

But how schemas, conceptualized at Marr's computational level, correspond with neural implementations remains poorly understood, and a biologically plausible computational model of rodent learning has not been demonstrated.

One-Shot Learning
