Search Results for author: Santhosh K. Ramakrishnan

Found 9 papers, 4 papers with code

Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation

no code implementations CVPR 2022 Ziad Al-Halah, Santhosh K. Ramakrishnan, Kristen Grauman

In reinforcement learning for visual navigation, it is common to develop a model for each new task, and train that model from scratch with task-specific interactions in 3D environments.

Transfer Learning • Visual Navigation

Environment Predictive Coding for Embodied Agents

no code implementations 3 Feb 2021 Santhosh K. Ramakrishnan, Tushar Nagarajan, Ziad Al-Halah, Kristen Grauman

We introduce environment predictive coding, a self-supervised approach to learn environment-level representations for embodied agents.

Self-Supervised Learning

Occupancy Anticipation for Efficient Exploration and Navigation

1 code implementation ECCV 2020 Santhosh K. Ramakrishnan, Ziad Al-Halah, Kristen Grauman

State-of-the-art navigation methods leverage a spatial memory to generalize to new environments, but their occupancy maps are limited to capturing the geometric structures directly observed by the agent.

Decision Making • Efficient Exploration • +1

An Exploration of Embodied Visual Exploration

1 code implementation 7 Jan 2020 Santhosh K. Ramakrishnan, Dinesh Jayaraman, Kristen Grauman

Embodied computer vision considers perception for robots in novel, unstructured environments.

Benchmarking

Emergence of Exploratory Look-Around Behaviors through Active Observation Completion

1 code implementation Science Robotics 2019 Santhosh K. Ramakrishnan, Dinesh Jayaraman, Kristen Grauman

Standard computer vision systems assume access to intelligently captured inputs (e.g., photos from a human photographer), yet autonomously capturing good observations is a major challenge in itself.

Active Observation Completion

Sidekick Policy Learning for Active Visual Exploration

no code implementations ECCV 2018 Santhosh K. Ramakrishnan, Kristen Grauman

We consider an active visual exploration scenario, where an agent must intelligently select its camera motions to efficiently reconstruct the full environment from only a limited set of narrow field-of-view glimpses.

CoMaL Tracking: Tracking Points at the Object Boundaries

no code implementations 7 Jun 2017 Santhosh K. Ramakrishnan, Swarna Kamlam Ravindran, Anurag Mittal

Experiments show improvements over a simple re-detect-and-match framework, as well as over KLT, in terms of speed and accuracy on different real-world applications, especially at object boundaries.

Object Point Tracking

An Empirical Evaluation of Visual Question Answering for Novel Objects

no code implementations CVPR 2017 Santhosh K. Ramakrishnan, Ambar Pal, Gaurav Sharma, Anurag Mittal

We study the problem of answering questions about images in the harder setting, where the test questions and corresponding images contain novel objects, which were not queried about in the training data.

Question Answering • Visual Question Answering
