Search Results for author: Supun Samarasekera

Found 9 papers, 2 papers with code

Unsupervised Domain Adaptation for Semantic Segmentation with Pseudo Label Self-Refinement

no code implementations • 25 Oct 2023 Xingchen Zhao, Niluthpol Chowdhury Mithun, Abhinav Rajvanshi, Han-Pang Chiu, Supun Samarasekera

Recent state-of-the-art (SOTA) UDA methods employ a teacher-student self-training approach, in which a teacher model generates pseudo-labels for the new data that in turn guide the training of the student model (a minimal sketch of this loop follows below).

Pseudo Label, Semantic Segmentation, +1
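The self-training loop described in the excerpt above is commonly implemented with a confidence-thresholded pseudo-labeling step and an exponential-moving-average (EMA) teacher. The following is a minimal, illustrative sketch assuming a PyTorch segmentation model that returns per-pixel class logits; the names (`student`, `teacher`, `self_training_step`) and the EMA/thresholding details are assumptions for illustration, not the paper's exact refinement method.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.999):
    # Exponential-moving-average update of teacher weights from the student.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def self_training_step(student, teacher, target_images, optimizer, conf_thresh=0.9):
    # One update on unlabeled target-domain images using teacher pseudo-labels.
    teacher.eval()
    with torch.no_grad():
        probs = torch.softmax(teacher(target_images), dim=1)  # (B, C, H, W)
        conf, pseudo_labels = probs.max(dim=1)                # per-pixel argmax labels
        pseudo_labels[conf < conf_thresh] = 255               # mask out low-confidence pixels

    logits = student(target_images)
    loss = F.cross_entropy(logits, pseudo_labels, ignore_index=255)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)                              # teacher slowly tracks the student
    return loss.item()
```

In this kind of setup the teacher is typically initialized as a copy of the student (e.g., via `copy.deepcopy`) and updated only through the EMA step, never by backpropagation.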

Cross-View Visual Geo-Localization for Outdoor Augmented Reality

no code implementations • 28 Mar 2023 Niluthpol Chowdhury Mithun, Kshitij Minhas, Han-Pang Chiu, Taragay Oskiper, Mikhail Sizintsev, Supun Samarasekera, Rakesh Kumar

Precise estimation of global orientation and location is critical to ensure a compelling outdoor Augmented Reality (AR) experience.

Pose Estimation

GraphMapper: Efficient Visual Navigation by Scene Graph Generation

no code implementations • 17 May 2022 Zachary Seymour, Niluthpol Chowdhury Mithun, Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar

Understanding the geometric relationships between objects in a scene is a core capability that enables both humans and autonomous agents to navigate new environments.

Graph Generation, Navigate, +2

SASRA: Semantically-aware Spatio-temporal Reasoning Agent for Vision-and-Language Navigation in Continuous Environments

1 code implementation • 26 Aug 2021 Muhammad Zubair Irshad, Niluthpol Chowdhury Mithun, Zachary Seymour, Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar

This paper presents a novel approach for the Vision-and-Language Navigation (VLN) task in continuous 3D environments, which requires an autonomous agent to follow natural language instructions in unseen environments.

Vision and Language Navigation

RGB2LIDAR: Towards Solving Large-Scale Cross-Modal Visual Localization

1 code implementation • 12 Sep 2020 Niluthpol Chowdhury Mithun, Karan Sikka, Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar

To enable large-scale evaluation, we introduce a new dataset containing over 550K pairs (covering a 143 km² area) of RGB and aerial LIDAR depth images (see the retrieval sketch below).

Visual Localization
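Cross-modal localization of this kind is typically framed as retrieval: embed the ground-level RGB query and the candidate aerial LiDAR depth images into a shared space, then rank locations by similarity. Below is a minimal, hypothetical sketch of that matching step; the function name and the assumption of precomputed embeddings are illustrative and do not reflect the paper's released code.

```python
import torch.nn.functional as F

def localize(rgb_query_emb, lidar_gallery_embs):
    # Rank gallery LiDAR-depth embeddings by cosine similarity to the RGB query.
    # rgb_query_emb: (D,) embedding of the ground-level RGB image.
    # lidar_gallery_embs: (N, D) embeddings of candidate aerial depth images.
    q = F.normalize(rgb_query_emb, dim=-1)
    g = F.normalize(lidar_gallery_embs, dim=-1)
    scores = g @ q                           # cosine similarity per candidate location
    return scores.argsort(descending=True)   # best-matching locations first
```

At the scale of 550K candidates, the exhaustive ranking above would normally be replaced by an approximate nearest-neighbor index, but the matching logic is the same.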

Semantically-Aware Attentive Neural Embeddings for Image-based Visual Localization

no code implementations • 8 Dec 2018 Zachary Seymour, Karan Sikka, Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar

Furthermore, we present an extensive study demonstrating the contribution of each component of our model, showing 8–15% and 4% improvements from adding semantic information and our proposed attention module, respectively.

Deep Attention, Image-Based Localization, +1

Utilizing Semantic Visual Landmarks for Precise Vehicle Navigation

no code implementations • 2 Jan 2018 Varun Murali, Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar

Experimental evaluations validate that injecting semantic information associated with visual landmarks using our approach achieves substantial accuracy improvements in GPS-denied navigation solutions for large-scale urban scenarios.

Semantic Segmentation
