Search Results for author: Muhammad Zubair Irshad

Found 8 papers, 5 papers with code

NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields

no code implementations • 1 Apr 2024 • Muhammad Zubair Irshad, Sergey Zakharov, Vitor Guizilini, Adrien Gaidon, Zsolt Kira, Rares Ambrus

Given the capabilities of neural fields in densely representing a 3D scene from 2D images, we ask the question: Can we scale their self-supervised pretraining, specifically using masked autoencoders, to generate effective 3D representations from posed RGB images?

3D Object Detection • object-detection • +3
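
The core idea the abstract names, masking portions of a 3D representation and reconstructing them as a self-supervised pretraining objective, can be illustrated with a short PyTorch sketch. Everything below (a voxelized RGB-plus-density input grid, the patch size, transformer widths, and mask ratio) is an illustrative assumption, not NeRF-MAE's actual architecture:

```python
# Minimal sketch of masked-autoencoder pretraining over a 3D feature grid.
# All shapes and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Voxel3DMAE(nn.Module):
    def __init__(self, patch=4, dim=256, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        in_dim = 4 * patch ** 3            # RGB + density per voxel, flattened per patch
        self.embed = nn.Linear(in_dim, dim)
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.decoder = nn.Linear(dim, in_dim)             # reconstruct patch contents
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, grid):               # grid: (B, 4, D, H, W)
        B, C, _, _, _ = grid.shape
        p = self.patch
        # Cut the grid into non-overlapping p^3 patches and flatten each one.
        patches = grid.unfold(2, p, p).unfold(3, p, p).unfold(4, p, p)
        patches = patches.reshape(B, C, -1, p, p, p)
        patches = patches.permute(0, 2, 1, 3, 4, 5).reshape(B, -1, C * p ** 3)
        tokens = self.embed(patches)                      # (B, N, dim)
        # Randomly drop a fraction of tokens; encode only the visible ones.
        N = tokens.shape[1]
        n_keep = int(N * (1 - self.mask_ratio))
        idx = torch.rand(B, N, device=grid.device).argsort(dim=1)
        keep = idx[:, :n_keep]
        visible = torch.gather(
            tokens, 1, keep.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))
        encoded = self.encoder(visible)
        # Scatter encoded tokens back; masked slots get a learned mask token.
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, full.shape[-1]), encoded)
        recon = self.decoder(full)
        # Loss over all patches for brevity; an MAE proper restricts it to
        # the masked positions only.
        return F.mse_loss(recon, patches)
```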

FSD: Fast Self-Supervised Single RGB-D to Categorical 3D Objects

no code implementations • 19 Oct 2023 • Mayank Lunayach, Sergey Zakharov, Dian Chen, Rares Ambrus, Zsolt Kira, Muhammad Zubair Irshad

In this work, we address the challenging task of 3D object recognition without the reliance on real-world 3D labeled data.

3D Object Recognition • 6D Pose Estimation

NeO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes

1 code implementation • ICCV 2023 • Muhammad Zubair Irshad, Sergey Zakharov, Katherine Liu, Vitor Guizilini, Thomas Kollar, Adrien Gaidon, Zsolt Kira, Rares Ambrus

NeO 360's representation allows us to learn from a large collection of unbounded 3D scenes while offering generalizability to new views and novel scenes from as few as a single image during inference.

Generalizable Novel View Synthesis • Novel View Synthesis
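
The single-image generalization the abstract describes rests on conditioning a neural field on image features, so each queried 3D point is decoded together with features sampled at its projection into the input view. The sketch below is a minimal pixelNeRF-style stand-in under assumed names and sizes; NeO 360's actual image-conditioned representation of unbounded scenes is more involved:

```python
# Minimal sketch of an image-conditioned neural field: a CNN encodes the
# input view, and an MLP maps (3D point, sampled image feature) to RGB and
# density. Encoder, widths, and projection handling are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionedField(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        # Tiny CNN stands in for the paper's image encoder.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),          # RGB + density per 3D point
        )

    def forward(self, image, points, uv):
        # image: (B, 3, H, W); points: (B, N, 3) world coordinates;
        # uv: (B, N, 2) projection of each point into the image, in [-1, 1].
        feats = self.encoder(image)                          # (B, C, H, W)
        sampled = F.grid_sample(                             # per-point feature
            feats, uv.unsqueeze(1), align_corners=False)     # (B, C, 1, N)
        sampled = sampled.squeeze(2).permute(0, 2, 1)        # (B, N, C)
        out = self.mlp(torch.cat([points, sampled], dim=-1))
        rgb, density = out[..., :3].sigmoid(), out[..., 3:].relu()
        return rgb, density
```

Because the field is conditioned on image features rather than fit per scene, a model like this can render novel views of an unseen scene from a single input image at inference time, which is the behavior the abstract claims.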

SASRA: Semantically-aware Spatio-temporal Reasoning Agent for Vision-and-Language Navigation in Continuous Environments

1 code implementation • 26 Aug 2021 • Muhammad Zubair Irshad, Niluthpol Chowdhury Mithun, Zachary Seymour, Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar

This paper presents a novel approach for the Vision-and-Language Navigation (VLN) task in continuous 3D environments, which requires an autonomous agent to follow natural language instructions in unseen environments.

Vision and Language Navigation
