no code implementations • 19 Mar 2024 • Zaid Tasneem, Akshat Dave, Abhishek Singh, Kushagra Tiwary, Praneeth Vepakomma, Ashok Veeraraghavan, Ramesh Raskar
It learns photorealistic scene representations by decomposing each user's 3D views into personal and global NeRFs, sharing only the latter through a novel optimally weighted aggregation.
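The "optimally weighted aggregation of only the latter" can be illustrated with a minimal sketch: per-user global-model parameters are combined as a convex combination while personal parameters never leave the user. The function name, flattened-parameter representation, and toy weights below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def aggregate_global_params(user_params, weights):
    """Weighted aggregation of per-user *global* parameters.

    user_params: list of 1-D arrays (e.g. flattened global-NeRF weights)
    weights: per-user aggregation weights (normalized internally)
    Personal-NeRF parameters are simply never passed in here.
    """
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                      # convex combination
    stacked = np.stack(user_params)      # shape (n_users, n_params)
    return (w[:, None] * stacked).sum(axis=0)

# toy example: three users sharing four global parameters
params = [np.array([1.0, 0.0, 2.0, 1.0]),
          np.array([3.0, 1.0, 0.0, 1.0]),
          np.array([2.0, 2.0, 1.0, 1.0])]
agg = aggregate_global_params(params, [1.0, 1.0, 2.0])
```

How the weights are chosen "optimally" is the paper's contribution; the sketch only shows the aggregation step itself.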
no code implementations • 24 Dec 2023 • Nikhil Behari, Akshat Dave, Kushagra Tiwary, William Yang, Ramesh Raskar
3D modeling from satellite imagery is essential in environmental science, urban planning, agriculture, and disaster response.
no code implementations • ICCV 2023 • Tzofi Klinghoffer, Kushagra Tiwary, Nikhil Behari, Bhavya Agrawalla, Ramesh Raskar
In this paper, we formulate these four building blocks of imaging systems as a context-free grammar (CFG), which can be automatically searched over with a learned camera designer to jointly optimize the imaging system with task-specific perception models.
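To make the idea of searching a context-free grammar over imaging-system building blocks concrete, here is a toy sketch: a CFG whose non-terminals are the four building blocks and whose terminals are candidate components, expanded top-down to sample a design. The specific symbols and components are invented for illustration; the paper's actual grammar and learned search differ.

```python
import random

# Toy CFG over imaging-system building blocks (illustrative only).
# Keys are non-terminals; values are lists of possible productions.
GRAMMAR = {
    "SYSTEM": [["ILLUMINATION", "OPTICS", "SENSOR", "COMPUTE"]],
    "ILLUMINATION": [["ambient"], ["flash"], ["structured_light"]],
    "OPTICS": [["lens"], ["coded_aperture"], ["lens", "polarizer"]],
    "SENSOR": [["rgb"], ["monochrome"], ["event"]],
    "COMPUTE": [["cnn"], ["transformer"]],
}

def sample_design(symbol="SYSTEM", rng=random):
    """Sample one camera design by expanding the grammar top-down."""
    if symbol not in GRAMMAR:            # terminal: a concrete component
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    design = []
    for s in production:
        design.extend(sample_design(s, rng))
    return design

design = sample_design(rng=random.Random(0))
```

In the paper, sampling is replaced by a learned camera designer that scores expansions jointly with a task-specific perception model; the grammar just defines the valid design space.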
no code implementations • CVPR 2023 • Kushagra Tiwary, Akshat Dave, Nikhil Behari, Tzofi Klinghoffer, Ashok Veeraraghavan, Ramesh Raskar
By converting these objects into cameras, we can unlock exciting applications, including imaging beyond the camera's field-of-view and from seemingly impossible vantage points, e.g., from reflections on the human eye.
1 code implementation • 8 Dec 2022 • Kushagra Tiwary, Akshat Dave, Nikhil Behari, Tzofi Klinghoffer, Ashok Veeraraghavan, Ramesh Raskar
By converting these objects into cameras, we can unlock exciting applications, including imaging beyond the camera's field-of-view and from seemingly impossible vantage points, e.g., from reflections on the human eye.
no code implementations • 21 Apr 2022 • Tzofi Klinghoffer, Siddharth Somasundaram, Kushagra Tiwary, Ramesh Raskar
Cameras were originally designed using physics-based heuristics to capture aesthetic images.
1 code implementation • 11 Apr 2022 • Tzofi Klinghoffer, Kushagra Tiwary, Arkadiusz Balata, Vivek Sharma, Ramesh Raskar
In this paper, we show the utility of inverse rendering in learning representations that yield improved accuracy on downstream clustering, linear classification, and segmentation tasks. Central to our approach is a novel Leave-One-Out, Cycle Contrastive loss (LOOCC), which improves disentanglement of scene parameters and robustness to out-of-distribution lighting and viewpoints.
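As a rough illustration of the contrastive machinery such a loss builds on, here is a minimal InfoNCE-style batch loss in which each anchor is pulled toward its matching positive while all other batch items serve as negatives. This is a generic stand-in, not the actual LOOCC formulation (the leave-one-out and cycle terms are the paper's additions and are not reproduced here).

```python
import numpy as np

def infonce_loss(anchors, positives, temperature=0.1):
    """Generic batch contrastive (InfoNCE) loss.

    anchors, positives: (N, D) arrays; row i of `positives` is the
    positive for row i of `anchors`, and the other N-1 rows act as
    negatives. Returns the mean negative log-likelihood of matches.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                    # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

A sanity check on the behavior: perfectly matched anchor/positive pairs yield a loss near zero, while mismatched pairs yield a much larger loss.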
no code implementations • 29 Mar 2022 • Kushagra Tiwary, Tzofi Klinghoffer, Ramesh Raskar
We observe that shadows are a powerful cue that can constrain neural scene representations to learn SfS, and even outperform NeRF to reconstruct otherwise hidden geometry.