Search Results for author: Fereshteh Forghani

Found 3 papers, 1 paper with code

Can Generative Models Improve Self-Supervised Representation Learning?

no code implementations · 9 Mar 2024 · Arash Afkanpour, Vahid Reza Khazaie, Sana Ayromlou, Fereshteh Forghani

By directly conditioning generative models on a source image representation, our method enables the generation of diverse augmentations while maintaining the semantics of the source image, thus offering a richer set of data for self-supervised learning.

Representation Learning · Self-Supervised Learning
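The core idea in the abstract, conditioning a generative model on a source image's representation so that generated samples vary while the semantics are preserved, can be sketched with toy stand-ins. Everything below (the pooling "encoder", the noise-based "generator", all names and shapes) is hypothetical illustration, not the paper's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image):
    """Toy stand-in for a pretrained encoder: mean-pool the image
    into a fixed-length representation vector."""
    return image.reshape(-1, 4).mean(axis=1)

def conditional_generate(representation, n_views=4, noise_scale=0.1):
    """Toy stand-in for a representation-conditioned generative model:
    each 'augmentation' is a sample centred on the source representation,
    so the conditioning (semantics) is preserved while samples vary."""
    return representation + noise_scale * rng.standard_normal(
        (n_views, representation.shape[0]))

# A fake 8x8 source image and its representation.
image = rng.standard_normal((8, 8))
z = encode(image)

# Generated 'views' stay close to the source representation, giving
# semantically consistent positives for self-supervised learning.
views = conditional_generate(z)
print(views.shape)  # (4, 16)
```

In a real pipeline the generated views would replace or supplement hand-crafted augmentations as positive pairs for a contrastive or self-distillation objective.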

PolyOculus: Simultaneous Multi-view Image-based Novel View Synthesis

no code implementations · 28 Feb 2024 · Jason J. Yu, Tristan Aumentado-Armstrong, Fereshteh Forghani, Konstantinos G. Derpanis, Marcus A. Brubaker

This paper considers the problem of generative novel view synthesis (GNVS): generating novel, plausible views of a scene given a limited number of known views.

Novel View Synthesis
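The "simultaneous multi-view" framing in the title suggests sampling a set of target views jointly, conditioned on the known views, rather than one view at a time. The sketch below is a hypothetical stand-in for that set-based idea (the pose-as-shift "sampler" and all names are illustrative assumptions, not the PolyOculus model):

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_joint_sampler(known_views, target_poses, noise_scale=0.05):
    """Hypothetical stand-in for simultaneous (set-based) view
    generation: all target views are sampled in one call, each
    conditioned on a summary of the known views, instead of
    autoregressively one after another."""
    context = np.mean(known_views, axis=0)
    return np.stack([
        np.roll(context, int(p), axis=1)
        + noise_scale * rng.standard_normal(context.shape)
        for p in target_poses
    ])

known = rng.standard_normal((2, 8, 8))   # limited set of known views
new_views = toy_joint_sampler(known, target_poses=[1, 2, 3])
print(new_views.shape)  # (3, 8, 8)
```

Joint sampling lets the generated views constrain each other, which is one way to keep a set of novel views mutually consistent.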

Long-Term Photometric Consistent Novel View Synthesis with Diffusion Models

1 code implementation · ICCV 2023 · Jason J. Yu, Fereshteh Forghani, Konstantinos G. Derpanis, Marcus A. Brubaker

In this paper, we propose a novel generative model capable of producing a sequence of photorealistic images consistent with a specified camera trajectory and a single starting image.

Novel View Synthesis
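Generating a sequence of views from a single starting image and a camera trajectory can be sketched as an autoregressive loop: each new frame is sampled conditioned on the previous frame and the next pose. The sampler below is a toy stand-in (a pose-dependent shift plus small noise, so successive frames stay photometrically close); it is an illustrative assumption, not the paper's diffusion model:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_view_sampler(prev_frame, pose, noise_scale=0.05):
    """Hypothetical stand-in for one conditional sampling step:
    produce the next view from the previous frame and the relative
    camera pose. The small perturbation keeps consecutive frames
    close, mimicking photometric consistency along the trajectory."""
    shift = int(pose) % prev_frame.shape[1]
    frame = np.roll(prev_frame, shift, axis=1)
    return frame + noise_scale * rng.standard_normal(prev_frame.shape)

start = rng.standard_normal((8, 8))   # the single starting image
trajectory = [1, 1, 2]                # toy camera trajectory

frames = [start]
for pose in trajectory:
    # Each view is generated conditioned on the last generated view,
    # i.e. the sequence is built autoregressively.
    frames.append(toy_view_sampler(frames[-1], pose))

print(len(frames))  # 4
```

The key property the paper targets, long-term photometric consistency, corresponds here to each step perturbing the previous frame only slightly rather than sampling each view independently.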
