Search Results for author: Gaurav Parmar

Found 7 papers, 4 papers with code

On the Content Bias in Fréchet Video Distance

no code implementations • 18 Apr 2024 • Songwei Ge, Aniruddha Mahapatra, Gaurav Parmar, Jun-Yan Zhu, Jia-Bin Huang

We show that FVD with features extracted from the recent large-scale self-supervised video models is less biased toward image quality.

Video Generation

One-Step Image Translation with Text-to-Image Models

1 code implementation • 18 Mar 2024 • Gaurav Parmar, Taesung Park, Srinivasa Narasimhan, Jun-Yan Zhu

In this work, we address two limitations of existing conditional diffusion models: their slow inference speed due to the iterative denoising process and their reliance on paired data for model fine-tuning.

Denoising • Translation

Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing

1 code implementation • CVPR 2022 • Gaurav Parmar, Yijun Li, Jingwan Lu, Richard Zhang, Jun-Yan Zhu, Krishna Kumar Singh

We propose a new method to invert and edit such complex images in the latent space of GANs, such as StyleGAN2.

On Aliased Resizing and Surprising Subtleties in GAN Evaluation

3 code implementations • CVPR 2022 • Gaurav Parmar, Richard Zhang, Jun-Yan Zhu

Furthermore, we show that if compression is used on real training images, FID can actually improve if the generated images are also subsequently compressed.

Image Generation
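The paper above studies how aliased (non-antialiased) image resizing distorts GAN evaluation metrics such as FID. As a rough illustration of the underlying signal-processing issue, the toy 1-D sketch below contrasts naive subsampling with a box-filtered downsample; the function names and this simplified formulation are assumptions for illustration, not the paper's actual code.

```python
def naive_downsample(signal, factor):
    # Take every `factor`-th sample with no low-pass filtering;
    # frequencies above the new Nyquist limit alias into low frequencies.
    return signal[::factor]

def antialiased_downsample(signal, factor):
    # Average each block of `factor` samples (a box low-pass filter)
    # before reducing the sampling rate.
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal) - factor + 1, factor)]

# A signal alternating +1/-1 at the Nyquist limit of the original rate:
sig = [1.0, -1.0] * 4
naive = naive_downsample(sig, 2)         # [1.0, 1.0, 1.0, 1.0] — aliased to a constant
smooth = antialiased_downsample(sig, 2)  # [0.0, 0.0, 0.0, 0.0] — correctly averaged out
```

The naive path turns a pure high-frequency signal into a constant, which is the kind of artifact that can silently bias a downstream metric computed on resized images.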

Dual Contradistinctive Generative Autoencoder

no code implementations CVPR 2021 Gaurav Parmar, Dacheng Li, Kwonjoon Lee, Zhuowen Tu

Our model, named dual contradistinctive generative autoencoder (DC-VAE), integrates an instance-level discriminative loss (maintaining the instance-level fidelity for the reconstruction/synthesis) with a set-level adversarial loss (encouraging the set-level fidelity for the reconstruction/synthesis), both being contradistinctive.

Image Generation • Image Reconstruction +1
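The DC-VAE abstract above describes combining an instance-level discriminative (contrastive) term with a set-level adversarial term. A minimal scalar sketch of that loss structure follows; the function names, the InfoNCE-style instance term, the non-saturating adversarial term, and the weighting `lam` are all illustrative assumptions, not the paper's implementation.

```python
import math

def instance_loss(sim_pos, sims_neg):
    # InfoNCE-style instance-level contrastive term:
    # -log( exp(s+) / (exp(s+) + sum_i exp(s_i-)) )
    denom = math.exp(sim_pos) + sum(math.exp(s) for s in sims_neg)
    return -math.log(math.exp(sim_pos) / denom)

def adversarial_loss(d_fake):
    # Non-saturating generator-side set-level term: -log D(fake),
    # where d_fake is the discriminator's probability on a generated sample.
    return -math.log(d_fake)

def combined_loss(sim_pos, sims_neg, d_fake, lam=1.0):
    # Total objective: instance-level term plus weighted set-level term.
    return instance_loss(sim_pos, sims_neg) + lam * adversarial_loss(d_fake)
```

With no negatives the contrastive term is zero (`instance_loss(0.0, []) == 0.0`), and the combined objective reduces to the adversarial term alone.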

Guided Variational Autoencoder for Disentanglement Learning

no code implementations CVPR 2020 Zheng Ding, Yifan Xu, Weijian Xu, Gaurav Parmar, Yang Yang, Max Welling, Zhuowen Tu

We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.

Disentanglement • General Classification +1
