Search Results for author: D. A. Forsyth

Found 8 papers, 1 paper with code

Make It So: Steering StyleGAN for Any Image Inversion and Editing

no code implementations • 27 Apr 2023 • Anand Bhattad, Viraj Shah, Derek Hoiem, D. A. Forsyth

StyleGAN's disentangled style representation enables powerful image editing by manipulating the latent variables, but accurately mapping real-world images to their latent variables (GAN inversion) remains a challenge.
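
The listing gives no method details, but the baseline the abstract alludes to is optimization-based inversion: fit a latent code so the generator reproduces a target image. A minimal sketch, assuming a hypothetical pretrained generator `G` that maps a (1, 512) latent to an image tensor (this is not the paper's method):

```python
import torch

def invert(G, target, steps=500, lr=0.05):
    """Optimize a latent code w so that G(w) reconstructs `target`.

    Assumptions: G is a pretrained generator (e.g. a StyleGAN mapping a
    (1, 512) latent to an image) and `target` matches G's output shape.
    """
    w = torch.zeros(1, 512, requires_grad=True)  # assumed latent size
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(w), target)  # pixel loss only
        loss.backward()
        opt.step()
    return w.detach()
```

Practical inverters add perceptual losses and invert into StyleGAN's extended W+ space; the pixel-only loss above is just the skeleton.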

StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions

no code implementations • 20 May 2022 • Anand Bhattad, D. A. Forsyth

We propose a novel method, StyLitGAN, for relighting and resurfacing generated images in the absence of labeled data.
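
The abstract here does not explain the mechanism; the sketch below shows only the general family such methods belong to, editing a generated image by moving its latent code along learned directions. `G` and `directions` are hypothetical stand-ins, not StyLitGAN's actual components:

```python
import torch

def relight(G, w, directions, strength=1.0):
    """Render one image per latent offset applied to the code w.

    Hypothetical pieces: G maps a (1, 512) latent to an image, and each
    row of `directions` (shape (k, 512)) is a learned offset assumed to
    change illumination while preserving scene content.
    """
    images = [G(w + strength * d.unsqueeze(0)) for d in directions]
    return torch.cat(images, dim=0)
```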

SIRfyN: Single Image Relighting from your Neighbors

no code implementations • 8 Dec 2021 • D. A. Forsyth, Anand Bhattad, Pranav Asthana, Yuanyi Zhong, Yu-Xiong Wang

Novel theory shows that one can use similar scenes to estimate the different lightings that apply to a given scene, with bounded expected error.

Data Augmentation • Image Relighting

Intrinsic Image Decomposition using Paradigms

no code implementations • 20 Nov 2020 • D. A. Forsyth, Jason J. Rock

The best modern intrinsic image methods learn a map from image to albedo using rendered models and human judgements.

Intrinsic Image Decomposition
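
Behind any image-to-albedo map sits the Lambertian intrinsic-image model: each pixel of the image I factors into albedo A times shading S. A small numpy sketch of that identity (illustrative only, not the paper's pipeline):

```python
import numpy as np

def shading_from_albedo(image, albedo, eps=1e-6):
    """Recover shading S = I / A under the model I = A * S."""
    return image / np.clip(albedo, eps, None)

# Worked round trip on synthetic data: colored albedo times gray shading.
rng = np.random.default_rng(0)
albedo = rng.uniform(0.2, 1.0, (4, 4, 3))   # per-channel reflectance
shading = rng.uniform(0.2, 1.0, (4, 4, 1))  # grayscale illumination
image = albedo * shading
assert np.allclose(shading_from_albedo(image, albedo), shading)
```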

Unrestricted Adversarial Examples via Semantic Manipulation

1 code implementation • ICLR 2020 • Anand Bhattad, Min Jin Chong, Kaizhao Liang, Bo Li, D. A. Forsyth

Machine learning models, especially deep neural networks (DNNs), have been shown to be vulnerable to adversarial examples, which are carefully crafted samples with small-magnitude perturbations.

Colorization • Image Captioning +1
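
For contrast with the unrestricted, semantic attacks this paper studies, the small-perturbation attacks the abstract refers to have a canonical one-step instance, the fast gradient sign method (FGSM). A minimal sketch, assuming a differentiable classifier `model` and inputs scaled to [0, 1]:

```python
import torch

def fgsm(model, x, y, eps=8 / 255):
    """One-step L-infinity attack: x' = x + eps * sign(grad_x loss).

    `model` is an assumed differentiable classifier; x is a batch of
    images in [0, 1] and y their true labels.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

The paper's colorization- and texture-based attacks are precisely not bounded this way, which is what "unrestricted" means in the title.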

Fast, Diverse and Accurate Image Captioning Guided By Part-of-Speech

no code implementations • CVPR 2019 • Aditya Deshpande, Jyoti Aneja, Li-Wei Wang, Alexander Schwing, D. A. Forsyth

We achieve the trifecta: (1) High accuracy for the diverse captions as evaluated by standard captioning metrics and user studies; (2) Faster computation of diverse captions compared to beam search and diverse beam search; and (3) High diversity as evaluated by counting novel sentences, distinct n-grams, and mutual overlap (i.e., mBleu-4) scores.

Caption Generation • Image Captioning
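
One of the diversity measures named in the abstract, distinct n-grams, is simple enough to show directly. A hedged sketch with naive whitespace tokenization (not the paper's evaluation code):

```python
def distinct_ngrams(captions, n=4):
    """Fraction of n-grams that are unique across a set of captions."""
    grams = []
    for caption in captions:
        tokens = caption.split()  # naive tokenization, illustration only
        grams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(grams)) / max(len(grams), 1)

# Two near-duplicate captions yield 6 distinct bigram types out of 10
# bigrams total, so the score is 0.6; higher means more diverse.
print(distinct_ngrams(["a dog runs in the park",
                       "a dog runs in the grass"], n=2))
```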

Quantitative Evaluation of Style Transfer

no code implementations • 31 Mar 2018 • Mao-Chuang Yeh, Shuai Tang, Anand Bhattad, D. A. Forsyth

Style transfer methods produce a transferred image that is a rendering of a content image in the manner of a style image.

Style Transfer
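
The phrase "in the manner of a style image" is usually made quantitative through Gram-matrix feature statistics (Gatys et al.); the sketch below measures style similarity that way. This is one plausible measure, not necessarily the metric this paper proposes; `feat_a` and `feat_b` are assumed deep feature maps of the two images:

```python
import torch

def gram(feat):
    """Gram matrix of a (C, H, W) feature map: channel co-occurrences."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def style_distance(feat_a, feat_b):
    """Frobenius distance between Gram matrices of two feature maps."""
    return torch.norm(gram(feat_a) - gram(feat_b))
```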
