Search Results for author: Lorenzo Luzi

Found 12 papers, 3 papers with code

Using Higher-Order Moments to Assess the Quality of GAN-generated Image Features

no code implementations • 31 Oct 2023 Lorenzo Luzi, Helen Jenne, Ryan Murray, Carlos Ortiz Marrero

The rapid advancement of Generative Adversarial Networks (GANs) necessitates robust methods for evaluating these models.

Self-Consuming Generative Models Go MAD

no code implementations • 4 Jul 2023 Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk

Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models.

Frozen Overparameterization: A Double Descent Perspective on Transfer Learning of Deep Neural Networks

no code implementations • 20 Nov 2022 Yehuda Dar, Lorenzo Luzi, Richard G. Baraniuk

We study how the generalization behavior of transfer learning is affected by the dataset size in the source and target tasks, the number of transferred layers that are kept frozen in the target DNN training, and the similarity between the source and target tasks.

Image Classification · Transfer Learning
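For intuition, here is a minimal PyTorch sketch of the kind of layer-freezing setup studied above. The `make_target_model` helper, the ResNet-18 backbone, and the block-level freezing granularity are illustrative assumptions, not the authors' exact protocol.

```python
import torch.nn as nn
from torchvision import models

def make_target_model(num_frozen_blocks, num_classes):
    """Hypothetical helper: a source-pretrained backbone with its first
    `num_frozen_blocks` child blocks frozen for target-task training."""
    model = models.resnet18(weights="IMAGENET1K_V1")  # stand-in source model
    blocks = list(model.children())
    for block in blocks[:num_frozen_blocks]:
        for p in block.parameters():
            p.requires_grad = False                   # transferred and frozen
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # fresh target head
    return model

# Freeze the stem and the first two residual stages, for example:
target_model = make_target_model(num_frozen_blocks=6, num_classes=10)
```

Varying `num_frozen_blocks` against target dataset size is the kind of sweep the paper's double descent analysis considers.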

Boomerang: Local sampling on image manifolds using diffusion models

no code implementations • 21 Oct 2022 Lorenzo Luzi, Paul M Mayer, Josue Casco-Rodriguez, Ali Siahkoohi, Richard G. Baraniuk

As implied by its name, Boomerang local sampling involves adding noise to an input image, moving it closer to the latent space, and then mapping it back to the image manifold through a partial reverse diffusion process.

Data Augmentation · Image Enhancement · +3
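The procedure described above is easy to sketch. Below is a hedged Python version assuming a Hugging Face diffusers-style noise scheduler (with `set_timesteps` already called) and a noise-prediction UNet; the name `boomerang` and the `t_boom` parameter are illustrative, not the paper's exact implementation.

```python
import torch

@torch.no_grad()
def boomerang(x0, unet, scheduler, t_boom):
    """Local sampling sketch: diffuse an image partway toward the latent
    space, then map it back with a partial reverse diffusion process."""
    noise = torch.randn_like(x0)
    t = torch.tensor([t_boom], device=x0.device)
    xt = scheduler.add_noise(x0, noise, t)              # partial forward process
    for step in scheduler.timesteps[scheduler.timesteps <= t_boom]:
        eps = unet(xt, step).sample                     # predicted noise
        xt = scheduler.step(eps, step, xt).prev_sample  # one reverse step
    return xt  # a new sample near x0 on the image manifold
```

Smaller `t_boom` values keep samples closer to the input image; larger values allow more variation, which is what makes the sampling "local".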

NFT-K: Non-Fungible Tangent Kernels

1 code implementation • 11 Oct 2021 Sina Alemohammad, Hossein Babaei, CJ Barberan, Naiming Liu, Lorenzo Luzi, Blake Mason, Richard G. Baraniuk

To further contribute interpretability with respect to classification and the individual layers, we develop a new network as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually, in contrast to past work that represents the entire network via a single neural tangent kernel.
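As a rough illustration of the per-layer decomposition (not the linked repository's code), the empirical NTK is a sum of Jacobian outer products over parameter groups, so each layer contributes its own kernel block. A PyTorch sketch for a scalar-output network:

```python
import torch

def layer_jacobian(model, layer_params, xs):
    """One row per input: gradient of the scalar output w.r.t. the
    parameters of a single layer."""
    rows = []
    for x in xs:
        out = model(x.unsqueeze(0)).squeeze()
        grads = torch.autograd.grad(out, layer_params)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    return torch.stack(rows)                   # shape (n, num_layer_params)

def per_layer_ntks(model, x1, x2):
    """One empirical NTK per layer: K_l = J_l(x1) @ J_l(x2)^T."""
    kernels = {}
    for name, module in model.named_children():
        params = [p for p in module.parameters() if p.requires_grad]
        if not params:
            continue                           # skip parameter-free layers
        J1 = layer_jacobian(model, params, x1)
        J2 = layer_jacobian(model, params, x2)
        kernels[name] = J1 @ J2.T              # (n1, n2) kernel block
    return kernels
```

Summing the blocks recovers the usual single-kernel NTK; keeping them separate is what enables the per-layer interpretability the paper targets.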

Evaluating generative networks using Gaussian mixtures of image features

no code implementations • 8 Oct 2021 Lorenzo Luzi, Carlos Ortiz Marrero, Nile Wynar, Richard G. Baraniuk, Michael J. Henry

We define a performance measure, which we call WaM, on two sets of images: we use Inception-v3 (or another classifier) to featurize the images, estimate a GMM for each set, and compare the two GMMs using the restricted $2$-Wasserstein distance.
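A minimal NumPy/SciPy sketch of the two ingredients, assuming features have already been extracted and the GMMs fitted (e.g., with sklearn): the closed-form Gaussian $2$-Wasserstein distance, and the restricted (mixture-level) distance as a small linear program over component couplings. The function names are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import linprog

def gaussian_w2_sq(mu1, S1, mu2, S2):
    """Closed-form squared 2-Wasserstein distance between two Gaussians."""
    r = sqrtm(S2)
    cross = np.real(sqrtm(r @ S1 @ r))
    return float(((mu1 - mu2) ** 2).sum() + np.trace(S1 + S2 - 2 * cross))

def restricted_w2_sq(w1, mus1, covs1, w2, mus2, covs2):
    """Restricted W2^2 between GMMs: optimal transport on the mixture
    weights with pairwise Gaussian W2^2 as the ground cost."""
    k1, k2 = len(w1), len(w2)
    cost = np.array([[gaussian_w2_sq(mus1[i], covs1[i], mus2[j], covs2[j])
                      for j in range(k2)] for i in range(k1)])
    # The coupling (flattened row-major) must have marginals w1 and w2.
    A_eq = np.vstack([np.kron(np.eye(k1), np.ones(k2)),  # row sums = w1
                      np.tile(np.eye(k2), k1)])          # col sums = w2
    res = linprog(cost.ravel(), A_eq=A_eq,
                  b_eq=np.concatenate([w1, w2]), bounds=(0, None))
    return res.fun
```

Restricting the couplings to mixtures of Gaussians is what keeps the problem a finite linear program rather than a full continuous optimal transport problem.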

Double Descent and Other Interpolation Phenomena in GANs

no code implementations • 7 Jun 2021 Lorenzo Luzi, Yehuda Dar, Richard Baraniuk

We show that overparameterization can improve generalization performance and accelerate the training process.

Ensembles of Generative Adversarial Networks for Disconnected Data

no code implementations • 25 Jun 2020 Lorenzo Luzi, Randall Balestriero, Richard G. Baraniuk

They can be represented in two ways: with an ensemble of networks or with a single network with a truncated latent space.
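The two representations are easy to contrast in code. A hedged sketch, in which `generators`, `region_samplers`, and the truncation rule are illustrative stand-ins rather than the paper's construction:

```python
import torch

def sample_ensemble(generators, weights, z_dim, n):
    """Ensemble view: each sample is drawn from one of several generators,
    picked according to the mixture weights."""
    w = torch.as_tensor(weights, dtype=torch.float)
    idx = torch.multinomial(w, n, replacement=True)
    z = torch.randn(n, z_dim)
    return torch.stack([generators[i](z[k:k + 1]).squeeze(0)
                        for k, i in enumerate(idx)])

def sample_truncated(generator, region_samplers, weights, z_dim, n):
    """Single-network view: one generator whose latent space is truncated
    into disjoint regions, one per component; `region_samplers` maps a
    region index to a latent sampler."""
    w = torch.as_tensor(weights, dtype=torch.float)
    idx = torch.multinomial(w, n, replacement=True)
    z = torch.stack([region_samplers[int(i)](z_dim) for i in idx])
    return generator(z)
```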

Subspace Fitting Meets Regression: The Effects of Supervision and Orthonormality Constraints on Double Descent of Generalization Errors

no code implementations • ICML 2020 Yehuda Dar, Paul Mayer, Lorenzo Luzi, Richard G. Baraniuk

We study the linear subspace fitting problem in the overparameterized setting, where the estimated subspace can perfectly interpolate the training examples.

A Goodness of Fit Measure for Generative Networks

no code implementations • 25 Sep 2019 Lorenzo Luzi, Randall Balestriero, Richard Baraniuk

We define a goodness of fit measure for generative networks that captures how well the network can generate the training data, a necessary condition for learning the true data distribution.
