Search Results for author: George Cazenavette

Found 7 papers, 4 with code

Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching

1 code implementation • 9 Oct 2023 • Ziyao Guo, Kai Wang, George Cazenavette, Hui Li, Kaipeng Zhang, Yang You

The ultimate goal of Dataset Distillation is to synthesize a small synthetic dataset such that a model trained on this synthetic set performs as well as a model trained on the full, real dataset.

Dataset Distillation by Matching Training Trajectories

5 code implementations • CVPR 2022 • George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu

To efficiently obtain the initial and target network parameters for large-scale datasets, we pre-compute and store training trajectories of expert networks trained on the real dataset.
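The trajectory-matching idea described above can be sketched in a few lines. The toy below is an illustration under stated assumptions, not the authors' implementation: the "network" is a single linear weight vector, the expert trajectory is a list of parameter snapshots from SGD on real data, and the matching loss compares a student trained on the synthetic set against a later expert snapshot, normalized by how far the expert moved. All function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_steps(w, X, y, lr, steps):
    # Plain SGD on a least-squares objective; stands in for training a network.
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# 1. Pre-compute and store an expert trajectory on the real dataset.
X_real = rng.normal(size=(64, 8))
w_true = rng.normal(size=8)
y_real = X_real @ w_true
trajectory = [np.zeros(8)]
w = trajectory[0]
for _ in range(10):
    w = sgd_steps(w, X_real, y_real, lr=0.1, steps=5)
    trajectory.append(w)

# 2. Matching loss: start a student at expert snapshot t, train it on the
#    (small) synthetic set, and measure its distance to snapshot t + M,
#    normalized by the distance the expert itself traveled.
def matching_loss(X_syn, y_syn, t, M=2, student_steps=10):
    w_start, w_target = trajectory[t], trajectory[t + M]
    w_student = sgd_steps(w_start.copy(), X_syn, y_syn, lr=0.1, steps=student_steps)
    return np.sum((w_student - w_target) ** 2) / np.sum((w_start - w_target) ** 2)
```

In the actual method the synthetic images are the thing being optimized, by differentiating this loss through the student's SGD steps; here the loss is only evaluated. Note that feeding the real data back in as the "synthetic" set drives the loss to zero, since the student then retraces the expert exactly.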

MixerGAN: An MLP-Based Architecture for Unpaired Image-to-Image Translation

1 code implementation • 28 May 2021 • George Cazenavette, Manuel Ladron De Guevara

While attention-based transformer networks achieve unparalleled success in nearly all language tasks, the large number of tokens (pixels) in images, coupled with the quadratic activation memory usage of attention, makes them prohibitive for problems in computer vision.
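The quadratic blow-up the abstract refers to is easy to quantify: self-attention materializes an n × n score matrix, so memory grows with the square of the token count. A back-of-the-envelope sketch (illustrative numbers only, per head, float32, ignoring everything but the score matrix):

```python
def attention_matrix_bytes(n_tokens, bytes_per_elem=4):
    # One n x n attention score matrix in float32.
    return n_tokens ** 2 * bytes_per_elem

# A 512-token sentence vs. a 256x256 image treated pixel-wise:
sentence = attention_matrix_bytes(512)        # 512^2 * 4 bytes = 1 MiB
image = attention_matrix_bytes(256 * 256)     # 65536^2 * 4 bytes = 16 GiB
print(sentence / 2**20, "MiB")
print(image / 2**30, "GiB")
```

Going from 512 language tokens to 65,536 pixel tokens multiplies the score-matrix memory by (65536/512)² = 16,384×, which is the gap MLP-based alternatives such as MixerGAN aim to sidestep.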

Image-to-Image Translation • Translation

On the Bias Against Inductive Biases

no code implementations • 28 May 2021 • George Cazenavette, Simon Lucey

Borrowing from the transformer models that revolutionized the field of natural language processing, self-supervised feature learning for visual tasks has also seen state-of-the-art success using these extremely deep, isotropic networks.

Reframing Neural Networks: Deep Structure in Overcomplete Representations

no code implementations • 10 Mar 2021 • Calvin Murdock, George Cazenavette, Simon Lucey

In comparison to classical shallow representation learning techniques, deep neural networks have achieved superior performance in nearly every application benchmark.

Adversarial Robustness • Model Selection • +1

Architectural Adversarial Robustness: The Case for Deep Pursuit

no code implementations • CVPR 2021 • George Cazenavette, Calvin Murdock, Simon Lucey

Despite their unmatched performance, deep neural networks remain susceptible to targeted attacks by nearly imperceptible levels of adversarial noise.
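A standard way to see how small perturbations can flip a prediction is a single-step, FGSM-style attack (Goodfellow et al.): nudge every input coordinate by ε in the loss-increasing direction. The toy below applies this to a fixed linear classifier; it illustrates the attack the abstract alludes to, not the deep-pursuit defense studied in the paper, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)             # fixed "trained" weights
x = rng.normal(size=16)             # a clean input
y = 1.0 if w @ x > 0 else -1.0      # label the classifier currently gets right

def margin(x):
    # y * score: positive means correctly classified, negative means fooled.
    return y * (w @ x)

# The gradient of the margin w.r.t. x is y * w, so stepping against its sign
# decreases the margin as fast as possible under an L-infinity budget eps.
eps = 0.3
x_adv = x - eps * np.sign(y * w)

print(margin(x), margin(x_adv))
```

Each coordinate moves by at most ε, so the perturbation is bounded and, for small ε in image space, nearly imperceptible, yet the margin drops by ε · Σ|wᵢ|, which grows with the input dimension.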

Adversarial Robustness
