Search Results for author: Carl-Johann Simon-Gabriel

Found 15 papers, 8 papers with code

Robust NAS under adversarial training: benchmark, theory, and beyond

no code implementations • 19 Mar 2024 Yongtao Wu, Fanghui Liu, Carl-Johann Simon-Gabriel, Grigorios G Chrysos, Volkan Cevher

Recent developments in neural architecture search (NAS) emphasize the importance of considering architectures that are robust against malicious data.

Learning Theory, Neural Architecture Search
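For context, the adversarial training setup that such robustness benchmarks build on is commonly written as a min-max problem (standard formulation, not specific to this paper):

$$\min_{\theta} \; \mathbb{E}_{(x,y) \sim \mathcal{D}} \Big[ \max_{\|\delta\| \le \epsilon} \mathcal{L}\big(f_\theta(x + \delta),\, y\big) \Big]$$

where the inner maximization finds a worst-case perturbation $\delta$ within an $\epsilon$-ball and the outer minimization trains the network $f_\theta$ against it.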

Unsupervised Open-Vocabulary Object Localization in Videos

no code implementations ICCV 2023 Ke Fan, Zechen Bai, Tianjun Xiao, Dominik Zietlow, Max Horn, Zixu Zhao, Carl-Johann Simon-Gabriel, Mike Zheng Shou, Francesco Locatello, Bernt Schiele, Thomas Brox, Zheng Zhang, Yanwei Fu, Tong He

In this paper, we show that recent advances in video representation learning and pre-trained vision-language models allow for substantial improvements in self-supervised video object localization.

Object, Object Localization +1

Object-Centric Multiple Object Tracking

1 code implementation ICCV 2023 Zixu Zhao, Jiaze Wang, Max Horn, Yizhuo Ding, Tong He, Zechen Bai, Dominik Zietlow, Carl-Johann Simon-Gabriel, Bing Shuai, Zhuowen Tu, Thomas Brox, Bernt Schiele, Yanwei Fu, Francesco Locatello, Zheng Zhang, Tianjun Xiao

Unsupervised object-centric learning methods allow the partitioning of scenes into entities without additional localization information and are excellent candidates for reducing the annotation burden of multiple-object tracking (MOT) pipelines.

Multiple Object Tracking, Object +3

Targeted Separation and Convergence with Kernel Discrepancies

no code implementations • 26 Sep 2022 Alessandro Barp, Carl-Johann Simon-Gabriel, Mark Girolami, Lester Mackey

Maximum mean discrepancies (MMDs) like the kernel Stein discrepancy (KSD) have grown central to a wide range of applications, including hypothesis testing, sampler selection, distribution approximation, and variational inference.

Variational Inference
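For reference, the MMD between distributions $P$ and $Q$ for a kernel $k$ with RKHS $\mathcal{H}_k$ is (standard definition):

$$\mathrm{MMD}_k(P, Q) \;=\; \sup_{\|f\|_{\mathcal{H}_k} \le 1} \big| \mathbb{E}_{X \sim P}\, f(X) - \mathbb{E}_{Y \sim Q}\, f(Y) \big|$$

The kernel Stein discrepancy arises as the special case where $k$ is a Stein kernel built from the score function of the target distribution.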

Assaying Out-Of-Distribution Generalization in Transfer Learning

1 code implementation • 19 Jul 2022 Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello

Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) were studied across different research programs, resulting in different recommendations.

Adversarial Robustness, Out-of-Distribution Generalization +1

PopSkipJump: Decision-Based Attack for Probabilistic Classifiers

1 code implementation • 14 Jun 2021 Carl-Johann Simon-Gabriel, Noman Ahmed Sheikh, Andreas Krause

Most current classifiers are vulnerable to adversarial examples, small input perturbations that change the classification output.
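As a minimal illustration of such adversarial examples, here is a toy FGSM-style perturbation of a linear classifier in NumPy (a hypothetical setup for illustration only; PopSkipJump itself is a decision-based attack that only queries output labels, which this sketch does not implement):

```python
# Toy illustration of an adversarial example: a small, structured input
# perturbation that flips a linear classifier's decision.
import numpy as np

rng = np.random.default_rng(0)
d = 50
w, b = rng.normal(size=d), 0.1      # toy linear classifier: sign(w @ x + b)
x = rng.normal(size=d)              # a clean input

logit = w @ x + b
# For a linear model the input gradient is just w; step against the
# current decision to push the logit past zero.
eps = 0.3
x_adv = x - eps * np.sign(w) * np.sign(logit)

print("clean label:", np.sign(logit))
print("adversarial label:", np.sign(w @ x_adv + b))  # flips once eps*||w||_1 > |logit|
```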

Metrizing Weak Convergence with Maximum Mean Discrepancies

no code implementations • 16 Jun 2020 Carl-Johann Simon-Gabriel, Alessandro Barp, Bernhard Schölkopf, Lester Mackey

More precisely, we prove that, on a locally compact, non-compact, Hausdorff space, the MMD of a bounded continuous Borel measurable kernel $k$, whose reproducing kernel Hilbert space (RKHS) functions vanish at infinity, metrizes the weak convergence of probability measures if and only if $k$ is continuous and integrally strictly positive definite (i.s.p.d.).
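The i.s.p.d. condition used here has a standard meaning: a kernel $k$ is integrally strictly positive definite if

$$\iint k(x, y)\, \mathrm{d}\mu(x)\, \mathrm{d}\mu(y) \;>\; 0 \quad \text{for every nonzero finite signed Borel measure } \mu.$$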

Adversarial Vulnerability of Neural Networks Increases with Input Dimension

no code implementations ICLR 2019 Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz

Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.
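The dimension dependence can be seen from a standard first-order argument (a linearization consistent with the title's claim, not a restatement of the paper's full analysis): for an $\ell_\infty$-bounded perturbation of size $\epsilon$,

$$\max_{\|\delta\|_\infty \le \epsilon} \langle \nabla_x f(x), \delta \rangle \;=\; \epsilon\, \|\nabla_x f(x)\|_1,$$

and the $\ell_1$ norm of the gradient typically grows with the input dimension $d$, so the same imperceptible per-pixel budget does more damage on larger inputs.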

First-order Adversarial Vulnerability of Neural Networks and Input Dimension

1 code implementation ICLR 2019 Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz

Over the past few years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.

From optimal transport to generative modeling: the VEGAN cookbook

1 code implementation • 22 May 2017 Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Carl-Johann Simon-Gabriel, Bernhard Schoelkopf

We study unsupervised generative modeling in terms of the optimal transport (OT) problem between the true (but unknown) data distribution $P_X$ and the latent variable model distribution $P_G$.
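The OT cost referred to here is the standard Kantorovich formulation:

$$W_c(P_X, P_G) \;=\; \inf_{\Gamma \in \mathcal{P}(P_X, P_G)} \mathbb{E}_{(X, Y) \sim \Gamma}\big[c(X, Y)\big],$$

where $c$ is a cost function and the infimum runs over all couplings $\Gamma$ with marginals $P_X$ and $P_G$.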

AdaGAN: Boosting Generative Models

1 code implementation NeurIPS 2017 Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, Bernhard Schölkopf

Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images.
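A schematic sketch of the boosting idea behind AdaGAN: repeatedly fit a new component to reweighted data, then add it to a growing mixture. In this toy stand-in each "generator" is a single Gaussian fit to reweighted samples; the actual algorithm trains a GAN per round and uses a different, theoretically derived reweighting rule.

```python
# Schematic AdaGAN-style boosting loop on 1-D two-mode data.
# Toy components (weighted Gaussian fits) stand in for per-round GANs.
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-4, 0.5, 500), rng.normal(4, 0.5, 500)])

components, weights = [], []            # the growing mixture model
sample_w = np.ones(len(data)) / len(data)
beta = 0.5                              # mixture weight of each new component

for t in range(3):
    # "Train" a component on reweighted data (toy: weighted Gaussian fit).
    mu = np.average(data, weights=sample_w)
    sigma = np.sqrt(np.average((data - mu) ** 2, weights=sample_w))
    components.append((mu, sigma))
    # Renormalize mixture weights: the new component gets beta of the mass.
    weights = [w * (1 - beta) for w in weights] + [beta]
    # Upweight points the current mixture covers poorly (low density),
    # so the next round focuses on the missed mode.
    dens = sum(w * np.exp(-0.5 * ((data - m) / s) ** 2) / s
               for (m, s), w in zip(components, weights))
    sample_w = 1.0 / (dens + 1e-12)
    sample_w /= sample_w.sum()

print(components)   # later components move toward the under-covered mode
```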
