no code implementations • 26 Feb 2024 • Leonid Boytsov, Ameya Joshi, Filipe Condessa
By training them with a small learning rate for about one epoch, we obtained models that retained the accuracy of the backbone classifier while being unusually resistant to gradient-based attacks, including the APGD and FAB-T attacks from the AutoAttack package; we attributed this resistance to gradient masking.
no code implementations • 14 Nov 2023 • Xidong Wu, Wan-Yi Lin, Devin Willmott, Filipe Condessa, Yufei Huang, Zhenzhen Li, Madan Ravi Ganesh
Federated Learning (FL) is a distributed training paradigm that enables clients scattered across the world to cooperatively learn a global model without divulging confidential data.
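The core FL protocol (a FedAvg-style round) can be sketched as follows. This is a generic toy illustration with linear models and synthetic data, not the paper's specific method; the function names and the least-squares setup are assumptions made for the sketch:

```python
import numpy as np

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient steps on a
    least-squares loss for a linear model (a toy stand-in for each
    client's private training loop)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One federated round: each client trains locally on its own
    data, and only the updated weights (never the raw data) are sent
    back and averaged, weighted by client dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_sgd(global_w, X, y))
        sizes.append(len(y))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Synthetic demo: three clients share the same underlying model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.standard_normal((20, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(10):
    w = fedavg_round(w, clients)
```

The privacy-relevant point is in `fedavg_round`: the server only ever sees weight vectors, so confidential training data stays on each client.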
no code implementations • CVPR 2021 • Karren Yang, Wan-Yi Lin, Manash Barman, Filipe Condessa, Zico Kolter
Beyond achieving high performance across many vision tasks, multimodal models are expected to be robust to single-source faults due to the availability of redundant information between modalities.
no code implementations • 12 May 2022 • Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde
Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers.
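The basic RS prediction step can be sketched as below: classify many Gaussian-perturbed copies of an input and take the majority vote. This is a minimal generic sketch with a hypothetical toy classifier; the certification of an L2 robustness radius from the vote margin is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x):
    # Hypothetical stand-in for a trained network: predicts class 1
    # when the mean feature value exceeds 0, else class 0.
    return int(x.mean() > 0.0)

def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000):
    """Randomized smoothing: classify n_samples Gaussian-perturbed
    copies of x and return the majority-vote class. In full RS the
    vote counts are also used to certify a robustness radius."""
    votes = np.zeros(2, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        votes[classifier(noisy)] += 1
    return int(votes.argmax())

x = np.full(8, 0.5)  # clearly class 1 under the toy classifier
print(smoothed_predict(toy_classifier, x))
```

The Monte Carlo loop is why RS is considered fast and scalable: it needs only forward passes through the base classifier, with no inner optimization.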
no code implementations • ICML Workshop AML 2021 • Mohammad Sadegh Norouzzadeh, Wan-Yi Lin, Leonid Boytsov, Leslie Rice, Huan Zhang, Filipe Condessa, J Zico Kolter
Most pre-trained classifiers, though they may work extremely well on the domain they were trained on, are not trained in a robust fashion and are therefore sensitive to adversarial attacks.
no code implementations • 29 Jan 2021 • Devin Willmott, Anit Kumar Sahu, Fatemeh Sheikholeslami, Filipe Condessa, Zico Kolter
In this work, we instead show that it is possible to craft (universal) adversarial perturbations in the black-box setting by querying a sequence of different images only once.
no code implementations • 22 Apr 2020 • Filipe Condessa, Zico Kolter
In this paper, we propose a method for training provably robust generative models, specifically a provably robust version of the variational auto-encoder (VAE).
no code implementations • 3 Sep 2015 • Filipe Condessa, José Bioucas-Dias, Carlos Castro, John Ozolek, Jelena Kovačević
We introduce a new supervised algorithm for image classification with rejection using multiscale contextual information.
no code implementations • 29 Apr 2015 • Filipe Condessa, Jose Bioucas-Dias, Jelena Kovacevic
We validate our method on real hyperspectral data and show that the performance gains obtained from the rejection fields are equivalent to an increase in the dimension of the training sets.
no code implementations • 27 Apr 2015 • Filipe Condessa, Jose Bioucas-Dias, Jelena Kovacevic
We present a supervised hyperspectral image segmentation algorithm based on a convex formulation of a marginal maximum a posteriori segmentation with hidden fields and structure tensor regularization: Segmentation via the Constraint Split Augmented Lagrangian Shrinkage by Structure Tensor Regularization (SegSALSA-STR).
no code implementations • 10 Apr 2015 • Filipe Condessa, Jelena Kovacevic, Jose Bioucas-Dias
Classifiers with rejection are essential in real-world applications where misclassifications and their effects are critical.
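A minimal form of classification with rejection is a confidence threshold on the predictor's output, sketched below. This generic rule is only an illustration of the rejection idea, not the paper's method, which exploits contextual information rather than a fixed threshold:

```python
import numpy as np

def classify_with_rejection(probs, threshold=0.8):
    """Confidence-threshold rejection: return the argmax class when
    the top class probability clears the threshold, otherwise reject
    (label -1) so the sample can be deferred to an expert instead of
    risking a critical misclassification."""
    labels = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    return np.where(confident, labels, -1)

probs = np.array([[0.95, 0.05],   # confident -> class 0
                  [0.55, 0.45],   # uncertain -> reject
                  [0.10, 0.90]])  # confident -> class 1
print(classify_with_rejection(probs))  # [ 0 -1  1]
```

The trade-off is between coverage (fraction of samples classified) and accuracy on the accepted samples; raising the threshold rejects more but errs less.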