no code implementations • 15 Feb 2023 • William Paul, Philip Mathew, Fady Alajaji, Philippe Burlina
This paper investigates to what degree and magnitude tradeoffs exist between utility, fairness and attribute privacy in computer vision.
1 code implementation • 26 Oct 2022 • Haolin Yuan, Bo Hui, Yuchen Yang, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
Federated learning (FL) allows multiple clients to collaboratively train a deep learning model.
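As a minimal sketch of the federated averaging idea underlying such collaborative training (function and variable names here are illustrative, not this paper's implementation):

```python
import numpy as np

def federated_averaging(client_weights, client_sizes):
    """Aggregate per-client parameter lists into a global model,
    weighting each client by its local dataset size (FedAvg-style)."""
    total = float(sum(client_sizes))
    n_params = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_params)
    ]

# Example: three clients, each holding the same two parameter tensors.
clients = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(3)]
sizes = [100, 250, 50]
global_params = federated_averaging(clients, sizes)
```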
1 code implementation • 20 Jun 2022 • Adam Gronowski, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina
Designing machine learning algorithms that are accurate yet fair, not discriminating based on any sensitive attribute, is of paramount importance for society to accept AI for critical applications.
no code implementations • 9 Mar 2022 • Adam Gronowski, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina
We develop a novel method for ensuring fairness in machine learning, which we term the Rényi Fair Information Bottleneck (RFIB).
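A hedged sketch of the general shape such an objective can take (the variable names and weighting below are illustrative assumptions, not the paper's exact formulation): learn a representation $Z$ of input $X$ that is predictive of the target $Y$ while being compressive and nearly independent of the sensitive attribute $S$, e.g. by minimizing $-I(Z;Y) + \beta\, I(Z;X) + \lambda\, I(Z;S)$, with the information terms measured or bounded via Rényi divergences $D_\alpha(P\|Q) = \frac{1}{\alpha - 1}\log \sum_x p(x)^{\alpha} q(x)^{1-\alpha}$ of order $\alpha$.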
no code implementations • 3 Mar 2022 • William Paul, Philippe Burlina
We tackle here a specific, still not widely addressed, aspect of AI robustness: seeking invariance/insensitivity of model performance to hidden factors of variation in the data.
no code implementations • 28 Feb 2022 • Haolin Yuan, Armin Hadzic, William Paul, Daniella Villegas de Flores, Philip Mathew, John Aucott, Yinzhi Cao, Philippe Burlina
Skin lesions can be an early indicator of a wide range of infectious and other diseases.
no code implementations • 16 Aug 2021 • Max Lennon, Nathan Drenkow, Philippe Burlina
To this end, we make several contributions: (A) we develop a new metric, mean Attack Success over Transformations (mAST), to evaluate patch attack robustness and invariance; (B) we systematically assess the robustness of patch attacks to 3D position and orientation under various conditions; in particular, we conduct a sensitivity analysis that provides important qualitative insights into attack effectiveness as a function of the 3D pose of a patch relative to the camera (rotation, translation) and sets forth some properties of patch attack 3D invariance; and (C) we draw novel qualitative conclusions, including: (1) demonstrating that for some 3D transformations, namely rotation and loom, increasing the training distribution support yields an increase in patch attack success over the full range at test time. A short sketch of how a metric of this kind can be computed is given below.
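The sketch below illustrates an attack-success-averaged-over-transformations metric, assuming a simple interface where `model.predict` returns a class label and each transform is a callable; these names and interfaces are assumptions for illustration, not the paper's code:

```python
def mean_attack_success_over_transformations(model, patched_images, labels, transforms):
    """Average attack success rate over a set of transformations (e.g., 3D
    rotations, loom/scale changes) applied to each patched image; an attack
    'succeeds' when the model's prediction no longer matches the true label."""
    rates = []
    for transform in transforms:
        flipped = sum(
            model.predict(transform(img)) != label
            for img, label in zip(patched_images, labels)
        )
        rates.append(flipped / len(labels))
    return sum(rates) / len(transforms)
```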
no code implementations • 28 Jul 2021 • William Paul, Philippe Burlina
We also demonstrate how adaptation to real factors of variation can be performed in the semi-supervised case, where some target factor labels are known, via automated intervention selection.
1 code implementation • 5 Jan 2021 • Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
The success of the former depends heavily on the quality of the shadow model, i.e., the transferability between the shadow and the target; the latter, given only black-box probing access to the target model, cannot make effective inferences about unknowns compared with MI attacks using shadow models, due to the insufficient number of qualified samples labeled with ground-truth membership information.
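A hedged sketch of the shadow-model attack pipeline being contrasted here (the standard recipe rather than this paper's method; the shadow model is assumed to expose an sklearn-style `predict_proba`, and all names are placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_shadow_attack(shadow_model, member_data, nonmember_data):
    """Fit an attack classifier on a shadow model's confidence vectors:
    samples the shadow model was trained on are labeled 1 (member),
    held-out samples 0 (non-member). The attack model is then applied
    to the target model's output confidences."""
    X = np.vstack([shadow_model.predict_proba(member_data),
                   shadow_model.predict_proba(nonmember_data)])
    y = np.concatenate([np.ones(len(member_data)), np.zeros(len(nonmember_data))])
    return RandomForestClassifier(n_estimators=100).fit(X, y)
```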
no code implementations • 11 Dec 2020 • Nathan Drenkow, Neil Fendley, Philippe Burlina
We present a technique that utilizes properties of random projections to characterize the behavior of clean and adversarial examples across a diverse set of subspaces.
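A minimal sketch of the kind of random-projection statistic such a characterization can rest on (the projection count, dimensionality, and choice of norm statistic are assumptions for illustration, not the paper's exact construction):

```python
import numpy as np

def random_subspace_statistics(x, n_projections=16, dim=32, seed=0):
    """Project a feature vector onto several random low-dimensional subspaces
    and return the norm of each projection; the distribution of such statistics
    can differ between clean and adversarial examples."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_projections):
        # Random projection matrix with i.i.d. Gaussian entries.
        P = rng.standard_normal((dim, x.shape[0])) / np.sqrt(dim)
        stats.append(np.linalg.norm(P @ x))
    return np.array(stats)
```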
no code implementations • 11 Dec 2020 • Nathan Drenkow, Philippe Burlina, Neil Fendley, Onyekachi Odoemene, Jared Markowitz
We interpret both detection problems through a probabilistic, Bayesian lens, whereby the objectness maps produced by our method serve as priors in a maximum-a-posteriori approach to the detection step.
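In rough form (a sketch of the stated approach, with notation assumed here rather than taken from the paper), the detection step selects $\hat{d} = \arg\max_{d}\, p(d \mid x) = \arg\max_{d}\, p(x \mid d)\, p(d)$, since the evidence $p(x)$ does not depend on $d$; the objectness map supplies the prior $p(d)$ over candidate detections $d$, and $p(x \mid d)$ is the likelihood of the observed image evidence.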
no code implementations • 3 Jun 2020 • Himesh Bhatia, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina
Another novel GAN generator loss function is next proposed in terms of Rényi cross-entropy functionals with order $\alpha > 0$, $\alpha \neq 1$.
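For reference, a common definition of the Rényi cross-entropy of order $\alpha$ between distributions $P$ and $Q$ (offered here as context; the excerpt does not spell out the exact functional used) is $H_\alpha(P;Q) = \frac{1}{1-\alpha}\log \sum_x p(x)\, q(x)^{\alpha - 1}$, which recovers the Shannon cross-entropy $-\sum_x p(x)\log q(x)$ as $\alpha \to 1$.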
no code implementations • 1 May 2020 • Neil Fendley, Max Lennon, I-Jeng Wang, Philippe Burlina, Nathan Drenkow
We focus on the development of effective adversarial patch attacks and, for the first time, jointly address the antagonistic objectives of attack success and obtrusiveness via the design of novel semi-transparent patches.
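A minimal sketch of how a semi-transparent patch can be composited into an image via alpha blending (array names and the fixed patch location are illustrative assumptions, not the paper's attack pipeline):

```python
import numpy as np

def apply_translucent_patch(image, patch, alpha, top, left):
    """Blend a patch into an image region with opacity `alpha` in [0, 1];
    lower alpha makes the patch less obtrusive, at a possible cost in
    attack success rate."""
    out = image.astype(np.float32).copy()
    h, w = patch.shape[:2]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (1.0 - alpha) * region + alpha * patch
    return out
```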
no code implementations • 28 Apr 2020 • Philippe Burlina, Neil Joshi, William Paul, Katia D. Pacheco, Neil M. Bressler
Using novel generative methods to address missing subpopulation training data (DR-referable darker skin) instead achieved accuracy of 72.0% (65.8%, 78.2%) for lighter skin and 71.5% (65.2%, 77.8%) for darker skin, demonstrating closer parity (delta=0.5%) in accuracy across subpopulations (Welch t-test, t=0.111, P=.912).
no code implementations • 25 Feb 2020 • William Paul, I-Jeng Wang, Fady Alajaji, Philippe Burlina
Our work focuses on unsupervised and generative methods that address the following goals: (a) learning unsupervised generative representations that discover latent factors controlling image semantic attributes, (b) studying how this ability to control attributes formally relates to the issue of latent factor disentanglement, clarifying related but dissimilar concepts that had been confounded in the past, and (c) developing anomaly detection methods that leverage representations learned in (a).
no code implementations • CVPR 2019 • Philippe Burlina, Neil Joshi, I-Jeng Wang
We develop a framework for novelty detection (ND) methods relying on deep embeddings, either discriminative or generative, and also propose a novel framework for assessing their performance.
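One simple way to score novelty from deep embeddings, given here only as a hedged illustration of the embedding-based setting (a k-nearest-neighbor distance score, not necessarily the methods evaluated in the paper):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def novelty_scores(train_embeddings, test_embeddings, k=5):
    """Score novelty as the mean distance to the k nearest training embeddings;
    larger scores indicate samples farther from the known (inlier) distribution."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_embeddings)
    dists, _ = nn.kneighbors(test_embeddings)
    return dists.mean(axis=1)
```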
no code implementations • 6 Mar 2018 • Kapil Katyal, Katie Popek, Chris Paxton, Joseph Moore, Kevin Wolfe, Philippe Burlina, Gregory D. Hager
In these situations, the robot's ability to reason about its future motion is often severely limited by sensor field of view (FOV).