Search Results for author: Fady Alajaji

Found 11 papers, 1 paper with code

A Unifying Generator Loss Function for Generative Adversarial Networks

no code implementations • 14 Aug 2023 • Justin Veiner, Fady Alajaji, Bahman Gharesifard

A unifying $\alpha$-parametrized generator loss function is introduced for a dual-objective generative adversarial network (GAN), which uses a canonical (or classical) discriminator loss function such as the one in the original GAN (VanillaGAN) system.

Generative Adversarial Network
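For context, the canonical (VanillaGAN) objective the abstract refers to is the standard minimax value function of the original GAN, which the discriminator maximizes and the generator minimizes (stated here for reference; the paper's $\alpha$-parametrized generalization is not reproduced):

```latex
V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
        + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```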

Evaluating Trade-offs in Computer Vision Between Attribute Privacy, Fairness and Utility

no code implementations • 15 Feb 2023 • William Paul, Philip Mathew, Fady Alajaji, Philippe Burlina

This paper investigates the degree and magnitude of the trade-offs between utility, fairness, and attribute privacy in computer vision.

Attribute Fairness

On the Rényi Cross-Entropy

no code implementations • 28 Jun 2022 • Ferenc Cole Thierrin, Fady Alajaji, Tamás Linder

The Rényi cross-entropy measure between two distributions, a generalization of the Shannon cross-entropy, was recently used as a loss function for the improved design of deep learning generative adversarial networks.

Gaussian Processes
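One common definition of the Rényi cross-entropy of order α reduces to the Shannon cross-entropy as α → 1. A minimal numpy sketch under that assumed definition (the paper's exact form may differ, and the function name is ours):

```python
import numpy as np

def renyi_cross_entropy(p, q, alpha):
    """Renyi cross-entropy of order alpha != 1 between pmfs p and q:
    H_alpha(P; Q) = log( sum_x P(x) Q(x)^(alpha-1) ) / (1 - alpha).
    One common definition, assumed here for illustration; as alpha -> 1
    it converges to the Shannon cross-entropy -sum_x P(x) log Q(x)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p * q ** (alpha - 1))) / (1 - alpha)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

shannon = -np.sum(p * np.log(q))          # Shannon cross-entropy in nats
near_one = renyi_cross_entropy(p, q, 1.0001)
print(shannon, near_one)                   # the two values nearly coincide
```

Evaluating the order slightly away from 1 (here α = 1.0001) sidesteps the removable singularity at α = 1 while numerically recovering the Shannon limit.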

Classification Utility, Fairness, and Compactness via Tunable Information Bottleneck and Rényi Measures

1 code implementation • 20 Jun 2022 • Adam Gronowski, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina

Designing machine learning algorithms that are accurate yet fair, not discriminating based on any sensitive attribute, is of paramount importance for society to accept AI for critical applications.

Attribute Fairness +2

Renyi Fair Information Bottleneck for Image Classification

no code implementations • 9 Mar 2022 • Adam Gronowski, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina

We develop a novel method for ensuring fairness in machine learning, which we term the Renyi Fair Information Bottleneck (RFIB).

Classification Fairness +1

An Information Bottleneck Problem with Rényi's Entropy

no code implementations • 29 Jan 2021 • Jian-Jia Weng, Fady Alajaji, Tamás Linder

This paper considers an information bottleneck problem with the objective of obtaining a maximally informative representation of a hidden feature subject to a Rényi entropy complexity constraint.

Information Theory
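The Rényi entropy used as the complexity constraint above has a standard closed form. A minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (alpha > 0, alpha != 1) of a pmf p:
    H_alpha(P) = log( sum_x P(x)^alpha ) / (1 - alpha), in nats.
    As alpha -> 1 it converges to the Shannon entropy."""
    p = np.asarray(p, float)
    return np.log(np.sum(p ** alpha)) / (1 - alpha)

# For the uniform pmf on 4 outcomes, every order gives log(4):
uniform = [0.25, 0.25, 0.25, 0.25]
print(renyi_entropy(uniform, 2.0))   # log(4) ~ 1.386
print(renyi_entropy(uniform, 0.5))   # log(4) ~ 1.386
```

The uniform case is a handy sanity check: Rényi entropy of any order equals the log alphabet size there, matching the Shannon value.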

TARA: Training and Representation Alteration for AI Fairness and Domain Generalization

no code implementations • 11 Dec 2020 • William Paul, Armin Hadzic, Neil Joshi, Fady Alajaji, Phil Burlina

Our experiments also demonstrate the ability of these novel metrics to assess the Pareto efficiency of the proposed methods.

Domain Generalization Fairness +1

Least $k$th-Order and Rényi Generative Adversarial Networks

no code implementations • 3 Jun 2020 • Himesh Bhatia, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina

Another novel GAN generator loss function is next proposed in terms of Rényi cross-entropy functionals with order $\alpha > 0$, $\alpha \neq 1$.

Fairness

Unsupervised Discovery, Control, and Disentanglement of Semantic Attributes with Applications to Anomaly Detection

no code implementations • 25 Feb 2020 • William Paul, I-Jeng Wang, Fady Alajaji, Philippe Burlina

Our work focuses on unsupervised and generative methods that address the following goals: (a) learning unsupervised generative representations that discover latent factors controlling image semantic attributes; (b) studying how this ability to control attributes formally relates to the issue of latent factor disentanglement, clarifying related but dissimilar concepts that had been confounded in the past; and (c) developing anomaly detection methods that leverage the representations learned in (a).

Anomaly Detection Attribute +4

Capacity of Generalized Discrete-Memoryless Push-to-Talk Two-Way Channels

no code implementations • 2 Apr 2019 • Jian-Jia Weng, Fady Alajaji, Tamás Linder

In this report, we generalize Shannon's push-to-talk two-way channel (PTT-TWC) by allowing reliable full-duplex transmission as well as noisy reception in the half-duplex (PTT) mode.

Information Theory

Information Extraction Under Privacy Constraints

no code implementations • 7 Nov 2015 • Shahab Asoodeh, Mario Diaz, Fady Alajaji, Tamás Linder

To this end, the so-called rate-privacy function is introduced to quantify the maximal amount of information (measured in terms of mutual information) that can be extracted from $Y$ under a privacy constraint between $X$ and the extracted information, where privacy is measured using either mutual information or maximal correlation.

Quantization
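Mutual information, the quantity both maximized and constrained by the rate-privacy function above, can be computed directly from a finite joint pmf. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in nats from a joint pmf p(x,y) given as a 2-D array:
    I(X;Y) = sum_{x,y} p(x,y) log( p(x,y) / (p(x) p(y)) )."""
    joint = np.asarray(joint, float)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = joint > 0                        # skip zero-probability cells
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px * py)[mask])))

# Independent variables carry zero mutual information:
indep = np.outer([0.5, 0.5], [0.3, 0.7])
print(mutual_information(indep))            # 0.0 (up to floating point)

# Perfectly correlated binary variables carry log(2) nats:
corr = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(corr))             # log(2) ~ 0.693
```

The two extremes bracket the rate-privacy trade-off: a perfect-privacy constraint forces the extracted information toward the independent case, while an unconstrained extractor can approach full correlation with $Y$.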
