no code implementations • 5 Oct 2023 • Omar Zamzam, Haleh Akrami, Mahdi Soltanolkotabi, Richard Leahy
In this paper we propose to learn a neural-network-based data representation, trained with a loss function that projects the unlabeled data into two clusters (positive and negative) which simple clustering techniques can then readily identify, effectively emulating the separability observed in low-dimensional settings.
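The abstract does not spell out the loss, so here is a minimal numpy sketch of the general idea, using a hypothetical separation loss L(w) = mean((z² − 1)²) that pushes each 1-D projection z = w·x toward ±1, after which a simple threshold recovers the two clusters (the network, loss, and data below are illustrative stand-ins, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: a mixture of two Gaussians standing in for the
# (unknown) positive and negative populations.
X = np.vstack([rng.normal(+2.0, 0.5, size=(100, 2)),   # "positive" samples
               rng.normal(-2.0, 0.5, size=(100, 2))])  # "negative" samples

# One-layer linear "network": z = X @ w. We train w by gradient descent on
# a hypothetical separation loss L(w) = mean((z**2 - 1)**2), which drives
# every projection toward +1 or -1, i.e. toward two tight clusters.
w = np.array([0.10, -0.05])  # small deterministic init (illustrative)
lr = 0.01
for _ in range(500):
    z = X @ w
    # dL/dw = mean over samples of 4 * z * (z**2 - 1) * x
    grad = (4.0 * z * (z**2 - 1.0))[:, None] * X
    w -= lr * grad.mean(axis=0)

z = X @ w
# "Simple clustering technique": threshold the 1-D representation at zero.
labels = (z > 0).astype(int)
```

After training, the learned projection separates the two sub-populations, so the zero-threshold (or k-means with k=2) assigns nearly all samples from each Gaussian to the same cluster.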
no code implementations • 14 Sep 2023 • Haleh Akrami, Omar Zamzam, Anand Joshi, Sergul Aydore, Richard Leahy
Outlier features can compromise the performance of deep learning regression models in tasks such as style translation, image reconstruction, and deep anomaly detection, potentially leading to misleading conclusions.
no code implementations • 26 Aug 2022 • Omar Zamzam, Haleh Akrami, Richard Leahy
In the proposed method, the GAN discriminator guides the generator to produce only samples that fall within the unlabeled data distribution, while a second classifier (observer) network monitors the GAN training to: (i) prevent the generated samples from falling into the positive distribution; and (ii) learn the features that most distinguish the positive from the negative observations.
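One way to read this setup is as a composite generator objective: an adversarial term from the discriminator (match the unlabeled distribution) plus a penalty from the observer (avoid the positive distribution). The sketch below is a hypothetical formulation of that combination, not the paper's exact losses; `lam` is an assumed trade-off weight:

```python
import numpy as np

def generator_loss(d_unlabeled_scores, obs_positive_probs, lam=1.0):
    """Hypothetical composite generator objective (a sketch, not the paper's
    exact formulation).

    d_unlabeled_scores : discriminator's probability that each generated
        sample came from the unlabeled data distribution.
    obs_positive_probs : observer's probability that each generated sample
        belongs to the positive class.
    """
    # Adversarial term (non-saturating GAN loss): reward fooling the
    # discriminator into treating generated samples as unlabeled data.
    adv = -np.mean(np.log(d_unlabeled_scores + 1e-8))
    # Observer penalty: push generated samples away from the positive
    # distribution by keeping the observer's positive probability low.
    penalty = -np.mean(np.log(1.0 - obs_positive_probs + 1e-8))
    return adv + lam * penalty
```

With this objective, a generator batch that the discriminator scores as realistic unlabeled data and the observer scores as non-positive incurs a low loss, while batches drifting toward the positive distribution are penalized.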