no code implementations • 8 Dec 2023 • Shuai Tang, Zhiwei Steven Wu, Sergul Aydore, Michael Kearns, Aaron Roth
Our proposed MI attack learns quantile regression models that predict (a quantile of) the distribution of reconstruction loss on examples not used in training.
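A minimal sketch of the idea, assuming a held-out pool of non-member examples and a generic quantile regressor from scikit-learn; the attack then flags an example as a training member when its reconstruction loss falls below the predicted quantile. The model choice, features, and threshold rule below are illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical sketch: membership inference via quantile regression on reconstruction loss.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Toy stand-ins: per-example features and reconstruction losses for NON-member examples.
X_nonmember = rng.normal(size=(2000, 10))
loss_nonmember = np.abs(X_nonmember @ rng.normal(size=10)) + rng.gamma(2.0, 0.1, size=2000)

# Predict the alpha-quantile of the non-member loss distribution, conditioned on features.
alpha = 0.05
q_model = GradientBoostingRegressor(loss="quantile", alpha=alpha)
q_model.fit(X_nonmember, loss_nonmember)

def predict_member(x_features, observed_loss):
    # Assumed attack rule: a loss below the predicted alpha-quantile means the example
    # is reconstructed unusually well, so we predict it was a training member.
    threshold = q_model.predict(x_features.reshape(1, -1))[0]
    return observed_loss < threshold
```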
no code implementations • 14 Sep 2023 • Haleh Akrami, Omar Zamzam, Anand Joshi, Sergul Aydore, Richard Leahy
Outlier features can compromise the performance of deep learning regression tasks such as style translation, image reconstruction, and deep anomaly detection, potentially leading to misleading conclusions.
2 code implementations • 6 Mar 2023 • Shuai Tang, Sergul Aydore, Michael Kearns, Saeyoung Rho, Aaron Roth, Yichen Wang, Yu-Xiang Wang, Zhiwei Steven Wu
We revisit the problem of differentially private squared error linear regression.
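For context, one standard baseline for this problem is sufficient-statistics perturbation: add Gaussian noise to $X^\top X$ and $X^\top y$ and solve the noisy normal equations. The sketch below illustrates only that baseline, under the assumption that rows and labels have bounded norm; it is not the algorithm proposed in the paper.

```python
# Hypothetical sketch: DP linear regression via sufficient-statistics perturbation.
import numpy as np

def dp_linear_regression(X, y, epsilon, delta, x_bound=1.0, y_bound=1.0, ridge=1e-3):
    """Assumes each row of X has L2 norm <= x_bound and |y_i| <= y_bound."""
    n, d = X.shape
    # L2 sensitivity of each sufficient statistic when one record is replaced.
    sens_xtx = x_bound ** 2
    sens_xty = x_bound * y_bound
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon  # Gaussian mechanism multiplier

    noisy_xtx = X.T @ X + sigma * sens_xtx * np.random.normal(size=(d, d))
    noisy_xtx = (noisy_xtx + noisy_xtx.T) / 2.0             # keep the matrix symmetric
    noisy_xty = X.T @ y + sigma * sens_xty * np.random.normal(size=d)

    # A small ridge term keeps the noisy system well conditioned.
    return np.linalg.solve(noisy_xtx + ridge * n * np.eye(d), noisy_xty)
```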
no code implementations • 15 Sep 2022 • Giuseppe Vietri, Cedric Archambeau, Sergul Aydore, William Brown, Michael Kearns, Aaron Roth, Ankit Siva, Shuai Tang, Zhiwei Steven Wu
A key innovation in our algorithm is the ability to directly handle numerical features, in contrast to a number of related prior approaches, which require numerical features first to be converted into high-cardinality categorical features via a binning strategy.
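For reference, the binning step that such prior approaches rely on looks roughly like the sketch below (a hypothetical discretization with `np.digitize`, not code from the paper); each numerical column becomes a categorical column whose cardinality grows with the number of bins.

```python
# Hypothetical sketch: discretizing a numerical feature into a high-cardinality categorical one.
import numpy as np

values = np.random.uniform(0.0, 100.0, size=1000)   # a numerical feature
edges = np.linspace(0.0, 100.0, num=65)             # 64 bins -> a 64-category feature
categories = np.digitize(values, edges[1:-1])       # integer bin index per example
```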
1 code implementation • 20 Sep 2021 • Haleh Akrami, Anand Joshi, Sergul Aydore, Richard Leahy
Here we address the problem of quantifying uncertainty in the images that are reconstructed by the VAE as the basis for principled outlier or lesion detection.
1 code implementation • 11 Mar 2021 • Sergul Aydore, William Brown, Michael Kearns, Krishnaram Kenthapadi, Luca Melis, Aaron Roth, Ankit Siva
We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like $k$-way marginals, subject to differential privacy.
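As a point of reference for what a single query answer looks like, the sketch below computes one 2-way marginal and privatizes it with the Gaussian mechanism; this is a baseline illustration only, not the query-release algorithm proposed in the paper.

```python
# Hypothetical sketch: answering one 2-way marginal with the Gaussian mechanism.
import numpy as np

def noisy_two_way_marginal(data, col_a, col_b, card_a, card_b, epsilon, delta):
    """data: integer-coded categorical array of shape (n, d). Returns noisy normalized counts."""
    n = data.shape[0]
    counts = np.zeros((card_a, card_b))
    np.add.at(counts, (data[:, col_a], data[:, col_b]), 1.0)
    # Replacing one record moves two cells by 1/n each -> L2 sensitivity sqrt(2)/n.
    sigma = (np.sqrt(2.0) / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return counts / n + np.random.normal(scale=sigma, size=counts.shape)
```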
1 code implementation • NeurIPS 2021 • Ecenaz Erdemir, Jeffrey Bickford, Luca Melis, Sergul Aydore
Robustness of machine learning models is critical for security-related applications, where real-world adversaries are uniquely focused on evading neural-network-based detectors.
no code implementations • 18 Oct 2020 • Haleh Akrami, Anand A. Joshi, Sergul Aydore, Richard M. Leahy
Using estimated quantiles to compute mean and variance under the Gaussian assumption, we compute reconstruction probability as a principled approach to outlier or anomaly detection.
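A minimal sketch of that computation, assuming the network predicts, per pixel, the median and an upper quantile of the reconstruction: under the Gaussian assumption the median is the mean, the spacing between quantiles gives the standard deviation, and a reconstruction (log-)probability follows. The quantile levels below are illustrative, not the paper's exact choices.

```python
# Hypothetical sketch: reconstruction probability from predicted quantiles (Gaussian assumption).
import numpy as np
from scipy.stats import norm

def reconstruction_log_prob(x, q50, q84):
    """x, q50, q84: arrays of the input and the predicted 0.5 / 0.841 quantiles per pixel."""
    mu = q50                                   # the Gaussian median equals the mean
    sigma = np.maximum(q84 - q50, 1e-6)        # the 0.841-quantile sits one std above the mean
    return norm.logpdf(x, loc=mu, scale=sigma).sum()   # low values flag outliers / anomalies
```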
no code implementations • 15 Jun 2020 • Haleh Akrami, Sergul Aydore, Richard M. Leahy, Anand A. Joshi
Sources of outliers in training data include the data collection process itself (random noise) and malicious attackers (data poisoning) who aim to degrade the performance of the machine learning model.
1 code implementation • 7 Feb 2020 • Liyan Chen, Philip Gautier, Sergul Aydore
Dropout as a regularizer in deep neural networks has been less effective in convolutional layers than in fully connected layers.
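One common explanation is that neighbouring activations in a convolutional feature map are strongly correlated, so dropping individual elements removes little information; structured variants drop whole channels instead. The sketch below contrasts element-wise dropout with channel-wise (spatial) dropout in PyTorch as a generic illustration; it is not the method proposed in the paper.

```python
# Hypothetical sketch: element-wise vs channel-wise dropout on a conv feature map.
import torch
import torch.nn as nn

feature_map = torch.randn(8, 32, 28, 28)    # (batch, channels, height, width)

elementwise = nn.Dropout(p=0.2)             # zeros individual activations
channelwise = nn.Dropout2d(p=0.2)           # zeros entire feature-map channels

out_elem = elementwise(feature_map)
out_chan = channelwise(feature_map)
```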
1 code implementation • NeurIPS 2019 • Sergul Aydore, Tianhao Zhu, Dean Foster
We introduce a local regret for non-convex models in a dynamic environment.
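As a point of reference, the static sliding-window local regret of Hazan et al. over a window of size $w$ is sketched below; the dynamic variant introduced here replaces the uniform window average with an exponentially weighted one, whose exact weighting is not reproduced in this sketch.

```latex
% Sliding-window local regret (a reference point, not this paper's exact definition).
\[
  R_w(T) \;=\; \sum_{t=1}^{T} \bigl\lVert \nabla F_{t,w}(x_t) \bigr\rVert^2,
  \qquad
  F_{t,w}(x) \;=\; \frac{1}{w} \sum_{i=0}^{w-1} f_{t-i}(x).
\]
```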
no code implementations • 2 Oct 2019 • Bingyang Wen, Sergul Aydore
Digital watermarking protects the copyright of digital content by embedding imperceptible information into the data in the presence of an adversary.
no code implementations • 23 May 2019 • Haleh Akrami, Anand A. Joshi, Jian Li, Sergul Aydore, Richard M. Leahy
Machine learning methods often need a large amount of labeled training data.
no code implementations • 21 May 2019 • Tianhao Zhu, Sergul Aydore
Here, we study different update rules in stochastic gradient descent (SGD) for online forecasting problems.
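A minimal sketch of two such update rules side by side (vanilla SGD and heavy-ball momentum) on a streaming least-squares forecast; the rules, step sizes, and toy data below are generic illustrations, not the specific variants analyzed in the paper.

```python
# Hypothetical sketch: comparing two SGD update rules on a streaming forecasting loss.
import numpy as np

def sgd_step(w, grad, lr=0.01):
    return w - lr * grad

def momentum_step(w, v, grad, lr=0.01, beta=0.9):
    v = beta * v + grad                      # heavy-ball momentum buffer
    return w - lr * v, v

rng = np.random.default_rng(0)
w_sgd = np.zeros(5)
w_mom = np.zeros(5)
v = np.zeros(5)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
for t in range(1000):                        # one example arrives per round
    x_t = rng.normal(size=5)
    y_t = x_t @ true_w + 0.1 * rng.normal()
    grad_sgd = (w_sgd @ x_t - y_t) * x_t     # gradient of the squared forecast error
    grad_mom = (w_mom @ x_t - y_t) * x_t
    w_sgd = sgd_step(w_sgd, grad_sgd)
    w_mom, v = momentum_step(w_mom, v, grad_mom)
```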
no code implementations • 13 Nov 2018 • Sergul Aydore, Lee Dicker, Dean Foster
We consider an online learning process to forecast a sequence of outcomes for nonconvex models.
1 code implementation • 31 Jul 2018 • Sergul Aydore, Bertrand Thirion, Gael Varoquaux
In many applications where collecting data is expensive, for example neuroscience or medical imaging, the sample size is typically small compared to the feature dimension.