no code implementations • 12 May 2024 • Nitya Sathyavageeswaran, Roy D. Yates, Anand D. Sarwate, Narayan Mandayam
However, using the MEC may come at a cost, such as fees for a cloud service or a loss of privacy.
no code implementations • 1 Oct 2023 • Sinjini Banerjee, Reilly Cannon, Tim Marrinan, Tony Chiang, Anand D. Sarwate
Training a deep neural network (DNN) often involves stochastic optimization, which means each run will produce a different model.
no code implementations • 5 Aug 2023 • Batoul Taki, Anand D. Sarwate, Waheed U. Bajwa
This result can also be specialised to lower bound the estimation error in CP and Tucker-structured GLMs.
no code implementations • 26 May 2023 • Eric Silk, Swarnita Chakraborty, Nairanjana Dasgupta, Anand D. Sarwate, Andrew Lumsdaine, Tony Chiang
Training deep neural networks (DNNs) used in modern machine learning is computationally expensive.
no code implementations • 31 May 2022 • Nitya Sathyavageeswaran, Roy D. Yates, Anand D. Sarwate, Narayan Mandayam
We analyze the trade-off between the age of information (AoI) and the maximal leakage for systems in which the source generates updates as a Bernoulli process.
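As a rough illustration of the age process alone (the leakage side of the trade-off is not modeled here, and the update probability and slot count are arbitrary choices), a short simulation of a Bernoulli source with instantaneous delivery recovers the classical time-average age (1 - p) / p:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.25                       # per-slot Bernoulli update probability
n = 200_000                    # number of simulated time slots

updates = rng.random(n) < p
age, total = 0, 0.0
for t in range(n):
    if updates[t]:
        age = 0                # a fresh update arrives and is delivered
    total += age
    age += 1

print(total / n)               # time-average age, close to (1 - p) / p = 3.0
```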
no code implementations • 24 May 2022 • Andrew Engel, Zhichao Wang, Anand D. Sarwate, Sutanay Choudhury, Tony Chiang
We introduce torchNTK, a Python library for calculating the empirical neural tangent kernel (NTK) of neural network models in the PyTorch framework.
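torchNTK's actual interface is not shown in this summary; as a hedged NumPy sketch of the underlying quantity, the empirical NTK is simply the Gram matrix of per-example parameter gradients (here for a hypothetical two-layer tanh network with illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(6)
d, h, n = 3, 8, 5
W = rng.standard_normal((h, d)) / np.sqrt(d)   # first-layer weights
v = rng.standard_normal(h) / np.sqrt(h)        # second-layer weights
X = rng.standard_normal((n, d))                # a small batch of inputs

def grad_params(x):
    """Gradient of f(x) = v . tanh(W x) w.r.t. all parameters, flattened."""
    z = np.tanh(W @ x)
    gW = np.outer(v * (1 - z**2), x)           # d f / d W
    gv = z                                     # d f / d v
    return np.concatenate([gW.ravel(), gv])

J = np.stack([grad_params(x) for x in X])      # n x (num params) Jacobian
K = J @ J.T                                    # empirical NTK Gram matrix
print(K.shape)                                 # → (5, 5)
```

By construction K is symmetric and positive semidefinite, which is what makes it usable as a kernel.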
no code implementations • 15 Feb 2022 • Soo Min Kwon, Xin Li, Anand D. Sarwate
We study the low-rank phase retrieval problem, where the objective is to recover a sequence of signals (typically images) given the magnitude of linear measurements of those signals.
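A minimal sketch of the measurement model (the low-rank structure across a sequence of images is omitted, and the sizes are illustrative): magnitude-only measurements discard the global phase, which is why recovery is only possible up to a unimodular constant:

```python
import numpy as np

rng = np.random.default_rng(7)
m, d = 40, 10
A = (rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))) / np.sqrt(2)
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)   # hidden signal

y = np.abs(A @ x)              # magnitude-only (phaseless) measurements
phi = np.exp(1j * 0.7)         # any unit-modulus global phase
print(np.allclose(y, np.abs(A @ (phi * x))))   # → True: the phase is unobservable
```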
no code implementations • 29 Nov 2021 • Sijie Xiong, Anand D. Sarwate, Narayan B. Mandayam
We show that in special cases the proposed mechanism recovers existing shapers that standardize the output independently of the input.
no code implementations • 31 May 2021 • Batoul Taki, Mohsen Ghassemi, Anand D. Sarwate, Waheed U. Bajwa
This paper considers the problem of matrix-variate logistic regression.
no code implementations • 22 Dec 2020 • Aria Rezaei, Jie Gao, Anand D. Sarwate
Experiments demonstrate that a giant connected component of infected nodes can and does appear in real-world networks and that a simple inference attack can reveal the status of a good fraction of nodes.
Inference Attack • Social and Information Networks
no code implementations • 11 Jun 2020 • Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Or Sheffet, Anand D. Sarwate
First, we propose a (non-private) successive elimination algorithm for strictly optimal best-arm identification; we show that our algorithm is $\delta$-PAC and characterize its sample complexity.
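A toy sketch of (non-private) successive elimination with Hoeffding-style confidence radii — the arm means, constants, and radius here are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.9, 0.5, 0.4, 0.3])   # arm 0 is best (unknown to the learner)
delta = 0.05
K = len(means)

active = list(range(K))
sums = np.zeros(K)
t = 0
while len(active) > 1 and t < 50_000:
    t += 1
    for a in active:
        sums[a] += rng.binomial(1, means[a])   # pull every surviving arm once
    est = sums / t                             # each active arm has t pulls
    # Hoeffding-style radius; the exact constant is an illustrative choice
    rad = np.sqrt(np.log(4 * K * t**2 / delta) / (2 * t))
    best_lcb = max(est[a] - rad for a in active)
    active = [a for a in active if est[a] + rad >= best_lcb]

best_arm = active[int(np.argmax(sums[np.array(active)]))]
print(best_arm)
```

Arms whose upper confidence bound falls below the best lower confidence bound can no longer be optimal and are dropped, which is what drives the sample-complexity analysis.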
no code implementations • 28 Oct 2019 • Hafiz Imtiaz, Jafar Mohammadi, Rogers Silva, Bradley Baker, Sergey M. Plis, Anand D. Sarwate, Vince Calhoun
In this work, we propose a differentially private algorithm for performing ICA in a decentralized data setting.
no code implementations • 20 Sep 2019 • Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Anand D. Sarwate
Specifically, we show that the finite sample complexity of the Chow-Liu algorithm for ensuring exact structure recovery from noisy data is inversely proportional to the square of the information threshold (provided it is positive), and scales almost logarithmically with the number of nodes for a given probability of failure.
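For reference, the Chow-Liu algorithm builds a maximum-weight spanning tree under pairwise mutual information; a toy plug-in sketch on three binary variables (a noiseless chain — the paper's noisy-observation setting is not modeled here):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n = 50_000

# Markov chain X0 -> X1 -> X2 over {0,1}; each edge flips with prob 0.1
x0 = rng.integers(0, 2, n)
flip = lambda x: np.where(rng.random(n) < 0.1, 1 - x, x)
x1 = flip(x0)
x2 = flip(x1)
X = np.stack([x0, x1, x2], axis=1)

def mutual_info(a, b):
    """Plug-in mutual information of two binary samples (in nats)."""
    mi = 0.0
    for u in (0, 1):
        for v in (0, 1):
            pij = np.mean((a == u) & (b == v))
            if pij > 0:
                mi += pij * np.log(pij / (np.mean(a == u) * np.mean(b == v)))
    return mi

# On 3 nodes the Chow-Liu tree keeps the two highest-MI edges
edges = sorted(combinations(range(3), 2),
               key=lambda e: mutual_info(X[:, e[0]], X[:, e[1]]))[-2:]
print(sorted(edges))           # → [(0, 1), (1, 2)]
```

The non-adjacent pair (0, 2) sees two flips composed, so its mutual information is strictly smaller and the true chain is recovered.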
2 code implementations • 22 Apr 2019 • Hafiz Imtiaz, Jafar Mohammadi, Anand D. Sarwate
CAPE can be used in conjunction with the functional mechanism for statistical and machine learning optimization problems.
1 code implementation • 22 Mar 2019 • Mohsen Ghassemi, Zahra Shakeri, Anand D. Sarwate, Waheed U. Bajwa
This work addresses the problem of learning sparse representations of tensor data using structured dictionary learning.
no code implementations • 11 Dec 2018 • Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Anand D. Sarwate
In the absence of noise, predictive learning on Ising models was recently studied by Bresler and Karzand (2020); this paper quantifies how noise in the hidden model impacts the tasks of structure recovery and marginal distribution estimation by proving upper and lower bounds on the sample complexity.
no code implementations • 26 Apr 2018 • Hafiz Imtiaz, Anand D. Sarwate
Tensor and matrix factorizations are key components of many processing pipelines.
no code implementations • 10 Dec 2017 • Zahra Shakeri, Anand D. Sarwate, Waheed U. Bajwa
This paper derives sufficient conditions for local recovery of coordinate dictionaries comprising a Kronecker-structured dictionary that is used for representing $K$th-order tensor data.
no code implementations • 13 Nov 2017 • Mohsen Ghassemi, Zahra Shakeri, Anand D. Sarwate, Waheed U. Bajwa
In recent years, a class of dictionaries has been proposed for multidimensional (tensor) data representation that exploits the structure of tensor data by imposing a Kronecker structure on the dictionary underlying the data.
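The computational appeal of a Kronecker-structured dictionary D = A ⊗ B is the vectorization identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which avoids ever forming the large dictionary; a small NumPy check (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))   # coordinate dictionary (illustrative size)
B = rng.standard_normal((5, 2))   # coordinate dictionary (illustrative size)
X = rng.standard_normal((2, 3))   # coefficient matrix

D = np.kron(A, B)                 # Kronecker-structured dictionary, 20 x 6
lhs = D @ X.flatten(order='F')    # the identity uses column-major vec
rhs = (B @ X @ A.T).flatten(order='F')
print(np.allclose(lhs, rhs))      # → True
```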
no code implementations • 17 May 2016 • Zahra Shakeri, Waheed U. Bajwa, Anand D. Sarwate
This paper finds fundamental limits on the sample complexity of estimating dictionaries for tensor data by proving a lower bound on the minimax risk.
no code implementations • 10 Feb 2016 • Tamir Hazan, Francesco Orabona, Anand D. Sarwate, Subhransu Maji, Tommi Jaakkola
This paper shows that the expected value of perturb-max inference with low dimensional perturbations can be used sequentially to generate unbiased samples from the Gibbs distribution.
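The classical full-perturbation case is the Gumbel-max trick: adding i.i.d. Gumbel noise to every potential and taking the argmax yields exact Gibbs samples (the paper's low-dimensional perturbations relax this to avoid perturbing every configuration). A quick empirical check on a tiny model:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([1.0, 0.0, -1.0])               # potentials of a tiny model
probs = np.exp(theta) / np.exp(theta).sum()      # target Gibbs distribution

n = 200_000
g = rng.gumbel(size=(n, theta.size))             # i.i.d. Gumbel perturbations
samples = np.argmax(theta + g, axis=1)           # perturb-max inference
emp = np.bincount(samples, minlength=theta.size) / n
print(emp, probs)                                # empirical vs. target, very close
```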
no code implementations • 17 Dec 2014 • Shuang Song, Kamalika Chaudhuri, Anand D. Sarwate
In this paper, we adopt instead a model in which data is observed through heterogeneous noise, where the noise level reflects the quality of the data source.
no code implementations • 15 Oct 2013 • Francesco Orabona, Tamir Hazan, Anand D. Sarwate, Tommi Jaakkola
Applying the general result to MAP perturbations can yield a more efficient algorithm to approximate sampling from the Gibbs distribution.
no code implementations • NeurIPS 2013 • Sivan Sabato, Anand D. Sarwate, Nathan Srebro
We term the setting auditing, and consider the auditing complexity of an algorithm: the number of negative labels the algorithm requires in order to learn a hypothesis with low relative error.
no code implementations • 12 Jul 2012 • Kamalika Chaudhuri, Anand D. Sarwate, Kaushik Sinha
In this paper we investigate the theory and empirical performance of differentially private approximations to PCA and propose a new method which explicitly optimizes the utility of the output.
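The paper's utility-optimizing method is not reproduced here; as a baseline sketch, one simple differentially private approximation perturbs the empirical second-moment matrix with symmetric Gaussian noise before the eigendecomposition (an Analyze-Gauss-style construction; clipping rows to unit norm bounds the sensitivity, and the privacy parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 2000, 5
X = rng.standard_normal((n, d)) * np.array([3.0, 2.0, 1.0, 0.5, 0.5])
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # rows have norm <= 1

eps, dp_delta = 1.0, 1e-5
A = X.T @ X / n                                           # empirical second moment
sigma = np.sqrt(2 * np.log(1.25 / dp_delta)) / (n * eps)  # Gaussian-mechanism scale
E = rng.standard_normal((d, d)) * sigma
E = (E + E.T) / 2                                         # symmetric noise matrix
v_priv = np.linalg.eigh(A + E)[1][:, -1]                  # private top eigenvector
print(abs(v_priv[0]))          # alignment with the true top direction (axis 0)
```

With this many samples the noise scale is tiny relative to the eigengap, so the private eigenvector stays well aligned with the dominant direction.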