no code implementations • 23 Mar 2024 • Aakash Lahoti, Stefani Karp, Ezra Winston, Aarti Singh, Yuanzhi Li
Vision tasks are characterized by the properties of locality and translation invariance.
no code implementations • 11 Jul 2023 • Zhili Feng, Ezra Winston, J. Zico Kolter
Deep Boltzmann machines (DBMs), one of the first "deep" learning methods ever studied, are multi-layered probabilistic models governed by a pairwise energy function that describes the likelihood of all variables/nodes in the network.
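For reference, a pairwise energy function of this kind has the standard Boltzmann-machine form (a generic sketch; the paper's own notation and layer structure may differ):

```latex
% Generic pairwise energy over units x_i with weights W_{ij} and biases b_i;
% the model assigns probability proportional to exp(-E(x)).
E(x) = -\sum_{i < j} x_i \, W_{ij} \, x_j \;-\; \sum_i b_i \, x_i,
\qquad
p(x) \propto \exp\bigl(-E(x)\bigr)
```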
1 code implementation • NeurIPS 2021 • Stefani Karp, Ezra Winston, Yuanzhi Li, Aarti Singh
We therefore propose the "local signal adaptivity" (LSA) phenomenon as one explanation for the superiority of neural networks over kernel methods.
no code implementations • ICLR 2021 • Chirag Pabbaraju, Ezra Winston, J. Zico Kolter
Several methods have been proposed in recent years to bound the Lipschitz constants of deep networks; such bounds can be used to provide robustness guarantees and generalization bounds, and to characterize the smoothness of decision boundaries.
1 code implementation • NeurIPS 2020 • Ezra Winston, J. Zico Kolter
We then develop a parameterization of the network which ensures that all operators remain monotone, guaranteeing the existence of a unique equilibrium point.
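One way such a parameterization can work is to build the weight matrix as W = (1 − m)I − AᵀA + B − Bᵀ for free parameters A, B and margin m > 0, so that the symmetric part of I − W is mI + AᵀA ⪰ mI. A minimal NumPy sketch (illustrative only; the names `monotone_W`, `A`, `B`, `m` are assumptions, not the paper's code):

```python
import numpy as np

def monotone_W(A, B, m=0.1):
    """Build W = (1 - m)I - A^T A + B - B^T.

    The symmetric part of I - W is then m*I + A^T A, which is
    positive definite with smallest eigenvalue >= m, so the
    operator I - W is strongly monotone for any choice of A, B.
    """
    n = A.shape[1]
    return (1 - m) * np.eye(n) - A.T @ A + B - B.T

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
m = 0.1
W = monotone_W(A, B, m)

# Check monotonicity: the smallest eigenvalue of the symmetric
# part of I - W should be at least m.
sym = 0.5 * ((np.eye(n) - W) + (np.eye(n) - W).T)
lam_min = np.linalg.eigvalsh(sym).min()
print(lam_min >= m - 1e-9)  # True
```

Because the skew-symmetric term B − Bᵀ drops out of the symmetric part, the monotonicity guarantee holds unconditionally, with no constraint enforced during training.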
no code implementations • ICML 2020 • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
Machine learning algorithms are known to be susceptible to data poisoning attacks, in which an adversary manipulates the training data to degrade the performance of the resulting classifier.
no code implementations • 25 Sep 2019 • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
This paper considers label-flipping attacks, a type of data poisoning attack where an adversary relabels a small number of examples in a training set in order to degrade the performance of the resulting classifier.
1 code implementation • ICLR Workshop LLD 2019 • Yifan Wu, Ezra Winston, Divyansh Kaushik, Zachary Lipton
Domain adaptation addresses the common problem in which the target distribution generating our test data differs from the source (training) distribution.