1 code implementation • ICML 2020 • Jiani Huang, Calvin Smith, Osbert Bastani, Rishabh Singh, Aws Albarghouthi, Mayur Naik
The policy neural network employs a program interpreter that provides immediate feedback on the consequences of the policy's decisions, while also accounting for the uncertainty in the symbolic representation of the image.
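A minimal sketch of the interpreter-in-the-loop idea, with the caveat that the greedy policy, the scoring scheme, and the symbolic scene below are our illustrative assumptions, not the paper's architecture: a toy "policy" extends a program token by token, and an interpreter scores each partial program against a noisy symbolic scene, so the feedback reflects both the consequences of each decision and the uncertainty of perception.

```python
# Toy sketch only; the policy/interpreter split here is an assumption,
# not the paper's architecture.

SCENE = {"red_circle": 0.9, "blue_square": 0.4}  # symbol -> detector confidence

def interpret(program, scene):
    """Score a candidate program as the product of the confidences of the
    symbols it references, so perceptual uncertainty lowers the feedback."""
    score = 1.0
    for token in program:
        score *= scene.get(token, 0.0)
    return score

def greedy_policy(vocab, scene, length=2):
    """Extend the program one token at a time, consulting the interpreter
    for immediate feedback on every candidate extension."""
    program = []
    for _ in range(length):
        program.append(max(vocab, key=lambda t: interpret(program + [t], scene)))
    return program

print(greedy_policy(["red_circle", "blue_square"], SCENE))
```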
no code implementations • 6 Mar 2024 • Anna P. Meyer, Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
Our empirical evaluation demonstrates that VeriTraCER generates CEs that (1) are verifiably robust to small model updates and (2) display robustness competitive with state-of-the-art approaches in handling empirical model updates, including random initialization, leave-one-out, and distribution shifts.
1 code implementation • 20 Apr 2023 • Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
We introduce dataset multiplicity, a way to study how inaccuracies, uncertainty, and social bias in training datasets impact test-time predictions.
no code implementations • 27 Jan 2023 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
Neural networks are vulnerable to backdoor poisoning attacks, where attackers maliciously poison the training set and insert triggers into test inputs to change the victim model's predictions.
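For intuition, here is the generic trigger-based poisoning pattern in a few lines of Python; the patch shape, location, and poisoning rate are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Generic backdoor poisoning sketch. Images are HxWxC float arrays in [0, 1].

def add_trigger(image, patch_size=3):
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = 1.0  # white patch in a corner
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Stamp the trigger on a small fraction of training images and relabel
    them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels

# At test time, stamping the same trigger on any input steers the victim
# model toward target_label.
```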
no code implementations • 30 Aug 2022 • Nicholas Roberts, Xintong Li, Tzu-Heng Huang, Dyah Adila, Spencer Schoenberg, Cheng-Yu Liu, Lauren Pick, Haotian Ma, Aws Albarghouthi, Frederic Sala
While it has been used successfully in many domains, weak supervision's application scope is limited by the difficulty of constructing labeling functions for domains with complex or high-dimensional features.
no code implementations • 7 Jun 2022 • Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
Datasets typically contain inaccuracies due to human error and societal biases, and these inaccuracies can affect the outcomes of models trained on such datasets.
1 code implementation • 26 May 2022 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
Machine learning models are vulnerable to data-poisoning attacks, in which an attacker maliciously modifies the training set to change the prediction of a learned model.
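A common defensive idea in this space, sketched here only to convey the intuition (the paper's certified defense is more sophisticated): train each base model on a small random subset of the data and predict by majority vote, so a handful of poisoned points can influence only a handful of base models.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Bagging-style defense sketch against data poisoning; illustrative only.

def train_bagged(X, y, n_models=50, subset_size=100, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=subset_size, replace=True)
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def predict_majority(models, x):
    """The ensemble's vote changes only if many base models flip."""
    votes = [int(m.predict(x.reshape(1, -1))[0]) for m in models]
    return max(set(votes), key=votes.count)

X = np.random.default_rng(1).normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(predict_majority(train_bagged(X, y), np.array([1.0, 1.0])))  # -> 1
```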
no code implementations • NeurIPS 2021 • Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
To certify robustness, we use a novel symbolic technique to evaluate a decision-tree learner on a large, or even infinite, number of datasets, certifying that every such dataset produces the same prediction for a specific test point.
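To make the certified property concrete, here is the naive enumeration baseline that the symbolic technique is designed to avoid, written by us for the special case of removing any single training point; retraining once per candidate dataset is exactly what does not scale.

```python
from sklearn.tree import DecisionTreeClassifier

# Naive leave-one-out certification, for intuition only; the paper's symbolic
# technique avoids this per-dataset retraining blow-up.

def certify_leave_one_out(X, y, x_test):
    base = DecisionTreeClassifier(random_state=0).fit(X, y).predict([x_test])[0]
    for i in range(len(X)):
        X_i = [x for j, x in enumerate(X) if j != i]
        y_i = [l for j, l in enumerate(y) if j != i]
        pred = DecisionTreeClassifier(random_state=0).fit(X_i, y_i).predict([x_test])[0]
        if pred != base:
            return False  # some single-point removal changes the prediction
    return True  # prediction is stable across all leave-one-out datasets
```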
no code implementations • 21 Sep 2021 • Aws Albarghouthi
Deep learning has transformed the way we think of software and what it can do.
1 code implementation • EMNLP 2021 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
Deep neural networks for natural language processing are fragile in the face of adversarial examples: small input perturbations, like synonym substitution or word duplication, which cause a neural network to change its prediction.
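To see what such a perturbation space looks like, here is a toy enumerator for synonym substitution and word duplication; the synonym table and transformation details are illustrative assumptions, not the paper's specification.

```python
from itertools import product

# Toy enumeration of the perturbation space induced by two string
# transformations: synonym substitution and word duplication.

SYNONYMS = {"good": ["great", "fine"], "movie": ["film"]}

def perturbations(sentence):
    options = []
    for w in sentence.split():
        alts = [w] + SYNONYMS.get(w, [])  # synonym substitution
        alts.append(w + " " + w)          # word duplication
        options.append(alts)
    return {" ".join(choice) for choice in product(*options)}

# A robust model must predict the same label on every string in this set.
print(sorted(perturbations("a good movie")))
```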
no code implementations • 4 Jan 2021 • Subhajit Roy, Justin Hsu, Aws Albarghouthi
We demonstrate that our approach is able to learn foundational algorithms from the differential privacy literature and significantly outperforms natural program synthesis baselines.
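One such foundational algorithm is the Laplace mechanism; the hand-written version below is for reference only, since the paper's point is to synthesize programs like this automatically.

```python
import numpy as np

# The Laplace mechanism, a foundational algorithm from the differential
# privacy literature.

def laplace_mechanism(query_result, sensitivity, epsilon, seed=None):
    """Release query_result with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    rng = np.random.default_rng(seed)
    return query_result + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query has sensitivity 1; release it with epsilon = 0.5.
print(laplace_mechanism(42, sensitivity=1.0, epsilon=0.5, seed=0))
```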
no code implementations • 1 Jan 2021 • Zi Wang, Aws Albarghouthi, Somesh Jha
To certify safety and robustness of neural networks, researchers have successfully applied abstract interpretation, primarily using interval bound propagation.
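For readers unfamiliar with interval bound propagation, below is the standard textbook construction through one affine layer and a ReLU, shown for intuition; it is not code from the paper.

```python
import numpy as np

# Textbook interval bound propagation (IBP) through affine + ReLU layers.

def affine_interval(W, b, lo, hi):
    """Propagate elementwise input bounds [lo, hi] through x -> W @ x + b.
    The lower bound pairs positive weights with lo and negative with hi."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_interval(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone

W, b = np.array([[1.0, -2.0], [0.5, 1.0]]), np.zeros(2)
lo, hi = affine_interval(W, b, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
lo, hi = relu_interval(lo, hi)
print(lo, hi)  # sound output bounds over the entire input box
```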
no code implementations • 12 Jul 2020 • Zi Wang, Aws Albarghouthi, Gautam Prakriya, Somesh Jha
This is a crucial question, as our constructive proof of IUA is exponential in the size of the approximation domain.
no code implementations • 11 Jun 2020 • Goutham Ramakrishnan, Aws Albarghouthi
Deep neural networks are vulnerable to a range of adversaries.
1 code implementation • ICML 2020 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
We then present an approach to adversarially training models that are robust to such user-defined string transformations.
1 code implementation • 7 Feb 2020 • Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, Somesh Jha, Thomas Reps
Deep neural networks are vulnerable to adversarial examples: small input perturbations that result in incorrect predictions.
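As a generic illustration of the phenomenon (not the attack studied in this paper), here is the classic fast gradient sign method on a linear classifier, where the gradient is analytic so no autodiff framework is needed.

```python
import numpy as np

# FGSM on a linear model, as a generic illustration of adversarial examples.
# For a score w @ x + b with label y in {-1, +1}, the margin y * (w @ x + b)
# has gradient y * w w.r.t. x, so the worst-case L-infinity perturbation of
# budget eps is -eps * sign(y * w).

def fgsm_linear(w, b, x, y, eps):
    grad_margin = y * w                    # d/dx of y * (w @ x + b)
    return x - eps * np.sign(grad_margin)  # step that decreases the margin

w, b = np.array([1.0, -1.0]), 0.0
x, y = np.array([0.3, 0.1]), 1
x_adv = fgsm_linear(w, b, x, y, eps=0.25)
print(np.sign(w @ x + b), np.sign(w @ x_adv + b))  # prediction flips: 1 -> -1
```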
no code implementations • 2 Dec 2019 • Samuel Drews, Aws Albarghouthi, Loris D'Antoni
Machine learning models are brittle, and small changes in the training data can result in different predictions.
1 code implementation • 30 Sep 2019 • Goutham Ramakrishnan, Yun Chan Lee, Aws Albarghouthi
When a model makes a consequential decision, e.g., denying someone a loan, it also needs to generate actionable, realistic feedback on what the person can do to favorably change the decision.
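A toy version of such feedback on a linear credit model, with made-up features and weights (the paper's approach handles richer models and realism constraints): find the smallest change to one actionable feature that flips a denial into an approval.

```python
import numpy as np

# Toy recourse sketch; the linear model and features are our assumptions.

def minimal_recourse(w, b, x, feature):
    """Smallest change to x[feature] that makes w @ x + b >= 0 (approval)."""
    score = w @ x + b
    if score >= 0 or w[feature] == 0:
        return None  # already approved, or this feature cannot help
    x_new = x.copy()
    x_new[feature] += -score / w[feature]  # solve w @ x' + b = 0 on one axis
    return x_new

w, b = np.array([0.5, 1.0]), -3.0       # features: [years_employed, income/10k]
x = np.array([2.0, 1.0])                # denied: score = -1.0
print(minimal_recourse(w, b, x, feature=1))  # raise income to flip the decision
```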
no code implementations • 18 Jun 2019 • Newsha Ardalani, Urmish Thakker, Aws Albarghouthi, Karu Sankaralingam
Porting code from CPU to GPU is costly and time-consuming; unless substantial time is invested in development and optimization, it is not obvious, a priori, how much speed-up is achievable or how much room is left for improvement.
no code implementations • 11 Sep 2018 • Jinman Zhao, Aws Albarghouthi, Vaibhav Rastogi, Somesh Jha, Damien Octeau
We address the problem of discovering communication links between applications in the popular Android mobile operating system, an important problem for security and privacy in Android.
no code implementations • 17 Feb 2017 • Aws Albarghouthi, Loris D'Antoni, Samuel Drews, Aditya Nori
With the range and sensitivity of algorithmic decisions expanding at breakneck speed, it is imperative that we aggressively investigate whether programs are biased.
no code implementations • 19 Oct 2016 • Aws Albarghouthi, Loris D'Antoni, Samuel Drews, Aditya Nori
We explore the following question: Is a decision-making program fair, for some useful definition of fairness?
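One commonly used definition is demographic parity; below is the empirical, sampled analogue of that property (the paper verifies such properties against a probabilistic population model rather than a finite sample).

```python
import numpy as np

# Empirical demographic parity gap; the sampled check is our illustration of
# the property, not the paper's verification procedure.

def demographic_parity_gap(decisions, group):
    """|P(accept | group=1) - P(accept | group=0)| over sampled individuals."""
    decisions, group = np.asarray(decisions), np.asarray(group)
    rate1 = decisions[group == 1].mean()
    rate0 = decisions[group == 0].mean()
    return abs(rate1 - rate0)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]
group     = [1, 1, 1, 1, 0, 0, 0, 0]
print(demographic_parity_gap(decisions, group))  # 0.75 vs 0.25 -> gap 0.5
```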