Search Results for author: Insup Lee

Found 49 papers, 19 papers with code

On the Calibration of Multilingual Question Answering LLMs

no code implementations 15 Nov 2023 Yahan Yang, Soham Dan, Dan Roth, Insup Lee

We also conduct several ablation experiments to study the effect of language distances, language corpus size, and model size on calibration, and how multilingual models compare with their monolingual counterparts for diverse tasks and languages.

Cross-Lingual Transfer · Data Augmentation +3

Testing learning-enabled cyber-physical systems with Large-Language Models: A Formal Approach

no code implementations 13 Nov 2023 Xi Zheng, Aloysius K. Mok, Ruzica Piskac, Yong Jae Lee, Bhaskar Krishnamachari, Dakai Zhu, Oleg Sokolsky, Insup Lee

The integration of machine learning (ML) into cyber-physical systems (CPS) offers significant benefits, including enhanced efficiency, predictive capabilities, real-time responsiveness, and the enabling of autonomous operations.

Autonomous Vehicles

PAC Prediction Sets Under Label Shift

1 code implementation 19 Oct 2023 Wenwen Si, Sangdon Park, Insup Lee, Edgar Dobriban, Osbert Bastani

We propose a novel algorithm for constructing prediction sets with PAC guarantees in the label shift setting.

Uncertainty Quantification

Memory-Consistent Neural Networks for Imitation Learning

no code implementations 9 Oct 2023 Kaustubh Sridhar, Souradeep Dutta, Dinesh Jayaraman, James Weimer, Insup Lee

Imitation learning considerably simplifies policy synthesis compared to alternative approaches by exploiting access to expert demonstrations.

Imitation Learning

IBCL: Zero-shot Model Generation for Task Trade-offs in Continual Learning

1 code implementation 4 Oct 2023 Pengyuan Lu, Michele Caprio, Eric Eaton, Insup Lee

Upon a new task, IBCL (1) updates a knowledge base in the form of a convex hull of model parameter distributions and (2) obtains particular models that address task trade-off preferences in a zero-shot manner.

Continual Learning · Image Classification +1
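The two IBCL steps described in the excerpt can be illustrated with a heavily simplified sketch, assuming each task's parameter distribution is an independent Gaussian; `zero_shot_model`, `kb_means`, and `kb_stds` are hypothetical names for illustration, not the paper's API:

```python
import numpy as np

def zero_shot_model(kb_means, kb_stds, pref, rng):
    """Pick a point in the convex hull of per-task parameter
    distributions via a preference vector, then sample one concrete
    model from it, with no further training on the new task."""
    pref = np.asarray(pref, dtype=float)
    pref = pref / pref.sum()         # convex-combination weights
    mean = pref @ kb_means           # blend the task posteriors
    std = pref @ kb_stds
    return rng.normal(mean, std)     # sampled model parameters
```

A preference of `[1, 0]` recovers a model specialized to the first task, while `[0.5, 0.5]` trades the two tasks off evenly.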

A Novel Bayes' Theorem for Upper Probabilities

no code implementations 13 Jul 2023 Michele Caprio, Yusuf Sale, Eyke Hüllermeier, Insup Lee

In their seminal 1990 paper, Wasserman and Kadane establish an upper bound for the Bayes' posterior probability of a measurable set $A$, when the prior lies in a class of probability measures $\mathcal{P}$ and the likelihood is precise.

Model Predictive Control

TRAQ: Trustworthy Retrieval Augmented Question Answering via Conformal Prediction

1 code implementation 7 Jul 2023 Shuo Li, Sangdon Park, Insup Lee, Osbert Bastani

To address this challenge, we propose Trustworthy Retrieval Augmented Question Answering ($\textit{TRAQ}$), which provides the first end-to-end statistical correctness guarantee for RAG.

Bayesian Optimization · Chatbot +4

IBCL: Zero-shot Model Generation for Task Trade-offs in Continual Learning

1 code implementation 24 May 2023 Pengyuan Lu, Michele Caprio, Eric Eaton, Insup Lee

Upon a new task, IBCL (1) updates a knowledge base in the form of a convex hull of model parameter distributions and (2) obtains particular models that address task trade-off preferences in a zero-shot manner.

Continual Learning · Image Classification +1

Fulfilling Formal Specifications ASAP by Model-free Reinforcement Learning

no code implementations 25 Apr 2023 Mengyu Liu, Pengyuan Lu, Xin Chen, Fanxin Kong, Oleg Sokolsky, Insup Lee

We propose a model-free reinforcement learning solution, namely the ASAP-Phi framework, to encourage an agent to fulfill a formal specification ASAP.

reinforcement-learning

Causal Repair of Learning-enabled Cyber-physical Systems

no code implementations 6 Apr 2023 Pengyuan Lu, Ivan Ruchkin, Matthew Cleaveland, Oleg Sokolsky, Insup Lee

However, given the high diversity and complexity of LECs, it is challenging to encode domain knowledge (e.g., the CPS dynamics) in a scalable actual causality model that could generate useful repair suggestions.

counterfactual · OpenAI Gym

Conformal Prediction Regions for Time Series using Linear Complementarity Programming

1 code implementation 3 Apr 2023 Matthew Cleaveland, Insup Lee, George J. Pappas, Lars Lindemann

In fact, to obtain prediction regions over $T$ time steps with confidence $1-\delta$, previous works require that each individual prediction region is valid with confidence $1-\delta/T$.

Conformal Prediction · Time Series +1
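The union-bound baseline the excerpt refers to can be sketched with split conformal prediction; this is an illustrative sketch of that baseline, not the paper's linear-complementarity method, and the names are my own:

```python
import numpy as np

def bonferroni_conformal_radii(cal_scores, delta):
    """Per-step split conformal regions with a Bonferroni union bound.

    cal_scores: (n, T) nonconformity scores (e.g., absolute forecast
    errors) on a held-out calibration set, one column per time step.
    Each step is calibrated at level 1 - delta/T, so all T regions
    hold jointly with probability at least 1 - delta.
    """
    n, T = cal_scores.shape
    q = min(np.ceil((n + 1) * (1.0 - delta / T)) / n, 1.0)
    return np.quantile(cal_scores, q, axis=0, method="higher")
```

With delta = 0.1 and T = 10, every per-step region must be valid at the 99% level; that conservatism is what the paper's optimization-based construction aims to reduce.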

Using Semantic Information for Defining and Detecting OOD Inputs

no code implementations 21 Feb 2023 Ramneet Kaur, Xiayan Ji, Souradeep Dutta, Michele Caprio, Yahan Yang, Elena Bernardis, Oleg Sokolsky, Insup Lee

This can render the current OOD detectors impermeable to inputs lying outside the training distribution but with the same semantic information (e.g., training class labels).

Anomaly Detection · Out of Distribution (OOD) Detection

Credal Bayesian Deep Learning

no code implementations 19 Feb 2023 Michele Caprio, Souradeep Dutta, Kuk Jin Jang, Vivian Lin, Radoslav Ivanov, Oleg Sokolsky, Insup Lee

We show that CBDL is better at quantifying and disentangling different types of uncertainty than single BNNs, ensembles of BNNs, and Bayesian model averaging.

Autonomous Driving · motion prediction +1

In and Out-of-Domain Text Adversarial Robustness via Label Smoothing

no code implementations 20 Dec 2022 Yahan Yang, Soham Dan, Dan Roth, Insup Lee

Recently it has been shown that state-of-the-art NLP models are vulnerable to adversarial attacks, where the predictions of a model can be drastically altered by slight modifications to the input (such as synonym substitutions).

Adversarial Robustness

Guaranteed Conformance of Neurosymbolic Models to Natural Constraints

1 code implementation 2 Dec 2022 Kaustubh Sridhar, Souradeep Dutta, James Weimer, Insup Lee

Next, using these memories we partition the state space into disjoint subsets and compute bounds that should be respected by the neural network in each subset.
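The partition-and-bound step described above can be illustrated with a small sketch, assuming scalar outputs and a Voronoi partition induced by the memories; `memory_partition_bounds` and its arguments are my own names, not the paper's:

```python
import numpy as np

def memory_partition_bounds(memories, train_x, train_y, margin):
    """Assign each training point to its nearest memory (a Voronoi
    partition of the state space), then derive per-cell output bounds
    that a constrained network should respect inside each cell."""
    # Pairwise distances: (num_points, num_memories)
    d = np.linalg.norm(train_x[:, None, :] - memories[None, :, :], axis=2)
    cell = d.argmin(axis=1)
    bounds = {}
    for c in range(len(memories)):
        ys = train_y[cell == c]
        if len(ys):
            bounds[c] = (ys.min() - margin, ys.max() + margin)
    return cell, bounds
```

A network's prediction on a state can then be checked, or clipped, against the interval of the cell that state falls in.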

CODiT: Conformal Out-of-Distribution Detection in Time-Series Data

1 code implementation 24 Jul 2022 Ramneet Kaur, Kaustubh Sridhar, Sangdon Park, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee

Machine learning models are prone to making incorrect predictions on inputs that are far from the training distribution.

Anomaly Detection · Autonomous Driving +6

PAC Prediction Sets for Meta-Learning

no code implementations 6 Jul 2022 Sangdon Park, Edgar Dobriban, Insup Lee, Osbert Bastani

Uncertainty quantification is a key component of machine learning models targeted at safety-critical systems such as in healthcare or autonomous vehicles.

Autonomous Vehicles · Meta-Learning +1

Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations

1 code implementation 13 Jun 2022 Kaustubh Sridhar, Souradeep Dutta, Ramneet Kaur, James Weimer, Oleg Sokolsky, Insup Lee

Algorithm design of AT and its variants is focused on training models at a specified perturbation strength $\epsilon$, using only the feedback from the performance of that $\epsilon$-robust model to improve the algorithm.

Adversarial Robustness · Quantization

Memory Classifiers: Two-stage Classification for Robustness in Machine Learning

no code implementations 10 Jun 2022 Souradeep Dutta, Yahan Yang, Elena Bernardis, Edgar Dobriban, Insup Lee

We propose a new method for classification which can improve robustness to distribution shifts, by combining expert knowledge about the "high-level" structure of the data with standard classifiers.

BIG-bench Machine Learning · Classification +3

PAC-Wrap: Semi-Supervised PAC Anomaly Detection

no code implementations 22 May 2022 Shuo Li, Xiayan Ji, Edgar Dobriban, Oleg Sokolsky, Insup Lee

Anomaly detection is essential for preventing hazardous outcomes for safety-critical applications like autonomous driving.

Autonomous Driving · Unsupervised Anomaly Detection

Towards PAC Multi-Object Detection and Tracking

no code implementations 15 Apr 2022 Shuo Li, Sangdon Park, Xiayan Ji, Insup Lee, Osbert Bastani

Accurately detecting and tracking multiple objects is important for safety-critical applications such as autonomous navigation.

Autonomous Navigation · Conformal Prediction +3

Confidence Composition for Monitors of Verification Assumptions

1 code implementation 3 Nov 2021 Ivan Ruchkin, Matthew Cleaveland, Radoslav Ivanov, Pengyuan Lu, Taylor Carpenter, Oleg Sokolsky, Insup Lee

To predict safety violations in a verified system, we propose a three-step confidence composition (CoCo) framework for monitoring verification assumptions.

Mako: Semi-supervised continual learning with minimal labeled data via data programming

no code implementations 29 Sep 2021 Pengyuan Lu, Seungwon Lee, Amanda Watson, David Kent, Insup Lee, Eric Eaton, James Weimer

This tool achieves performance similar to training on fully labeled data, in terms of per-task accuracy and resistance to catastrophic forgetting.

Continual Learning · Image Classification

Sequential Covariate Shift Detection Using Classifier Two-Sample Tests

no code implementations 29 Sep 2021 Sooyong Jang, Sangdon Park, Insup Lee, Osbert Bastani

This problem can naturally be solved using a two-sample test, i.e., testing whether the current test distribution of covariates equals the training distribution of covariates.

Vocal Bursts Valence Prediction
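The classifier two-sample test works by training a classifier to distinguish training-time covariates from current covariates; held-out accuracy significantly above chance indicates the distributions differ. Below is a numpy-only sketch with a nearest-centroid classifier standing in for a learned one and a normal approximation to the binomial test; the paper's sequential procedure is more involved, and all names here are my own:

```python
import numpy as np
from math import erf, sqrt

def classifier_two_sample_test(x_train, x_test, seed=0):
    """Return (held-out accuracy, one-sided p-value) for the null
    hypothesis that the two covariate samples share a distribution."""
    X = np.vstack([x_train, x_test])
    y = np.concatenate([np.zeros(len(x_train)), np.ones(len(x_test))])
    idx = np.random.default_rng(seed).permutation(len(y))
    fit, ev = idx[:len(y) // 2], idx[len(y) // 2:]
    # Nearest-centroid "classifier" fit on one half of the data.
    c0 = X[fit][y[fit] == 0].mean(axis=0)
    c1 = X[fit][y[fit] == 1].mean(axis=0)
    pred = (np.linalg.norm(X[ev] - c1, axis=1)
            < np.linalg.norm(X[ev] - c0, axis=1)).astype(float)
    acc = float((pred == y[ev]).mean())
    # Normal approximation to the binomial test against chance (0.5).
    z = (acc - 0.5) * 2.0 * sqrt(len(ev))
    return acc, 0.5 * (1.0 - erf(z / sqrt(2.0)))
```

A small p-value rejects the null of equal covariate distributions, i.e., it flags covariate shift.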

Detecting OODs as datapoints with High Uncertainty

no code implementations 13 Aug 2021 Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Oleg Sokolsky, Insup Lee

We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detection of OODs as datapoints with high uncertainty (epistemic or aleatoric).

Autonomous Driving · Management +2
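One common recipe for the ensemble part, not necessarily the paper's exact detector, is the entropy decomposition: the total entropy of the averaged prediction splits into aleatoric uncertainty (mean member entropy) plus epistemic uncertainty (the mutual-information gap):

```python
import numpy as np

def uncertainty_decomposition(member_probs):
    """member_probs: (M, K) class probabilities from M ensemble members.
    Returns (total, aleatoric, epistemic) in nats; high values of
    either component can flag an input as OOD."""
    eps = 1e-12                      # avoid log(0)
    mean_p = member_probs.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))
    aleatoric = -np.mean(
        np.sum(member_probs * np.log(member_probs + eps), axis=1))
    return total, aleatoric, total - aleatoric
```

Members that agree on a confident answer give near-zero epistemic uncertainty; members that disagree push it up even when each member alone is confident.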

PAC Prediction Sets Under Covariate Shift

1 code implementation ICLR 2022 Sangdon Park, Edgar Dobriban, Insup Lee, Osbert Bastani

Our approach focuses on the setting where there is a covariate shift from the source distribution (where we have labeled training examples) to the target distribution (for which we want to quantify uncertainty).

Uncertainty Quantification

ModelGuard: Runtime Validation of Lipschitz-continuous Models

no code implementations 30 Apr 2021 Taylor J. Carpenter, Radoslav Ivanov, Insup Lee, James Weimer

This paper presents ModelGuard, a sampling-based approach to runtime model validation for Lipschitz-continuous models.

Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs

no code implementations 23 Mar 2021 Ramneet Kaur, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee

Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution (OOD) inputs.

Autonomous Driving · Management +1

Confidence Calibration with Bounded Error Using Transformations

no code implementations 25 Feb 2021 Sooyong Jang, Radoslav Ivanov, Insup Lee, James Weimer

As machine learning techniques become widely adopted in new domains, especially in safety-critical systems such as autonomous vehicles, it is crucial to provide accurate output uncertainty estimation.

Autonomous Vehicles

Improving Classifier Confidence using Lossy Label-Invariant Transformations

no code implementations 9 Nov 2020 Sooyong Jang, Insup Lee, James Weimer

Providing reliable model uncertainty estimates is imperative to enabling robust decision making by autonomous agents and humans alike.

Decision Making

PAC Confidence Predictions for Deep Neural Network Classifiers

no code implementations ICLR 2021 Sangdon Park, Shuo Li, Insup Lee, Osbert Bastani

In our experiments, we demonstrate that our approach can be used to provide guarantees for state-of-the-art DNNs.

A Skew-Sensitive Evaluation Framework for Imbalanced Data Classification

1 code implementation 12 Oct 2020 Min Du, Nesime Tatbul, Brian Rivers, Akhilesh Kumar Gupta, Lucas Hu, Wei Wang, Ryan Marcus, Shengtian Zhou, Insup Lee, Justin Gottschlich

Class distribution skews in imbalanced datasets may lead to models with prediction bias towards majority classes, making fair assessment of classifiers a challenging task.

Classification · General Classification

Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation

no code implementations 29 Feb 2020 Sangdon Park, Osbert Bastani, James Weimer, Insup Lee

Our algorithm uses importance weighting to correct for the shift from the training to the real-world distribution.

Unsupervised Domain Adaptation
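Importance weighting reweights source-distribution samples by the target-to-source density ratio, so that averages computed on training data estimate real-world quantities. A minimal self-normalized sketch (my own names; the paper applies the idea to calibration rather than to a plain mean):

```python
import numpy as np

def importance_weighted_mean(values, log_w):
    """Self-normalized importance-weighted estimate of a target-
    distribution expectation, computed from source-distribution samples.
    log_w[i] = log p_target(x_i) - log p_source(x_i)."""
    w = np.exp(log_w - np.max(log_w))   # shift for numerical stability
    w = w / w.sum()
    return float(w @ values)
```

For example, with source $N(0,1)$ and target $N(1,1)$ the log density ratio is $x - 0.5$, and the weighted mean of source samples recovers the target mean of 1, even though the unweighted mean stays near 0.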

Real-Time Detectors for Digital and Physical Adversarial Inputs to Perception Systems

no code implementations 23 Feb 2020 Yiannis Kantaros, Taylor Carpenter, Kaustubh Sridhar, Yahan Yang, Insup Lee, James Weimer

To highlight this, we demonstrate the efficiency of the proposed detector on ImageNet, a task that is computationally challenging for the majority of relevant defenses, and on physically attacked traffic signs that may be encountered in real-time autonomy applications.

PAC Confidence Sets for Deep Neural Networks via Calibrated Prediction

1 code implementation ICLR 2020 Sangdon Park, Osbert Bastani, Nikolai Matni, Insup Lee

We propose an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees, i.e., the confidence set for a given input contains the true label with high probability.

Generalization Bounds · Learning Theory +3
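The construction can be caricatured as thresholding calibrated label probabilities, with a Hoeffding-style slack standing in for the paper's generalization bound; this is an illustrative simplification, and both the names and the specific bound are my own:

```python
import numpy as np

def pac_confidence_sets(cal_probs, cal_labels, eps, delta):
    """Choose a threshold tau so the set {y : p(y|x) >= tau} misses the
    true label at rate <= eps, with probability >= 1 - delta over the
    calibration draw (via a Hoeffding slack on the empirical rate)."""
    n = len(cal_labels)
    true_p = cal_probs[np.arange(n), cal_labels]   # prob of true label
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    budget = eps - slack          # empirical miscoverage we may spend
    if budget <= 0:
        return 0.0                # too little data: include all labels
    # Largest tau whose empirical miscoverage stays within the budget.
    return float(np.quantile(true_p, budget, method="lower"))
```

At prediction time, the confidence set for an input is every label whose calibrated probability is at least `tau`.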

Reinforcement Learning for Temporal Logic Control Synthesis with Probabilistic Satisfaction Guarantees

1 code implementation 11 Sep 2019 Mohammadhosein Hasanbeig, Yiannis Kantaros, Alessandro Abate, Daniel Kroening, George J. Pappas, Insup Lee

Reinforcement Learning (RL) has emerged as a method of choice for solving complex sequential decision-making problems in automatic control, computer science, economics, and biology.

Decision Making · Decision Making Under Uncertainty +4

Verisig: verifying safety properties of hybrid systems with neural network controllers

1 code implementation 5 Nov 2018 Radoslav Ivanov, James Weimer, Rajeev Alur, George J. Pappas, Insup Lee

This paper presents Verisig, a hybrid system approach to verifying safety properties of closed-loop systems using neural networks as controllers.

Systems and Control

Resilient Linear Classification: An Approach to Deal with Attacks on Training Data

no code implementations 10 Aug 2017 Sangdon Park, James Weimer, Insup Lee

Specifically, a generic metric is proposed that is tailored to measure resilience of classification algorithms with respect to worst-case tampering of the training data.

Autonomous Vehicles · Classification +3
