Search Results for author: Sangdon Park

Found 18 papers, 7 papers with code

MedBN: Robust Test-Time Adaptation against Malicious Test Samples

no code implementations28 Mar 2024 Hyejin Park, Jeongyeon Hwang, Sunung Mun, Sangdon Park, Jungseul Ok

In response to the emerging threat, we propose median batch normalization (MedBN), leveraging the robustness of the median for statistics estimation within the batch normalization layer during test-time inference.

Test-time Adaptation
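
Below is a minimal sketch of the median-based normalization idea described in the MedBN abstract above (not the authors' implementation): the batch mean is replaced by the batch median so that a few malicious samples in a test batch cannot arbitrarily skew the normalization statistics. The robust scale estimate, shapes, and `eps` are assumptions.

```python
# Sketch of the MedBN idea, not the authors' implementation; the robust
# scale estimate below is an assumption made for illustration.
import numpy as np

def median_batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a batch x of shape (N, C) with median-based statistics."""
    med = np.median(x, axis=0, keepdims=True)                # robust location estimate
    scale = np.median(np.abs(x - med), axis=0, keepdims=True) + eps  # robust scale
    x_hat = (x - med) / scale
    return gamma * x_hat + beta

# Usage: a benign batch with one adversarial outlier barely moves the median.
rng = np.random.default_rng(0)
batch = rng.normal(0.0, 1.0, size=(64, 8))
batch[0] = 1e3                                               # malicious test sample
out = median_batch_norm(batch, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(), out.std())
```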

PAC Prediction Sets Under Label Shift

1 code implementation19 Oct 2023 Wenwen Si, Sangdon Park, Insup Lee, Edgar Dobriban, Osbert Bastani

We propose a novel algorithm for constructing prediction sets with PAC guarantees in the label shift setting.

Uncertainty Quantification
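
The abstract does not spell out the construction, so the sketch below is only a generic illustration of how label-shift correction is commonly combined with prediction-set calibration: estimate class weights $w(y) = q(y)/p(y)$ with a BBSE-style confusion-matrix inversion, then weight the calibration scores by $w(y_i)$. A full PAC guarantee would additionally need a finite-sample correction such as the binomial bound sketched further down for the ICLR 2020 paper.

```python
# A generic illustration, not the paper's algorithm. Assumptions: a BBSE-style
# weight estimate and a simple weighted-quantile threshold for conformity scores.
import numpy as np

def estimate_label_shift_weights(joint_conf_matrix, target_pred_marginal):
    """joint_conf_matrix[i, j] = P(predict i, true j) on held-out source data;
    target_pred_marginal[i] = fraction of target inputs predicted as class i.
    Solving C w = mu gives w[j] ~= q(y = j) / p(y = j)."""
    return np.linalg.solve(joint_conf_matrix, target_pred_marginal)

def weighted_set_threshold(true_label_scores, labels, class_weights, alpha=0.1):
    """Largest calibration score whose strictly-smaller weighted mass stays below alpha."""
    w = class_weights[labels]
    order = np.argsort(true_label_scores)
    cum = np.cumsum(w[order]) / w.sum()
    idx = np.searchsorted(cum, alpha)          # first index reaching weighted mass alpha
    return true_label_scores[order][idx]

def prediction_set(label_scores, tau):
    """Include every label whose conformity score reaches the threshold."""
    return np.flatnonzero(label_scores >= tau)
```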

PAC Neural Prediction Set Learning to Quantify the Uncertainty of Generative Language Models

no code implementations18 Jul 2023 Sangdon Park, Taesoo Kim

Uncertainty learning and quantification of models are crucial tasks to enhance the trustworthiness of the models.

Uncertainty Quantification

TRAQ: Trustworthy Retrieval Augmented Question Answering via Conformal Prediction

1 code implementation7 Jul 2023 Shuo Li, Sangdon Park, Insup Lee, Osbert Bastani

To address this challenge, we propose Trustworthy Retrieval Augmented Question Answering ($\textit{TRAQ}$), which provides the first end-to-end statistical correctness guarantee for retrieval-augmented generation (RAG).

Bayesian Optimization Chatbot +4
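
A rough sketch of the kind of two-stage conformal construction the title suggests (not TRAQ itself, whose construction and Bayesian-optimization step are more involved): build one conformal set over retrieved passages and one over generated answers, and split the error budget between the two stages via a union bound. The scoring functions and calibration scores below are placeholders.

```python
# A rough sketch only; TRAQ's actual construction is more involved.
import numpy as np

def conformal_threshold(cal_scores, alpha):
    """Split-conformal threshold: roughly a (1 - alpha) fraction of held-out
    true items score at or above the returned value."""
    n = len(cal_scores)
    k = int(np.floor(alpha * (n + 1))) - 1      # conservative rank (0-indexed)
    return np.sort(cal_scores)[max(k, 0)]

def rag_prediction_set(passage_scores, answer_scores, tau_ret, tau_gen):
    """Keep every passage and every candidate answer that clears its threshold."""
    passages = [i for i, s in enumerate(passage_scores) if s >= tau_ret]
    answers = [a for a, s in answer_scores.items() if s >= tau_gen]
    return passages, answers

# Split an overall error budget alpha = 0.1 across the two stages (union bound).
alpha = 0.1
rng = np.random.default_rng(0)
tau_ret = conformal_threshold(rng.random(500), alpha / 2)   # placeholder scores
tau_gen = conformal_threshold(rng.random(500), alpha / 2)
```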

ACon$^2$: Adaptive Conformal Consensus for Provable Blockchain Oracles

1 code implementation17 Nov 2022 Sangdon Park, Osbert Bastani, Taesoo Kim

To address the oracle problem, we propose an adaptive conformal consensus (ACon$^2$) algorithm that derives a consensus set of data from multiple oracle contracts using recent advances in online uncertainty quantification learning.

Uncertainty Quantification
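
A simplified sketch of the consensus idea, with made-up details: each data source maintains its own (online conformal) interval, and the consensus set keeps only values covered by a quorum of sources, so a minority of manipulated sources cannot move the consensus on its own.

```python
# Simplified illustration of a quorum-based consensus set; the intervals,
# quorum rule, and online conformal updates in ACon$^2$ itself differ in detail.
def consensus_interval(intervals, quorum):
    """intervals: list of (lo, hi); return the range of values covered by >= quorum sources."""
    points = sorted({p for lo, hi in intervals for p in (lo, hi)})
    covered = [p for p in points
               if sum(lo <= p <= hi for lo, hi in intervals) >= quorum]
    return (min(covered), max(covered)) if covered else None

# Two honest sources agree; one manipulated source reports a wild interval.
print(consensus_interval([(99.0, 101.0), (98.5, 100.5), (500.0, 600.0)], quorum=2))
```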

CODiT: Conformal Out-of-Distribution Detection in Time-Series Data

1 code implementation24 Jul 2022 Ramneet Kaur, Kaustubh Sridhar, Sangdon Park, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee

Machine learning models are prone to making incorrect predictions on inputs that are far from the training distribution.

Anomaly Detection Autonomous Driving +6

PAC Prediction Sets for Meta-Learning

no code implementations6 Jul 2022 Sangdon Park, Edgar Dobriban, Insup Lee, Osbert Bastani

Uncertainty quantification is a key component of machine learning models targeted at safety-critical systems such as in healthcare or autonomous vehicles.

Autonomous Vehicles Meta-Learning +1

Towards PAC Multi-Object Detection and Tracking

no code implementations15 Apr 2022 Shuo Li, Sangdon Park, Xiayan Ji, Insup Lee, Osbert Bastani

Accurately detecting and tracking multiple objects is important for safety-critical applications such as autonomous navigation.

Autonomous Navigation Conformal Prediction +3

Sequential Covariate Shift Detection Using Classifier Two-Sample Tests

no code implementations29 Sep 2021 Sooyong Jang, Sangdon Park, Insup Lee, Osbert Bastani

This problem can naturally be solved using a two-sample test---i.e., testing whether the current test distribution of covariates equals the training distribution of covariates.

Vocal Bursts Valence Prediction
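
For context, a minimal sketch of a (non-sequential) classifier two-sample test, the general technique the abstract refers to: label source covariates 0 and current test covariates 1, train a classifier, and flag a covariate shift if its held-out accuracy is significantly above chance. The paper's sequential procedure adds machinery not shown here.

```python
# Generic classifier two-sample test sketch; the classifier choice, split,
# and normal-approximation test statistic are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def classifier_two_sample_test(x_source, x_test):
    x = np.vstack([x_source, x_test])
    y = np.concatenate([np.zeros(len(x_source)), np.ones(len(x_test))])
    x_tr, x_ho, y_tr, y_ho = train_test_split(x, y, test_size=0.5, random_state=0)
    acc = LogisticRegression(max_iter=1000).fit(x_tr, y_tr).score(x_ho, y_ho)
    # normal approximation to the binomial null (accuracy = 0.5 under no shift)
    z = (acc - 0.5) / np.sqrt(0.25 / len(y_ho))
    return acc, z                                   # large z => reject "same distribution"

rng = np.random.default_rng(0)
print(classifier_two_sample_test(rng.normal(0.0, 1.0, (500, 5)),
                                 rng.normal(0.5, 1.0, (500, 5))))
```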

Detecting OODs as datapoints with High Uncertainty

no code implementations13 Aug 2021 Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Oleg Sokolsky, Insup Lee

We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detection of OODs as datapoints with high uncertainty (epistemic or aleatoric).

Autonomous Driving Management +2
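
A schematic sketch of the high-level idea only (not the paper's detectors): score each input with an aleatoric signal (entropy of the mean prediction) and an epistemic signal (disagreement across ensemble members), and flag it as OOD if either score is high. The thresholds are placeholders.

```python
# Schematic only; the specific uncertainty estimators and thresholds used in
# the paper are not reproduced here.
import numpy as np

def uncertainty_scores(ensemble_probs):
    """ensemble_probs: array (M, K) of class probabilities from M ensemble members."""
    mean_p = ensemble_probs.mean(axis=0)
    aleatoric = -(mean_p * np.log(mean_p + 1e-12)).sum()    # entropy of mean prediction
    epistemic = ensemble_probs.var(axis=0).sum()            # member disagreement
    return aleatoric, epistemic

def is_ood(ensemble_probs, tau_alea=1.0, tau_epi=0.05):
    a, e = uncertainty_scores(ensemble_probs)
    return a > tau_alea or e > tau_epi                      # high uncertainty of either kind
```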

PAC Prediction Sets Under Covariate Shift

1 code implementation ICLR 2022 Sangdon Park, Edgar Dobriban, Insup Lee, Osbert Bastani

Our approach focuses on the setting where there is a covariate shift from the source distribution (where we have labeled training examples) to the target distribution (for which we want to quantify uncertainty).

Uncertainty Quantification

PAC Confidence Predictions for Deep Neural Network Classifiers

no code implementations ICLR 2021 Sangdon Park, Shuo Li, Insup Lee, Osbert Bastani

In our experiments, we demonstrate that our approach can be used to provide guarantees for state-of-the-art DNNs.

Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation

no code implementations29 Feb 2020 Sangdon Park, Osbert Bastani, James Weimer, Insup Lee

Our algorithm uses importance weighting to correct for the shift from the training to the real-world distribution.

Unsupervised Domain Adaptation
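
A generic sketch of the importance-weighting step mentioned in the abstract (the paper's calibration procedure has more to it): estimate $w(x) = p_{\text{target}}(x)/p_{\text{source}}(x)$ with a domain classifier, then reweight source calibration examples by $w(x)$ so the calibration statistics reflect the target distribution.

```python
# Generic importance-weighting sketch; the domain classifier and the weighted
# calibration statistic below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(x_source, x_target):
    x = np.vstack([x_source, x_target])
    d = np.concatenate([np.zeros(len(x_source)), np.ones(len(x_target))])
    clf = LogisticRegression(max_iter=1000).fit(x, d)
    p = clf.predict_proba(x_source)[:, 1]                   # P(target | x)
    return (p / (1.0 - p)) * (len(x_source) / len(x_target))  # odds -> density ratio

def weighted_calibration_gap(confidences, correct, weights):
    """Importance-weighted gap between average confidence and accuracy."""
    w = weights / weights.sum()
    return abs(np.sum(w * confidences) - np.sum(w * correct))
```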

PAC Confidence Sets for Deep Neural Networks via Calibrated Prediction

1 code implementation ICLR 2020 Sangdon Park, Osbert Bastani, Nikolai Matni, Insup Lee

We propose an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees---i.e., the confidence set for a given input contains the true label with high probability.

Generalization Bounds Learning Theory +3
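
A minimal sketch of the thresholding idea (the exact bound and selection rule are assumptions, not the paper's construction): the confidence set for an input is every label whose calibrated probability exceeds a threshold, and the threshold is chosen on a held-out set so that, by a binomial tail bound, the miscoverage rate is below $\epsilon$ with probability at least $1-\delta$.

```python
# Sketch only; the bound and the threshold search are illustrative assumptions.
import numpy as np
from scipy.stats import binom

def pac_threshold(true_label_probs, epsilon=0.05, delta=0.01):
    """Largest tau whose held-out miscoverage is certified below epsilon."""
    n = len(true_label_probs)
    best = 0.0
    for tau in np.sort(true_label_probs):
        k = int(np.sum(true_label_probs < tau))    # held-out misses at threshold tau
        # Clopper-Pearson-style check: reject "miscoverage >= epsilon" at level delta
        if binom.cdf(k, n, epsilon) <= delta:
            best = tau
        else:
            break
    return best

def confidence_set(calibrated_probs, tau):
    """All labels whose calibrated probability reaches the certified threshold."""
    return np.flatnonzero(calibrated_probs >= tau)
```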

Resilient Linear Classification: An Approach to Deal with Attacks on Training Data

no code implementations10 Aug 2017 Sangdon Park, James Weimer, Insup Lee

Specifically, a generic metric is proposed that is tailored to measure resilience of classification algorithms with respect to worst-case tampering of the training data.

Autonomous Vehicles Classification +3
