Search Results for author: Jeremiah Zhe Liu

Found 13 papers, 4 papers with code

Towards Collaborative Neural-Symbolic Graph Semantic Parsing via Uncertainty

no code implementations • Findings (ACL) 2022 • Zi Lin, Jeremiah Zhe Liu, Jingbo Shang

Recent work in task-independent graph semantic parsing has shifted from grammar-based symbolic approaches to neural models, showing strong performance on different types of meaning representations.

Semantic Parsing

A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models

no code implementations • 13 Feb 2023 • James Urquhart Allingham, Jie Ren, Michael W Dusenberry, Xiuye Gu, Yin Cui, Dustin Tran, Jeremiah Zhe Liu, Balaji Lakshminarayanan

In particular, we ask: "Given a large pool of prompts, can we automatically score the prompts and ensemble those that are most suitable for a particular downstream dataset, without needing access to labeled validation data?"

Prompt Engineering • Zero-Shot Learning
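The question above admits a simple mechanical reading: score every prompt on unlabeled data, then weight the ensemble by those scores. A minimal numpy sketch of that general idea (not the paper's scoring rule; `image_feats` and `prompt_feats` are hypothetical CLIP-style unit-normalized embeddings, and the confidence-based score is an assumption for illustration):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_prompt_ensemble(image_feats, prompt_feats, temp=0.01):
    """image_feats: (N, D) unit-norm embeddings of unlabeled images.
    prompt_feats: (P, C, D) unit-norm text embeddings, one per prompt and class.
    Scores each prompt by its average max-confidence, then ensembles."""
    scores = np.array([
        softmax(image_feats @ pf.T / temp).max(axis=-1).mean()
        for pf in prompt_feats])
    weights = softmax(scores / (scores.std() + 1e-8))  # hypothetical normalization
    ens = np.tensordot(weights, prompt_feats, axes=1)  # (C, D) weighted classifier
    ens /= np.linalg.norm(ens, axis=-1, keepdims=True)
    return weights, image_feats @ ens.T / temp         # per-image class logits
```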

Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play

no code implementations • 11 Feb 2023 • Jeremiah Zhe Liu, Krishnamurthy Dj Dvijotham, Jihyeon Lee, Quan Yuan, Martin Strobel, Balaji Lakshminarayanan, Deepak Ramachandran

Standard empirical risk minimization (ERM) training can produce deep neural network (DNN) models that are accurate on average but underperform in under-represented population subgroups, especially when the group distribution in the long-tailed training data is imbalanced.

Active Learning • Fairness
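The gap the abstract describes is invisible to average accuracy but trivial to surface once group labels are available; a small diagnostic sketch (the data here are hypothetical):

```python
import numpy as np

def group_accuracies(y_true, y_pred, groups):
    """Report average accuracy alongside per-group accuracy."""
    accs = {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}
    return float(np.mean(y_pred == y_true)), accs

# 90% average accuracy hides a 50%-accurate minority group.
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 1])
groups = np.array(["majority"] * 8 + ["minority"] * 2)
print(group_accuracies(y_true, y_pred, groups))
# (0.9, {'majority': 1.0, 'minority': 0.5})
```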

A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness

2 code implementations • 1 May 2022 • Jeremiah Zhe Liu, Shreyas Padhy, Jie Ren, Zi Lin, Yeming Wen, Ghassen Jerfel, Zack Nado, Jasper Snoek, Dustin Tran, Balaji Lakshminarayanan

The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles.

Data Augmentation • Probabilistic Deep Learning • +1
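For reference, the multi-network baseline mentioned above boils down to averaging member predictive distributions and summarizing the spread; a minimal numpy sketch of deep-ensemble uncertainty (member outputs are assumed given):

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """member_probs: (M, N, C) softmax outputs of M independently trained nets.
    Returns the ensemble predictive mean and its entropy; a distance-aware
    single model aims to match this quality with one forward pass."""
    mean = member_probs.mean(axis=0)                   # (N, C)
    entropy = -(mean * np.log(mean + 1e-12)).sum(-1)   # (N,)
    return mean, entropy
```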

Towards a Unified Framework for Uncertainty-aware Nonlinear Variable Selection with Theoretical Guarantees

no code implementations • 15 Apr 2022 • Wenying Deng, Beau Coker, Rajarshi Mukherjee, Jeremiah Zhe Liu, Brent A. Coull

We develop a simple and unified framework for nonlinear variable selection that incorporates uncertainty in the prediction function and is compatible with a wide range of machine learning models (e.g., tree ensembles, kernel methods, neural networks).

Variable Selection
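One model-agnostic way to attach uncertainty to variable importance, in the spirit of (though not identical to) the framework above, is to bootstrap permutation importance over any fitted predictor; a sketch assuming a hypothetical fitted `model` exposing `.predict`:

```python
import numpy as np

def permutation_importance_ci(model, X, y, n_boot=200, seed=0):
    """95% bootstrap intervals for permutation importance per feature.
    Works with any fitted regressor exposing .predict (tree ensembles,
    kernel methods, neural networks, ...)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    imps = np.empty((n_boot, d))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                  # bootstrap resample
        Xb, yb = X[idx], y[idx]
        base = np.mean((model.predict(Xb) - yb) ** 2)
        for j in range(d):
            Xp = Xb.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])     # break feature j
            imps[b, j] = np.mean((model.predict(Xp) - yb) ** 2) - base
    lo, hi = np.percentile(imps, [2.5, 97.5], axis=0)
    return imps.mean(axis=0), lo, hi
```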

Training independent subnetworks for robust prediction

2 code implementations • ICLR 2021 • Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran

Recent approaches to efficiently ensemble neural networks have shown that strong robustness and uncertainty performance can be achieved with a negligible gain in parameters over the original network.
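The trick behind this paper (MIMO) is concretely small: widen the input layer to take M examples at once and give the network M heads, so M nearly independent subnetworks emerge inside one model. A minimal PyTorch sketch of that idea, as an illustration rather than the reference implementation:

```python
import torch
import torch.nn as nn

class MIMO(nn.Module):
    """Concatenate m inputs, predict m outputs; at test time feed the
    same input m times and average the heads for a nearly free ensemble."""
    def __init__(self, in_dim, n_classes, m=3, hidden=256):
        super().__init__()
        self.m, self.n_classes = m, n_classes
        self.body = nn.Sequential(
            nn.Linear(in_dim * m, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes * m)

    def forward(self, x):                       # x: (B, m, in_dim)
        h = self.body(x.flatten(1))
        return self.head(h).view(-1, self.m, self.n_classes)

# Training: slot i gets an independent example and its own cross-entropy.
# Testing: x.unsqueeze(1).repeat(1, m, 1), then logits.softmax(-1).mean(1).
```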

Pruning Redundant Mappings in Transformer Models via Spectral-Normalized Identity Prior

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zi Lin, Jeremiah Zhe Liu, Zi Yang, Nan Hua, Dan Roth

Traditional (unstructured) pruning methods for a Transformer model focus on regularizing the individual weights by penalizing them toward zero.
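The contrast drawn here, shrinking individual weights toward zero versus (as in this paper) encouraging whole residual mappings toward the identity so they can be pruned as units, can be written as two regularizers; a hedged sketch that omits the paper's spectral-normalization machinery:

```python
import torch

def l1_to_zero(params):
    """Traditional unstructured-pruning prior: shrink each weight to 0."""
    return sum(p.abs().sum() for p in params)

def identity_prior(residual_block, x):
    """Identity prior on a residual mapping x + f(x): penalize f(x)
    toward 0 so a redundant block collapses to the identity and can be
    pruned as a whole, rather than weight by weight."""
    return residual_block(x).pow(2).mean()
```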

Variable Selection with Rigorous Uncertainty Quantification using Deep Bayesian Neural Networks: Posterior Concentration and Bernstein-von Mises Phenomenon

no code implementations • 3 Dec 2019 • Jeremiah Zhe Liu

(2) BNN's uncertainty quantification for variable importance is rigorous, in the sense that its 95% credible intervals for variable importance indeed cover the truth 95% of the time (i.e., the Bernstein-von Mises (BvM) phenomenon).

Uncertainty Quantification • Variable Selection
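The coverage claim in the abstract is a frequentist statement about Bayesian intervals, and the kind of check behind it is easy to simulate; below, a conjugate normal-mean model stands in for the BNN posterior (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n, reps = 1.0, 1.0, 100, 2000
hits = 0
for _ in range(reps):
    y = rng.normal(theta, sigma, n)
    # Posterior of the mean under a flat prior: N(ybar, sigma^2 / n).
    lo, hi = y.mean() + np.array([-1.96, 1.96]) * sigma / np.sqrt(n)
    hits += lo <= theta <= hi
print(f"Empirical coverage of the 95% credible interval: {hits / reps:.3f}")
# BvM-type behavior: the printed value is close to 0.95.
```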

Accurate Uncertainty Estimation and Decomposition in Ensemble Learning

no code implementations • NeurIPS 2019 • Jeremiah Zhe Liu, John Paisley, Marianthi-Anna Kioumourtzoglou, Brent Coull

We introduce a Bayesian nonparametric ensemble (BNE) approach that augments an existing ensemble model to account for different sources of model uncertainty.

Bias Detection • Ensemble Learning • +1
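The "augment an existing ensemble" idea can be read as Bayesian stacking: place a prior over combination weights of the fixed base models so weight uncertainty becomes one quantified source, leaving a residual for a nonparametric term to absorb. A minimal conjugate sketch (a stand-in for, not a reproduction of, BNE):

```python
import numpy as np

def bayesian_stacking(F, y, tau2=1.0, sigma2=1.0):
    """F: (N, K) predictions of K existing base models; y: (N,) targets.
    Model: y = F @ w + eps, with w ~ N(0, tau2 * I), eps ~ N(0, sigma2).
    Returns the posterior over w and the residual that a nonparametric
    correction term would account for."""
    K = F.shape[1]
    cov = np.linalg.inv(F.T @ F / sigma2 + np.eye(K) / tau2)
    mean = cov @ F.T @ y / sigma2
    return mean, cov, y - F @ mean
```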

Gaussian Process Regression and Classification under Mathematical Constraints with Learning Guarantees

no code implementations • 21 Apr 2019 • Jeremiah Zhe Liu

Furthermore, we show that CGP inherits the optimal theoretical properties of the Gaussian process, e.g., rates of posterior contraction, because CGP is a Gaussian process with a more efficient model space.

General Classification • Regression

Robust Hypothesis Test for Nonlinear Effect with Gaussian Processes

no code implementations • NeurIPS 2017 • Jeremiah Zhe Liu, Brent Coull

Utilizing the theory of reproducing kernels, we reduce this hypothesis to a simple one-sided score test for a scalar parameter, develop a testing procedure that is robust against misspecification of the kernel function, and propose an ensemble-based estimator for the null model to guarantee test performance in small samples.

Gaussian Processes
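The reduction described above tests H0: tau = 0 in y = X*beta + h(z) + eps with h ~ GP(0, tau*K), where the score statistic is a quadratic form in the null-model residuals. A numpy sketch, with a permutation reference distribution standing in for the paper's calibration procedure:

```python
import numpy as np

def gp_score_test(y, X, K, n_perm=999, seed=0):
    """One-sided score test for the GP variance component tau.
    Statistic: r' K r, with r the residuals of the linear null model."""
    rng = np.random.default_rng(seed)
    H = X @ np.linalg.solve(X.T @ X, X.T)      # hat matrix of the null model
    r = y - H @ y
    obs = r @ K @ r
    null = np.empty(n_perm)
    for b in range(n_perm):
        rp = rng.permutation(r)                # residual-permutation reference
        null[b] = rp @ K @ rp
    return obs, (1 + np.sum(null >= obs)) / (1 + n_perm)
```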
