Search Results for author: Jiachang Liu

Found 10 papers, 7 papers with code

What Makes Good In-Context Examples for GPT-3?

no code implementations · DeeLIO (ACL) 2022 · Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen

In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-3’s in-context learning capabilities. Inspired by the recent success of leveraging a retrieval module to augment neural networks, we propose to retrieve examples that are semantically similar to a test query sample to formulate its corresponding prompt.

In-Context Learning · Natural Language Understanding · +4
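The retrieval idea in the snippet above can be sketched as follows. This is a simplified stand-in: the paper uses learned sentence encoders as the retrieval module, whereas here plain word-count vectors, a toy example pool, and the query string are all illustrative assumptions.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a learned sentence encoder would
    # be used in practice -- this word-count vector only stands in.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_examples(pool, query, k=2):
    # Rank the candidate pool by similarity to the test query and keep
    # the top-k as in-context examples for the prompt.
    q = embed(query)
    ranked = sorted(pool, key=lambda ex: cosine(embed(ex), q), reverse=True)
    return ranked[:k]

pool = [
    "the movie was wonderful and moving",
    "the stock market fell sharply today",
    "an awful boring film with no plot",
]
print(retrieve_examples(pool, "a wonderful film", k=2))
```

The retrieved examples would then be concatenated ahead of the test query to form the prompt, replacing random sampling.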

Fast and Interpretable Mortality Risk Scores for Critical Care Patients

1 code implementation · 21 Nov 2023 · Chloe Qinyu Zhu, Muhang Tian, Lesia Semenova, Jiachang Liu, Jack Xu, Joseph Scarpa, Cynthia Rudin

Both of these have disadvantages: black box models are unacceptable for use in hospitals, whereas manual creation of models (including hand-tuning of logistic regression parameters) relies on humans to perform high-dimensional constrained optimization, which leads to a loss in performance.

Syntax Tree Constrained Graph Network for Visual Question Answering

no code implementations · 17 Sep 2023 · Xiangrui Su, Qi Zhang, Chongyang Shi, Jiachang Liu, Liang Hu

Existing VQA methods integrate vision modeling and language understanding to explore the deep semantics of the question.

Question Answering · Visual Question Answering

Causal Intervention for Abstractive Related Work Generation

no code implementations · 23 May 2023 · Jiachang Liu, Qi Zhang, Chongyang Shi, Usman Naseem, Shoujin Wang, Ivor Tsang

Abstractive related work generation has attracted increasing attention in generating coherent related work that better helps readers grasp the background in the current research.

Sentence

OKRidge: Scalable Optimal k-Sparse Ridge Regression

1 code implementation · NeurIPS 2023 · Jiachang Liu, Sam Rosen, Chudi Zhong, Cynthia Rudin

We consider an important problem in scientific discovery, namely identifying sparse governing equations for nonlinear dynamical systems.

regression
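The k-sparse ridge regression problem named in the title can be made concrete with a tiny brute-force baseline: enumerate every size-k support, solve the ridge subproblem on each, and keep the best. OKRidge's contribution is solving this to certifiable optimality at scale, which the exhaustive search below (on made-up toy data) does not attempt.

```python
import itertools
import numpy as np

def ksparse_ridge_bruteforce(X, y, k, lam=1e-3):
    # Exhaustively search all size-k supports; for each, solve the
    # ridge subproblem in closed form and keep the best objective.
    # Only feasible for tiny p -- OKRidge replaces this enumeration
    # with a scalable exact (branch-and-bound style) solver.
    n, p = X.shape
    best = (np.inf, None, None)
    for S in itertools.combinations(range(p), k):
        Xs = X[:, S]
        beta = np.linalg.solve(Xs.T @ Xs + lam * np.eye(k), Xs.T @ y)
        obj = np.sum((y - Xs @ beta) ** 2) + lam * np.sum(beta ** 2)
        if obj < best[0]:
            best = (obj, S, beta)
    return best

# Synthetic example: only features 1 and 5 actually drive y.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = 3.0 * X[:, 1] - 2.0 * X[:, 5] + 0.01 * rng.normal(size=100)
obj, support, beta = ksparse_ridge_bruteforce(X, y, k=2)
print(support)
```

For identifying governing equations of dynamical systems, the columns of X would be a library of candidate nonlinear terms evaluated on trajectory data.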

Exploring and Interacting with the Set of Good Sparse Generalized Additive Models

1 code implementation · NeurIPS 2023 · Chudi Zhong, Zhi Chen, Jiachang Liu, Margo Seltzer, Cynthia Rudin

In real applications, interaction between machine learning models and domain experts is critical; however, the classical machine learning paradigm that usually produces only a single model does not facilitate such interaction.

Additive models

FasterRisk: Fast and Accurate Interpretable Risk Scores

1 code implementation · 12 Oct 2022 · Jiachang Liu, Chudi Zhong, Boxuan Li, Margo Seltzer, Cynthia Rudin

Specifically, our approach produces a pool of almost-optimal sparse continuous solutions, each with a different support set, using a beam-search algorithm.
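The beam search over support sets described above can be sketched roughly as follows. This is a simplified analogue under assumed details (plain logistic loss, a toy gradient-descent coefficient fit, synthetic data), not FasterRisk itself, which additionally rounds the continuous solutions to integer risk scores.

```python
import numpy as np

def logistic_loss(Xs, y, beta):
    # Mean logistic loss with labels y in {-1, +1}.
    return np.mean(np.logaddexp(0.0, -y * (Xs @ beta)))

def fit_coeffs(X, y, support, steps=200, lr=0.5):
    # Crude gradient descent on the loss restricted to `support`;
    # a stand-in for a proper solver of the restricted subproblem.
    Xs = X[:, support]
    b = np.zeros(len(support))
    for _ in range(steps):
        margins = y * (Xs @ b)
        grad = -(Xs * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        b -= lr * grad
    return b, logistic_loss(Xs, y, b)

def beam_search_supports(X, y, k, beam_width=3):
    # Grow supports one feature at a time, keeping the beam_width
    # best-scoring supports at each size -- yielding a pool of
    # near-optimal size-k supports rather than a single solution.
    p = X.shape[1]
    beam = [((), 0.0)]
    for _ in range(k):
        candidates = {}
        for support, _ in beam:
            for j in range(p):
                if j in support:
                    continue
                new = tuple(sorted(support + (j,)))
                if new not in candidates:
                    _, loss = fit_coeffs(X, y, list(new))
                    candidates[new] = loss
        beam = sorted(candidates.items(), key=lambda t: t[1])[:beam_width]
    return beam

# Synthetic example: only features 0 and 2 determine the labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = np.sign(2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=200))
beam = beam_search_supports(X, y, k=2)
print(beam[0][0])
```

Returning the whole beam, not just its top entry, is what yields the pool of almost-optimal solutions with different support sets.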

Fast Sparse Classification for Generalized Linear and Additive Models

2 code implementations · 23 Feb 2022 · Jiachang Liu, Chudi Zhong, Margo Seltzer, Cynthia Rudin

For fast sparse logistic regression, our computational speed-up over other best-subset search techniques owes to linear and quadratic surrogate cuts for the logistic loss that allow us to efficiently screen features for elimination, as well as use of a priority queue that favors a more uniform exploration of features.

Additive models · Classification
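The quadratic surrogate mentioned in the snippet can be illustrated with the standard 1/8-coefficient upper bound on the logistic loss, which follows from its second derivative being at most 1/4. This is a generic bound used here only to show the idea; the paper's actual linear and quadratic cuts for feature screening are tighter and problem-specific.

```python
import numpy as np

def ell(z):
    # Logistic loss as a function of the margin z = y * (x @ beta).
    return np.logaddexp(0.0, -z)

def ell_prime(z):
    # Derivative of the logistic loss: -sigma(-z).
    return -1.0 / (1.0 + np.exp(z))

def quadratic_surrogate(z, a):
    # Since ell''(z) = sigma(z) * sigma(-z) <= 1/4 everywhere, the loss
    # admits this global quadratic upper bound (1/8 on the squared
    # step). Surrogates like this bound how much any single-feature
    # update could improve the loss, letting a solver screen features
    # that can never enter an optimal support.
    return ell(a) + ell_prime(a) * (z - a) + 0.125 * (z - a) ** 2

a = 0.3
zs = np.linspace(-5.0, 5.0, 1001)
gap = quadratic_surrogate(zs, a) - ell(zs)
print(gap.min() >= -1e-12)
```

The surrogate touches the loss at the expansion point a and lies above it everywhere else, which is exactly what makes it safe for elimination tests.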

What Makes Good In-Context Examples for GPT-3?

3 code implementations · 17 Jan 2021 · Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen

Inspired by the recent success of leveraging a retrieval module to augment large-scale neural network models, we propose to retrieve examples that are semantically similar to a test sample to formulate its corresponding prompt.

Few-Shot Learning · Natural Language Understanding · +4
