Search Results for author: Ethan R. Elenberg

Found 12 papers, 6 papers with code

Multi-Step Dialogue Workflow Action Prediction

no code implementations • 16 Nov 2023 • Ramya Ramakrishnan, Ethan R. Elenberg, Hashan Narangodage, Ryan McDonald

In task-oriented dialogue, a system often needs to follow a sequence of actions, called a workflow, that complies with a set of guidelines in order to complete a task.

In-Context Learning, Language Modelling +2
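For the workflow action prediction setting described above, a minimal sketch of the task framing may help: condition on the dialogue history plus the workflow guidelines, then pick the best next action. Everything here (function names, the candidate-action list, the scoring interface) is hypothetical and not the paper's actual implementation.

```python
# Hypothetical sketch of next-action prediction in a workflow-guided dialogue
# system (illustrative framing only, not the paper's method): score each
# candidate action against the guidelines + dialogue history and return the
# highest-scoring one.

from typing import Callable, List

def predict_next_action(
    history: List[str],                  # utterances so far
    guidelines: str,                     # textual workflow guidelines
    candidate_actions: List[str],        # e.g. ["verify-identity", "pull-up-account"]
    score: Callable[[str, str], float],  # any model scoring (context, action) pairs
) -> str:
    """Return the candidate action the model scores highest in context."""
    context = guidelines + "\n" + "\n".join(history)
    return max(candidate_actions, key=lambda a: score(context, a))
```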

GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks

1 code implementation • 16 Nov 2023 • Shivanshu Gupta, Clemens Rosenbaum, Ethan R. Elenberg

Further, we experiment with two variations: (1) fine-tuning gist models for each dataset and (2) multi-task training a single model on a large collection of datasets.

In-Context Learning
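The selection step behind embedding-based in-context example selection can be sketched compactly. Here `embed` would be a stand-in for a trained gist-bottleneck encoder, which this sketch does not implement; the retrieval itself is just cosine-similarity nearest neighbors over precomputed vectors.

```python
# Minimal sketch of embedding-based in-context example selection. The pool
# vectors are assumed to come from some trained encoder (e.g. a gist model,
# not implemented here); selection is cosine-similarity top-k.

import numpy as np

def select_icl_examples(query_vec: np.ndarray,
                        pool_vecs: np.ndarray,   # shape (n_examples, dim)
                        k: int = 8) -> np.ndarray:
    """Indices of the k pool examples most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    p = pool_vecs / np.linalg.norm(pool_vecs, axis=1, keepdims=True)
    sims = p @ q                   # cosine similarity of each example to the query
    return np.argsort(-sims)[:k]   # top-k, most similar first
```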

Submodular Minimax Optimization: Finding Effective Sets

no code implementations • 26 May 2023 • Loay Mualem, Ethan R. Elenberg, Moran Feldman, Amin Karbasi

Despite the rich existing literature on minimax optimization in continuous settings, only partial results of this kind have been obtained for combinatorial settings.

dialog state tracking, Prompt Engineering +1
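To make the combinatorial minimax setting concrete, one plausible generic shape of such a problem is written below; the notation is illustrative and the paper studies its own specific variants.

```latex
% Illustrative shape of a combinatorial minimax problem (notation mine, not
% the paper's exact formulation): for a set function f : 2^V \to \mathbb{R}
% and feasible families of subsets \mathcal{A}, \mathcal{B},
\[
  \min_{A \in \mathcal{A}} \; \max_{B \in \mathcal{B}} \; f(A \cup B),
\]
% with f (weakly) submodular. The continuous analogue \min_x \max_y g(x, y)
% is well studied; this discrete version is the gap the abstract points to.
```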

Domain Private Transformers for Multi-Domain Dialog Systems

1 code implementation • 23 May 2023 • Anmol Kabra, Ethan R. Elenberg

Large, general-purpose language models have demonstrated impressive performance across many different conversational domains.

domain classification, Language Modelling

CEREAL: Few-Sample Clustering Evaluation

no code implementations • 30 Sep 2022 • Nihal V. Nayak, Ethan R. Elenberg, Clemens Rosenbaum

We adapt approaches from the few-sample model-evaluation literature, using a learned surrogate model to actively sub-sample the most informative data points for annotation and estimate the evaluation metric.

Clustering, Pseudo Label
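A rough sketch of that active sub-sampling loop follows. The acquisition rule (maximum predictive entropy of the surrogate) and the `surrogate.prob` / `surrogate.update` interface are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch of few-sample clustering evaluation: annotate the point
# the surrogate is least sure about, refit, repeat, then score the metric on
# the labeled subsample. Interface names are illustrative assumptions.

import numpy as np

def estimate_metric(points, clusters, annotate, surrogate, metric, budget=50):
    """clusters[i]: predicted cluster of point i; annotate(i) -> true label
    (expensive); surrogate.prob(x) -> label distribution;
    metric(cluster_ids, labels) -> float (e.g. NMI)."""
    labeled = {}
    while len(labeled) < budget:
        unlabeled = [i for i in range(len(points)) if i not in labeled]

        def entropy(i):
            p = surrogate.prob(points[i])
            return -np.sum(p * np.log(p + 1e-12))

        i_star = max(unlabeled, key=entropy)      # most informative point
        labeled[i_star] = annotate(i_star)
        surrogate.update(points[i_star], labeled[i_star])
    idx = sorted(labeled)
    return metric([clusters[i] for i in idx], [labeled[i] for i in idx])
```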

Identifying Mislabeled Data using the Area Under the Margin Ranking

2 code implementations • NeurIPS 2020 • Geoff Pleiss, Tianyi Zhang, Ethan R. Elenberg, Kilian Q. Weinberger

Not all data in a typical training set help with generalization; some samples can be overly ambiguous or outright mislabeled.
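The AUM statistic named in the title is simple to compute from training logits: at each epoch, a sample's margin is its assigned-class logit minus the largest other logit, and AUM averages that margin over training; low or negative AUM flags likely-mislabeled samples. A minimal sketch of that computation, with array shapes as assumptions:

```python
# Sketch of the Area Under the Margin (AUM) statistic: average, over epochs,
# of (assigned-class logit - largest other-class logit) per sample.

import numpy as np

def aum(logits_per_epoch: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """logits_per_epoch: (epochs, n_samples, n_classes); labels: (n_samples,).
    Returns one AUM value per sample; low values suggest mislabeling."""
    n = labels.shape[0]
    assigned = logits_per_epoch[:, np.arange(n), labels]   # (epochs, n)
    masked = logits_per_epoch.copy()
    masked[:, np.arange(n), labels] = -np.inf              # hide assigned class
    largest_other = masked.max(axis=2)                     # (epochs, n)
    return (assigned - largest_other).mean(axis=0)         # AUM per sample
```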

Detecting Noisy Training Data with Loss Curves

no code implementations • 25 Sep 2019 • Geoff Pleiss, Tianyi Zhang, Ethan R. Elenberg, Kilian Q. Weinberger

This paper introduces a new method to discover mislabeled training samples and to mitigate their impact on the training process of deep networks.

Importance Weighted Generative Networks

no code implementations • 7 Jun 2018 • Maurice Diesendruck, Ethan R. Elenberg, Rajat Sen, Guy W. Cole, Sanjay Shakkottai, Sinead A. Williamson

Deep generative networks can simulate from a complex target distribution by minimizing a loss with respect to samples from that distribution.

Selection bias
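The basic importance-weighting idea suggested by the title and the selection-bias tag can be sketched generically: if observed samples come from a biased distribution q but the target is p, each sample's loss contribution is reweighted by w(x) = p(x)/q(x). This is the textbook estimator, not necessarily the paper's exact one.

```python
# Generic self-normalized importance-weighted loss (illustrative only):
# reweight per-sample losses by w(x) = p(x)/q(x) to correct for sampling bias.

import numpy as np

def importance_weighted_loss(per_sample_loss, weights):
    """per_sample_loss, weights: 1-D arrays of equal length;
    weights[i] approximates p(x_i)/q(x_i)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # self-normalize the weights
    return float(np.sum(w * np.asarray(per_sample_loss, dtype=float)))
```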

Streaming Weak Submodularity: Interpreting Neural Networks on the Fly

1 code implementation • NeurIPS 2017 • Ethan R. Elenberg, Alexandros G. Dimakis, Moran Feldman, Amin Karbasi

In many machine learning applications, it is important to explain the predictions of a black-box classifier.
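The streaming flavor of the title can be illustrated with a simplified sieve-style threshold rule: scan the candidate features once, keeping any feature whose marginal gain clears a threshold, until a budget of k is filled. This is a deliberately simplified sketch, not the paper's exact algorithm or guarantees.

```python
# Simplified single-pass streaming selection (sieve-style threshold rule,
# illustrative only): keep a candidate if its marginal gain to the current
# set clears the threshold, up to k features.

def streaming_select(features, gain, k, threshold):
    """features: iterable of candidates (streamed once);
    gain(S, e) -> marginal value of adding e to the current set S."""
    S = []
    for e in features:
        if len(S) < k and gain(S, e) >= threshold:
            S.append(e)
    return S
```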

Restricted Strong Convexity Implies Weak Submodularity

no code implementations • 2 Dec 2016 • Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, Sahand Negahban

Our results extend the work of Das and Kempe (2011) from the setting of linear regression to arbitrary objective functions.

feature selection
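The key quantities behind this result can be stated briefly. The sketch below gives the Das and Kempe (2011) submodularity ratio and the flavor of the paper's bound relating it to restricted strong convexity and smoothness constants; the notation is mine and the statement is a rough paraphrase, not the paper's precise theorem.

```latex
% Submodularity ratio of Das and Kempe (2011) for a monotone set function f:
\[
  \gamma_{U,k} \;=\;
  \min_{\substack{L \subseteq U,\ S:\,|S| \le k,\\ S \cap L = \emptyset}}
  \frac{\sum_{x \in S} \bigl( f(L \cup \{x\}) - f(L) \bigr)}
       {f(L \cup S) - f(L)}.
\]
% Flavor of the paper's result (rough paraphrase): if the underlying
% objective is m-restricted strongly convex and M-restricted smooth, then
\[
  \gamma \;\gtrsim\; \frac{m}{M},
\]
% so greedy-style guarantees for weakly submodular maximization carry over
% to arbitrary such objectives, extending the linear-regression setting.
```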
