Search Results for author: Anoop Deoras

Found 12 papers, 1 paper with code

BASS: Batched Attention-optimized Speculative Sampling

no code implementations • 24 Apr 2024 • Haifeng Qian, Sujan Kumar Gonugondla, Sungsoo Ha, Mingyue Shang, Sanjay Krishna Gouda, Ramesh Nallapati, Sudipta Sengupta, Xiaofei Ma, Anoop Deoras

Speculative decoding has emerged as a powerful method to improve latency and throughput in hosting large language models.
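
The listing carries only this one-sentence summary; for context, here is a toy sketch of the draft-then-verify loop behind speculative decoding: a cheap draft model proposes several tokens, the expensive target model checks them, and any agreeing prefix is accepted in one shot. The two lookup-table "models" and the greedy acceptance rule are made-up stand-ins, not BASS's batched attention-optimized sampler.

    # Made-up greedy "models": next-token lookup tables over a tiny vocabulary.
    DRAFT = {"the": "cat", "cat": "sat", "sat": "on", "on": "the", None: "the"}
    TARGET = {"the": "cat", "cat": "sat", "sat": "down", "down": "quietly",
              "on": "the", None: "the"}

    def propose(prev, k):
        """Cheap draft model proposes k tokens autoregressively."""
        out = []
        for _ in range(k):
            prev = DRAFT.get(prev, "the")
            out.append(prev)
        return out

    def verify(prev, draft_tokens):
        """Expensive target model keeps draft tokens only while it agrees,
        then contributes one token of its own at the first disagreement."""
        accepted = []
        for t in draft_tokens:
            if TARGET.get(prev, "the") == t:
                accepted.append(t)
                prev = t
            else:
                break
        accepted.append(TARGET.get(prev, "the"))
        return accepted

    text = [None]
    for _ in range(3):
        text += verify(text[-1], propose(text[-1], k=3))
    print(" ".join(t for t in text if t))  # several tokens per target "call"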

Fewer Truncations Improve Language Modeling

no code implementations • 16 Apr 2024 • Hantian Ding, Zijian Wang, Giovanni Paolini, Varun Kumar, Anoop Deoras, Dan Roth, Stefano Soatto

In large language model training, input documents are typically concatenated together and then split into sequences of equal length to avoid padding tokens.

Combinatorial Optimization, Hallucination, +4
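
For reference, a minimal sketch of the concatenate-then-chunk packing described above (the baseline practice the paper argues against, not its proposed method); the toy "tokenized documents" are made up.

    def concat_and_chunk(docs, seq_len, eos=-1):
        """Join documents into one token stream, then cut it into equal-length
        training sequences; documents near chunk boundaries get truncated."""
        stream = []
        for doc in docs:
            stream.extend(doc)
            stream.append(eos)  # document separator
        return [stream[i:i + seq_len]
                for i in range(0, len(stream) - seq_len + 1, seq_len)]

    # Tiny made-up documents of different lengths.
    docs = [[1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2], [3, 3, 3], [4, 4, 4, 4, 4]]
    for seq in concat_and_chunk(docs, seq_len=6):
        print(seq)  # documents 2 and 4 end up cut at sequence boundaries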

Logic-Scaffolding: Personalized Aspect-Instructed Recommendation Explanation Generation using LLMs

no code implementations • 22 Dec 2023 • Behnam Rahdari, Hao Ding, Ziwei Fan, Yifei Ma, Zhuotong Chen, Anoop Deoras, Branislav Kveton

The unique capabilities of Large Language Models (LLMs), such as natural language text generation, position them as strong candidates for providing explanations for recommendations.

Explanation Generation, Position, +1

Pre-trained Recommender Systems: A Causal Debiasing Perspective

1 code implementation • 30 Oct 2023 • Ziqian Lin, Hao Ding, Nghia Trong Hoang, Branislav Kveton, Anoop Deoras, Hao Wang

In particular, we propose to develop a generic recommender that captures universal interaction patterns by training on generic user-item interaction data extracted from different domains, which can then be fast adapted to improve few-shot learning performance in unseen new domains (with limited data).

Few-Shot Learning, Recommendation Systems
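
A rough sketch of the pretrain-then-adapt recipe above, using a plain matrix-factorization model as a stand-in: pretrain on interactions pooled from several source domains, then fine-tune on a handful of interactions from a new domain. The synthetic data, SGD trainer, and hyperparameters are illustrative assumptions; the paper's causal debiasing architecture is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    N_USERS, N_ITEMS, DIM = 50, 40, 8

    def sgd_fit(interactions, U, V, epochs=20, lr=0.05):
        """Tiny matrix-factorization trainer on (user, item, label) triples."""
        for _ in range(epochs):
            for u, i, r in interactions:
                u_vec = U[u].copy()
                err = r - u_vec @ V[i]
                U[u] += lr * err * V[i]
                V[i] += lr * err * u_vec
        return U, V

    # "Generic" pretraining data pooled from several source domains (synthetic).
    pretrain = [(rng.integers(N_USERS), rng.integers(N_ITEMS), rng.choice([0.0, 1.0]))
                for _ in range(2000)]
    U = rng.normal(scale=0.1, size=(N_USERS, DIM))
    V = rng.normal(scale=0.1, size=(N_ITEMS, DIM))
    U, V = sgd_fit(pretrain, U, V)

    # Few-shot adaptation: only a handful of interactions from the new domain.
    few_shot = [(0, 1, 1.0), (0, 2, 0.0), (1, 1, 1.0)]
    U, V = sgd_fit(few_shot, U, V, epochs=50, lr=0.02)
    print("score(user 0, item 1):", U[0] @ V[1])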

Lightweight reranking for language model generations

no code implementations • 11 Jul 2023 • Siddhartha Jain, Xiaofei Ma, Anoop Deoras, Bing Xiang

We show strong improvements for selecting the best k generations for code generation tasks as well as robust improvements for the best generation for the tasks of autoformalization, summarization, and translation.

Code Generation, Language Modelling
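
One lightweight reranking strategy in this spirit is to score each sampled generation by its average similarity to the other samples and keep the top k; the unigram-overlap similarity below is a simple stand-in, not the exact statistic used in the paper.

    def unigram_jaccard(a, b):
        """Crude similarity between two generations via word-set overlap."""
        sa, sb = set(a.split()), set(b.split())
        return len(sa & sb) / max(1, len(sa | sb))

    def rerank(generations, k=1):
        """Rank candidates by average pairwise similarity (consensus first)."""
        scores = []
        for i, gi in enumerate(generations):
            others = [unigram_jaccard(gi, gj)
                      for j, gj in enumerate(generations) if j != i]
            scores.append(sum(others) / max(1, len(others)))
        order = sorted(range(len(generations)), key=lambda i: scores[i], reverse=True)
        return [generations[i] for i in order[:k]]

    samples = [
        "def add(a, b): return a + b",
        "def add(a, b): return a - b",
        "def add(x, y): return x + y",
    ]
    print(rerank(samples, k=1))  # the consensus-style candidate ranks first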

Fixed-Budget Best-Arm Identification with Heterogeneous Reward Variances

no code implementations • 13 Jun 2023 • Anusha Lalitha, Kousha Kalantari, Yifei Ma, Anoop Deoras, Branislav Kveton

Our algorithms rely on non-uniform budget allocations among the arms where the arms with higher reward variances are pulled more often than those with lower variances.
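
A minimal sketch of that idea under simple assumptions (Gaussian reward noise and a proportional-to-variance rule, which is illustrative rather than the paper's allocation): spend more of the fixed budget on high-variance arms, then recommend the arm with the best empirical mean.

    import numpy as np

    rng = np.random.default_rng(1)

    means = np.array([0.5, 0.6, 0.55])   # unknown to the learner in practice
    stds = np.array([0.1, 1.5, 0.8])     # heterogeneous reward noise
    budget = 300

    # Non-uniform allocation: arms with higher variance get more pulls.
    alloc = np.maximum(1, np.round(budget * stds**2 / np.sum(stds**2))).astype(int)

    emp_means = np.array([rng.normal(means[i], stds[i], size=alloc[i]).mean()
                          for i in range(len(means))])
    print("pulls per arm:", alloc)
    print("recommended arm:", int(np.argmax(emp_means)))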

Personalized Federated Domain Adaptation for Item-to-Item Recommendation

no code implementations • 5 Jun 2023 • Ziwei Fan, Hao Ding, Anoop Deoras, Trong Nghia Hoang

To mitigate this data bottleneck, we postulate that recommendation patterns learned from existing mature market segments (with private data) could be adapted to build effective warm-start models for emerging ones.

Domain Adaptation, Personalized Federated Learning, +1

Robust Projection based Anomaly Extraction (RPE) in Univariate Time-Series

no code implementations • 31 May 2022 • Mostafa Rahmani, Anoop Deoras, Laurent Callot

This paper presents a novel, closed-form, and data- and computation-efficient online anomaly detection algorithm for time-series data.

Anomaly Detection, Time Series, +1
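
As a generic illustration of projection-based anomaly scoring on a univariate series (not the paper's closed-form RPE estimator): embed the series into sliding windows, fit a low-rank subspace, and flag windows with large reconstruction residuals.

    import numpy as np

    def projection_anomaly_scores(x, window=16, rank=2):
        """Score each sliding window by its residual after projecting onto a
        low-rank subspace fitted with an SVD; large residuals suggest anomalies."""
        x = np.asarray(x, dtype=float)
        W = np.lib.stride_tricks.sliding_window_view(x, window)  # (n_windows, window)
        W = W - W.mean(axis=0, keepdims=True)
        _, _, Vt = np.linalg.svd(W, full_matrices=False)
        basis = Vt[:rank]                          # principal directions
        resid = W - (W @ basis.T) @ basis          # part not explained by the subspace
        return np.linalg.norm(resid, axis=1)

    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.normal(size=400)
    series[250] += 3.0                             # injected spike
    scores = projection_anomaly_scores(series)
    print("most anomalous window starts at index", int(np.argmax(scores)))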

Learning Personalized Item-to-Item Recommendation Metric via Implicit Feedback

no code implementations • 18 Mar 2022 • Trong Nghia Hoang, Anoop Deoras, Tong Zhao, Jin Li, George Karypis

We develop and investigate a personalizable deep metric model that captures both the internal contents of items and how users have interacted with them.

Metric Learning, Recommendation Systems
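
A small sketch of one way such a metric can be personalized: blend a content-space distance with an interaction-embedding distance using a per-user weight. The synthetic embeddings, the linear blend, and user_alpha are illustrative assumptions, not the paper's deep metric model.

    import numpy as np

    rng = np.random.default_rng(2)
    N_ITEMS, CONTENT_DIM, CF_DIM = 20, 6, 4

    # Two sources of item information: content features and interaction-derived
    # ("collaborative") embeddings. Both are synthetic stand-ins here.
    content = rng.normal(size=(N_ITEMS, CONTENT_DIM))
    collab = rng.normal(size=(N_ITEMS, CF_DIM))

    def item_distance(i, j, user_alpha):
        """Personalized item-to-item distance: user_alpha weights content vs. interactions."""
        d_content = np.linalg.norm(content[i] - content[j])
        d_collab = np.linalg.norm(collab[i] - collab[j])
        return user_alpha * d_content + (1.0 - user_alpha) * d_collab

    def recommend_similar(item, user_alpha, k=3):
        candidates = [j for j in range(N_ITEMS) if j != item]
        dists = [item_distance(item, j, user_alpha) for j in candidates]
        return [candidates[idx] for idx in np.argsort(dists)[:k]]

    print(recommend_similar(item=0, user_alpha=0.8))  # content-driven user
    print(recommend_similar(item=0, user_alpha=0.2))  # interaction-driven user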

Bridging Recommendation and Marketing via Recurrent Intensity Modeling

no code implementations • ICLR 2022 • Yifei Ma, Ge Liu, Anoop Deoras

RIM allows us to rethink recommendation in a Matching (Mtch) scenario, where the benefits of the users (e.g., ItemRec relevance) and item providers (e.g., item-exposure guarantees) are considered at the same time.

Marketing
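
To illustrate the Matching trade-off described in that sentence, here is a greedy sketch that fills each user's recommendation slots by relevance while enforcing a per-item exposure cap; the random scores, the cap, and the greedy rule are made-up stand-ins, not the paper's recurrent intensity model.

    import numpy as np

    rng = np.random.default_rng(3)
    N_USERS, N_ITEMS, SLOTS = 6, 4, 2

    relevance = rng.random((N_USERS, N_ITEMS))  # ItemRec-style user-item scores
    exposure_cap = 3                            # max recommendations per item

    def match_with_exposure_caps(rel, slots, cap):
        """Greedy matching: each user gets their most relevant items, but an item
        stops being recommended once it hits its exposure cap."""
        remaining = np.full(rel.shape[1], cap)
        assignment = {}
        for u in range(rel.shape[0]):
            ranked = np.argsort(-rel[u])
            picks = [int(i) for i in ranked if remaining[i] > 0][:slots]
            for i in picks:
                remaining[i] -= 1
            assignment[u] = picks
        return assignment

    print(match_with_exposure_caps(relevance, SLOTS, exposure_cap))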

Language Models as Recommender Systems: Evaluations and Limitations

no code implementations • NeurIPS Workshop ICBINB 2021 • Yuhui Zhang, Hao Ding, Zeren Shui, Yifei Ma, James Zou, Anoop Deoras, Hao Wang

Pre-trained language models (PLMs) such as BERT and GPT learn general text representations and encode extensive world knowledge; thus, they can be efficiently and accurately adapted to various downstream tasks.

Movie Recommendation, Session-Based Recommendations, +1
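
A minimal sketch of the "prompt a pretrained LM as a recommender" idea, assuming the Hugging Face transformers package and GPT-2: turn a viewing session into text and rank candidate items by how likely the model finds each continuation. The prompt template and movie titles are invented for illustration; this is not the paper's exact evaluation setup.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def continuation_loss(prompt, candidate):
        """Average token loss of prompt + candidate under the LM (lower = more likely)."""
        ids = tok(prompt + " " + candidate, return_tensors="pt").input_ids
        with torch.no_grad():
            return model(ids, labels=ids).loss.item()

    session = "A user watched Toy Story, Finding Nemo, and Monsters, Inc. Next they watched"
    candidates = ["The Incredibles", "The Godfather", "Saw"]
    print(sorted(candidates, key=lambda c: continuation_loss(session, c)))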

Zero-Shot Recommender Systems

no code implementations • 18 May 2021 • Hao Ding, Yifei Ma, Anoop Deoras, Yuyang Wang, Hao Wang

This poses a chicken-and-egg problem for early-stage products, whose amount of data, in turn, relies on the performance of their RS.

Recommendation Systems, Zero-Shot Learning
