Search Results for author: Sercan O Arik

Found 5 papers, 2 papers with code

Large Language Models Can Automatically Engineer Features for Few-Shot Tabular Learning

1 code implementation • 15 Apr 2024 • Sungwon Han, Jinsung Yoon, Sercan O Arik, Tomas Pfister

At inference time, the proposed FeatLLM framework uses only a simple predictive model built on the LLM-discovered features.

Few-Shot Learning · In-Context Learning
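The snippet above hints at the overall recipe: an LLM proposes feature-engineering rules from the task description and a handful of labeled rows, and only a lightweight model over those features is consulted at inference time. The sketch below is a minimal, hypothetical illustration of that division of labor; the rule functions and toy data are stand-ins for what an LLM might return, not the paper's actual prompts, rules, or model.

```python
# Illustrative sketch (assumptions, not the paper's method): an LLM has already
# proposed simple feature rules for a toy "income > 50k" task; at inference time
# only a small linear model over those features is used.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical LLM-proposed feature rules.
llm_proposed_rules = [
    lambda row: float(row["age"] > 40),             # assumed rule
    lambda row: float(row["hours_per_week"] > 45),  # assumed rule
    lambda row: float(row["education_years"] >= 16),
]

def featurize(rows):
    """Apply the LLM-discovered rules to turn raw rows into a feature matrix."""
    return np.array([[rule(r) for rule in llm_proposed_rules] for r in rows])

# Few-shot labeled examples (toy data, purely illustrative).
train_rows = [
    {"age": 52, "hours_per_week": 50, "education_years": 16},
    {"age": 23, "hours_per_week": 30, "education_years": 12},
    {"age": 45, "hours_per_week": 60, "education_years": 18},
    {"age": 30, "hours_per_week": 35, "education_years": 12},
]
train_labels = [1, 0, 1, 0]

# The only model used at inference time is this simple linear classifier.
clf = LogisticRegression().fit(featurize(train_rows), train_labels)

test_row = {"age": 48, "hours_per_week": 55, "education_years": 16}
print(clf.predict_proba(featurize([test_row]))[0, 1])
```

The point of the sketch is the serving-time cost profile the snippet describes: the LLM is consulted only to derive features, so prediction reduces to evaluating a few rules and a simple model.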

TextGenSHAP: Scalable Post-hoc Explanations in Text Generation with Long Documents

no code implementations • 3 Dec 2023 • James Enouen, Hootan Nakhost, Sayna Ebrahimi, Sercan O Arik, Yan Liu, Tomas Pfister

Given that LLMs are black boxes applying complex reasoning processes to their inputs, the demand for scalable and faithful explanations of their generated content will inevitably continue to grow.

Question Answering · Text Generation

Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs

no code implementations • 18 Oct 2023 • Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan O Arik, Tomas Pfister, Somesh Jha

Large language models (LLMs) have recently shown great advances in a variety of tasks, including natural language understanding and generation.

Decision Making · Natural Language Understanding · +1

Search-Adaptor: Embedding Customization for Information Retrieval

no code implementations • 12 Oct 2023 • Jinsung Yoon, Sercan O Arik, Yanfei Chen, Tomas Pfister

Embeddings extracted by pre-trained Large Language Models (LLMs) have significant potential to improve information retrieval and search.

Information Retrieval · Retrieval
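The title and snippet describe customizing pre-trained LLM embeddings for retrieval without retraining the embedder itself. As a rough illustration of that general idea (not the paper's specific architecture or loss), the sketch below trains a small adapter on top of frozen, pre-computed query and document embeddings with an in-batch contrastive objective; the embedding dimension, adapter form, and temperature are all assumptions.

```python
# Illustrative sketch (assumptions, not the paper's method): adapt frozen
# pre-computed embeddings for retrieval with a small trainable layer.
import torch
import torch.nn.functional as F

dim = 768  # assumed embedding size from the frozen pre-trained encoder

# Toy stand-ins for pre-computed (frozen) embeddings: 8 queries and their docs.
query_emb = torch.randn(8, dim)
doc_emb = torch.randn(8, dim)

# Small adapter applied to both query and document embeddings.
adapter = torch.nn.Linear(dim, dim)
optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-3)

for step in range(100):
    q = F.normalize(adapter(query_emb), dim=-1)
    d = F.normalize(adapter(doc_emb), dim=-1)
    # In-batch contrastive loss: each query should score highest on its own doc.
    scores = q @ d.T                      # cosine similarities
    labels = torch.arange(scores.size(0))
    loss = F.cross_entropy(scores / 0.05, labels)  # 0.05 is an assumed temperature
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At search time, rank documents for a query by adapted-embedding similarity.
with torch.no_grad():
    q0 = F.normalize(adapter(query_emb[:1]), dim=-1)
    docs = F.normalize(adapter(doc_emb), dim=-1)
    print((q0 @ docs.T).argsort(descending=True))
```

Because the base embeddings stay frozen and pre-computed, only the small adapter needs to be trained and stored per retrieval corpus.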
