no code implementations • 15 Dec 2023 • Renat Aksitov, Sobhan Miryoosefi, Zonglin Li, Daliang Li, Sheila Babayan, Kavya Kopparapu, Zachary Fisher, Ruiqi Guo, Sushant Prakash, Pranesh Srinivasan, Manzil Zaheer, Felix Yu, Sanjiv Kumar
Answering complex natural language questions often necessitates multi-step reasoning and integrating external information.
Ranked #1 on Question Answering on Bamboogle
no code implementations • 29 Nov 2023 • Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash, Charles Sutton, Xuezhi Wang, Denny Zhou
Self-consistency with chain-of-thought prompting (CoT) has demonstrated remarkable performance gains on various challenging tasks, by utilizing multiple reasoning paths sampled from large language models (LLMs).
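The core idea of self-consistency is to sample several reasoning paths, keep only each path's final answer, and majority-vote over those answers. A minimal sketch (the `sample_answer` callable is a hypothetical stand-in for one LLM sample, not an API from the paper):

```python
from collections import Counter

def self_consistency(sample_answer, prompt, n_samples=5):
    # Draw several independent chain-of-thought samples, keep only
    # their final answers, and return the majority-vote answer.
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Deterministic toy sampler standing in for an LLM: three of the
# five simulated reasoning paths arrive at "42".
_canned = iter(["42", "41", "42", "43", "42"])
print(self_consistency(lambda p: next(_canned), "6 * 7 = ?"))  # → 42
```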
no code implementations • 13 Nov 2023 • Abdullatif Köksal, Renat Aksitov, Chung-Ching Chang
For open book QA as a case study, we demonstrate that models finetuned on our counterfactual datasets improve text grounding, leading to better open book QA performance, with up to an 8.0% increase in F1 score.
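The F1 score here compares a predicted answer string against a reference answer. A generic token-overlap F1, as commonly used to score open book QA answers (a sketch of the standard metric, not necessarily the paper's exact scorer):

```python
from collections import Counter

def token_f1(prediction, reference):
    # Token-overlap F1: harmonic mean of precision and recall
    # over whitespace tokens (case-insensitive).
    pred, ref = prediction.lower().split(), reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the eiffel tower", "eiffel tower"))  # → 0.8
```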
2 code implementations • 2 Jun 2023 • Chung-Ching Chang, David Reitter, Renat Aksitov, Yun-Hsuan Sung
One common approach to mitigating hallucinations is to provide source/grounding documents and train the model to produce predictions that are bound and attributable to the provided source.
no code implementations • 23 May 2023 • Raghav Gupta, Renat Aksitov, Samrat Phatale, Simral Chaudhary, Harrison Lee, Abhinav Rastogi
Conversational recommendation systems (CRS) aim to recommend suitable items to users through natural language conversation.
no code implementations • 11 Feb 2023 • Renat Aksitov, Chung-Ching Chang, David Reitter, Siamak Shakeri, Yun-Hsuan Sung
One common solution to this is augmenting LLMs with a retrieval system and making sure that the generated output is attributable to the retrieved information.
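The retrieve-then-generate pattern can be sketched as follows: fetch the passages most relevant to the question, condition generation on them, and return the passages alongside the answer so the output stays attributable. This is a minimal illustration under stated assumptions; the lexical retriever and the `generate` callable are hypothetical stand-ins, not the paper's components:

```python
def overlap_retrieve(question, corpus, k=2):
    # Toy lexical retriever: rank passages by shared lowercase words.
    q = set(question.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))
    return ranked[:k]

def answer_with_attribution(question, corpus, generate, retrieve=overlap_retrieve):
    # Retrieve supporting passages, condition the generator on them,
    # and return the passages with the answer for attribution.
    passages = retrieve(question, corpus, k=2)
    prompt = ("Answer using only these sources:\n"
              + "\n".join(f"[{i}] {p}" for i, p in enumerate(passages))
              + f"\nQuestion: {question}")
    return generate(prompt), passages

corpus = ["Paris is the capital of France", "The Moon orbits Earth"]
answer, sources = answer_with_attribution(
    "What is the capital of France?", corpus, lambda prompt: "Paris")
print(answer, "| cited:", sources[0])
```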