Search Results for author: Vipula Rawte

Found 8 papers, 2 papers with code

A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models

1 code implementation · 2 Jan 2024 · S. M Towhidul Islam Tonmoy, S M Mehedi Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, Amitava Das

As Large Language Models (LLMs) continue to advance in their ability to write human-like text, a key challenge remains: their tendency to hallucinate, generating content that appears factual but is ungrounded.

Hallucination · Retrieval · +1

The Troubling Emergence of Hallucination in Large Language Models -- An Extensive Definition, Quantification, and Prescriptive Remediations

no code implementations · 8 Oct 2023 · Vipula Rawte, Swagata Chakraborty, Agnibh Pathak, Anubhav Sarkar, S. M Towhidul Islam Tonmoy, Aman Chadha, Amit P. Sheth, Amitava Das

Finally, to establish a method for quantifying hallucination and to offer a comparative spectrum for evaluating and ranking LLMs by their vulnerability to producing hallucinations, we propose the Hallucination Vulnerability Index (HVI).

Hallucination

A Survey of Hallucination in Large Foundation Models

1 code implementation · 12 Sep 2023 · Vipula Rawte, Amit Sheth, Amitava Das

Hallucination in a foundation model (FM) refers to the generation of content that strays from factual reality or includes fabricated information.

Hallucination
