no code implementations • 28 Mar 2024 • Vipula Rawte, S. M Towhidul Islam Tonmoy, Krishnav Rajbangshi, Shravani Nag, Aman Chadha, Amit P. Sheth, Amitava Das
We present FACTOID (FACTual enTAILment for hallucInation Detection), a benchmark dataset for FE.
no code implementations • 27 Mar 2024 • Vipula Rawte, S. M Towhidul Islam Tonmoy, S M Mehedi Zaman, Prachi Priya, Aman Chadha, Amit P. Sheth, Amitava Das
We have fine-tuned an LLM with injected [PAUSE] tokens, allowing the LLM to pause while reading lengthier prompts.
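The idea of injecting [PAUSE] tokens can be sketched in a minimal, hypothetical form: the function below (an illustration, not the paper's actual implementation) inserts a `[PAUSE]` marker after every `stride` words of a prompt, mimicking explicit pause points while the model reads a lengthy input. The function name, stride size, and marker handling are all assumptions.

```python
# Illustrative sketch only (not the paper's implementation): insert a
# [PAUSE] marker after every `stride` words of a long prompt, giving the
# model explicit pause points while reading.
def inject_pause_tokens(prompt: str, stride: int = 8, pause: str = "[PAUSE]") -> str:
    words = prompt.split()
    # Split the prompt into fixed-size word chunks, then rejoin with the marker.
    chunks = [" ".join(words[i:i + stride]) for i in range(0, len(words), stride)]
    return f" {pause} ".join(chunks)

example = inject_pause_tokens("one two three four five six", stride=2)
# "one two [PAUSE] three four [PAUSE] five six"
```

In a real fine-tuning setup, `[PAUSE]` would also need to be registered as a special token in the model's tokenizer so it maps to a single new embedding rather than being split into subwords.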
no code implementations • 26 Mar 2024 • Anku Rani, Vipula Rawte, Harshad Sharma, Neeraj Anand, Krishnav Rajbangshi, Amit Sheth, Amitava Das
The troubling rise of hallucination presents perhaps the most significant impediment to the advancement of responsible AI.
1 code implementation • 2 Jan 2024 • S. M Towhidul Islam Tonmoy, S M Mehedi Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, Amitava Das
As Large Language Models (LLMs) continue to advance in their ability to write human-like text, a key challenge remains: their tendency to hallucinate, generating content that appears factual but is ungrounded.
no code implementations • 8 Oct 2023 • Vipula Rawte, Swagata Chakraborty, Agnibh Pathak, Anubhav Sarkar, S. M Towhidul Islam Tonmoy, Aman Chadha, Amit P. Sheth, Amitava Das
Finally, to quantify hallucination and to offer a comparative spectrum for evaluating and ranking LLMs by their vulnerability to producing hallucinations, we propose the Hallucination Vulnerability Index (HVI).
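The notion of ranking LLMs by hallucination vulnerability can be illustrated with a deliberately naive stand-in (the actual HVI formula is defined in the paper; everything below, including the score definition and model names, is hypothetical): score each model by the fraction of sampled generations flagged as hallucinated, then sort.

```python
# Hypothetical illustration only: the real HVI is defined in the paper.
# Here a model's vulnerability score is simply the fraction of its sampled
# generations flagged as hallucinated (0 = robust, 1 = fully vulnerable).
def naive_vulnerability_score(flags: list[bool]) -> float:
    return sum(flags) / len(flags) if flags else 0.0

# Hypothetical per-generation hallucination flags for two made-up models.
models = {
    "model_a": [True, False, False, True],
    "model_b": [False, False, True, False],
}

# Rank models from least to most vulnerable.
ranking = sorted(models, key=lambda m: naive_vulnerability_score(models[m]))
# ranking == ["model_b", "model_a"]
```

Such a spectrum lets models be compared on a common axis even when their raw outputs differ, which is the comparative role HVI plays in the paper.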
no code implementations • 20 Sep 2023 • Vipula Rawte, Prachi Priya, S. M Towhidul Islam Tonmoy, S M Mehedi Zaman, Amit Sheth, Amitava Das
As Large Language Models (LLMs) have advanced, they have brought forth new challenges, with one of the prominent issues being LLM hallucination.
1 code implementation • 12 Sep 2023 • Vipula Rawte, Amit Sheth, Amitava Das
Hallucination in a foundation model (FM) refers to the generation of content that strays from factual reality or includes fabricated information.
no code implementations • 13 May 2023 • Kaushik Roy, Manas Gaur, Misagh Soltani, Vipula Rawte, Ashwin Kalyan, Amit Sheth
LMs augmented with the ProKnow-guided method generated 89% safer questions in the depression and anxiety domains.