1 code implementation • 20 Mar 2024 • Junho Kim, Yeon Ju Kim, Yong Man Ro
This paper presents a method for enhancing the reliability of Large Multimodal Models (LMMs) by addressing hallucination, where models generate factually incorrect or irrelevant responses.