no code implementations • 4 Nov 2022 • Avishek Anand, Lijun Lyu, Maximilian Idahl, Yumeng Wang, Jonas Wallat, Zijian Zhang
Explainable information retrieval is an emerging research area that aims to make information retrieval systems transparent and trustworthy.
no code implementations • NAACL (TrustNLP) 2021 • Maximilian Idahl, Lijun Lyu, Ujwal Gadiraju, Avishek Anand
Post-hoc explanation methods are an important class of approaches that help understand the rationale underlying a trained model's decision.
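The simplest post-hoc explanations treat the trained model as a black box and perturb its inputs. As a hedged illustration (not the method studied in the paper), the sketch below scores each feature by how much the model's output drops when that feature is occluded; all names here are hypothetical.

```python
# Occlusion-based feature importance for a black-box scoring function:
# importance of feature i = score(x) - score(x with feature i replaced
# by a baseline value). Purely illustrative.

def occlusion_importance(model, x, baseline=0.0):
    """Return one importance value per input feature."""
    base_score = model(x)
    return [
        base_score - model(x[:i] + [baseline] + x[i + 1:])
        for i in range(len(x))
    ]

# Toy linear model whose true feature weights we know.
weights = [2.0, -1.0, 0.5]
model = lambda x: sum(w * xi for w, xi in zip(weights, x))

importances = occlusion_importance(model, [1.0, 1.0, 1.0])
# For a linear model, occlusion recovers the weights: [2.0, -1.0, 0.5]
```

For a linear model the occlusion scores equal the feature weights exactly; for nonlinear models they give a local, model-agnostic approximation of feature relevance.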
no code implementations • 23 Mar 2020 • Eric Müller-Budack, Jonas Theiner, Sebastian Diering, Maximilian Idahl, Ralph Ewerth
In this paper, we introduce a novel task of cross-modal consistency verification in real-world news and present a multimodal approach to quantify the entity coherence between image and text.
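The paper's approach to quantifying entity coherence is multimodal; as a much-simplified, hedged sketch of the underlying idea, one can score how many entities mentioned in the text are also verified in the image. The function and entity names below are hypothetical illustrations, not the authors' implementation.

```python
# Toy entity-coherence score between entities detected in a news image
# and entities mentioned in the accompanying text (set-overlap
# simplification of cross-modal consistency verification).

def entity_coherence(image_entities, text_entities):
    """Fraction of text entities that also appear in the image."""
    image_set = {e.lower() for e in image_entities}
    text_set = {e.lower() for e in text_entities}
    if not text_set:
        return 0.0
    return len(image_set & text_set) / len(text_set)

# Example: two of the three persons named in the text are found
# in the image, so the coherence score is 2/3.
score = entity_coherence(
    image_entities=["Angela Merkel", "Emmanuel Macron"],
    text_entities=["Angela Merkel", "Emmanuel Macron", "Boris Johnson"],
)
```

A low score flags a potential cross-modal inconsistency, e.g. a repurposed image attached to unrelated text.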
no code implementations • 11 Oct 2019 • Maximilian Idahl, Megha Khosla, Avishek Anand
In this paper, we propose and study the novel problem of explaining node embeddings by finding human-interpretable subspaces embedded in already-trained, unsupervised node representations.
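One generic way to probe trained embeddings for an interpretable subspace is a linear probe: fit a direction that predicts a known node attribute and check how well projections onto it separate the nodes. This sketch is a standard probing illustration under synthetic data, not the algorithm proposed in the paper.

```python
import numpy as np

# Linear probe for a 1-D interpretable subspace in trained node
# embeddings. We plant an attribute signal in one embedding axis,
# fit a least-squares direction predicting the attribute, and measure
# how well projections onto that direction recover it.

rng = np.random.default_rng(0)
n_nodes, dim = 200, 16
labels = rng.integers(0, 2, size=n_nodes)      # hypothetical node attribute
embeddings = rng.normal(size=(n_nodes, dim))
embeddings[:, 0] += 3.0 * labels               # plant signal along axis 0

# Direction w minimizing ||X w - y||_2 (least squares).
w, *_ = np.linalg.lstsq(embeddings, labels.astype(float), rcond=None)
w /= np.linalg.norm(w)

# Threshold the projections at the midpoint between class means.
proj = embeddings @ w
threshold = (proj[labels == 0].mean() + proj[labels == 1].mean()) / 2
accuracy = ((proj > threshold).astype(int) == labels).mean()
```

High probe accuracy indicates the attribute is linearly encoded in the embedding space, i.e. the fitted direction spans a human-interpretable subspace.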