1 code implementation • NAACL (NLPMC) 2021 • Khalil Mrini, Franck Dernoncourt, Walter Chang, Emilia Farcas, Ndapa Nakashole
Understanding the intent of medical questions asked by patients, or Consumer Health Questions, is an essential skill for medical Conversational AI systems.
no code implementations • EMNLP (newsum) 2021 • Khalil Mrini, Can Liu, Markus Dreyer
We introduce a deep reinforcement learning approach to topic-focused abstractive summarization, trained on rewards with a novel negative example baseline.
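The abstract does not specify the exact reward, but the "negative example baseline" idea can be sketched as a REINFORCE-style loss in which the reward of an off-topic (negative) summary is subtracted from the sampled summary's reward. The reward function below is a crude unigram-recall stand-in for ROUGE, and all function names are hypothetical, not from the paper.

```python
def overlap_reward(summary, reference):
    """Stand-in reward: unigram recall of the reference (a crude ROUGE-1 proxy)."""
    ref_tokens = reference.lower().split()
    summary_tokens = set(summary.lower().split())
    return sum(1 for t in ref_tokens if t in summary_tokens) / len(ref_tokens)

def rl_loss(log_prob_sampled, sampled_summary, negative_summary, reference):
    """REINFORCE-style loss with a negative-example baseline: the negative
    summary's reward is subtracted from the sampled summary's reward, so the
    gradient pushes probability mass toward on-topic outputs."""
    advantage = (overlap_reward(sampled_summary, reference)
                 - overlap_reward(negative_summary, reference))
    return -advantage * log_prob_sampled
```

A summary that beats the negative baseline yields a positive advantage, so increasing its log-probability lowers the loss; a summary no better than the off-topic baseline gets no credit.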
no code implementations • GWC 2018 • Khalil Mrini, Francis Bond
Moroccan Darija is a variant of Arabic with influences from Berber, French, and Spanish.
no code implementations • NAACL (BioNLP) 2021 • Khalil Mrini, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Emilia Farcas, Ndapa Nakashole
We show that both transfer learning methods combined achieve the highest ROUGE scores.
no code implementations • 5 Mar 2024 • Weizhi Wang, Khalil Mrini, Linjie Yang, Sateesh Kumar, Yu Tian, Xifeng Yan, Heng Wang
Our MLM filter can generalize to different models and tasks, and be used as a drop-in replacement for CLIPScore.
no code implementations • 20 Nov 2023 • Xiaotian Han, Quanzeng You, Yongfei Liu, Wentao Chen, Huangjie Zheng, Khalil Mrini, Xudong Lin, Yiqi Wang, Bohan Zhai, Jianbo Yuan, Heng Wang, Hongxia Yang
To mitigate this issue, we manually curate a benchmark dataset specifically designed for MLLMs, with a focus on complex reasoning tasks.
1 code implementation • COLING 2022 • Khalil Mrini, Harpreet Singh, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Emilia Farcas, Ndapa Nakashole
The system first matches the summarized user question with an FAQ from a trusted medical knowledge base, and then retrieves a fixed number of relevant sentences from the corresponding answer document.
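The two-step pipeline described above can be sketched as follows. This is a minimal illustration using bag-of-words cosine similarity as a stand-in for the system's actual matching and retrieval models, which the snippet does not specify; the data structure and function names are assumptions for illustration only.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words representation of a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer_question(summarized_question, faq_base, k=2):
    """Two-step pipeline: (1) match the summarized user question against FAQs
    in a trusted knowledge base, (2) retrieve the k most relevant sentences
    from the matched FAQ's answer document."""
    q = bow(summarized_question)
    # Step 1: pick the FAQ whose question is most similar to the user's.
    best_faq = max(faq_base, key=lambda faq: cosine(q, bow(faq["question"])))
    # Step 2: rank the answer document's sentences by similarity, keep top k.
    sentences = best_faq["answer"].split(". ")
    ranked = sorted(sentences, key=lambda s: cosine(q, bow(s)), reverse=True)
    return ranked[:k]
```

Retrieving a fixed number of sentences, rather than the whole answer document, keeps responses concise regardless of how long the matched FAQ's answer is.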
no code implementations • ACL 2022 • Casey Meehan, Khalil Mrini, Kamalika Chaudhuri
User language data can contain highly sensitive personal content.
no code implementations • Findings (ACL) 2022 • Khalil Mrini, Shaoliang Nie, Jiatao Gu, Sinong Wang, Maziar Sanjabi, Hamed Firooz
Without the use of a knowledge base or candidate sets, our model sets a new state of the art on two benchmark datasets for entity linking: COMETA in the biomedical domain, and AIDA-CoNLL in the news domain.
no code implementations • ACL 2021 • Khalil Mrini, Emilia Farcas, Ndapa Nakashole
The recursive nature of our model allows it to represent all levels of syntactic parse trees with only one additional self-attention layer.
1 code implementation • ACL 2021 • Khalil Mrini, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Emilia Farcas, Ndapa Nakashole
Users of medical question answering systems often submit long and detailed questions, making it hard to achieve high recall in answer retrieval.
2 code implementations • Findings (EMNLP) 2020 • Khalil Mrini, Franck Dernoncourt, Quan Tran, Trung Bui, Walter Chang, Ndapa Nakashole
Finally, we find that the Label Attention heads learn relations between syntactic categories and show pathways to analyze errors.
Ranked #1 on Dependency Parsing on Penn Treebank
no code implementations • 26 Feb 2019 • Khalil Mrini, Claudiu Musat, Michael Baeriswyl, Martin Jaggi
We show our model's interpretability by visualizing how our model distributes attention inside a document.
no code implementations • RANLP 2017 • Khalil Mrini, Martin Benjamin
We propose methods to link automatically parsed linguistic data to WordNet.