no code implementations • 31 Dec 2023 • Shreyas Verma, Manoj Parmar, Palash Choudhary, Sanchita Porwal
Answering questions using pre-trained language models (LMs) and knowledge graphs (KGs) presents challenges in identifying relevant knowledge and performing joint reasoning. We compared LMs (fine-tuned for the task) with the previously published QAGNN method on the question-answering (QA) objective, and further measured the impact of additional factual context on QAGNN's performance.
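The comparison above hinges on supplying a QA model with extra factual context drawn from a KG. A minimal sketch of that setup (all names and the prompt layout are hypothetical, not the paper's actual pipeline) builds a multiple-choice QA input with and without KG facts, so the two variants can be fed to a fine-tuned LM or a QAGNN-style model:

```python
def build_qa_prompt(question, choices, kg_facts=None):
    """Assemble a multiple-choice QA prompt, optionally prepending KG facts."""
    lines = []
    if kg_facts:
        # Hypothetical serialization of retrieved KG triples as plain text.
        lines.append("Facts: " + " ".join(kg_facts))
    lines.append("Question: " + question)
    for label, choice in zip("ABCDE", choices):
        lines.append(f"{label}) {choice}")
    lines.append("Answer:")
    return "\n".join(lines)

question = "What do people typically use to cut paper?"
choices = ["scissors", "hammer", "spoon"]
facts = ["(scissors, used_for, cutting paper)"]

baseline = build_qa_prompt(question, choices)          # LM-only input
augmented = build_qa_prompt(question, choices, facts)  # with factual KG context
```

Scoring the model on both prompt variants over a QA benchmark would then isolate the contribution of the added factual context.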
no code implementations • 22 Dec 2023 • Samaksh Gulati, Anshit Verma, Manoj Parmar, Palash Chaudhary
Large "instruction-tuned" language models (i.e., fine-tuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks.