no code implementations • 31 Jan 2024 • Sagi Shaier, Kevin Bennett, Lawrence E Hunter, Katharina von der Wense
(RQ2) Do models' absolute scores differ between the two approaches?
no code implementations • 31 Jan 2024 • Sagi Shaier, Lawrence E Hunter, Katharina von der Wense
Prior work has uncovered a set of common problems in state-of-the-art context-based question answering (QA) systems: a lack of attention to the context when the latter conflicts with a model's parametric knowledge, little robustness to noise, and a lack of consistency with their answers.
no code implementations • 16 Oct 2023 • Sagi Shaier, Lawrence E. Hunter, Katharina von der Wense
In this opinion piece, we argue that LMs in their current state will never be fully trustworthy in critical settings and suggest a possible novel strategy to handle this issue: building LMs such that they can cite their sources, i.e., point a user to the parts of their training data that back up their outputs.
1 code implementation • 16 Oct 2023 • Sagi Shaier, Kevin Bennett, Lawrence Hunter, Katharina von der Wense
State-of-the-art question answering (QA) models exhibit a variety of social biases (e.g., with respect to sex or race), generally explained by similar issues in their training data.
no code implementations • 19 Dec 2022 • Sagi Shaier, Lawrence Hunter, Katharina Kann
Many dialogue systems (DSs) lack characteristics humans have, such as emotion perception, factuality, and informativeness.
1 code implementation • 11 Oct 2021 • Sagi Shaier, Maziar Raissi, Padmanabhan Seshaiyer
This approach builds on successful physics-informed neural network (PINN) approaches that have been applied to a variety of applications that can be modeled by linear and nonlinear ordinary and partial differential equations.
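The core PINN idea referenced above can be illustrated with a toy example: train a small network so that its output satisfies a differential equation by penalizing the equation's residual at collocation points. This sketch is our own illustration, not the paper's code; for simplicity it solves the linear ODE u'(t) = -u(t), u(0) = 1, and uses finite differences in place of the automatic differentiation a real PINN would use.

```python
import numpy as np

# Toy physics-informed sketch for u'(t) = -u(t), u(0) = 1
# (exact solution: u(t) = exp(-t)).

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (8, 1)), np.zeros((8, 1))
W2, b2 = rng.normal(0, 1, (1, 8)), np.zeros((1, 1))
params = [W1, b1, W2, b2]

def net(t, params):
    W1, b1, W2, b2 = params
    return W2 @ np.tanh(W1 @ t + b1) + b2

def u(t, params):
    # Hard-constrain the initial condition so u(0) = 1 exactly.
    return 1.0 + t * net(t, params)

def physics_loss(params, t):
    # Residual of the ODE u' + u = 0, with u' via central differences
    # (a real PINN would use automatic differentiation here).
    h = 1e-4
    du = (u(t + h, params) - u(t - h, params)) / (2 * h)
    residual = du + u(t, params)
    return np.mean(residual ** 2)

t = np.linspace(0.0, 1.0, 32).reshape(1, -1)  # collocation points
loss0 = physics_loss(params, t)

# Plain finite-difference gradient descent over all parameters
# (kept dependency-free; not how PINNs are trained in practice).
lr, eps = 0.01, 1e-5
for step in range(300):
    grads = []
    for p in params:
        g = np.zeros_like(p)
        for idx in np.ndindex(p.shape):
            old = p[idx]
            p[idx] = old + eps
            lp = physics_loss(params, t)
            p[idx] = old - eps
            lm = physics_loss(params, t)
            p[idx] = old
            g[idx] = (lp - lm) / (2 * eps)
        grads.append(g)
    for p, g in zip(params, grads):
        p -= lr * g

print(physics_loss(params, t))          # residual loss, reduced from loss0
print(float(u(np.array([[1.0]]), params)))  # compare with exp(-1)
```

No labeled solution data is used: the only training signal is how badly the network violates the governing equation, which is what makes the approach attractive for physical systems where data are scarce but the dynamics are known.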