Search Results for author: Kaiji Lu

Found 6 papers, 1 paper with code

Learning Modulo Theories

no code implementations26 Jan 2023 Matt Fredrikson, Kaiji Lu, Saranya Vijayakumar, Somesh Jha, Vijay Ganesh, Zifan Wang

Recent techniques that integrate \emph{solver layers} into Deep Neural Networks (DNNs) have shown promise in bridging a long-standing gap between inductive learning and symbolic reasoning techniques.

Order-sensitive Shapley Values for Evaluating Conceptual Soundness of NLP Models

no code implementations1 Jun 2022 Kaiji Lu, Anupam Datta

Previous works show that deep NLP models are not always conceptually sound: they do not always learn the correct linguistic concepts.
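The snippet above does not describe the paper's order-sensitive formulation, but as background, a classical Shapley value attributes a model's output to each input feature by averaging that feature's marginal contribution over all orderings of the feature set. A minimal sketch, assuming a hypothetical toy value function over token subsets (not the paper's method or data):

```python
from itertools import permutations

def shapley_values(features, value_fn):
    """Exact Shapley values: average each feature's marginal
    contribution over all orderings of the feature set."""
    totals = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        included = []
        prev = value_fn(frozenset())
        for f in order:
            included.append(f)
            cur = value_fn(frozenset(included))
            totals[f] += cur - prev  # marginal contribution of f
            prev = cur
    return {f: totals[f] / len(perms) for f in features}

# Hypothetical "model score" when only a subset of tokens is kept.
scores = {
    frozenset(): 0.0,
    frozenset({"not"}): 0.1,
    frozenset({"good"}): 0.6,
    frozenset({"not", "good"}): 0.2,
}
attributions = shapley_values(["not", "good"], lambda s: scores[s])
# attributions == {"not": -0.15, "good": 0.35}
```

Note that the attributions sum to the full-input score (0.2), the efficiency property of Shapley values; order-sensitive variants refine this by not averaging away how the ordering of features affects their contributions.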


Influence Patterns for Explaining Information Flow in BERT

no code implementations NeurIPS 2021 Kaiji Lu, Zifan Wang, Piotr Mardziel, Anupam Datta

While "attention is all you need" may be proving true, we do not know why: attention-based transformer models such as BERT are superior, but how information flows from input tokens to output predictions is unclear.

Abstracting Influence Paths for Explaining (Contextualization of) BERT Models

no code implementations28 Sep 2020 Kaiji Lu, Zifan Wang, Piotr Mardziel, Anupam Datta

While "attention is all you need" may be proving true, we do not yet know why: attention-based transformer models such as BERT are superior, but how they contextualize information even for simple grammatical rules such as subject-verb number agreement (SVA) is uncertain.
