no code implementations • 12 Mar 2024 • Tianqing Fang, Zeming Chen, Yangqiu Song, Antoine Bosselut
Event commonsense reasoning requires the ability to reason about the relationship between events, as well as infer implicit context underlying that relationship.
1 code implementation • 12 Dec 2023 • Zeming Chen, Wenwei Zhang, Xinjiang Wang, Kai Chen, Zhi Wang
While the pseudo-label method has demonstrated considerable success in semi-supervised object detection tasks, this paper uncovers notable limitations within this approach.
Ranked #1 on Semi-Supervised Object Detection on COCO 100% labeled data (using extra training data)
1 code implementation • 27 Nov 2023 • Zeming Chen, Alejandro Hernández Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba, Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas Köpf, Amirkeivan Mohtashami, Alexandre Sallinen, Alireza Sakhaeirad, Vinitra Swamy, Igor Krawczuk, Deniz Bayazit, Axel Marmet, Syrielle Montariol, Mary-Anne Hartley, Martin Jaggi, Antoine Bosselut
Large language models (LLMs) can potentially democratize access to medical knowledge.
Ranked #1 on Multiple Choice Question Answering (MCQA) on MedMCQA (Dev Set (Acc-%) metric)
no code implementations • 4 Oct 2023 • Deniz Bayazit, Negar Foroutan, Zeming Chen, Gail Weiss, Antoine Bosselut
In this work, we investigate whether pretrained language models contain various knowledge-critical subnetworks: particular sparse computational subgraphs responsible for encoding specific knowledge the model has memorized.
1 code implementation • 28 May 2023 • Yu Fei, Yifan Hou, Zeming Chen, Antoine Bosselut
In this work, we define a typology for three types of label biases in ICL for text classification: vanilla-label bias, context-label bias, and domain-label bias (which we conceptualize and detect for the first time).
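One way to make the vanilla-label bias concrete is to measure how far a model's label distribution on a content-free input deviates from uniform, in the spirit of calibration-style approaches. The sketch below is a minimal illustration only: `label_probs` is a hypothetical stub standing in for querying an LLM's label-verbalizer probabilities, and the paper's actual estimator may differ.

```python
# Minimal sketch of detecting vanilla-label bias via a content-free input.
# `label_probs` is a hypothetical stub; a real study would read the LLM's
# logits for each label verbalizer given the prompt.
def label_probs(prompt):
    # Toy model that slightly prefers "positive" regardless of the input,
    # i.e. it exhibits a vanilla-label bias.
    return {"positive": 0.7, "negative": 0.3}

def vanilla_label_bias(content_free_input="N/A"):
    """Estimate vanilla-label bias as the total variation distance
    between the label distribution on a content-free input and the
    uniform distribution over labels."""
    probs = label_probs(content_free_input)
    uniform = 1.0 / len(probs)
    return 0.5 * sum(abs(p - uniform) for p in probs.values())

bias = vanilla_label_bias()  # 0.5 * (0.2 + 0.2) = 0.2 for the toy model
```

A bias of 0 would mean the model is indifferent between labels when given no content; larger values indicate a stronger prior toward some label.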
no code implementations • NeurIPS 2023 • Zeming Chen, Gail Weiss, Eric Mitchell, Asli Celikyilmaz, Antoine Bosselut
In the outer loop, the model learns to use the updated weights to reproduce and answer reasoning questions about the memorized knowledge.
1 code implementation • 20 Dec 2022 • Zeming Chen, Qiyue Gao, Antoine Bosselut, Ashish Sabharwal, Kyle Richardson
However, high-quality counterfactual data is scarce for most tasks and not easily generated at scale.
no code implementations • NAACL 2022 • Zeming Chen, Qiyue Gao
In the age of large transformer language models, linguistic evaluation plays an important role in diagnosing models' abilities and limitations in natural language understanding.
1 code implementation • 3 Dec 2021 • Zeming Chen, Qiyue Gao
We propose a methodology for probing linguistic information for logical inference in pre-trained language model representations.
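A standard way to probe for linguistic information is to train a small linear classifier on frozen model representations and check whether it recovers the property of interest. The sketch below is an illustrative assumption, not the paper's exact setup: synthetic vectors stand in for pretrained-LM representations, with the "linguistic" label encoded along one dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen language-model representations: 16-dim vectors
# whose (synthetic) linguistic label is linearly decodable from dim 0.
n, d = 200, 16
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)

# Train a logistic-regression probe with plain gradient descent.
w = np.zeros(d)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # probe predictions
    w -= 1.0 * (X.T @ (p - y) / n)          # gradient step on weights
    b -= 1.0 * np.mean(p - y)               # gradient step on bias

acc = np.mean(((X @ w + b) > 0) == (y > 0.5))
```

High probe accuracy (here well above the 50% chance level) is taken as evidence that the representation encodes the probed property linearly; control tasks are needed to rule out the probe learning the task itself.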
1 code implementation • Joint Conference on Lexical and Computational Semantics 2021 • Zeming Chen, Qiyue Gao, Lawrence S. Moss
Deep learning (DL) based language models achieve high performance on various benchmarks for Natural Language Inference (NLI).
Ranked #1 on Natural Language Inference on MED
1 code implementation • IWCS (ACL) 2021 • Zeming Chen, Qiyue Gao
Dependency parsing is a tool widely used in natural language processing and computational linguistics.
no code implementations • ACL (NALOMA, IWCS) 2021 • Zeming Chen
We show that our model outperforms existing models on MED, and attempt to explain why.