no code implementations • EMNLP 2020 • Lifu Huang, Heng Ji
We design a Semi-Supervised Vector Quantized Variational Autoencoder framework to automatically learn a discrete latent type representation for each seen and unseen type and optimize them using the event annotations of seen types.
1 code implementation • BioNLP (ACL) 2022 • Sidhant Chandak, Liqing Zhang, Connor Brown, Lifu Huang
Antibiotic resistance has become a growing worldwide concern as new resistance mechanisms emerge and spread globally; detecting and collecting their cause, Antibiotic Resistance Genes (ARGs), has therefore become more critical than ever.
no code implementations • NAACL (ACL) 2022 • Muhao Chen, Lifu Huang, Manling Li, Ben Zhou, Heng Ji, Dan Roth
This tutorial targets researchers and practitioners who are interested in AI and ML technologies for structural information extraction (IE) from unstructured textual sources.
no code implementations • 3 Apr 2024 • Ying Shen, Yizhe Zhang, Shuangfei Zhai, Lifu Huang, Joshua M. Susskind, Jiatao Gu
This paper introduces a domain-general framework for many-to-many image generation, capable of producing interrelated image series from a given set of images and offering a scalable approach that obviates the need for task-specific solutions across different multi-image scenarios.
1 code implementation • 6 Mar 2024 • Hanzi Xu, Muhao Chen, Lifu Huang, Slobodan Vucetic, Wenpeng Yin
In recent years, few-shot and zero-shot learning, which learn to predict labels with limited annotated instances, have garnered significant attention.
no code implementations • 24 Feb 2024 • Ying Shen, Zhiyang Xu, Qifan Wang, Yu Cheng, Wenpeng Yin, Lifu Huang
Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in diverse tasks across different domains, with an increasing focus on improving their zero-shot generalization capabilities for unseen multimodal tasks.
no code implementations • 18 Feb 2024 • Zhiyang Xu, Chao Feng, Rulin Shao, Trevor Ashby, Ying Shen, Di Jin, Yu Cheng, Qifan Wang, Lifu Huang
Despite vision-language models' (VLMs) remarkable capabilities as versatile visual assistants, two substantial challenges persist within existing VLM frameworks: (1) a lack of task diversity in pretraining and visual instruction tuning, and (2) annotation errors and bias in GPT-4-synthesized instruction tuning data.
no code implementations • 16 Feb 2024 • Zihao Lin, Mohammad Beigi, Hongxuan Li, Yufan Zhou, Yuxiang Zhang, Qifan Wang, Wenpeng Yin, Lifu Huang
Our in-depth study advocates more careful use of ME in real-world scenarios.
no code implementations • 23 Jan 2024 • Cheng Han, Qifan Wang, Yiming Cui, Wenguan Wang, Lifu Huang, Siyuan Qi, Dongfang Liu
As the scale of vision models continues to grow, the emergence of Visual Prompt Tuning (VPT) as a parameter-efficient transfer learning technique has gained attention due to its superior performance compared to traditional full-finetuning.
1 code implementation • 10 Jan 2024 • Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions.
1 code implementation • 7 Dec 2023 • Jaehyung Kim, Yuning Mao, Rui Hou, Hanchao Yu, Davis Liang, Pascale Fung, Qifan Wang, Fuli Feng, Lifu Huang, Madian Khabsa
Under a unified evaluation of fine-tuned LMs by incorporating four representative perspectives of model robustness, we demonstrate the effectiveness of RoAST compared to state-of-the-art fine-tuning methods on six different types of LMs, which indicates its usefulness in practice.
no code implementations • 15 Nov 2023 • Minqian Liu, Ying Shen, Zhiyang Xu, Yixin Cao, Eunah Cho, Vaibhav Kumar, Reza Ghanadan, Lifu Huang
Natural Language Generation (NLG) typically involves evaluating the generated text in various aspects (e.g., consistency and naturalness) to obtain a comprehensive assessment.
1 code implementation • 8 Oct 2023 • Jingyuan Qi, Minqian Liu, Ying Shen, Zhiyang Xu, Lifu Huang
Automatically generating scripts (i.e., sequences of key steps described in text) from video demonstrations and reasoning about the subsequent steps are crucial for modern AI virtual assistants to guide humans through completing everyday tasks, especially unfamiliar ones.
no code implementations • 4 Oct 2023 • Zihao Lin, Yan Sun, Yifan Shi, Xueqian Wang, Lifu Huang, Li Shen, DaCheng Tao
With the rapid development of pre-trained models (PTMs), efficiently tuning these models for diverse downstream applications has emerged as a pivotal research concern.
no code implementations • 23 Sep 2023 • Hanwen Zheng, Sijia Wang, Lifu Huang
Document-level information extraction (IE) is a crucial task in natural language processing (NLP).
1 code implementation • 26 May 2023 • Minqian Liu, Lifu Huang
Class-incremental learning (CIL) aims to develop a learning system that can continually learn new classes from a data stream without forgetting previously learned classes.
1 code implementation • 24 May 2023 • Jingyuan Qi, Zhiyang Xu, Ying Shen, Minqian Liu, Di Jin, Qifan Wang, Lifu Huang
Chain-of-Thought (CoT) prompting enables large language models to solve complex reasoning problems by generating intermediate reasoning steps.
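The CoT prompting setup mentioned above can be illustrated with a minimal sketch; the prompt wording and the worked exemplar are illustrative assumptions, not drawn from the paper:

```python
def build_cot_prompt(question, exemplars):
    """Assemble a Chain-of-Thought prompt: each exemplar pairs a question
    with a worked-out reasoning chain ending in an answer, and the target
    question is appended with a cue to reason step by step."""
    parts = []
    for q, reasoning in exemplars:
        parts.append(f"Q: {q}\nA: {reasoning}")
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)


prompt = build_cot_prompt(
    "If a train travels 60 miles in 1.5 hours, what is its speed?",
    [("What is 12 * 3?",
      "12 * 3 means 12 added three times: 12 + 12 + 12 = 36. The answer is 36.")],
)
```

Feeding such a prompt to an LLM encourages it to emit intermediate steps before the final answer, which is the mechanism CoT-style methods build on.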
no code implementations • 24 May 2023 • Xiaochu Li, Minqian Liu, Zhiyang Xu, Lifu Huang
To address these challenges, we propose joint biomedical entity linking and event extraction by treating the event structures and entity references in knowledge bases as latent variables and updating the two task-specific models in a hard Expectation-Maximization (EM) fashion: (1) predicting the missing variables for each partially annotated dataset based on the current two task-specific models, and (2) updating the parameters of each model on the corresponding pseudo-completed dataset.
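The alternating hard-EM procedure described above can be sketched as follows. This is a minimal toy illustration, not the paper's actual method: the `MajorityModel` class and the example labels are placeholders standing in for the real entity-linking and event-extraction networks.

```python
class MajorityModel:
    """Toy stand-in for a task-specific model: it simply remembers the
    most frequent label it has been trained on. A real system would use
    neural entity-linking / event-extraction models here."""
    def __init__(self):
        self.counts = {}

    def fit(self, pairs):  # pairs: [(input, label), ...]
        self.counts = {}
        for _, label in pairs:
            self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, x):
        return max(self.counts, key=self.counts.get)


def hard_em(model_a, model_b, data_a, data_b, rounds=3):
    """data_a carries gold labels for task A only; data_b for task B only.
    Each round: E-step, each model imputes the other dataset's missing
    labels as a hard assignment; M-step, each model is re-fit on its gold
    pairs plus the pseudo-labelled pairs."""
    model_a.fit(data_a)  # warm start on the gold annotations
    model_b.fit(data_b)
    for _ in range(rounds):
        # E-step: fill in the missing variables of each partial dataset.
        pseudo_a = [(x, model_a.predict(x)) for x, _ in data_b]
        pseudo_b = [(x, model_b.predict(x)) for x, _ in data_a]
        # M-step: update each model on its pseudo-completed dataset.
        model_a.fit(data_a + pseudo_a)
        model_b.fit(data_b + pseudo_b)
    return model_a, model_b


ma, mb = hard_em(MajorityModel(), MajorityModel(),
                 [("s1", "link1"), ("s2", "link1")],  # task-A gold data
                 [("s3", "evt1")])                    # task-B gold data
```

The key property the sketch preserves is that neither dataset is ever fully annotated for both tasks: each model's predictions complete the other's training data, and both improve jointly over rounds.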
no code implementations • 24 May 2023 • Barry Menglong Yao, Yu Chen, Qifan Wang, Sijia Wang, Minqian Liu, Zhiyang Xu, Licheng Yu, Lifu Huang
We propose attribute-aware multimodal entity linking, where the input is a mention described with text and an image, and the goal is to predict the corresponding target entity from a multimodal knowledge base (KB) in which each entity is also described with a text description, a visual image, and a set of attributes and values.
no code implementations • 24 May 2023 • Pritika Ramu, Sijia Wang, Lalla Mouatadid, Joy Rimchala, Lifu Huang
Current research in form understanding predominantly relies on large pre-trained language models, necessitating extensive data for pre-training.
no code implementations • 26 Apr 2023 • Mingchen Li, Lifu Huang
Open domain entity state tracking aims to predict reasonable state changes of entities (i.e., [attribute] of [entity] was [before_state] and [after_state] afterwards) given the action descriptions.
1 code implementation • 21 Jan 2023 • Sai Gurrapu, Lifu Huang, Feras A. Batarseh
We introduce ExClaim, a novel claim verification approach that aims to provide an explainable claim verification system with foundational evidence.
no code implementations • 21 Jan 2023 • Sai Gurrapu, Ajay Kulkarni, Lifu Huang, Ismini Lourentzou, Laura Freeman, Feras A. Batarseh
Recent improvements in natural language generation have made rationalization an attractive technique because it is intuitive, human-comprehensible, and accessible to non-technical users.
1 code implementation • 21 Dec 2022 • Zhiyang Xu, Ying Shen, Lifu Huang
Our results indicate that fine-tuning the model on a diverse set of tasks and instructions leads to a reduced sensitivity to variations in instructions for each task.
1 code implementation • 26 Oct 2022 • Zhe Hu, Hou Pong Chan, Lifu Huang
Teaching neural models to generate coherent narrative texts is a critical problem.
1 code implementation • 25 Aug 2022 • Qingyun Wang, Manling Li, Hou Pong Chan, Lifu Huang, Julia Hockenmaier, Girish Chowdhary, Heng Ji
Goal-oriented generative script learning aims to generate subsequent steps to reach a particular goal, which is an essential task to assist robots or humans in performing stereotypical activities.
1 code implementation • 25 May 2022 • Barry Menglong Yao, Aditya Shah, Lichao Sun, Jin-Hee Cho, Lifu Huang
We propose end-to-end multimodal fact-checking and explanation generation, where the input is a claim and a large collection of web sources, including articles, images, videos, and tweets, and the goal is to assess the truthfulness of the claim by retrieving relevant evidence and predicting a truthfulness label (e.g., support, refute, or not enough information), and to generate a statement to summarize and explain the reasoning and ruling process.
no code implementations • 25 May 2022 • Zhiyang Xu, Jay-Yoon Lee, Lifu Huang
Data scarcity has been the main factor that hinders the progress of event extraction.
no code implementations • 16 Apr 2022 • Zijian Jin, Xingyu Zhang, Mo Yu, Lifu Huang
Script knowledge is critical for humans to understand the broad daily tasks and routine activities in the world.
1 code implementation • COLING 2022 • Minqian Liu, Shiyu Chang, Lifu Huang
Lifelong event detection aims to incrementally update a model with new event types and data while retaining the capability on previously learned old types.
no code implementations • 15 Apr 2022 • Apoorv Garg, Deval Srivastava, Zhiyang Xu, Lifu Huang
Due to their superior performance, large-scale pre-trained language models (PLMs) have been widely adopted in many aspects of human society.
no code implementations • 14 Apr 2022 • Sijia Wang, Mo Yu, Lifu Huang
We compare various forms of prompts to represent event types and develop a unified framework to incorporate the event type specific prompts for supervised, few-shot, and zero-shot event detection.
1 code implementation • 17 Mar 2022 • Kai Zhang, Yu Wang, Hongyi Wang, Lifu Huang, Carl Yang, Xun Chen, Lichao Sun
Furthermore, we propose a Federated learning paradigm with privacy-preserving Relation embedding aggregation (FedR) to tackle the privacy issue in FedE.
no code implementations • ACL 2022 • Zhe Hu, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Hua Wu, Lifu Huang
Despite recent progress of pre-trained language models on generating fluent text, existing methods still suffer from incoherence problems in long-form text generation tasks that require proper content control and planning to form a coherent high-level logical flow.
no code implementations • 15 Mar 2022 • Xiangyang Mou, Mo Yu, Bingsheng Yao, Lifu Huang
Pre-trained Transformer models have achieved successes in a wide range of NLP tasks, but are inefficient when dealing with long input sequences.
no code implementations • 18 Jan 2022 • Li Lin, Yixin Cao, Lifu Huang, Shu'ang Li, Xuming Hu, Lijie Wen, Jianmin Wang
To alleviate the knowledge forgetting issue, we design two modules, Im and Gm, for each type of knowledge, which are combined via prompt tuning.
2 code implementations • 20 Dec 2021 • Revanth Gangi Reddy, Xilin Rui, Manling Li, Xudong Lin, Haoyang Wen, Jaemin Cho, Lifu Huang, Mohit Bansal, Avirup Sil, Shih-Fu Chang, Alexander Schwing, Heng Ji
Specifically, the task involves multi-hop questions that require reasoning over image-caption pairs to identify the grounded visual object being referred to and then predicting a span from the news body text to answer the question.
no code implementations • Findings (ACL) 2022 • Sijia Wang, Mo Yu, Shiyu Chang, Lichao Sun, Lifu Huang
Event extraction is typically modeled as a multi-class classification problem where event types and argument roles are treated as atomic symbols.
1 code implementation • ACL 2021 • Zikun Hu, Yixin Cao, Lifu Huang, Tat-Seng Chua
In this paper, we contribute a dataset and propose a paradigm to quantitatively evaluate the effect of attention and KG on bag-level relation extraction (RE).
1 code implementation • Findings (NAACL) 2022 • Shuaicheng Zhang, Lifu Huang, Qiang Ning
Extracting temporal relations (e.g., before, after, and simultaneous) among events is crucial to natural language understanding.
no code implementations • 16 Apr 2021 • Yu Wang, Lifu Huang, Philip S. Yu, Lichao Sun
Membership inference attacks (MIAs) infer whether a specific data record is used for target model training.
1 code implementation • EMNLP 2021 • Manling Li, Sha Li, Zhenhailong Wang, Lifu Huang, Kyunghyun Cho, Heng Ji, Jiawei Han, Clare Voss
We introduce a new concept of Temporal Complex Event Schema: a graph-based schema representation that encompasses events, arguments, temporal connections and argument relations.
no code implementations • CONLL 2018 • Boliang Zhang, Spencer Whitehead, Lifu Huang, Heng Ji
Many name tagging approaches use local contextual information with much success, but fail when the local context is ambiguous or limited.
1 code implementation • INLG (ACL) 2020 • Qingyun Wang, Qi Zeng, Lifu Huang, Kevin Knight, Heng Ji, Nazneen Fatema Rajani
To assist the human review process, we build a novel ReviewRobot that automatically assigns a review score and writes comments for multiple categories such as novelty and meaningful comparison.
no code implementations • IJCNLP 2019 • Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi
In this paper, we introduce Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension, formulated as multiple-choice questions.
no code implementations • NAACL 2019 • Diya Li, Lifu Huang, Heng Ji, Jiawei Han
Event extraction for the biomedical domain is more challenging than that in the general news domain since it requires broader acquisition of domain-specific knowledge and deeper understanding of complex contexts.
no code implementations • NAACL 2019 • Lifu Huang, Heng Ji, Jonathan May
We focus on improving name tagging for low-resource languages using annotations from related languages.
2 code implementations • ACL 2019 • Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knight, Heng Ji, Mohit Bansal, Yi Luan
We present PaperRobot, which performs as an automatic research assistant by (1) conducting deep understanding of a large collection of human-written papers in a target domain and constructing comprehensive background knowledge graphs (KGs); (2) creating new ideas by predicting links from the background KGs, combining graph attention and contextual text attention; and (3) incrementally writing key elements of a new paper based on memory-attention networks: from the input title along with predicted related entities to generate a paper abstract, from the abstract to generate conclusion and future work, and finally from future work to generate a title for a follow-on paper.
no code implementations • EMNLP 2018 • Ge Shi, Chong Feng, Lifu Huang, Boliang Zhang, Heng Ji, Lejian Liao, He-Yan Huang
Relation Extraction suffers a dramatic performance decrease when a model trained on one genre is directly applied to a new genre, due to the distinct feature distributions.
1 code implementation • WS 2018 • Qingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Heng Ji, Kevin Knight
We aim to automatically generate natural language descriptions about an input structured knowledge base (KB).
1 code implementation • WS 2018 • Zhiying Jiang, Boliang Zhang, Lifu Huang, Heng Ji
We present a neural recommendation model for Chengyu, which is a special type of Chinese idiom.
no code implementations • NAACL 2018 • Bhavana Dalvi Mishra, Lifu Huang, Niket Tandon, Wen-tau Yih, Peter Clark
The new dataset, ProPara, is the first to contain natural (rather than machine-generated) text about a changing world along with a full annotation of entity states (location and existence) during those changes (81k datapoints).
Ranked #4 on Procedural Text Understanding on ProPara
2 code implementations • ACL 2018 • Qingyun Wang, Zhi-Hao Zhou, Lifu Huang, Spencer Whitehead, Boliang Zhang, Heng Ji, Kevin Knight
We present a paper abstract writing system based on an attentive neural sequence-to-sequence model that can take a title as input and automatically generate an abstract.
Ranked #1 on Paper generation on ACL Title and Abstract Dataset
no code implementations • EMNLP 2018 • Di Lu, Spencer Whitehead, Lifu Huang, Heng Ji, Shih-Fu Chang
Current image captioning approaches generate descriptions which lack specific information, such as named entities that are involved in the images.
no code implementations • EMNLP 2018 • Lifu Huang, Kyunghyun Cho, Boliang Zhang, Heng Ji, Kevin Knight
We construct a multilingual common semantic space based on distributional semantics, where words from multiple languages are projected into a shared space to enable knowledge and resource transfer across languages.
no code implementations • IJCNLP 2017 • Dian Yu, Lifu Huang, Heng Ji
Previous open Relation Extraction (open RE) approaches mainly rely on linguistic patterns and constraints to extract important relational triples from large-scale corpora.
no code implementations • WS 2017 • Zhihao Zhou, Lifu Huang, Heng Ji
Learning phrase representations has been widely explored in many Natural Language Processing (NLP) tasks (e.g., Sentiment Analysis, Machine Translation) and has shown promising improvements.
1 code implementation • ACL 2018 • Lifu Huang, Heng Ji, Kyunghyun Cho, Clare R. Voss
Most previous event extraction studies have relied heavily on features derived from annotated event mentions, and thus cannot be applied to new event types without annotation effort.
no code implementations • EMNLP 2017 • Lifu Huang, Avirup Sil, Heng Ji, Radu Florian
Slot Filling (SF) aims to extract the values of certain types of attributes (or slots, such as person:cities_of_residence) for a given entity from a large collection of source documents.
no code implementations • ACL 2017 • Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, Juanzi Li
Integrating text and knowledge into a unified semantic space has attracted significant research interests recently.
no code implementations • 10 Mar 2016 • Lifu Huang, Jonathan May, Xiaoman Pan, Heng Ji
Recent research has shown great progress on fine-grained entity typing.