Search Results for author: Zhiyuan Liu

Found 347 papers, 250 papers with code

Going “Deeper”: Structured Sememe Prediction via Transformer with Tree Attention

1 code implementation Findings (ACL) 2022 Yining Ye, Fanchao Qi, Zhiyuan Liu, Maosong Sun

However, all existing sememe prediction studies ignore the hierarchical structures of sememes, which are important in the sememe-based semantic description system.

CodRED: A Cross-Document Relation Extraction Dataset for Acquiring Knowledge in the Wild

1 code implementation EMNLP 2021 Yuan Yao, Jiaju Du, Yankai Lin, Peng Li, Zhiyuan Liu, Jie Zhou, Maosong Sun

Existing relation extraction (RE) methods typically focus on extracting relational facts between entity pairs within single sentences or documents.

Relation Relation Extraction

BMInf: An Efficient Toolkit for Big Model Inference and Tuning

1 code implementation ACL 2022 Xu Han, Guoyang Zeng, Weilin Zhao, Zhiyuan Liu, Zhengyan Zhang, Jie Zhou, Jun Zhang, Jia Chao, Maosong Sun

In recent years, large-scale pre-trained language models (PLMs) containing billions of parameters have achieved promising results on various NLP tasks.

Quantization Scheduling

Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach

1 code implementation Findings (ACL) 2022 Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou

In recent years, pre-trained language models (PLMs) have been shown to capture factual knowledge from massive texts, which encourages the proposal of PLM-based knowledge graph completion (KGC) models.

Knowledge Graph Completion Link Prediction

Pass off Fish Eyes for Pearls: Attacking Model Selection of Pre-trained Models

1 code implementation ACL 2022 Biru Zhu, Yujia Qin, Fanchao Qi, Yangdong Deng, Zhiyuan Liu, Maosong Sun, Ming Gu

To validate our viewpoints, we design two methods to evaluate the robustness of FMS: (1) model disguise attack, which post-trains an inferior PTM with a contrastive objective, and (2) evaluation data selection, which selects a subset of the data points for FMS evaluation based on K-means clustering.

Backdoor Attack Model Selection
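
For concreteness, the clustering mechanic behind the second attack can be sketched as follows. This is only an illustration of K-means-based subset selection (the function name and random features are hypothetical); the paper's adversarial criterion for picking points that mislead FMS is not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_eval_subset(features, n_clusters=10):
    # Cluster the evaluation features, then keep the point nearest each
    # centroid so a small subset still spans the data distribution.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    picked = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        picked.append(members[np.argmin(dists)])
    return np.array(picked)

subset = select_eval_subset(np.random.randn(500, 32))
```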

PreGSU-A Generalized Traffic Scene Understanding Model for Autonomous Driving based on Pre-trained Graph Attention Network

no code implementations 16 Apr 2024 Yuning Wang, Zhiyuan Liu, Haotian Lin, Junkai Jiang, Shaobing Xu, Jianqiang Wang

In this study, we propose PreGSU, a generalized pre-trained scene understanding model based on graph attention network to learn the universal interaction and reasoning of traffic scenes to support various downstream tasks.

Autonomous Driving Feature Engineering +4

UltraEval: A Lightweight Platform for Flexible and Comprehensive Evaluation for LLMs

1 code implementation 11 Apr 2024 Chaoqun He, Renjie Luo, Shengding Hu, Yuanqian Zhao, Jie Zhou, Hanghao Wu, Jiajie Zhang, Xu Han, Zhiyuan Liu, Maosong Sun

The rapid development of LLMs calls for a lightweight and easy-to-use framework for swift evaluation deployment.

Robust and Scalable Model Editing for Large Language Models

1 code implementation 26 Mar 2024 Yingfa Chen, Zhengyan Zhang, Xu Han, Chaojun Xiao, Zhiyuan Liu, Chen Chen, Kuai Li, Tao Yang, Maosong Sun

Large language models (LLMs) can make predictions using parametric knowledge--knowledge encoded in the model weights--or contextual knowledge--knowledge presented in the context.

Model Editing

LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images

2 code implementations 18 Mar 2024 Ruyi Xu, Yuan Yao, Zonghao Guo, Junbo Cui, Zanlin Ni, Chunjiang Ge, Tat-Seng Chua, Zhiyuan Liu, Maosong Sun, Gao Huang

To address the challenges, we present LLaVA-UHD, a large multimodal model that can efficiently perceive images in any aspect ratio and high resolution.

BurstAttention: An Efficient Distributed Attention Framework for Extremely Long Sequences

1 code implementation 14 Mar 2024 Sun Ao, Weilin Zhao, Xu Han, Cheng Yang, Zhiyuan Liu, Chuan Shi, Maosong Sun, Shengnan Wang, Teng Su

Effective attention modules have played a crucial role in the success of Transformer-based large language models (LLMs), but the quadratic time and memory complexities of these attention modules also pose a challenge when processing long sequences.
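
The quadratic cost the abstract refers to is easy to see in a minimal single-head attention sketch: the score matrix alone is n-by-n, so time and memory grow as O(n^2) in the sequence length n. This toy numpy version is illustrative only, not BurstAttention's distributed algorithm:

```python
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # shape (n, n): the quadratic bottleneck
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V              # shape (n, d)

n, d = 1024, 64
out = attention(*(np.random.randn(n, d) for _ in range(3)))
```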

Mastering Text, Code and Math Simultaneously via Fusing Highly Specialized Language Models

no code implementations 13 Mar 2024 Ning Ding, Yulin Chen, Ganqu Cui, Xingtai Lv, Weilin Zhao, Ruobing Xie, Bowen Zhou, Zhiyuan Liu, Maosong Sun

Underlying data distributions of natural language, programming code, and mathematical symbols vary vastly, presenting a complex challenge for large language models (LLMs) that strive to achieve high performance across all three domains simultaneously.

Math

StableToolBench: Towards Stable Large-Scale Benchmarking on Tool Learning of Large Language Models

2 code implementations 12 Mar 2024 Zhicheng Guo, Sijie Cheng, Hao Wang, Shihao Liang, Yujia Qin, Peng Li, Zhiyuan Liu, Maosong Sun, Yang Liu

The virtual API server contains a caching system and API simulators, which are complementary in alleviating changes in API status.

Benchmarking

Yi: Open Foundation Models by 01.AI

1 code implementation 7 Mar 2024 01.AI: Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming Yang, Tao Yu, Wen Xie, Wenhao Huang, Xiaohui Hu, Xiaoyi Ren, Xinyao Niu, Pengcheng Nie, Yuchi Xu, Yudong Liu, Yue Wang, Yuxuan Cai, Zhenyu Gu, Zhiyuan Liu, Zonghong Dai

The Yi model family is based on 6B and 34B pretrained language models, which we then extend to chat models, 200K long-context models, depth-upscaled models, and vision-language models.

Attribute Chatbot +2

LLM-Oriented Retrieval Tuner

no code implementations 4 Mar 2024 Si Sun, Hanqing Zhang, Zhiyuan Liu, Jie Bao, Dawei Song

Dense Retrieval (DR) is now considered a promising tool to enhance the memorization capacity of Large Language Models (LLMs) such as GPT-3 and GPT-4 by incorporating external memories.

Memorization Retrieval +1

Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment

no code implementations 29 Feb 2024 Yiju Guo, Ganqu Cui, Lifan Yuan, Ning Ding, Jiexin Wang, Huimin Chen, Bowen Sun, Ruobing Xie, Jie Zhou, Yankai Lin, Zhiyuan Liu, Maosong Sun

In practice, the multifaceted nature of human preferences inadvertently introduces what is known as the "alignment tax": a compromise where enhancements in alignment on one objective (e.g., harmlessness) can diminish performance on others (e.g., helpfulness).

Navigate

Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication

1 code implementation 28 Feb 2024 Weize Chen, Chenfei Yuan, Jiarui Yuan, Yusheng Su, Chen Qian, Cheng Yang, Ruobing Xie, Zhiyuan Liu, Maosong Sun

Natural language (NL) has long been the predominant format for human cognition and communication, and by extension, has been similarly pivotal in the development and application of Large Language Models (LLMs).

Unified View of Grokking, Double Descent and Emergent Abilities: A Perspective from Circuits Competition

no code implementations 23 Feb 2024 Yufei Huang, Shengding Hu, Xu Han, Zhiyuan Liu, Maosong Sun

Recent studies have uncovered intriguing phenomena in deep learning, such as grokking, double descent, and emergent abilities in large language models, which challenge human intuition and are crucial for a deeper understanding of neural models.

Memorization Multi-Task Learning

Cleaner Pretraining Corpus Curation with Neural Web Scraping

1 code implementation 22 Feb 2024 Zhipeng Xu, Zhenghao Liu, Yukun Yan, Zhiyuan Liu, Chenyan Xiong, Ge Yu

The web contains large-scale, diverse, and abundant information to satisfy the information-seeking needs of humans.

Language Modelling

ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models

1 code implementation 21 Feb 2024 Chenyang Song, Xu Han, Zhengyan Zhang, Shengding Hu, Xiyu Shi, Kuai Li, Chen Chen, Zhiyuan Liu, Guangli Li, Tao Yang, Maosong Sun

Some recent efforts have explored introducing ReLU or its variants as the substitutive activation function to help LLMs achieve activation sparsity and inference acceleration, but few can simultaneously obtain high sparsity and comparable model performance.
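
The zero-activation phenomenon behind this line of work can be illustrated with a toy feed-forward block; the sketch below only demonstrates how a ReLU activation yields exactly-zero intermediate activations that inference engines can skip, not ProSparse's training recipe:

```python
import torch
import torch.nn as nn

ffn = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
x = torch.randn(16, 512)
hidden = ffn[1](ffn[0](x))                      # post-ReLU activations
sparsity = (hidden == 0).float().mean().item()  # ~0.5 for random inputs
print(f"activation sparsity: {sparsity:.2%}")
```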

OlympiadBench: A Challenging Benchmark for Promoting AGI with Olympiad-Level Bilingual Multimodal Scientific Problems

1 code implementation 21 Feb 2024 Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, Jie Liu, Lei Qi, Zhiyuan Liu, Maosong Sun

Notably, the best-performing model, GPT-4V, attains an average score of 17.23% on OlympiadBench, with a mere 11.28% in physics, highlighting the benchmark's rigor and the intricacy of physical reasoning.

Logical Fallacies

Ouroboros: Speculative Decoding with Large Model Enhanced Drafting

1 code implementation 21 Feb 2024 Weilin Zhao, Yuxiang Huang, Xu Han, Chaojun Xiao, Zhiyuan Liu, Maosong Sun

In this paper, we introduce Ouroboros, which constructs a phrase candidate pool from the verification process of LLMs to provide candidates for draft generation of the small model.

Text Generation
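
As background, a stripped-down greedy draft-then-verify loop of the kind speculative decoding builds on looks like the sketch below; Ouroboros' phrase candidate pool and batched verification are omitted, and `draft_next`/`verify_next` are hypothetical stand-ins for a small and a large model:

```python
def speculative_decode(prompt, draft_next, verify_next, steps=8, k=4):
    tokens = list(prompt)
    for _ in range(steps):
        draft = []
        for _ in range(k):                  # small model drafts k tokens
            draft.append(draft_next(tokens + draft))
        for i, t in enumerate(draft):       # large model verifies in order
            v = verify_next(tokens + draft[:i])
            tokens.append(v)                # matched drafts are accepted
            if v != t:
                break                       # first mismatch: keep the fix, redraft
    return tokens

# toy "models": the drafter repeats the last token, the verifier counts upward
out = speculative_decode([0], lambda ts: ts[-1], lambda ts: len(ts), steps=3)
```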

ActiveRAG: Revealing the Treasures of Knowledge via Active Learning

1 code implementation 21 Feb 2024 Zhipeng Xu, Zhenghao Liu, Yibin Liu, Chenyan Xiong, Yukun Yan, Shuo Wang, Shi Yu, Zhiyuan Liu, Ge Yu

Retrieval Augmented Generation (RAG) has introduced a new paradigm for Large Language Models (LLMs), aiding in the resolution of knowledge-intensive tasks.

Active Learning Position +2

$\infty$Bench: Extending Long Context Evaluation Beyond 100K Tokens

1 code implementation 21 Feb 2024 Xinrong Zhang, Yingfa Chen, Shengding Hu, Zihang Xu, JunHao Chen, Moo Khai Hao, Xu Han, Zhen Leng Thai, Shuo Wang, Zhiyuan Liu, Maosong Sun

Processing and reasoning over long contexts is crucial for many practical applications of Large Language Models (LLMs), such as document comprehension and agent construction.

Large Language Model-based Human-Agent Collaboration for Complex Task Solving

1 code implementation 20 Feb 2024 Xueyang Feng, Zhi-Yuan Chen, Yujia Qin, Yankai Lin, Xu Chen, Zhiyuan Liu, Ji-Rong Wen

We construct a human-agent collaboration dataset to train this policy model in an offline reinforcement learning environment.

Language Modelling Large Language Model +1

MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization

1 code implementation 18 Feb 2024 Zhiyu Yang, Zihan Zhou, Shuo Wang, Xin Cong, Xu Han, Yukun Yan, Zhenghao Liu, Zhixing Tan, Pengyuan Liu, Dong Yu, Zhiyuan Liu, Xiaodong Shi, Maosong Sun

Scientific data visualization plays a crucial role in research by enabling the direct display of complex information and assisting researchers in identifying implicit patterns.

Code Generation Data Visualization

LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks

no code implementations 18 Feb 2024 Hanqing Wang, Bowen Ping, Shuo Wang, Xu Han, Yun Chen, Zhiyuan Liu, Maosong Sun

Most prior works on LoRA combination primarily rely on task-level weights for each involved LoRA, making different examples and tokens share the same LoRA weights.

Math
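
The contrast with task-level weights can be sketched as a linear layer that fuses several LoRA deltas with per-token gates. The shapes and the gate design below are illustrative assumptions, not LoRA-Flow's exact architecture:

```python
import torch
import torch.nn as nn

class FusedLoRALinear(nn.Module):
    def __init__(self, d_in, d_out, n_loras=3, rank=8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.A = nn.Parameter(torch.randn(n_loras, d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_loras, rank, d_out))
        self.gate = nn.Linear(d_in, n_loras)   # per-token fusion weights

    def forward(self, x):                      # x: (batch, seq, d_in)
        w = torch.softmax(self.gate(x), dim=-1)                 # (b, s, n)
        delta = torch.einsum('bsd,ndr,nro->bsno', x, self.A, self.B)
        return self.base(x) + (w.unsqueeze(-1) * delta).sum(dim=2)

y = FusedLoRALinear(64, 64)(torch.randn(2, 10, 64))
```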

OneBit: Towards Extremely Low-bit Large Language Models

no code implementations 17 Feb 2024 Yuzhuang Xu, Xu Han, Zonghan Yang, Shuo Wang, Qingfu Zhu, Zhiyuan Liu, Weidong Liu, Wanxiang Che

Model quantization uses low bit-width values to represent the weight matrices of models, which is a promising approach to reducing both the storage and computational overheads of deploying highly anticipated LLMs.

Quantization
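
The basic mechanism, representing a weight matrix by its signs plus a floating-point scale, can be sketched as below. OneBit itself uses a more careful decomposition with value vectors and quantization-aware training, so this shows only the general idea:

```python
import torch

def quantize_1bit(W: torch.Tensor):
    scale = W.abs().mean(dim=1, keepdim=True)  # one fp scale per output row
    return torch.sign(W), scale                # signs live in {-1, 0, +1}

def dequantize(sign, scale):
    return sign * scale

W = torch.randn(4, 8)
sign, scale = quantize_1bit(W)
err = (W - dequantize(sign, scale)).abs().mean()  # reconstruction error
```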

Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents

1 code implementation 14 Feb 2024 Cheng Qian, Bingxiang He, Zhong Zhuang, Jia Deng, Yujia Qin, Xin Cong, Zhong Zhang, Jie Zhou, Yankai Lin, Zhiyuan Liu, Maosong Sun

Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.

Language Modelling

InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory

no code implementations 7 Feb 2024 Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Song Han, Maosong Sun

To alleviate these issues, existing efforts employ sliding attention windows and discard distant tokens to achieve the processing of extremely long sequences.
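
The sliding-window mechanism mentioned above amounts to a causal attention mask restricted to a local window; a minimal sketch follows (InfLLM's additional training-free retrieval memory for distant tokens is not shown):

```python
import torch

def sliding_window_mask(n: int, window: int) -> torch.Tensor:
    # Each token attends only to itself and the previous `window - 1` tokens,
    # so distant context is dropped.
    i = torch.arange(n).unsqueeze(1)   # query positions
    j = torch.arange(n).unsqueeze(0)   # key positions
    return (j <= i) & (j > i - window) # causal AND within the local window

mask = sliding_window_mask(n=8, window=3)  # True where attention is kept
```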

UltraLink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset

1 code implementation 7 Feb 2024 Haoyu Wang, Shuo Wang, Yukun Yan, Xujia Wang, Zhiyu Yang, Yuzhuang Xu, Zhenghao Liu, Liner Yang, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun

Different from previous works that simply translate English instructions, we consider both the language-specific and language-agnostic abilities of LLMs.

Cross-Lingual Transfer Data Augmentation

MolTC: Towards Molecular Relational Modeling In Language Models

1 code implementation 6 Feb 2024 Junfeng Fang, Shuai Zhang, Chang Wu, Zhengyi Yang, Zhiyuan Liu, Sihang Li, Kun Wang, Wenjie Du, Xiang Wang

Molecular Relational Learning (MRL), aiming to understand interactions between molecular pairs, plays a pivotal role in advancing biochemical research.

Relational Reasoning

ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs

no code implementations 6 Feb 2024 Zhengyan Zhang, Yixin Song, Guanghui Yu, Xu Han, Yankai Lin, Chaojun Xiao, Chenyang Song, Zhiyuan Liu, Zeyu Mi, Maosong Sun

To find the most efficient activation function for sparse computation, we propose a systematic framework to examine the sparsity of LLMs from three aspects: the trade-off between sparsity and performance, the predictivity of sparsity, and the hardware affinity.

UniMem: Towards a Unified View of Long-Context Large Language Models

no code implementations 5 Feb 2024 Junjie Fang, Likai Tang, Hongzhe Bi, Yujia Qin, Si Sun, Zhenyu Li, Haolun Li, Yongjian Li, Xin Cong, Yukun Yan, Xiaodong Shi, Sen Song, Yankai Lin, Zhiyuan Liu, Maosong Sun

Although there exist various methods devoted to enhancing the long-context processing ability of large language models (LLMs), they are developed in an isolated manner and lack systematic analysis and integration of their strengths, hindering further developments.

Management

Investigate-Consolidate-Exploit: A General Strategy for Inter-Task Agent Self-Evolution

no code implementations 25 Jan 2024 Cheng Qian, Shihao Liang, Yujia Qin, Yining Ye, Xin Cong, Yankai Lin, Yesai Wu, Zhiyuan Liu, Maosong Sun

This paper introduces Investigate-Consolidate-Exploit (ICE), a novel strategy for enhancing the adaptability and flexibility of AI agents through inter-task self-evolution.

Towards 3D Molecule-Text Interpretation in Language Models

1 code implementation 25 Jan 2024 Sihang Li, Zhiyuan Liu, Yanchen Luo, Xiang Wang, Xiangnan He, Kenji Kawaguchi, Tat-Seng Chua, Qi Tian

Through 3D molecule-text alignment and 3D molecule-centric instruction tuning, 3D-MoLM establishes an integration of 3D molecular encoder and LM.

Instruction Following Language Modelling +3

DebugBench: Evaluating Debugging Capability of Large Language Models

1 code implementation 9 Jan 2024 Runchu Tian, Yining Ye, Yujia Qin, Xin Cong, Yankai Lin, Yinxu Pan, Yesai Wu, Zhiyuan Liu, Maosong Sun

Previous evaluations of LLMs' debugging ability are significantly limited by the risk of data leakage, the scale of the dataset, and the variety of tested bugs.

Code Generation

Experiential Co-Learning of Software-Developing Agents

1 code implementation 28 Dec 2023 Chen Qian, Yufan Dang, Jiahao Li, Wei Liu, Weize Chen, Cheng Yang, Zhiyuan Liu, Maosong Sun

Recent advancements in large language models (LLMs) have brought significant changes to various domains, especially through LLM-driven autonomous agents.

GitAgent: Facilitating Autonomous Agent with GitHub by Tool Extension

no code implementations 28 Dec 2023 Bohan Lyu, Xin Cong, Heyang Yu, Pan Yang, Yujia Qin, Yining Ye, Yaxi Lu, Zhong Zhang, Yukun Yan, Yankai Lin, Zhiyuan Liu, Maosong Sun

As GitHub hosts a multitude of repositories that can serve as a good resource for tools, a promising solution is for LLM-based agents to autonomously integrate repositories from GitHub according to user queries to extend their tool set.

D-Bot: Database Diagnosis System using Large Language Models

1 code implementation 3 Dec 2023 Xuanhe Zhou, Guoliang Li, Zhaoyan Sun, Zhiyuan Liu, Weize Chen, Jianming Wu, Jiesi Liu, Ruohang Feng, Guoyang Zeng

Database administrators (DBAs) play an important role in managing, maintaining and optimizing database systems.

RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback

2 code implementations 1 Dec 2023 Tianyu Yu, Yuan Yao, Haoye Zhang, Taiwen He, Yifeng Han, Ganqu Cui, Jinyi Hu, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun, Tat-Seng Chua

Multimodal Large Language Models (MLLMs) have recently demonstrated impressive capabilities in multimodal understanding, reasoning, and interaction.

Hallucination

Sparse Low-rank Adaptation of Pre-trained Language Models

1 code implementation 20 Nov 2023 Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, Maosong Sun

Recognizing the need for more flexible adaptation, we extend the methodology of LoRA to an innovative approach we call sparse low-rank adaptation (SoRA) that enables dynamic adjustments to the intrinsic rank during the adaptation process.

Memorization
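
The core reparameterization can be sketched as a standard LoRA update B diag(g) A with a gate vector g over the rank dimension; driving entries of g to zero during training (SoRA uses an L1 penalty with proximal updates for this) shrinks the effective rank. The module below is an illustrative sketch, not the released implementation:

```python
import torch
import torch.nn as nn

class SparseLoRA(nn.Module):
    def __init__(self, d_in, d_out, rank=16):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        self.gate = nn.Parameter(torch.ones(rank))  # sparsified in training

    def forward(self, x):  # x: (..., d_in) -> LoRA delta: (..., d_out)
        return (x @ self.A.t()) * self.gate @ self.B.t()

lora = SparseLoRA(64, 64)
delta = lora(torch.randn(2, 64))
effective_rank = int((lora.gate != 0).sum())  # shrinks as gates hit zero
```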

INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair

1 code implementation 16 Nov 2023 Hanbin Wang, Zhenghao Liu, Shuo Wang, Ganqu Cui, Ning Ding, Zhiyuan Liu, Ge Yu

INTERVENOR prompts Large Language Models (LLMs) to play distinct roles during the code repair process, functioning as both a Code Learner and a Code Teacher.

Code Repair Code Translation

MAVEN-Arg: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation

1 code implementation 15 Nov 2023 Xiaozhi Wang, Hao Peng, Yong Guan, Kaisheng Zeng, Jianhui Chen, Lei Hou, Xu Han, Yankai Lin, Zhiyuan Liu, Ruobing Xie, Jie Zhou, Juanzi Li

Understanding events in texts is a core objective of natural language understanding, which requires detecting event occurrences, extracting event arguments, and analyzing inter-event relationships.

Event Argument Extraction Event Detection +3

NExT-Chat: An LMM for Chat, Detection and Segmentation

1 code implementation 8 Nov 2023 Ao Zhang, Yuan Yao, Wei Ji, Zhiyuan Liu, Tat-Seng Chua

The development of large language models (LLMs) has greatly advanced the field of multimodal understanding, leading to the emergence of large multimodal models (LMMs).

Referring Expression Referring Expression Segmentation +1

ProAgent: From Robotic Process Automation to Agentic Process Automation

1 code implementation 2 Nov 2023 Yining Ye, Xin Cong, Shizuo Tian, Jiannan Cao, Hao Wang, Yujia Qin, Yaxi Lu, Heyang Yu, Huadong Wang, Yankai Lin, Zhiyuan Liu, Maosong Sun

Empirical experiments are conducted to detail the construction and execution procedure of its workflows, showcasing the feasibility of APA and unveiling the possibility of a new paradigm of automation driven by agents.

Decision Making

WebDRO: A Web-based Group-level Clustering and Reweighting Method for Unsupervised Dense Retrieval

1 code implementation 25 Oct 2023 Peixuan Han, Zhenghao Liu, Zhiyuan Liu, Chenyan Xiong

In this paper, we introduce WebDRO, an efficient approach for clustering the web graph data and optimizing group weights to enhance the robustness of the pretraining process of dense retrieval models on web graphs.

Clustering Link Prediction +2

MUSER: A Multi-View Similar Case Retrieval Dataset

1 code implementation 24 Oct 2023 Qingquan Li, Yiran Hu, Feng Yao, Chaojun Xiao, Zhiyuan Liu, Maosong Sun, Weixing Shen

Furthermore, the case similarities are typically measured solely by the textual semantics of the fact descriptions, which may fail to capture the full complexity of legal cases from the perspective of legal knowledge.

Fairness Retrieval +3

Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules

1 code implementation 24 Oct 2023 Chaojun Xiao, Yuqi Luo, Wenbin Zhang, Pengle Zhang, Xu Han, Yankai Lin, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks but at the expense of huge parameter sizes and the consequent computational costs.

Computational Efficiency

Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules

1 code implementation NeurIPS 2023 Zhiyuan Liu, Yaorui Shi, An Zhang, Enzhi Zhang, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua

Our results show that a subgraph-level tokenizer and a sufficiently expressive decoder with remask decoding have a large impact on the encoder's representation learning.

Representation Learning Self-Supervised Learning

MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Module Plugin

1 code implementation 21 Oct 2023 Tianshuo Zhou, Sen Mei, Xinze Li, Zhenghao Liu, Chenyan Xiong, Zhiyuan Liu, Yu Gu, Ge Yu

To facilitate multi-modal retrieval tasks, we build the ClueWeb22-MM dataset based on the ClueWeb22 dataset, which regards anchor texts as queries and extracts the related text and image documents from anchor-linked web pages.

Language Modelling Retrieval +1

Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language

no code implementations 20 Oct 2023 Zekai Qu, Ruobing Xie, Chaojun Xiao, Yuan Yao, Zhiyuan Liu, Fengzong Lian, Zhanhui Kang, Jie Zhou

With pre-trained language models (PLMs) thriving and widely verified on various NLP tasks, pioneering efforts attempt to explore the possible cooperation of the general textual information in PLMs with the personalized behavioral information in user historical behavior sequences to enhance sequential recommendation (SR).

Informativeness Language Modelling +1

ReLM: Leveraging Language Models for Enhanced Chemical Reaction Prediction

1 code implementation 20 Oct 2023 Yaorui Shi, An Zhang, Enzhi Zhang, Zhiyuan Liu, Xiang Wang

Predicting chemical reactions, a fundamental challenge in chemistry, involves forecasting the resulting products from a given reaction process.

Chemical Reaction Prediction

Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

no code implementations 19 Oct 2023 Weize Chen, Xiaoyue Xu, Xu Han, Yankai Lin, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise.

Toolink: Linking Toolkit Creation and Using through Chain-of-Solving on Open-Source Model

1 code implementation 8 Oct 2023 Cheng Qian, Chenyan Xiong, Zhenghao Liu, Zhiyuan Liu

We first validate the efficacy of Toolink in harnessing the model's creativity and CoS ability on ChatGPT.

valid

UltraFeedback: Boosting Language Models with High-quality Feedback

2 code implementations 2 Oct 2023 Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, Maosong Sun

However, the scarcity of diverse, naturalistic datasets of human preferences on LLM outputs at scale poses a great challenge to RLHF as well as feedback learning research within the open-source community.

Language Modelling

Reformulating Vision-Language Foundation Models and Datasets Towards Universal Multimodal Assistants

2 code implementations 1 Oct 2023 Tianyu Yu, Jinyi Hu, Yuan Yao, Haoye Zhang, Yue Zhao, Chongyi Wang, Shan Wang, Yinxu Pan, Jiao Xue, Dahai Li, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun

The capabilities of MLLMs depend on two crucial factors: the model architecture to facilitate the feature alignment of visual modules and large language models; the multimodal instruction tuning datasets for human instruction following.

Instruction Following

ConPET: Continual Parameter-Efficient Tuning for Large Language Models

1 code implementation 26 Sep 2023 Chenyang Song, Xu Han, Zheni Zeng, Kuai Li, Chen Chen, Zhiyuan Liu, Maosong Sun, Tao Yang

First, Static ConPET can adapt former continual learning methods originally designed for relatively smaller models to LLMs through PET and a dynamic replay strategy, which largely reduces the tuning costs and alleviates the over-fitting and forgetting issue.

Continual Learning

QASnowball: An Iterative Bootstrapping Framework for High-Quality Question-Answering Data Generation

no code implementations 19 Sep 2023 Kunlun Zhu, Shihao Liang, Xu Han, Zhi Zheng, Guoyang Zeng, Zhiyuan Liu, Maosong Sun

Recent years have witnessed the success of question answering (QA), especially its potential to be a foundation paradigm for tackling diverse NLP tasks.

Data Augmentation Question Answering

Instant Photorealistic Style Transfer: A Lightweight and Adaptive Approach

no code implementations 18 Sep 2023 Rong Liu, Enyu Zhao, Zhiyuan Liu, Andrew Feng, Scott John Easley

In this paper, we propose an Instant Photorealistic Style Transfer (IPST) approach, designed to achieve instant photorealistic style transfer on super-resolution inputs without the need for pre-training on pair-wise datasets or imposing extra constraints.

Style Transfer Super-Resolution

Text Matching Improves Sequential Recommendation by Reducing Popularity Biases

1 code implementation 27 Aug 2023 Zhenghao Liu, Sen Mei, Chenyan Xiong, Xiaohua Li, Shi Yu, Zhiyuan Liu, Yu Gu, Ge Yu

TASTE alleviates the cold start problem by representing long-tail items using full-text modeling and bringing the benefits of pretrained language models to recommendation systems.

Sequential Recommendation Text Matching

Rational Decision-Making Agent with Internalized Utility Judgment

no code implementations 24 Aug 2023 Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun

Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision.

Decision Making Language Modelling +1

Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages

2 code implementations 23 Aug 2023 Jinyi Hu, Yuan Yao, Chongyi Wang, Shan Wang, Yinxu Pan, Qianyu Chen, Tianyu Yu, Hanghao Wu, Yue Zhao, Haoye Zhang, Xu Han, Yankai Lin, Jiao Xue, Dahai Li, Zhiyuan Liu, Maosong Sun

Building a competitive counterpart in other languages is highly challenging due to the low-resource nature of non-English multimodal data (i.e., lack of large-scale, high-quality image-text data).

Language Modelling Large Language Model +1

AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors

1 code implementation 21 Aug 2023 Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks.

ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate

1 code implementation 14 Aug 2023 Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu

Text evaluation has historically posed significant challenges, often demanding substantial labor and time costs.

Text Generation

LLM As DBA

1 code implementation 10 Aug 2023 Xuanhe Zhou, Guoliang Li, Zhiyuan Liu

Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability.

Exploring Format Consistency for Instruction Tuning

1 code implementation 28 Jul 2023 Shihao Liang, Runchu Tian, Kunlun Zhu, Yujia Qin, Huadong Wang, Xin Cong, Zhiyuan Liu, Xiaojiang Liu, Maosong Sun

Instruction tuning has emerged as a promising approach to enhancing large language models in following human instructions.

Denoising

Communicative Agents for Software Development

1 code implementation 16 Jul 2023 Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun

At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting.

Decision Making

CPET: Effective Parameter-Efficient Tuning for Compressed Large Language Models

no code implementations 15 Jul 2023 Weilin Zhao, Yuxiang Huang, Xu Han, Zhiyuan Liu, Zhengyan Zhang, Maosong Sun

Parameter-efficient tuning (PET) has been widely explored in recent years because it tunes much fewer parameters (PET modules) than full-parameter fine-tuning (FT) while still stimulating sufficient knowledge from large language models (LLMs) for downstream tasks.

OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models

1 code implementation 5 Jul 2023 Shengding Hu, Ning Ding, Weilin Zhao, Xingtai Lv, Zhen Zhang, Zhiyuan Liu, Maosong Sun

The scale of large pre-trained models (PTMs) poses significant challenges in adapting to downstream tasks due to the high optimization overhead and storage costs associated with full-parameter fine-tuning.

Won't Get Fooled Again: Answering Questions with False Premises

1 code implementation 5 Jul 2023 Shengding Hu, Yifan Luo, Huadong Wang, Xingyi Cheng, Zhiyuan Liu, Maosong Sun

In this paper, we find that the PLMs already possess the knowledge required to rebut such questions, and the key is how to activate the knowledge.

Question Answering

Automatic Truss Design with Reinforcement Learning

1 code implementation 27 Jun 2023 Weihua Du, Jinglun Zhao, Chao Yu, Xingcheng Yao, Zimeng Song, Siyang Wu, Ruifeng Luo, Zhiyuan Liu, Xianzhong Zhao, Yi Wu

Directly applying end-to-end reinforcement learning (RL) methods to truss layout design is also infeasible, since only a tiny portion of the entire layout space is valid under the physical constraints, leading to particularly sparse rewards for RL training.

Combinatorial Optimization Layout Design +3

Interactive Molecular Discovery with Natural Language

1 code implementation 21 Jun 2023 Zheni Zeng, Bangchen Yin, Shipeng Wang, Jiarui Liu, Cheng Yang, Haishen Yao, Xingzhi Sun, Maosong Sun, Guotong Xie, Zhiyuan Liu

Natural language is expected to be a key medium for various human-machine interactions in the era of large language models.

Property Prediction

The Devil is in the Details: On the Pitfalls of Event Extraction Evaluation

1 code implementation 12 Jun 2023 Hao Peng, Xiaozhi Wang, Feng Yao, Kaisheng Zeng, Lei Hou, Juanzi Li, Zhiyuan Liu, Weixing Shen

In this paper, we check the reliability of EE evaluations and identify three major pitfalls: (1) The data preprocessing discrepancy makes the evaluation results on the same dataset not directly comparable, but the data preprocessing details are not widely noted and specified in papers.

Event Argument Extraction Event Detection +1

Exploring the Impact of Model Scaling on Parameter-Efficient Tuning

1 code implementation 4 Jun 2023 Yusheng Su, Chi-Min Chan, Jiali Cheng, Yujia Qin, Yankai Lin, Shengding Hu, Zonghan Yang, Ning Ding, Xingzhi Sun, Guotong Xie, Zhiyuan Liu, Maosong Sun

Our investigations reveal that model scaling (1) mitigates the effects of the positions of tunable parameters on performance, and (2) enables tuning methods to achieve performance comparable to full-parameter fine-tuning by optimizing fewer tunable parameters.

Structure-Aware Language Model Pretraining Improves Dense Retrieval on Structured Data

1 code implementation 31 May 2023 Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu, Yu Gu, Zhiyuan Liu, Ge Yu

SANTA proposes two pretraining methods to make language models structure-aware and learn effective representations for structured data: 1) Structured Data Alignment, which utilizes the natural alignment relations between structured data and unstructured data for structure-aware pretraining.

Code Search Language Modelling +1

Exploring Lottery Prompts for Pre-trained Language Models

no code implementations 31 May 2023 Yulin Chen, Ning Ding, Xiaobin Wang, Shengding Hu, Hai-Tao Zheng, Zhiyuan Liu, Pengjun Xie

Consistently scaling pre-trained language models (PLMs) imposes substantial burdens on model adaptation, necessitating more efficient alternatives to conventional fine-tuning.

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework

1 code implementation 29 May 2023 Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, Hanlu Wu, Ning Shi, Bo Yuan, Longtao Huang, Hui Xue, Zhiyuan Liu, Maosong Sun, Heng Ji

In our experiments, we conduct a robustness evaluation of RoBERTa models to demonstrate the effectiveness of our evaluation framework, and further show the rationality of each component in the framework.

Adversarial Attack

Emergent Modularity in Pre-trained Transformers

1 code implementation 28 May 2023 Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Chaojun Xiao, Xiaozhi Wang, Xu Han, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Jie Zhou

In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes.

Plug-and-Play Document Modules for Pre-trained Models

1 code implementation 28 May 2023 Chaojun Xiao, Zhengyan Zhang, Xu Han, Chi-Min Chan, Yankai Lin, Zhiyuan Liu, Xiangyang Li, Zhonghua Li, Zhao Cao, Maosong Sun

By inserting document plugins into the backbone PTM for downstream tasks, we can encode a document one time to handle multiple tasks, which is more efficient than conventional encoding-task coupling methods that simultaneously encode documents and input queries using task-specific encoders.

Question Answering

Stochastic Bridges as Effective Regularizers for Parameter-Efficient Tuning

1 code implementation 28 May 2023 Weize Chen, Xu Han, Yankai Lin, Zhiyuan Liu, Maosong Sun, Jie Zhou

Since it is non-trivial to directly model the intermediate states and design a running cost function, we propose to use latent stochastic bridges to regularize the intermediate states and use the regularization as the running cost of PETs.

Plug-and-Play Knowledge Injection for Pre-trained Language Models

1 code implementation 28 May 2023 Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Huadong Wang, Deming Ye, Chaojun Xiao, Xu Han, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Experimental results on three knowledge-driven NLP tasks show that existing injection methods are not suitable for the new paradigm, while map-tuning effectively improves the performance of downstream models.

Augmentation-Adapted Retriever Improves Generalization of Language Models as Generic Plug-In

1 code implementation 27 May 2023 Zichun Yu, Chenyan Xiong, Shi Yu, Zhiyuan Liu

Retrieval augmentation can aid language models (LMs) in knowledge-intensive tasks by supplying them with external information.

Retrieval Zero-shot Generalization

Fusion-in-T5: Unifying Document Ranking Signals for Improved Information Retrieval

no code implementations 24 May 2023 Shi Yu, Chenghao Fan, Chenyan Xiong, David Jin, Zhiyuan Liu, Zhenghao Liu

Common IR pipelines are typically cascade systems that may involve multiple rankers and/or fusion models to integrate different information step-by-step.

Document Ranking Information Retrieval +2

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations

1 code implementation 23 May 2023 Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, Bowen Zhou

Fine-tuning on instruction data has been widely validated as an effective practice for implementing chat language models like ChatGPT.

CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models

2 code implementations 23 May 2023 Cheng Qian, Chi Han, Yi R. Fung, Yujia Qin, Zhiyuan Liu, Heng Ji

Additionally, we introduce the Creation Challenge dataset, featuring 2K diverse questions, to emphasize the necessity and benefits of LLMs' tool creation ability.

2k Math +1

Efficient Cross-Lingual Transfer for Chinese Stable Diffusion with Images as Pivots

no code implementations 19 May 2023 Jinyi Hu, Xu Han, Xiaoyuan Yi, Yutong Chen, Wenhao Li, Zhiyuan Liu, Maosong Sun

IAP optimizes only a separate Chinese text encoder, with all other parameters fixed, to align the Chinese semantic space to the English one in CLIP.

Cross-Lingual Transfer Image Generation

Recyclable Tuning for Continual Pre-training

1 code implementation 15 May 2023 Yujia Qin, Cheng Qian, Xu Han, Yankai Lin, Huadong Wang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

In pilot studies, we find that after continual pre-training, the upgraded PLM remains compatible with the outdated adapted weights to some extent.

Revisiting Graph Contrastive Learning for Anomaly Detection

no code implementations 4 May 2023 Zhiyuan Liu, Chunjie Cao, Fangjian Tao, Jingzhang Sun

In this paper, we delve into the misconception and propose the Multi-GNN and Augmented Graph contrastive framework MAG, which unifies the existing GCAD methods from a contrastive self-supervised perspective.

Anomaly Detection Attribute +1

VPGTrans: Transfer Visual Prompt Generator across LLMs

1 code implementation NeurIPS 2023 Ao Zhang, Hao Fei, Yuan Yao, Wei Ji, Li Li, Zhiyuan Liu, Tat-Seng Chua

While developing a new multimodal LLM (MLLM) by pre-training on tremendous image-text pairs from scratch can be exceedingly resource-consuming, connecting an existing LLM with a comparatively lightweight visual prompt generator (VPG) becomes a feasible paradigm.

Transfer Learning

Rethinking Dense Retrieval's Few-Shot Ability

1 code implementation 12 Apr 2023 Si Sun, Yida Lu, Shi Yu, Xiangyang Li, Zhonghua Li, Zhao Cao, Zhiyuan Liu, Deming Ye, Jie Bao

Moreover, the dataset is split into disjoint base and novel classes, allowing DR models to be continuously trained on ample data from base classes and a few samples in novel classes.

Retrieval

Language-Specific Representation of Emotion-Concept Knowledge Causally Supports Emotion Inference

1 code implementation 19 Feb 2023 Ming Li, Yusheng Su, Hsiu-Yuan Huang, Jiali Cheng, Xin Hu, Xinmiao Zhang, Huadong Wang, Yujia Qin, Xiaozhi Wang, Kristen A. Lindquist, Zhiyuan Liu, Dan Zhang

Humans no doubt use language to communicate about their emotional experiences, but does language in turn help humans understand emotions, or is language just a vehicle of communication?

Attribute Language Modelling

READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises

1 code implementation 14 Feb 2023 Chenglei Si, Zhengyan Zhang, Yingfa Chen, Xiaozhi Wang, Zhiyuan Liu, Maosong Sun

In order to fill this important gap, we construct READIN: a Chinese multi-task benchmark with REalistic And Diverse Input Noises.

Data Augmentation Fairness +2

Semi-supervised Large-scale Fiber Detection in Material Images with Synthetic Data

no code implementations 10 Feb 2023 Lan Fu, Zhiyuan Liu, Jinlong Li, Jeff Simmons, Hongkai Yu, Song Wang

Accurate detection of large-scale, elliptical-shape fibers, including their parameters of center, orientation and major/minor axes, on the 2D cross-sectioned image slices is very important for characterizing the underlying cylinder 3D structures in microscopic material images.

Domain Adaptation

Decoder Tuning: Efficient Language Understanding as Decoding

2 code implementations 16 Dec 2022 Ganqu Cui, Wentao Li, Ning Ding, Longtao Huang, Zhiyuan Liu, Maosong Sun

With the evergrowing sizes of pre-trained models (PTMs), it has been an emerging practice to only provide the inference APIs for users, namely model-as-a-service (MaaS) setting.

Natural Language Understanding

Mul-GAD: a semi-supervised graph anomaly detection framework via aggregating multi-view information

no code implementations 11 Dec 2022 Zhiyuan Liu, Chunjie Cao, Jingzhang Sun

For a more comprehensive conclusion, we further investigate the effect of the objective function and the number of fused views on detection performance.

Graph Anomaly Detection

Visually Grounded Commonsense Knowledge Acquisition

1 code implementation 22 Nov 2022 Yuan Yao, Tianyu Yu, Ao Zhang, Mengdi Li, Ruobing Xie, Cornelius Weber, Zhiyuan Liu, Hai-Tao Zheng, Stefan Wermter, Tat-Seng Chua, Maosong Sun

In this work, we present CLEVER, which formulates CKE as a distantly supervised multi-instance learning problem, where models learn to summarize commonsense relations from a bag of images about an entity pair without any human annotation on image instances.

Language Modelling

MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction

1 code implementation 14 Nov 2022 Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou

It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations, which is larger than existing datasets of all the ERE tasks by at least an order of magnitude.

Event Relation Extraction Relation +1

Finding Skill Neurons in Pre-trained Transformer-based Language Models

1 code implementation 14 Nov 2022 Xiaozhi Wang, Kaiyue Wen, Zhengyan Zhang, Lei Hou, Zhiyuan Liu, Juanzi Li

Furthermore, we demonstrate the skill neurons are most likely generated in pre-training rather than fine-tuning by showing that the skill neurons found with prompt tuning are also crucial for other fine-tuning methods freezing neuron weights, such as the adapter-based tuning and BitFit.

Network Pruning

FPT: Improving Prompt Tuning Efficiency via Progressive Training

1 code implementation 13 Nov 2022 Yufei Huang, Yujia Qin, Huadong Wang, Yichun Yin, Maosong Sun, Zhiyuan Liu, Qun Liu

Inspired by these observations, we propose Fast Prompt Tuning (FPT), which starts by conducting PT using a small-scale partial PLM, and then progressively expands its depth and width until the full-model size.

Few-shot Classification with Hypersphere Modeling of Prototypes

no code implementations 10 Nov 2022 Ning Ding, Yulin Chen, Ganqu Cui, Xiaobin Wang, Hai-Tao Zheng, Zhiyuan Liu, Pengjun Xie

Moreover, it is more convenient to perform metric-based classification with hypersphere prototypes than statistical modeling, as we only need to calculate the distance from a data point to the surface of the hypersphere.

Classification Few-Shot Learning +1
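
The distance computation the excerpt describes has a simple closed form: for a class prototype with center c and radius r, a query x is scored by | ||x - c|| - r |, its distance to the sphere surface. A worked toy example (names and shapes are illustrative):

```python
import numpy as np

def classify(x, centers, radii):
    d = np.linalg.norm(centers - x, axis=1)   # distance to each class center
    return int(np.argmin(np.abs(d - radii)))  # nearest hypersphere surface

centers = np.array([[0.0, 0.0], [4.0, 0.0]])
radii = np.array([1.0, 0.5])
pred = classify(np.array([1.2, 0.0]), centers, radii)  # -> 0 (|1.2 - 1.0| = 0.2)
```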

Sparse Structure Search for Delta Tuning

1 code implementation NeurIPS 2022 Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun

Generally, DT methods exquisitely design delta modules (DT modules) which could be applied to arbitrary fine-grained positions inside PTMs.

Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives

1 code implementation 31 Oct 2022 Si Sun, Chenyan Xiong, Yue Yu, Arnold Overwijk, Zhiyuan Liu, Jie Bao

In this paper, we investigate the instability in the standard dense retrieval training, which iterates between model training and hard negative selection using the being-trained model.

Retrieval

A Close Look into the Calibration of Pre-trained Language Models

2 code implementations 31 Oct 2022 Yangyi Chen, Lifan Yuan, Ganqu Cui, Zhiyuan Liu, Heng Ji

We observe a consistent change in calibration performance across six factors.

Exploring Mode Connectivity for Pre-trained Language Models

1 code implementation 25 Oct 2022 Yujia Qin, Cheng Qian, Jing Yi, Weize Chen, Yankai Lin, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou

(3) How does the PLM's task knowledge change along the path connecting two minima?

Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning

1 code implementation 24 Oct 2022 Jing Yi, Weize Chen, Yujia Qin, Yankai Lin, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou

To fathom the mystery, we hypothesize that the adaptations of different DETs could all be reparameterized as low-dimensional optimizations in a unified optimization subspace, which could be found by jointly decomposing independent solutions of different DETs.

Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP

1 code implementation 19 Oct 2022 Yangyi Chen, Hongcheng Gao, Ganqu Cui, Fanchao Qi, Longtao Huang, Zhiyuan Liu, Maosong Sun

We discuss the deficiencies in previous work and propose our suggestions that the research on the Security-oriented adversarial NLP (SoadNLP) should: (1) evaluate their methods on security tasks to demonstrate the real-world concerns; (2) consider real-world attackers' goals, instead of developing impractical methods.

Data Augmentation

Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models

1 code implementation COLING 2022 Zichun Yu, Tianyu Gao, Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Maosong Sun, Jie Zhou

Prompting, which casts downstream applications as language modeling tasks, has been shown to be sample-efficient compared to standard fine-tuning with pre-trained models.

Few-Shot Learning Language Modelling +1

Universal Vision-Language Dense Retrieval: Learning A Unified Representation Space for Multi-Modal Retrieval

1 code implementation 1 Sep 2022 Zhenghao Liu, Chenyan Xiong, Yuanhuiyi Lv, Zhiyuan Liu, Ge Yu

To learn a unified embedding space for multi-modal retrieval, UniVL-DR proposes two techniques: 1) Universal embedding optimization strategy, which contrastively optimizes the embedding space using the modality-balanced hard negatives; 2) Image verbalization method, which bridges the modality gap between images and texts in the raw data space.

Image Retrieval Open-Domain Question Answering +2
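
The contrastive optimization mentioned above is typically an InfoNCE-style loss over in-batch negatives, sketched below; UniVL-DR's modality-balanced hard-negative sampling is not reproduced here:

```python
import torch
import torch.nn.functional as F

def info_nce(q, d, temperature=0.05):
    # q: query embeddings, d: document embeddings; the i-th doc is the
    # positive for the i-th query, all other in-batch docs are negatives.
    q, d = F.normalize(q, dim=-1), F.normalize(d, dim=-1)
    logits = q @ d.t() / temperature          # (batch, batch) similarities
    labels = torch.arange(q.size(0))
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```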

Improving Task Generalization via Unified Schema Prompt

no code implementations 5 Aug 2022 Wanjun Zhong, Yifan Gao, Ning Ding, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

Task generalization has been a long-standing challenge in Natural Language Processing (NLP).

Effective Few-Shot Named Entity Linking by Meta-Learning

1 code implementation 12 Jul 2022 Xiuxing Li, Zhenyu Li, Zhengyan Zhang, Ning Liu, Haitao Yuan, Wei Zhang, Zhiyuan Liu, Jianyong Wang

In this paper, we endeavor to solve the problem of few-shot entity linking, which only requires a minimal amount of in-domain labeled data and is more practical in real situations.

Entity Linking Knowledge Base Completion +2

GACT: Activation Compressed Training for Generic Network Architectures

1 code implementation 22 Jun 2022 Xiaoxuan Liu, Lianmin Zheng, Dequan Wang, Yukuo Cen, Weize Chen, Xu Han, Jianfei Chen, Zhiyuan Liu, Jie Tang, Joey Gonzalez, Michael Mahoney, Alvin Cheung

Training large neural network (NN) models requires extensive memory resources, and Activation Compressed Training (ACT) is a promising approach to reduce training memory footprint.

A Unified Understanding of Deep NLP Models for Text Classification

no code implementations 19 Jun 2022 Zhen Li, Xiting Wang, Weikai Yang, Jing Wu, Zhengyan Zhang, Zhiyuan Liu, Maosong Sun, Hui Zhang, Shixia Liu

The rapid development of deep natural language processing (NLP) models for text classification has led to an urgent need for a unified understanding of these models proposed individually.

text-classification Text Classification

A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks

1 code implementation 17 Jun 2022 Ganqu Cui, Lifan Yuan, Bingxiang He, Yangyi Chen, Zhiyuan Liu, Maosong Sun

However, we highlight two issues in previous backdoor learning evaluations: (1) The differences between real-world scenarios (e.g., releasing poisoned datasets or models) are neglected, and we argue that each scenario has its own constraints and concerns, thus requiring specific evaluation protocols; (2) The evaluation metrics only consider whether the attacks can flip the models' predictions on poisoned samples and retain performance on benign samples, but ignore that poisoned samples should also be stealthy and semantic-preserving.

text similarity

Sparse Structure Search for Parameter-Efficient Tuning

no code implementations 15 Jun 2022 Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun

The searched structures preserve more than 99% of the fine-tuning performance with 0.01% trainable parameters.

Prompt Tuning for Discriminative Pre-trained Language Models

1 code implementation Findings (ACL) 2022 Yuan Yao, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, Jianyong Wang

Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks.

Language Modelling Question Answering +2

PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models

1 code implementation 23 May 2022 Yuan Yao, Qianyu Chen, Ao Zhang, Wei Ji, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun

We show that PEVL enables state-of-the-art performance of detector-free VLP models on position-sensitive tasks such as referring expression comprehension and phrase grounding, and also improves the performance on position-insensitive tasks with grounded inputs.

Language Modelling Object +7

ProQA: Structural Prompt-based Pre-training for Unified Question Answering

1 code implementation NAACL 2022 Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

Furthermore, ProQA exhibits strong ability in both continual learning and transfer learning by taking advantage of the structural prompt.

Continual Learning Few-Shot Learning +2

Dimension Reduction for Efficient Dense Retrieval via Conditional Autoencoder

1 code implementation 6 May 2022 Zhenghao Liu, Han Zhang, Chenyan Xiong, Zhiyuan Liu, Yu Gu, Xiaohua Li

These embeddings need to be high-dimensional to fit training signals and guarantee the retrieval effectiveness of dense retrievers.

Dimensionality Reduction Information Retrieval +1
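
The dimension-reduction setting can be sketched with a plain autoencoder over retrieval embeddings; the paper's conditional autoencoder additionally conditions decoding on retrieval-specific signals, which this minimal version omits:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingAE(nn.Module):
    def __init__(self, dim=768, low=128):
        super().__init__()
        self.enc = nn.Linear(dim, low)   # compress for index storage
        self.dec = nn.Linear(low, dim)   # reconstruct the original embedding

    def forward(self, x):
        z = self.enc(x)                  # low-dim embedding used at search time
        return self.dec(z), z

model = EmbeddingAE()
x = torch.randn(4, 768)
recon, z = model(x)
loss = F.mse_loss(recon, x)              # reconstruction objective
```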

P^3 Ranker: Mitigating the Gaps between Pre-training and Ranking Fine-tuning with Prompt-based Learning and Pre-finetuning

1 code implementation 4 May 2022 Xiaomeng Hu, Shi Yu, Chenyan Xiong, Zhenghao Liu, Zhiyuan Liu, Ge Yu

In this paper, we identify and study the two mismatches between pre-training and ranking fine-tuning: the training schema gap regarding the differences in training objectives and model architectures, and the task knowledge gap considering the discrepancy between the knowledge needed in ranking and that learned during pre-training.

Exploring the Universal Vulnerability of Prompt-based Learning Paradigm

1 code implementation Findings (NAACL) 2022 Lei Xu, Yangyi Chen, Ganqu Cui, Hongcheng Gao, Zhiyuan Liu

Prompt-based learning paradigm bridges the gap between pre-training and fine-tuning, and works effectively under the few-shot setting.

Prototypical Verbalizer for Prompt-based Few-shot Tuning

1 code implementation ACL 2022 Ganqu Cui, Shengding Hu, Ning Ding, Longtao Huang, Zhiyuan Liu

However, manual verbalizers heavily depend on domain-specific prior knowledge and human effort, while finding appropriate label words automatically remains challenging. In this work, we propose the prototypical verbalizer (ProtoVerb), which is built directly from training data.

Contrastive Learning Entity Typing +2

LEVEN: A Large-Scale Chinese Legal Event Detection Dataset

1 code implementation Findings (ACL) 2022 Feng Yao, Chaojun Xiao, Xiaozhi Wang, Zhiyuan Liu, Lei Hou, Cunchao Tu, Juanzi Li, Yun Liu, Weixing Shen, Maosong Sun

However, existing Legal Event Detection (LED) datasets only cover a narrow range of event types and have limited annotated data, which restricts the development of LED methods and their downstream applications.

Event Detection Retrieval

A Simple but Effective Pluggable Entity Lookup Table for Pre-trained Language Models

1 code implementation ACL 2022 Deming Ye, Yankai Lin, Peng Li, Maosong Sun, Zhiyuan Liu

Pre-trained language models (PLMs) cannot well recall rich factual knowledge of entities exhibited in large-scale corpora, especially those rare entities.

Domain Adaptation

QuoteR: A Benchmark of Quote Recommendation for Writing

1 code implementation ACL 2022 Fanchao Qi, Yanhui Yang, Jing Yi, Zhili Cheng, Zhiyuan Liu, Maosong Sun

To facilitate the research on this task, we build a large and fully open quote recommendation dataset called QuoteR, which comprises three parts including English, standard Chinese and classical Chinese.

Training Free Graph Neural Networks for Graph Matching

1 code implementation 14 Jan 2022 Zhiyuan Liu, Yixin Cao, Fuli Feng, Xiang Wang, Jie Tang, Kenji Kawaguchi, Tat-Seng Chua

We present a framework of Training Free Graph Matching (TFGM) to boost the performance of Graph Neural Networks (GNNs) based graph matching, providing a fast promising solution without training (training-free).

Entity Alignment Graph Matching +1

Video as Conditional Graph Hierarchy for Multi-Granular Question Answering

1 code implementation 12 Dec 2021 Junbin Xiao, Angela Yao, Zhiyuan Liu, Yicong Li, Wei Ji, Tat-Seng Chua

To align with the multi-granular essence of linguistic concepts in language queries, we propose to model video as a conditional graph hierarchy which weaves together visual facts of different granularity in a level-wise manner, with the guidance of corresponding textual cues.

Question Answering Video Question Answering +1

Multi-modal application: Image Memes Generation

no code implementations 3 Dec 2021 Zhiyuan Liu, Chuanzheng Sun, Yuxin Jiang, Shiqi Jiang, Mei Ming

An Internet meme commonly takes the form of an image and is created by combining a meme template (image) and a caption (natural language sentence).

Cultural Vocal Bursts Intensity Prediction Sentence

On Transferability of Prompt Tuning for Natural Language Processing

1 code implementation NAACL 2022 Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou

To explore whether we can improve PT via prompt transfer, we empirically investigate the transferability of soft prompts across different downstream tasks and PLMs in this work.

Natural Language Understanding Transfer Learning

OpenPrompt: An Open-source Framework for Prompt-learning

2 code implementations ACL 2022 Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun

Prompt-learning has become a new paradigm in modern natural language processing, which directly adapts pre-trained language models (PLMs) to cloze-style prediction, autoregressive modeling, or sequence-to-sequence generation, resulting in promising performances on various tasks.

Exploring Universal Intrinsic Task Subspace via Prompt Tuning

1 code implementation 15 Oct 2021 Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Jing Yi, Weize Chen, Zhiyuan Liu, Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou

In the experiments, we study diverse few-shot NLP tasks and surprisingly find that in a 250-dimensional subspace found with 100 tasks, by only tuning 250 free parameters, we can recover 97% and 83% of the full prompt tuning performance for 100 seen tasks (using different training data) and 20 unseen tasks, respectively, showing great generalization ability of the found intrinsic task subspace.
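
The reparameterization behind such intrinsic-subspace experiments can be sketched in a few lines: a soft prompt is decoded from a low-dimensional vector through a shared projection, so only the low-dimensional vector is tuned per task. The projection below is random rather than learned from training tasks, so this captures only the shape of the idea:

```python
import torch

d_intrinsic, n_tokens, d_model = 250, 20, 768     # mirrors the 250-dim subspace
proj = torch.randn(d_intrinsic, n_tokens * d_model)  # frozen, shared decoder
z = torch.zeros(d_intrinsic, requires_grad=True)     # the only tuned parameters
soft_prompt = (z @ proj).view(n_tokens, d_model)     # prepended to the frozen PLM
```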

Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks

1 code implementation 15 Oct 2021 Yangyi Chen, Fanchao Qi, Hongcheng Gao, Zhiyuan Liu, Maosong Sun

In this paper, we find two simple tricks that can make existing textual backdoor attacks much more harmful.

Vocal Bursts Valence Prediction

Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer

1 code implementation EMNLP 2021 Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, Maosong Sun

In this paper, we make the first attempt to conduct adversarial and backdoor attacks based on text style transfer, which is aimed at altering the style of a sentence while preserving its meaning.

Backdoor Attack Sentence +2

bert2BERT: Towards Reusable Pretrained Language Models

no code implementations ACL 2022 Cheng Chen, Yichun Yin, Lifeng Shang, Xin Jiang, Yujia Qin, Fengyu Wang, Zhi Wang, Xiao Chen, Zhiyuan Liu, Qun Liu

However, pre-training a large language model consumes intensive computational resources, and most models are trained from scratch without reusing existing pre-trained models, which is wasteful.

Language Modelling Large Language Model
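The general idea behind reusing a small model to initialize a larger one can be illustrated with a Net2Net-style width expansion (a sketch of the family of operators, not the exact bert2BERT construction):

```python
# Net2Net-style width expansion sketch: grow a weight matrix by duplicating
# randomly chosen units, rescaling duplicated input columns so the function
# computed by this layer is preserved for identical duplicated activations.
import numpy as np

def expand_linear(w, new_in, new_out, rng):
    """w: (out, in) weights from the small model; returns (new_out, new_in)."""
    out_dim, in_dim = w.shape
    in_idx = np.concatenate([np.arange(in_dim), rng.integers(0, in_dim, new_in - in_dim)])
    out_idx = np.concatenate([np.arange(out_dim), rng.integers(0, out_dim, new_out - out_dim)])
    w_big = w[np.ix_(out_idx, in_idx)].astype(float)
    # divide each duplicated input column by its multiplicity
    counts = np.bincount(in_idx, minlength=in_dim)
    w_big /= counts[in_idx]
    return w_big

# usage: w_big = expand_linear(w_small, 1024, 1024, np.random.default_rng(0))
# Note: fully preserving the network also requires adjusting the next layer
# for the duplicated outputs; that step is omitted here.
```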

Program Transfer for Answering Complex Questions over Knowledge Bases

1 code implementation ACL 2022 Shulin Cao, Jiaxin Shi, Zijun Yao, Xin Lv, Jifan Yu, Lei Hou, Juanzi Li, Zhiyuan Liu, Jinghui Xiao

In this paper, we propose the approach of program transfer, which aims to leverage the valuable program annotations on the rich-resourced KBs as external supervision signals to aid program induction for the low-resourced KBs that lack program annotations.

Program induction Semantic Parsing

Few-shot Learning with Big Prototypes

no code implementations29 Sep 2021 Ning Ding, Yulin Chen, Xiaobin Wang, Hai-Tao Zheng, Zhiyuan Liu, Pengjun Xie

A big prototype can be effectively modeled by two sets of learnable parameters: one is the center of the hypersphere, an embedding with the same dimensionality as the training examples, and the other is the radius of the hypersphere.

Few-Shot Learning
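A minimal sketch of the hypersphere prototype, assuming a query is scored by its distance to the sphere's surface (the paper's exact scoring function may differ):

```python
# Hypothetical "big prototype" sketch: each class keeps a learnable center c
# and radius r; queries are scored by distance to the sphere surface rather
# than to a single point prototype.
import torch

class BigPrototype(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.center = torch.nn.Parameter(torch.zeros(dim))     # parameter set 1
        self.log_radius = torch.nn.Parameter(torch.zeros(()))  # parameter set 2; r > 0 via exp

    def distance(self, x):
        """x: (batch, dim) query embeddings -> (batch,) distances to the surface."""
        r = self.log_radius.exp()
        return (torch.cdist(x, self.center[None]) - r).abs().squeeze(-1)
```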

CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models

1 code implementation24 Sep 2021 Yuan YAO, Ao Zhang, Zhengyan Zhang, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun

Pre-Trained Vision-Language Models (VL-PTMs) have shown promising capabilities in grounding natural language in image data, facilitating a broad variety of cross-modal tasks.

Visual Grounding

PPT: Pre-trained Prompt Tuning for Few-shot Learning

1 code implementation ACL 2022 Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang

To ensure the generalization of PPT, we formulate similar classification tasks into a unified task form and pre-train soft prompts for this unified task.

Attribute Few-Shot Learning

Non-Euclidean Analysis of Joint Variations in Multi-Object Shapes

no code implementations6 Sep 2021 Zhiyuan Liu, Jörn Schulz, Mohsen Taheri, Martin Styner, James Damon, Stephen Pizer, J. S. Marron

This paper considers joint analysis of multiple functionally related structures in classification tasks.

Prompt-Learning for Fine-Grained Entity Typing

no code implementations24 Aug 2021 Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu, Juanzi Li, Hong-Gee Kim

In this work, we investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios.

Entity Typing Knowledge Probing +5

More Robust Dense Retrieval with Contrastive Dual Learning

1 code implementation16 Jul 2021 Yizhi Li, Zhenghao Liu, Chenyan Xiong, Zhiyuan Liu

With contrastive learning, the dual training objective of DANCE learns more tailored representations for queries and documents, keeping the embedding space smooth and uniform and improving DANCE's ranking performance on the MS MARCO document retrieval task.

Contrastive Learning Information Retrieval +2
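The dual objective can be sketched as a symmetric pair of in-batch InfoNCE losses (an assumed form of the objective, not the released DANCE code):

```python
# Sketch of a dual contrastive objective: query->document InfoNCE plus the
# symmetric document->query direction over in-batch negatives.
import torch
import torch.nn.functional as F

def dual_contrastive_loss(q, d, tau=0.05):
    """q, d: (batch, dim) paired query/document embeddings."""
    q, d = F.normalize(q, dim=-1), F.normalize(d, dim=-1)
    sim = q @ d.T / tau                        # in-batch similarity matrix
    labels = torch.arange(q.size(0))
    loss_q2d = F.cross_entropy(sim, labels)    # each query retrieves its document
    loss_d2q = F.cross_entropy(sim.T, labels)  # each document retrieves its query
    return loss_q2d + loss_d2q
```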

CPM-2: Large-scale Cost-effective Pre-trained Language Models

2 code implementations20 Jun 2021 Zhengyan Zhang, Yuxian Gu, Xu Han, Shengqi Chen, Chaojun Xiao, Zhenbo Sun, Yuan YAO, Fanchao Qi, Jian Guan, Pei Ke, Yanzheng Cai, Guoyang Zeng, Zhixing Tan, Zhiyuan Liu, Minlie Huang, Wentao Han, Yang Liu, Xiaoyan Zhu, Maosong Sun

We present a suite of cost-effective techniques for the use of PLMs to deal with the efficiency issues of pre-training, fine-tuning, and inference.

Evaluating Modules in Graph Contrastive Learning

1 code implementation15 Jun 2021 Ganqu Cui, Yufeng Du, Cheng Yang, Jie zhou, Liang Xu, Xing Zhou, Xingyi Cheng, Zhiyuan Liu

The recent emergence of contrastive learning approaches facilitates the application on graph representation learning (GRL), introducing graph contrastive learning (GCL) into the literature.

Contrastive Learning Graph Classification +1

Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution

1 code implementation ACL 2021 Fanchao Qi, Yuan YAO, Sophia Xu, Zhiyuan Liu, Maosong Sun

Recent studies show that neural natural language processing (NLP) models are vulnerable to backdoor attacks.

Open Hierarchical Relation Extraction

1 code implementation NAACL 2021 Kai Zhang, Yuan YAO, Ruobing Xie, Xu Han, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun

To establish the bidirectional connections between OpenRE and relation hierarchy, we propose the task of open hierarchical relation extraction and present a novel OHRE framework for the task.

Clustering Relation +1

Sub-Character Tokenization for Chinese Pretrained Language Models

2 code implementations1 Jun 2021 Chenglei Si, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

Pronunciation-based SubChar tokenizers can encode Chinese homophones into the same transliteration sequences and produce the same tokenization output, and are hence robust to homophone typos.

Chinese Word Segmentation Computational Efficiency +2
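The pronunciation-based pipeline can be pictured as transliterate-then-tokenize, so homophones collapse to one surface form before subword segmentation. A sketch using the pypinyin library (the released SubChar tokenizers use their own transliteration and a trained subword vocabulary):

```python
# Sketch of pronunciation-based sub-character tokenization: convert characters
# to pinyin so homophones share one surface form, then run an ordinary subword
# tokenizer over the transliteration.
from pypinyin import lazy_pinyin  # tone marks dropped by default

text = "琦玉"                      # example; a homophone typo maps to the same pinyin
translit = " ".join(lazy_pinyin(text))
print(translit)                   # "qi yu" -> feed this string to a trained BPE tokenizer
```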

Fully Hyperbolic Neural Networks

1 code implementation ACL 2022 Weize Chen, Xu Han, Yankai Lin, Hexu Zhao, Zhiyuan Liu, Peng Li, Maosong Sun, Jie zhou

Hyperbolic neural networks have shown great potential for modeling complex data.

Knowledge Inheritance for Pre-trained Language Models

2 code implementations NAACL 2022 Yujia Qin, Yankai Lin, Jing Yi, Jiajie Zhang, Xu Han, Zhengyan Zhang, Yusheng Su, Zhiyuan Liu, Peng Li, Maosong Sun, Jie zhou

Specifically, we introduce a pre-training framework named "knowledge inheritance" (KI) and explore how could knowledge distillation serve as auxiliary supervision during pre-training to efficiently learn larger PLMs.

Domain Adaptation Knowledge Distillation +2
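Knowledge distillation as auxiliary supervision amounts to mixing the usual self-supervised loss with a soft-label loss from the smaller, already-trained teacher. A sketch with an assumed weighting scheme:

```python
# Sketch of distillation as auxiliary pre-training supervision (assumed
# weighting): the larger student learns from both the labels and the
# smaller teacher's softened output distribution.
import torch.nn.functional as F

def ki_loss(student_logits, teacher_logits, labels, alpha=0.5, tau=2.0):
    lm = F.cross_entropy(student_logits, labels)  # usual (e.g., MLM) objective
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                  F.softmax(teacher_logits / tau, dim=-1),
                  reduction="batchmean") * tau * tau
    return (1 - alpha) * lm + alpha * kd
```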

Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger

2 code implementations ACL 2021 Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, Maosong Sun

As far as we know, almost all existing textual backdoor attack methods insert additional contents into normal samples as triggers, which causes the trigger-embedded samples to be detected and the backdoor attacks to be blocked without much effort.

Backdoor Attack

Automatic Construction of Sememe Knowledge Bases via Dictionaries

1 code implementation Findings (ACL) 2021 Fanchao Qi, Yangyi Chen, Fengyu Wang, Zhiyuan Liu, Xiao Chen, Maosong Sun

We use this method to build an English SKB and a French SKB, and conduct comprehensive evaluations from both intrinsic and extrinsic perspectives.

PTR: Prompt Tuning with Rules for Text Classification

1 code implementation24 May 2021 Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, Maosong Sun

This indicates that PTR is a promising approach to take advantage of both human prior knowledge and PLMs for those complicated classification tasks.

Natural Language Inference Relation Classification +4

Few-NERD: A Few-Shot Named Entity Recognition Dataset

7 code implementations ACL 2021 Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu

In this paper, we present Few-NERD, a large-scale human-annotated few-shot NER dataset with a hierarchy of 8 coarse-grained and 66 fine-grained entity types.

Few-shot NER Named Entity Recognition

Few-Shot Conversational Dense Retrieval

1 code implementation10 May 2021 Shi Yu, Zhenghao Liu, Chenyan Xiong, Tao Feng, Zhiyuan Liu

In this paper, we present a Conversational Dense Retrieval system, ConvDR, that learns contextualized embeddings for multi-turn conversational queries and retrieves documents solely using embedding dot products.

Conversational Search Retrieval

Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents

1 code implementation9 May 2021 Chaojun Xiao, Xueyu Hu, Zhiyuan Liu, Cunchao Tu, Maosong Sun

Legal artificial intelligence (LegalAI) aims to benefit legal systems with the technology of artificial intelligence, especially natural language processing (NLP).

Language Modelling Question Answering +2

Is Multi-Hop Reasoning Really Explainable? Towards Benchmarking Reasoning Interpretability

1 code implementation EMNLP 2021 Xin Lv, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Yichi Zhang, Zelin Dai

However, we find in experiments that many paths given by these models are actually unreasonable, while little work has been done on evaluating their interpretability.

Benchmarking Link Prediction

Incentivizing Exploration in Linear Bandits under Information Gap

no code implementations8 Apr 2021 Huazheng Wang, Haifeng Xu, Chuanhao Li, Zhiyuan Liu, Hongning Wang

We study the problem of incentivizing exploration for myopic users in linear bandits, where users tend to exploit the arm with the highest predicted reward instead of exploring.

Visual Distant Supervision for Scene Graph Generation

1 code implementation ICCV 2021 Yuan YAO, Ao Zhang, Xu Han, Mengdi Li, Cornelius Weber, Zhiyuan Liu, Stefan Wermter, Maosong Sun

In this work, we propose visual distant supervision, a novel paradigm of visual relation learning, which can train scene graph models without any human-labeled data.

Graph Generation Predicate Classification +2

Equality before the Law: Legal Judgment Consistency Analysis for Fairness

no code implementations25 Mar 2021 Yuzhong Wang, Chaojun Xiao, Shirong Ma, Haoxi Zhong, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, Maosong Sun

We propose to simulate judges from different groups with legal judgment prediction (LJP) models and measure the judicial inconsistency with the disagreement of the judgment results given by LJP models trained on different groups.

Fairness
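The inconsistency measure can be pictured as inter-model disagreement: train one LJP model per group, predict on the same cases, and count how often the models disagree. A minimal sketch (the paper's actual metric may be more refined):

```python
# Sketch of judicial inconsistency as inter-model disagreement: given the
# predicted judgment labels of several group-specific models on shared cases,
# report the fraction of cases where at least two models disagree.
import numpy as np

def disagreement_rate(preds):
    """preds: (num_models, num_cases) array of predicted labels."""
    preds = np.asarray(preds)
    per_case = [len(set(col)) > 1 for col in preds.T]  # any disagreement on this case?
    return float(np.mean(per_case))
```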

UPRec: User-Aware Pre-training for Recommender Systems

no code implementations22 Feb 2021 Chaojun Xiao, Ruobing Xie, Yuan YAO, Zhiyuan Liu, Maosong Sun, Xu Zhang, Leyu Lin

Existing sequential recommendation methods rely on large amounts of training data and usually suffer from the data sparsity problem.

Self-Supervised Learning Sequential Recommendation

Representation Learning for Natural Language Processing

no code implementations7 Feb 2021 Zhiyuan Liu, Yankai Lin, Maosong Sun

This book aims to review and present the recent advances of distributed representation learning for NLP, including why representation learning can improve NLP, how representation learning takes part in various important topics of NLP, and what challenges are still not well addressed by distributed representation.

Representation Learning

CSS-LM: A Contrastive Framework for Semi-supervised Fine-tuning of Pre-trained Language Models

1 code implementation7 Feb 2021 Yusheng Su, Xu Han, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Peng Li, Jie zhou, Maosong Sun

We then perform contrastive semi-supervised learning on both the retrieved unlabeled and original labeled instances to help PLMs capture crucial task-related semantic features.

OpenMatch: An Open Source Library for Neu-IR Research

1 code implementation30 Jan 2021 Zhenghao Liu, Kaitao Zhang, Chenyan Xiong, Zhiyuan Liu, Maosong Sun

OpenMatch is a Python-based library that serves for Neural Information Retrieval (Neu-IR) research.

Document Ranking Information Retrieval +1

Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks

1 code implementation ICML Workshop AML 2021 Zhengyan Zhang, Guangxuan Xiao, Yongwei Li, Tian Lv, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Xin Jiang, Maosong Sun

In this work, we demonstrate the universal vulnerability of PTMs, where fine-tuned PTMs can be easily controlled by backdoor attacks in arbitrary downstream tasks.

Backdoor Attack

Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning

1 code implementation31 Dec 2020 Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

In this work, we propose a simple and effective method to cover a much larger proportion of the attack search space, called Adversarial and Mixup Data Augmentation (AMDA).

Adversarial Robustness Text Augmentation +2
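The mixup half of AMDA interpolates between pairs of examples so training covers points between observed samples. A sketch assuming mixup is applied to sentence-level embeddings (e.g., [CLS] vectors) and one-hot labels:

```python
# Sketch of mixup on text representations: interpolate pairs of embeddings
# and their one-hot labels with a Beta-distributed mixing coefficient.
import torch

def mixup(emb, one_hot, alpha=1.0):
    """emb: (batch, dim) embeddings; one_hot: (batch, classes) labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(emb.size(0))
    mixed_emb = lam * emb + (1 - lam) * emb[perm]
    mixed_lab = lam * one_hot + (1 - lam) * one_hot[perm]
    return mixed_emb, mixed_lab
```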

Few-Shot Text Ranking with Meta Adapted Synthetic Weak Supervision

1 code implementation ACL 2021 Si Sun, Yingzhuo Qian, Zhenghao Liu, Chenyan Xiong, Kaitao Zhang, Jie Bao, Zhiyuan Liu, Paul Bennett

To democratize the benefits of Neu-IR, this paper presents MetaAdaptRank, a domain adaptive learning method that generalizes Neu-IR models from label-rich source domains to few-shot target domains.

Information Retrieval Learning-To-Rank +1

Try to Substitute: An Unsupervised Chinese Word Sense Disambiguation Method Based on HowNet

1 code implementation COLING 2020 Bairu Hou, Fanchao Qi, Yuan Zang, Xurui Zhang, Zhiyuan Liu, Maosong Sun

In this paper, we propose a new unsupervised method for HowNet-based Chinese WSD, which exploits the masked language model task of pre-trained language models.

Language Modelling Word Sense Disambiguation
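The substitution idea can be sketched with an off-the-shelf masked LM: mask the ambiguous word and score sense-specific substitutes in its place; the sense whose substitutes fit best wins. A toy illustration with hypothetical sentence and candidates:

```python
# Sketch of substitution-based unsupervised WSD: mask the target word and let
# a Chinese masked LM score single-character substitutes tied to each sense.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-chinese")
masked = "他在球场上[MASK]球。"   # hypothetical; the ambiguous target is masked
candidates = ["打", "踢"]         # substitutes associated with candidate senses
for pred in fill(masked, targets=candidates):
    print(pred["token_str"], pred["score"])
# The sense whose substitutes score highest under the MLM is selected.
```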

Neural Gibbs Sampling for Joint Event Argument Extraction

1 code implementation Asian Chapter of the Association for Computational Linguistics 2020 Xiaozhi Wang, Shengyu Jia, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Jie zhou

Existing EAE methods extract event argument roles either independently or sequentially, and thus cannot adequately model the joint probability distribution among event arguments and their roles.

Event Argument Extraction Event Extraction
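Gibbs sampling addresses the joint distribution by iteratively resampling each argument's role conditioned on the current roles of all the others. A generic sketch with an assumed conditional-scoring interface:

```python
# Generic Gibbs-sampling sketch over argument roles (assumed interface):
# resample each argument's role conditioned on the others' current roles.
import random

def gibbs_sample(args, roles, conditional, iters=50):
    """conditional(i, assignment) -> list of (role, prob) for argument i,
    given the current roles of all other arguments."""
    assignment = [random.choice(roles) for _ in args]
    for _ in range(iters):
        for i in range(len(args)):
            dist = conditional(i, assignment)
            labels, probs = zip(*dist)
            assignment[i] = random.choices(labels, weights=probs, k=1)[0]
    return assignment
```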

TLab: Traffic Map Movie Forecasting Based on HR-NET

no code implementations13 Nov 2020 Fanyou Wu, Yang Liu, Zhiyuan Liu, Xiaobo Qu, Rado Gazo, Eva Haviarova

In our solution to the 2020 competition, we design multiple model variants based on HR-NET and UNet.

Feature Engineering
