Search Results for author: Gongshen Liu

Found 24 papers, 8 papers with code

Improving Constituent Representation with Hypertree Neural Networks

no code implementations NAACL 2022 Hao Zhou, Gongshen Liu, Kewei Tu

Many natural language processing tasks involve text spans and thus high-quality span representations are needed to enhance neural approaches to these tasks.

Sentence

A Multi-Task Dual-Tree Network for Aspect Sentiment Triplet Extraction

no code implementations COLING 2022 Yichun Zhao, Kui Meng, Gongshen Liu, Jintao Du, Huijia Zhu

Aspect Sentiment Triplet Extraction (ASTE) aims at extracting triplets from a given sentence, where each triplet includes an aspect, its sentiment polarity, and a corresponding opinion explaining the polarity.

Aspect Sentiment Triplet Extraction · Sentence

MKF-ADS: Multi-Knowledge Fusion Based Self-supervised Anomaly Detection System for Control Area Network

no code implementations7 Mar 2024 Pengzhou Cheng, Zongru Wu, Gongshen Liu

The STcAM with fine-pruning uses one-dimensional convolution (Conv1D) to extract spatial features and subsequently utilizes the Bidirectional Long Short Term Memory (Bi-LSTM) to extract the temporal features, where the attention mechanism will focus on the important time steps.

Intrusion Detection · Knowledge Distillation +2
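The STcAM pipeline described above ends in an attention step that pools Bi-LSTM hidden states over time. As a hedged illustration only (not the authors' implementation), the sketch below shows that attention-pooling step in NumPy; the Conv1D and Bi-LSTM stages are replaced by toy random hidden states.

```python
import numpy as np

def temporal_attention(hidden, w):
    """Score each time step of `hidden` (T x d) against a learned
    vector `w` (d,) and return the attention-weighted summary (d,)."""
    scores = hidden @ w                      # (T,) raw alignment scores
    scores = np.exp(scores - scores.max())   # numerically stable softmax
    alpha = scores / scores.sum()            # attention weights over time steps
    return alpha @ hidden                    # weighted sum of hidden states

# Toy stand-in for Bi-LSTM outputs over T=4 CAN-frame time steps, d=3.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
w = rng.normal(size=3)
summary = temporal_attention(H, w)
print(summary.shape)  # (3,)
```

In a real model `w` would be learned jointly with the recurrent layers; here it is random and serves only to show the pooling mechanics.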

Syntactic Ghost: An Imperceptible General-purpose Backdoor Attacks on Pre-trained Language Models

no code implementations29 Feb 2024 Pengzhou Cheng, Wei Du, Zongru Wu, Fengwei Zhang, Libo Chen, Gongshen Liu

Specifically, the method hostilely manipulates poisoned samples with different predefined syntactic structures as stealth triggers and then implants the backdoor into the pre-trained representation space without disturbing the primitive knowledge.

Contrastive Learning · Natural Language Understanding

How Large Language Models Encode Context Knowledge? A Layer-Wise Probing Study

1 code implementation25 Feb 2024 Tianjie Ju, Weiwei Sun, Wei Du, Xinwei Yuan, Zhaochun Ren, Gongshen Liu

Previous work has showcased the intriguing capability of large language models (LLMs) in retrieving facts and processing context knowledge.

Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space

no code implementations19 Feb 2024 Zongru Wu, Zhuosheng Zhang, Pengzhou Cheng, Gongshen Liu

In this paper, we investigate the learning mechanisms of backdoor LMs in the frequency space by Fourier analysis.
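As a toy illustration of what "Fourier analysis in the frequency space" can mean (this is not the paper's actual procedure), the sketch below splits the energy of a 1-D signal into low- and high-frequency bands with NumPy's real FFT; in the paper's setting the signal of interest would come from model training dynamics.

```python
import numpy as np

def band_energy(signal, cutoff):
    """Split the spectral energy of a real 1-D signal into
    low- and high-frequency bands at frequency-bin `cutoff`."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum[:cutoff].sum(), spectrum[cutoff:].sum()

t = np.linspace(0, 1, 256, endpoint=False)
slow = np.sin(2 * np.pi * 2 * t)         # low-frequency component
fast = 0.3 * np.sin(2 * np.pi * 60 * t)  # high-frequency component
low, high = band_energy(slow + fast, cutoff=10)
print(low > high)  # the slow component dominates -> True
```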

Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models

no code implementations19 Feb 2024 Tianjie Ju, Yijin Chen, Xinwei Yuan, Zhuosheng Zhang, Wei Du, Yubin Zheng, Gongshen Liu

Recent work has showcased the powerful capability of large language models (LLMs) in recalling knowledge and reasoning.

knowledge editing

Improving Non-autoregressive Machine Translation with Error Exposure and Consistency Regularization

no code implementations15 Feb 2024 Xinran Chen, Sufeng Duan, Gongshen Liu

Being one of the IR-NAT (Iterative-refinement-based NAT) frameworks, the Conditional Masked Language Model (CMLM) adopts the mask-predict paradigm to re-predict the masked low-confidence tokens.

Language Modelling · Machine Translation +1
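The mask-predict paradigm mentioned above can be sketched as an iterative loop: predict all positions, then re-mask and re-predict the least confident ones, with the number of re-masked tokens decaying over iterations. The toy below uses a hypothetical stand-in `toy_model`, not a real translation model.

```python
MASK = "<mask>"

def toy_model(tokens):
    """Hypothetical predictor: fills masks with 'x' at low confidence
    and keeps already-predicted tokens at high confidence."""
    out, conf = [], []
    for tok in tokens:
        if tok == MASK:
            out.append("x"); conf.append(0.5)
        else:
            out.append(tok); conf.append(0.9)
    return out, conf

def mask_predict(model, length, iterations=3):
    tokens = [MASK] * length
    for it in range(iterations):
        tokens, conf = model(tokens)
        # Linearly decay how many tokens get re-masked each iteration.
        n_mask = int(length * (1 - (it + 1) / iterations))
        for i in sorted(range(length), key=lambda i: conf[i])[:n_mask]:
            tokens[i] = MASK
    return tokens

print(mask_predict(toy_model, length=5))  # no masks remain after the last pass
```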

An Enhanced Prompt-Based LLM Reasoning Scheme via Knowledge Graph-Integrated Collaboration

no code implementations7 Feb 2024 Yihao Li, Ru Zhang, Jianyi Liu, Gongshen Liu

While Large Language Models (LLMs) demonstrate exceptional performance in a multitude of Natural Language Processing (NLP) tasks, they encounter challenges in practical applications, including issues with hallucinations, inadequate knowledge updating, and limited transparency in the reasoning process.

R-Judge: Benchmarking Safety Risk Awareness for LLM Agents

1 code implementation18 Jan 2024 Tongxin Yuan, Zhiwei He, Lingzhong Dong, Yiming Wang, Ruijie Zhao, Tian Xia, Lizhen Xu, Binglin Zhou, Fangqi Li, Zhuosheng Zhang, Rui Wang, Gongshen Liu

We introduce R-Judge, a benchmark crafted to evaluate the proficiency of LLMs in judging and identifying safety risks given agent interaction records.

Benchmarking

Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents

1 code implementation20 Nov 2023 Zhuosheng Zhang, Yao Yao, Aston Zhang, Xiangru Tang, Xinbei Ma, Zhiwei He, Yiming Wang, Mark Gerstein, Rui Wang, Gongshen Liu, Hai Zhao

Large language models (LLMs) have dramatically enhanced the field of language intelligence, as demonstrably evidenced by their formidable empirical performance across a spectrum of complex reasoning tasks.

UOR: Universal Backdoor Attacks on Pre-trained Language Models

no code implementations16 May 2023 Wei Du, Peixuan Li, Boqun Li, Haodong Zhao, Gongshen Liu

In this paper, we first summarize the requirements that a more threatening backdoor attack against PLMs should satisfy, and then propose a new backdoor attack method called UOR, which breaks the bottleneck of the previous approach by turning manual selection into automatic optimization.

Backdoor Attack Contrastive Learning +2

FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning

no code implementations25 Aug 2022 Haodong Zhao, Wei Du, Fangqi Li, Peixuan Li, Gongshen Liu

In this paper, we propose "FedPrompt" to study prompt tuning in a model split aggregation way using FL, and show that split aggregation greatly reduces the communication cost, to only 0.01% of the PLMs' parameters, with little decrease in accuracy on both IID and Non-IID data distributions.

Backdoor Attack · Data Poisoning +2
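The communication saving claimed above comes from aggregating only the soft-prompt parameters while the frozen PLM weights stay on each client. A minimal sketch of that prompt-only FedAvg step, with toy sizes that are assumptions rather than the paper's values:

```python
import numpy as np

PLM_PARAMS = 1_000_000    # frozen backbone parameters (never communicated)
PROMPT_SHAPE = (20, 16)   # 20 soft-prompt tokens x 16 dims (toy sizes)

rng = np.random.default_rng(0)
client_prompts = [rng.normal(size=PROMPT_SHAPE) for _ in range(4)]

# Server-side aggregation: plain FedAvg, but over prompt tensors only.
global_prompt = np.mean(client_prompts, axis=0)

communicated = global_prompt.size
fraction = communicated / (PLM_PARAMS + communicated)
print(f"communicated fraction: {fraction:.4%}")  # far below 1% of all parameters
```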

Few-Shot Table-to-Text Generation with Prefix-Controlled Generator

no code implementations COLING 2022 Yutao Luo, Menghua Lu, Gongshen Liu, Shilin Wang

To alleviate these problems, we propose a prompt-based approach, Prefix-Controlled Generator (i.e., PCG), for few-shot table-to-text generation.

Table-to-Text Generation

ASAP-Net: Attention and Structure Aware Point Cloud Sequence Segmentation

1 code implementation12 Aug 2020 Hanwen Cao, Yongyi Lu, Cewu Lu, Bo Pang, Gongshen Liu, Alan Yuille

In this paper, we further improve spatio-temporal point cloud feature learning with a flexible module called ASAP that considers both attention and structure information across frames, which we find to be two important factors for successful segmentation in dynamic point clouds.

Segmentation

MoTiAC: Multi-Objective Actor-Critics for Real-Time Bidding

no code implementations18 Feb 2020 Haolin Zhou, Chaoqi Yang, Xiaofeng Gao, Qiong Chen, Gongshen Liu, Guihai Chen

Online Real-Time Bidding (RTB) is a complex auction game in which advertisers compete to bid for ad impressions when a user request occurs.

Reinforcement Learning (RL)

Multiple Character Embeddings for Chinese Word Segmentation

no code implementations ACL 2019 Jingkang Wang, Jianing Zhou, Jie Zhou, Gongshen Liu

Chinese word segmentation (CWS) is often regarded as a character-based sequence labeling task in most current works which have achieved great success with the help of powerful neural networks.

Chinese Word Segmentation
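The character-based sequence labeling framing mentioned in the abstract is commonly realized with B/M/E/S tags (Begin/Middle/End/Single-character word). The sketch below illustrates that task framing only; it is not the paper's model.

```python
def words_to_bmes(words):
    """Convert a segmented sentence (list of words) into per-character
    B/M/E/S tags, the usual labeling scheme for CWS."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

print(words_to_bmes(["我", "喜欢", "自然语言"]))
# ['S', 'B', 'E', 'B', 'M', 'M', 'E']
```

A tagger trained on such labels recovers the segmentation by cutting after every `E` or `S` tag.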

Sliced Recurrent Neural Networks

3 code implementations COLING 2018 Zeping Yu, Gongshen Liu

In this paper, we introduce sliced recurrent neural networks (SRNNs), which could be parallelized by slicing the sequences into many subsequences.

Sentiment Analysis
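The slicing idea described above can be sketched in a few lines: cut the sequence into subsequences, reduce each slice independently (the parallelizable part), then run the same recurrence over the slice summaries. The `step` function here is a toy stand-in for an RNN cell, not the SRNN architecture itself.

```python
def step(state, x):
    """Hypothetical recurrent update standing in for an RNN cell."""
    return 0.5 * state + x

def rnn_reduce(xs, init=0.0):
    state = init
    for x in xs:
        state = step(state, x)
    return state

def sliced_rnn(xs, n_slices):
    size = len(xs) // n_slices
    slices = [xs[i * size:(i + 1) * size] for i in range(n_slices)]
    summaries = [rnn_reduce(s) for s in slices]  # independent -> parallelizable
    return rnn_reduce(summaries)                 # top-level recurrence

seq = [1.0] * 8
print(sliced_rnn(seq, n_slices=4))  # 2.8125
```

The sequential critical path drops from the full sequence length to (slice length + number of slices), which is the source of the speedup the paper reports.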

Modeling Multi-turn Conversation with Deep Utterance Aggregation

1 code implementation COLING 2018 Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, Gongshen Liu

In this paper, we formulate previous utterances into context using a proposed deep utterance aggregation model to form a fine-grained context representation.

Conversational Response Selection · Retrieval
