Search Results for author: Linyang Li

Found 37 papers, 23 papers with code

Towards Adversarially Robust Text Classifiers by Learning to Reweight Clean Examples

no code implementations · Findings (ACL) 2022 · Jianhan Xu, Cenyuan Zhang, Xiaoqing Zheng, Linyang Li, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Most existing defense methods improve adversarial robustness by adapting models to a training set augmented with adversarial examples.

Adversarial Robustness
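
A minimal sketch of the setup this abstract describes, combining adversarial examples with reweighted clean examples; the per-example weights and the combined objective below are our illustration of the title's idea, not the paper's exact method.

```python
import torch.nn.functional as F

def reweighted_mixed_loss(model, clean_x, clean_y, adv_x, adv_y, clean_w):
    """Cross-entropy on adversarial examples plus per-example
    reweighted cross-entropy on clean examples (hypothetical form)."""
    loss_adv = F.cross_entropy(model(adv_x), adv_y)
    per_example = F.cross_entropy(model(clean_x), clean_y, reduction="none")
    loss_clean = (clean_w * per_example).mean()  # learned weights on clean data
    return loss_adv + loss_clean
```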

AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling

no code implementations · 19 Feb 2024 · Jun Zhan, Junqi Dai, Jiasheng Ye, Yunhua Zhou, Dong Zhang, Zhigeng Liu, Xin Zhang, Ruibin Yuan, Ge Zhang, Linyang Li, Hang Yan, Jie Fu, Tao Gui, Tianxiang Sun, Yugang Jiang, Xipeng Qiu

We introduce AnyGPT, an any-to-any multimodal language model that utilizes discrete representations for the unified processing of various modalities, including speech, text, images, and music.

Language Modelling · Large Language Model

Turn Waste into Worth: Rectifying Top-$k$ Router of MoE

no code implementations · 17 Feb 2024 · Zhiyuan Zeng, Qipeng Guo, Zhaoye Fei, Zhangyue Yin, Yunhua Zhou, Linyang Li, Tianxiang Sun, Hang Yan, Dahua Lin, Xipeng Qiu

To address the dropped tokens and padding, we propose the Rectify-Router, comprising the Intra-GPU Rectification and the Fill-in Rectification.

Computational Efficiency
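
To make the "dropped tokens and padding" concrete, here is a toy version of vanilla top-k routing with a fixed expert capacity; the Rectify-Router itself (Intra-GPU Rectification and Fill-in Rectification) is not reproduced here.

```python
import torch

def vanilla_topk_route(router_logits, k=2, capacity=4):
    """Each token picks its top-k experts; tokens past an expert's
    capacity are dropped and under-full experts are padded -- the
    waste the Rectify-Router is designed to reclaim."""
    num_tokens, num_experts = router_logits.shape
    topk_idx = router_logits.topk(k, dim=-1).indices
    assignments = [[] for _ in range(num_experts)]
    dropped = []
    for t in range(num_tokens):
        for e in topk_idx[t].tolist():
            if len(assignments[e]) < capacity:
                assignments[e].append(t)
            else:
                dropped.append((t, e))  # this token-expert route is discarded
    padding = [capacity - len(a) for a in assignments]  # slots padded with zeros
    return assignments, dropped, padding
```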

InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning

1 code implementation · 9 Feb 2024 · Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma, Jiawei Hong, Kuikun Liu, Ziyi Wang, Yudong Wang, Zijian Wu, Shuaibin Li, Fengzhe Zhou, Hongwei Liu, Songyang Zhang, Wenwei Zhang, Hang Yan, Xipeng Qiu, Jiayu Wang, Kai Chen, Dahua Lin

We further explore how to use LEAN to solve math problems and study its performance under a multi-task learning setting, which shows the possibility of using LEAN as a unified platform for both solving and proving in math.

Data Augmentation · GSM8K · +3

Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora

no code implementations · 26 Jan 2024 · Zhaoye Fei, Yunfan Shao, Linyang Li, Zhiyuan Zeng, Conghui He, Hang Yan, Dahua Lin, Xipeng Qiu

Large language models have demonstrated remarkable potential in various tasks; however, there remains a significant scarcity of open-source models and data for specific domains.

Language Modelling · Large Language Model

Can AI Assistants Know What They Don't Know?

1 code implementation · 24 Jan 2024 · Qinyuan Cheng, Tianxiang Sun, Xiangyang Liu, Wenwei Zhang, Zhangyue Yin, ShiMin Li, Linyang Li, Zhengfu He, Kai Chen, Xipeng Qiu

To answer this question, we construct a model-specific "I don't know" (Idk) dataset for an assistant, which contains its known and unknown questions, based on existing open-domain question answering datasets.

Math · Open-Domain Question Answering · +1
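
A sketch of how such an Idk dataset could be derived from an existing QA set; the sampling count, substring match test, and threshold are our placeholders, not the paper's protocol.

```python
def build_idk_dataset(assistant, qa_pairs, n_samples=10, threshold=0.5):
    """Mark a question 'known' if the assistant answers it correctly
    often enough across samples, otherwise 'unknown'."""
    records = []
    for question, gold in qa_pairs:
        answers = [assistant(question) for _ in range(n_samples)]
        accuracy = sum(gold.lower() in a.lower() for a in answers) / n_samples
        records.append({"question": question,
                        "label": "known" if accuracy >= threshold else "unknown"})
    return records
```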

InferAligner: Inference-Time Alignment for Harmlessness through Cross-Model Guidance

1 code implementation · 20 Jan 2024 · Pengyu Wang, Dong Zhang, Linyang Li, Chenkun Tan, Xinghao Wang, Ke Ren, Botian Jiang, Xipeng Qiu

With the rapid development of large language models (LLMs), they are not only used as general-purpose AI assistants but are also customized through further fine-tuning to meet the requirements of different applications.

Super-Resolution on Rotationally Scanned Photoacoustic Microscopy Images Incorporating Scanning Prior

1 code implementation · 12 Dec 2023 · Kai Pan, Linyang Li, Li Lin, Pujin Cheng, Junyan Lyu, Lei Xi, Xiaoyin Tang

Recently, there has been a trend to incorporate deep learning into the scanning process to further increase the scanning speed. Yet most such attempts address raster scanning, while those for rotational scanning remain relatively rare.

Super-Resolution

LLatrieval: LLM-Verified Retrieval for Verifiable Generation

1 code implementation · 14 Nov 2023 · Xiaonan Li, Changtai Zhu, Linyang Li, Zhangyue Yin, Tianxiang Sun, Xipeng Qiu

Thus, the LLM can iteratively provide feedback to the retriever, steering the retrieval results toward fully supporting verifiable generation.

Language Modelling · Large Language Model · +1
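
The iterative feedback loop reads roughly like the following sketch; `retrieve`, `llm_verify`, and `update_query` are hypothetical callables standing in for the paper's components.

```python
def llm_verified_retrieval(question, retrieve, llm_verify, update_query,
                           max_rounds=3):
    """Retrieve, ask the LLM whether the documents fully support a
    verifiable answer, and refine the query until they do."""
    query = question
    docs = []
    for _ in range(max_rounds):
        docs = retrieve(query)
        verdict, feedback = llm_verify(question, docs)
        if verdict == "supported":
            break
        query = update_query(question, docs, feedback)
    return docs
```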

Watermarking LLMs with Weight Quantization

1 code implementation · 17 Oct 2023 · Linyang Li, Botian Jiang, Pengyu Wang, Ke Ren, Hang Yan, Xipeng Qiu

Abuse of large language models poses high risks, as these models are being deployed at an astonishing speed.

Language Modelling · Large Language Model · +1

Character-LLM: A Trainable Agent for Role-Playing

1 code implementation · 16 Oct 2023 · Yunfan Shao, Linyang Li, Junqi Dai, Xipeng Qiu

Large language models (LLMs) can serve as agents that simulate human behaviors, given their powerful ability to understand human instructions and generate high-quality text.

PerturbScore: Connecting Discrete and Continuous Perturbations in NLP

1 code implementation · 13 Oct 2023 · Linyang Li, Ke Ren, Yunfan Shao, Pengyu Wang, Xipeng Qiu

Through experiments, we find that a connection can be built between discrete and continuous perturbations, and that the proposed PerturbScore learns this correlation, surpassing previous methods for measuring discrete perturbations.
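
As a toy illustration of learning such a correlation, a small regressor could map encoded (original, perturbed) text pairs to the magnitude of an equivalent continuous perturbation; the architecture below is our placeholder, not the paper's model.

```python
import torch
import torch.nn as nn

class PerturbScorer(nn.Module):
    """Regress the size of a continuous embedding perturbation from a
    discrete text perturbation, represented by the two encodings."""
    def __init__(self, dim=768):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, 1))

    def forward(self, h_orig, h_pert):
        # Concatenate the encodings of the clean and perturbed text.
        return self.head(torch.cat([h_orig, h_pert], dim=-1)).squeeze(-1)
```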

Multijugate Dual Learning for Low-Resource Task-Oriented Dialogue System

no code implementations · 25 May 2023 · ShiMin Li, Xiaotian Zhang, Yanjun Zheng, Linyang Li, Xipeng Qiu

Dialogue data in real scenarios tend to be sparse, leaving data-starved end-to-end dialogue systems inadequately trained.

Task-Oriented Dialogue Systems

Improving Contrastive Learning of Sentence Embeddings from AI Feedback

1 code implementation · 3 May 2023 · Qinyuan Cheng, Xiaogui Yang, Tianxiang Sun, Linyang Li, Xipeng Qiu

Our method utilizes AI feedback from large pre-trained language models (LLMs) to construct sample pairs with fine-grained sample similarity scores to improve contrastive learning.

Contrastive Learning · Data Augmentation · +5
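
One way to use graded similarity scores in a contrastive objective is as soft targets over in-batch pairs, sketched below; this is our reading of "fine-grained sample similarity scores", not necessarily the paper's exact loss.

```python
import torch.nn.functional as F

def soft_contrastive_loss(embeddings, sim_scores, temperature=0.05):
    """InfoNCE-style loss where LLM-rated similarity scores (one row
    per anchor, over the batch) replace hard positive/negative labels."""
    z = F.normalize(embeddings, dim=-1)
    logits = z @ z.T / temperature           # pairwise cosine similarities
    targets = F.softmax(sim_scores, dim=-1)  # graded soft labels per anchor
    # Diagonal self-pairs are kept for simplicity in this sketch.
    return F.cross_entropy(logits, targets)
```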

Origin Tracing and Detecting of LLMs

no code implementations · 27 Apr 2023 · Linyang Li, Pengyu Wang, Ke Ren, Tianxiang Sun, Xipeng Qiu

The extraordinary performance of large language models (LLMs) heightens the importance of detecting whether the context is generated by an AI system.

Mitigating Negative Style Transfer in Hybrid Dialogue System

1 code implementation · 14 Dec 2022 · ShiMin Li, Qinyuan Cheng, Linyang Li, Xipeng Qiu

As the functionality of dialogue systems evolves, hybrid dialogue systems that accomplish user-specific goals and participate in open-topic chitchat with users are attracting growing attention.

Contrastive Learning · Style Transfer

Is MultiWOZ a Solved Task? An Interactive TOD Evaluation Framework with User Simulator

1 code implementation · 26 Oct 2022 · Qinyuan Cheng, Linyang Li, Guofeng Quan, Feng Gao, Xiaofeng Mou, Xipeng Qiu

In addition, we introduce a sentence-level and a session-level score to measure sentence fluency and session coherence in the interactive evaluation.

Sentence

Text Adversarial Purification as Defense against Adversarial Attacks

no code implementations · 27 Mar 2022 · Linyang Li, Demin Song, Xipeng Qiu

Adversarial purification is a successful defense mechanism against adversarial attacks without requiring knowledge of the form of the incoming attack.

Adversarial Attack · Adversarial Defense

KNN-BERT: Fine-Tuning Pre-Trained Models with KNN Classifier

1 code implementation · 6 Oct 2021 · Linyang Li, Demin Song, Ruotian Ma, Xipeng Qiu, Xuanjing Huang

Pre-trained models are widely fine-tuned on downstream tasks with linear classifiers optimized by the cross-entropy loss, which can face robustness and stability problems.

Contrastive Learning · text-classification · +1
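
At inference, replacing the linear softmax head with a KNN vote over stored training embeddings looks roughly like this sketch; cosine similarity and majority voting are our choices, not necessarily the paper's.

```python
import torch.nn.functional as F

def knn_classify(query_emb, memory_embs, memory_labels, k=8):
    """Predict by majority vote among the k training examples whose
    embeddings are closest to the query, instead of a linear head."""
    q = F.normalize(query_emb, dim=-1)
    m = F.normalize(memory_embs, dim=-1)
    neighbors = (m @ q).topk(k).indices   # k most similar stored examples
    return memory_labels[neighbors].mode().values
```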

Template-free Prompt Tuning for Few-shot NER

1 code implementation · NAACL 2022 · Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Linyang Li, Qi Zhang, Xuanjing Huang

Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostly owing to the sophisticated design of templates and label words.

Few-Shot Learning · Few-shot NER · +1

Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning

no code implementations · EMNLP 2021 · Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, Xipeng Qiu

Pre-Trained Models (PTMs) have been widely applied and recently proved vulnerable to backdoor attacks: the released pre-trained weights can be maliciously poisoned with certain triggers.

text-classification · Text Classification

Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution

1 code implementation · EMNLP 2021 · Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, Cho-Jui Hsieh

Recent studies have shown that deep neural networks are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.

Benchmarking

SENT: Sentence-level Distant Relation Extraction via Negative Training

1 code implementation · ACL 2021 · Ruotian Ma, Tao Gui, Linyang Li, Qi Zhang, Yaqian Zhou, Xuanjing Huang

In this work, we propose the use of negative training (NT), in which a model is trained using complementary labels indicating that "the instance does not belong to these complementary labels".

Relation · Relation Extraction · +1
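
The complementary-label objective has a simple form: instead of maximizing the probability of a possibly noisy label, it minimizes the probability of a label the instance is known not to have. A minimal PyTorch version of that loss:

```python
import torch.nn.functional as F

def negative_training_loss(logits, complementary_labels):
    """NT loss: -log(1 - p_{y_bar}) pushes probability mass away from
    the complementary label y_bar rather than toward a noisy label."""
    probs = F.softmax(logits, dim=-1)
    p_comp = probs.gather(1, complementary_labels.unsqueeze(1)).squeeze(1)
    return -(1.0 - p_comp + 1e-12).log().mean()
```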

Certified Robustness to Text Adversarial Attacks by Randomized [MASK]

1 code implementation · 8 May 2021 · Jiehang Zeng, Xiaoqing Zheng, Jianhan Xu, Linyang Li, Liping Yuan, Xuanjing Huang

Recently, a few certified defense methods have been developed to provably guarantee the robustness of a text classifier to adversarial synonym substitutions.
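
The core randomization is simple to sketch: classify many randomly masked copies of the input and vote, so that an adversarial substitution is masked out in most copies. The certification step (bounding how the votes can shift) is the paper's contribution and is omitted here; the mask rate and vote count below are arbitrary.

```python
import random

def random_mask(tokens, mask_rate=0.3, mask_token="[MASK]"):
    """Independently replace each token with [MASK] at a fixed rate."""
    return [mask_token if random.random() < mask_rate else t for t in tokens]

def vote_predict(classify, tokens, n_votes=100):
    """Majority vote over randomly masked copies of the input."""
    votes = [classify(random_mask(tokens)) for _ in range(n_votes)]
    return max(set(votes), key=votes.count)
```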

COOKIE: Contrastive Cross-Modal Knowledge Sharing Pre-Training for Vision-Language Representation

1 code implementation · ICCV 2021 · Keyu Wen, Jin Xia, Yuanyuan Huang, Linyang Li, Jiayan Xu, Jie Shao

There are two key designs: one is the weight-sharing transformer on top of the visual and textual encoders to align text and images semantically; the other is three kinds of contrastive learning designed to share knowledge between different modalities.

Contrastive Learning · Cross-Modal Retrieval · +3

Generating Adversarial Examples in Chinese Texts Using Sentence-Pieces

no code implementations · 29 Dec 2020 · Linyang Li, Yunfan Shao, Demin Song, Xipeng Qiu, Xuanjing Huang

The substitutions in the generated adversarial examples are not characters or words but 'pieces', which are more natural to Chinese readers.

Language Modelling · Sentence

TAVAT: Token-Aware Virtual Adversarial Training for Language Understanding

1 code implementation · 30 Apr 2020 · Linyang Li, Xipeng Qiu

Gradient-based adversarial training is widely used to improve the robustness of neural networks, but it cannot be easily adapted to natural language processing tasks since the input space of text is discrete.

Natural Language Understanding · text-classification · +1
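
The standard workaround the abstract alludes to is to perturb the continuous embeddings rather than the discrete tokens; a single FGM-style step is sketched below (token-aware accumulation, TAVAT's refinement, is omitted).

```python
import torch

def embedding_adv_step(embeddings, loss, epsilon=1.0):
    """One gradient-based adversarial step in embedding space: move
    each token embedding along its normalized loss gradient.
    Assumes `embeddings` has requires_grad=True and `loss` was
    computed from it."""
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    norm = grad.norm(dim=-1, keepdim=True).clamp_min(1e-12)
    return embeddings + epsilon * grad / norm
```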

BERT-ATTACK: Adversarial Attack Against BERT Using BERT

4 code implementations · EMNLP 2020 · Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, Xipeng Qiu

Adversarial attacks on discrete data (such as text) have proved significantly more challenging than on continuous data (such as images), since it is difficult to generate adversarial samples with gradient-based methods.

Adversarial Attack
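
A single substitution step in the spirit of the method (mask a position, let a masked LM propose candidates, keep one that flips the victim model) can be sketched as below; `mlm_candidates` and `classify` are hypothetical stand-ins for the BERT proposer and the victim classifier.

```python
def attack_position(tokens, position, mlm_candidates, classify, top_k=5):
    """Try masked-LM-proposed substitutes at one position until the
    victim classifier's prediction flips."""
    original_pred = classify(tokens)
    masked = tokens[:position] + ["[MASK]"] + tokens[position + 1:]
    for candidate in mlm_candidates(masked, position)[:top_k]:
        attacked = tokens[:position] + [candidate] + tokens[position + 1:]
        if classify(attacked) != original_pred:
            return attacked  # adversarial example found
    return None  # this position failed; try the next most vulnerable one
```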

Quantum anomalous Hall effect in stable 1T-YN$_2$ monolayer with a large nontrivial band gap and high Chern number

no code implementations · 6 Jul 2017 · Xiangru Kong, Linyang Li, Ortwin Leenaerts, Weiyang Wang, Xiong-Jun Liu, François M. Peeters

The quantum anomalous Hall (QAH) effect is a topologically nontrivial phase, characterized by a non-zero Chern number defined in the bulk and chiral edge states in the boundary.

Materials Science
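
For reference, the bulk invariant mentioned in the abstract is the Brillouin-zone integral of the Berry curvature (the standard textbook definition, not specific to this paper):

```latex
C = \frac{1}{2\pi} \int_{\mathrm{BZ}} \Omega(\mathbf{k})\, d^2k,
\qquad
\Omega(\mathbf{k}) = \partial_{k_x} A_y - \partial_{k_y} A_x,
\quad
A_i = \langle u_{\mathbf{k}} \,|\, i\,\partial_{k_i} \,|\, u_{\mathbf{k}} \rangle .
```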

New Group V Elemental Bilayers: A Tunable Structure Model with 4,6,8-atom Rings

1 code implementation · 10 Mar 2017 · Xiangru Kong, Linyang Li, Ortwin Leenaerts, Xiong-Jun Liu, François M. Peeters

Using first-principles calculations, we propose a series of new elemental bilayers with group V elements (Bi, Sb, As).

Materials Science
