Search Results for author: Linli Xu

Found 26 papers, 5 papers with code

Hierarchical Multi-label Text Classification with Horizontal and Vertical Category Correlations

no code implementations EMNLP 2021 Linli Xu, Sijie Teng, Ruoyu Zhao, Junliang Guo, Chi Xiao, Deqiang Jiang, Bo Ren

Hierarchical multi-label text classification (HMTC) deals with the challenging task where an instance can be assigned to multiple hierarchically structured categories at the same time.

Multi-Label Text Classification +1
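One simple way to use the vertical (parent-child) category correlations the paper's title refers to is to post-process multi-label scores so that a child label is only predicted when its parent is. The sketch below is an illustrative heuristic, not the paper's model; all names and the threshold are hypothetical.

```python
def depth(label, parent):
    """Distance from a label to the root of its hierarchy."""
    d = 0
    while parent.get(label) is not None:
        label = parent[label]
        d += 1
    return d

def enforce_hierarchy(scores, parent, threshold=0.5):
    """Keep a label only if its score clears the threshold AND its
    parent (if any) was itself predicted, so the output respects the
    category hierarchy."""
    predicted = set()
    # visit labels so that parents are decided before their children
    for label in sorted(scores, key=lambda l: depth(l, parent)):
        p = parent.get(label)
        if scores[label] >= threshold and (p is None or p in predicted):
            predicted.add(label)
    return predicted
```

For example, with `parent = {"sports": None, "soccer": "sports"}`, a confident "soccer" score is discarded whenever "sports" falls below the threshold, which is one crude way to keep hierarchical predictions consistent.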

Semantic-Preserving Abstractive Text Summarization with Siamese Generative Adversarial Net

no code implementations Findings (NAACL) 2022 Xin Sheng, Linli Xu, Yinlong Xu, Deqiang Jiang, Bo Ren

We propose a novel siamese generative adversarial net for abstractive text summarization (SSPGAN), which can preserve the main semantics of the source text.

Abstractive Text Summarization

CoCGAN: Contrastive Learning for Adversarial Category Text Generation

no code implementations COLING 2022 Xin Sheng, Linli Xu, Yinlong Xu, Changcun Bao, Huang Chen, Bo Ren

The discriminator of CoCGAN discriminates the authenticity of given samples and optimizes a contrastive learning objective to capture both more flexible data-to-class relations and data-to-data relations among training samples.

Contrastive Learning Text Generation
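The data-to-class and data-to-data relations mentioned in the abstract are the hallmark of supervised contrastive objectives. The following is a generic supervised-contrastive loss sketch (pure Python for clarity), not CoCGAN's exact discriminator objective; the function name and temperature are assumptions.

```python
import math

def sup_contrastive_loss(embeddings, labels, temperature=0.5):
    """Supervised contrastive loss: pulls together samples that share a
    class (data-to-class relations) while contrasting each sample
    against all others in the batch (data-to-data relations)."""
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    z = [normalize(v) for v in embeddings]
    n = len(z)
    # pairwise cosine similarities scaled by temperature
    sim = [[sum(a * b for a, b in zip(z[i], z[j])) / temperature
            for j in range(n)] for i in range(n)]
    losses = []
    for i in range(n):
        denom = sum(math.exp(sim[i][j]) for j in range(n) if j != i)
        positives = [j for j in range(n) if labels[j] == labels[i] and j != i]
        if positives:
            losses.append(-sum(sim[i][j] - math.log(denom)
                               for j in positives) / len(positives))
    return sum(losses) / len(losses)
```

With correct labels, same-class pairs are similar and the loss is low; shuffling the labels raises it, which is the signal a contrastive discriminator can optimize alongside real/fake discrimination.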

Communication-Efficient Distributed Learning with Local Immediate Error Compensation

no code implementations19 Feb 2024 Yifei Cheng, Li Shen, Linli Xu, Xun Qian, Shiwei Wu, Yiming Zhou, Tie Zhang, DaCheng Tao, Enhong Chen

However, existing compression methods either perform only unidirectional compression in each iteration, incurring a higher communication cost, or perform bidirectional compression at the price of a slower convergence rate.
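Error compensation (error feedback) is the standard remedy for the information lost by gradient compression: the residual dropped in one round is added back before compressing the next. The sketch below shows generic top-k compression with error feedback, not the paper's specific local immediate compensation scheme; all names are hypothetical.

```python
def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries (a common gradient
    compressor); the rest are zeroed and returned as residual error."""
    keep = set(sorted(range(len(grad)),
                      key=lambda i: abs(grad[i]), reverse=True)[:k])
    compressed = [g if i in keep else 0.0 for i, g in enumerate(grad)]
    residual = [g - c for g, c in zip(grad, compressed)]
    return compressed, residual

def compensated_step(grad, error, k):
    """Error-feedback step: fold the previous round's compression error
    back into the gradient before compressing, so dropped coordinates
    accumulate instead of being lost."""
    corrected = [g + e for g, e in zip(grad, error)]
    return topk_compress(corrected, k)
```

Each worker keeps its own `error` buffer and feeds the returned residual into the next call, so every coordinate is eventually transmitted.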

Communication-Efficient Personalized Federated Learning for Speech-to-Text Tasks

no code implementations18 Jan 2024 Yichao Du, Zhirui Zhang, Linan Yue, Xu Huang, Yuqing Zhang, Tong Xu, Linli Xu, Enhong Chen

To protect privacy and meet legal regulations, federated learning (FL) has gained significant attention for training speech-to-text (S2T) systems, including automatic speech recognition (ASR) and speech translation (ST).

Automatic Speech Recognition (ASR) +2

DiffS2UT: A Semantic Preserving Diffusion Model for Textless Direct Speech-to-Speech Translation

no code implementations26 Oct 2023 Yongxin Zhu, Zhujin Gao, Xinyuan Zhou, Zhongyi Ye, Linli Xu

While Diffusion Generative Models have achieved great success on image generation tasks, how to efficiently and effectively incorporate them into speech generation, especially translation tasks, remains a non-trivial problem.

Image Generation Speech-to-Speech Translation +1

Multi-Grained Multimodal Interaction Network for Entity Linking

1 code implementation19 Jul 2023 Pengfei Luo, Tong Xu, Shiwei Wu, Chen Zhu, Linli Xu, Enhong Chen

Then, to derive the similarity matching score for each mention-entity pair, we devise three interaction units to comprehensively explore the intra-modal interaction and inter-modal fusion among the features of entities and mentions.

Contrastive Learning Descriptive +1

End-to-End Word-Level Pronunciation Assessment with MASK Pre-training

no code implementations5 Jun 2023 Yukang Liang, Kaitao Song, Shaoguang Mao, Huiqiang Jiang, Luna Qiu, Yuqing Yang, Dongsheng Li, Linli Xu, Lili Qiu

Pronunciation assessment is a major challenge in computer-aided pronunciation training systems, especially at the word (phoneme) level.

Locate Then Generate: Bridging Vision and Language with Bounding Box for Scene-Text VQA

no code implementations4 Apr 2023 Yongxin Zhu, Zhen Liu, Yukang Liang, Xin Li, Hao Liu, Changcun Bao, Linli Xu

Different from conventional STVQA models, which treat the linguistic semantics and visual semantics of scene text as two separate features, in this paper we propose a paradigm of "Locate Then Generate" (LTG), which explicitly unifies these two semantics with the spatial bounding box as a bridge connecting them.

Answer Generation Language Modelling +3

Difformer: Empowering Diffusion Models on the Embedding Space for Text Generation

1 code implementation19 Dec 2022 Zhujin Gao, Junliang Guo, Xu Tan, Yongxin Zhu, Fang Zhang, Jiang Bian, Linli Xu

Diffusion models have achieved state-of-the-art synthesis quality on both visual and audio tasks, and recent works further adapt them to textual data by diffusing on the embedding space.

Denoising Machine Translation +2
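Diffusing on the embedding space means the forward process corrupts continuous token embeddings rather than pixels: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise under a noise schedule. The sketch below is a minimal forward-diffusion step with an assumed linear beta schedule, not Difformer's actual implementation.

```python
import math
import random

def forward_diffuse(embedding, t, T=1000, beta_start=1e-4, beta_end=0.02, rng=None):
    """Corrupt a token embedding at step t of a linear noise schedule:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_s)."""
    rng = rng or random.Random(0)
    alpha_bar = 1.0
    for s in range(t + 1):
        beta = beta_start + (beta_end - beta_start) * s / (T - 1)
        alpha_bar *= 1.0 - beta
    return [math.sqrt(alpha_bar) * x + math.sqrt(1.0 - alpha_bar) * rng.gauss(0, 1)
            for x in embedding]
```

At small t the output stays close to the clean embedding; at t near T it is almost pure Gaussian noise, and the model is trained to invert this corruption, then decode the denoised embeddings back to tokens.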

Sequence-to-Action: Grammatical Error Correction with Action Guided Sequence Generation

no code implementations22 May 2022 Jiquan Li, Junliang Guo, Yongxin Zhu, Xin Sheng, Deqiang Jiang, Bo Ren, Linli Xu

The task of Grammatical Error Correction (GEC) has received remarkable attention with wide applications in Natural Language Processing (NLP) in recent years.

Grammatical Error Correction Sentence

Towards Variable-Length Textual Adversarial Attacks

no code implementations16 Apr 2021 Junliang Guo, Zhirui Zhang, Linlin Zhang, Linli Xu, Boxing Chen, Enhong Chen, Weihua Luo

In this way, our approach is able to more comprehensively find adversarial examples around the decision boundary and effectively conduct adversarial attacks.

Machine Translation Translation

Incorporating BERT into Parallel Sequence Decoding with Adapters

1 code implementation NeurIPS 2020 Junliang Guo, Zhirui Zhang, Linli Xu, Hao-Ran Wei, Boxing Chen, Enhong Chen

Our framework is based on a parallel sequence decoding algorithm named Mask-Predict, considering the bidirectional and conditionally independent nature of BERT, and can be easily adapted to traditional autoregressive decoding.

Machine Translation Natural Language Understanding +2
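Mask-Predict decodes all target positions in parallel: start from a fully masked sequence, predict every token at once, then repeatedly re-mask the lowest-confidence positions and re-predict them conditioned on the rest. A minimal sketch of that loop, with `predict_fn` as a stand-in for the real conditional model:

```python
def mask_predict(predict_fn, length, iterations=3, mask="<mask>"):
    """Mask-Predict decoding sketch. `predict_fn(tokens)` returns a
    (token, confidence) pair for every position in parallel."""
    tokens = [mask] * length
    for it in range(iterations):
        preds = predict_fn(tokens)
        tokens = [tok for tok, _ in preds]
        # linearly decaying number of positions to re-mask
        n_mask = int(length * (iterations - 1 - it) / iterations)
        if n_mask == 0:
            break
        # re-mask the n_mask lowest-confidence positions
        order = sorted(range(length), key=lambda i: preds[i][1])
        for i in order[:n_mask]:
            tokens[i] = mask
    return tokens
```

Each iteration keeps the tokens the model is most sure of and gives the uncertain ones another chance, which is why a handful of iterations usually suffices.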

Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation

no code implementations ACL 2020 Junliang Guo, Linli Xu, Enhong Chen

In this work, we introduce a jointly masked sequence-to-sequence model and explore its application to non-autoregressive neural machine translation (NAT).

Language Modelling Machine Translation +1

STL-SGD: Speeding Up Local SGD with Stagewise Communication Period

no code implementations11 Jun 2020 Shuheng Shen, Yifei Cheng, Jingchang Liu, Linli Xu

Distributed parallel stochastic gradient descent algorithms are workhorses for large scale machine learning tasks.
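In local SGD, each worker takes several local steps between synchronizations, and a stagewise schedule lengthens the communication period as training progresses to cut communication further. The driver below is an illustrative two-worker sketch under those assumptions, not the STL-SGD algorithm itself; all names are hypothetical.

```python
def local_sgd(grad_fn, w0, stages, lr=0.1, n_workers=2):
    """Local SGD with a stagewise communication period: in each stage,
    every worker takes `period` local gradient steps, then all local
    models are averaged (one communication round per stage).
    `grad_fn(worker_id, w)` returns that worker's gradient at w."""
    models = [list(w0) for _ in range(n_workers)]
    comms = 0
    for period in stages:                      # e.g. [1, 2, 4]: growing period
        for _ in range(period):
            for k in range(n_workers):
                g = grad_fn(k, models[k])
                models[k] = [w - lr * gi for w, gi in zip(models[k], g)]
        # synchronize: average the local models
        avg = [sum(ws) / n_workers for ws in zip(*models)]
        models = [list(avg) for _ in range(n_workers)]
        comms += 1
    return avg, comms
```

With 7 total steps but only 3 synchronizations, the schedule trades a little staleness between workers for a large reduction in communication rounds.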

Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation

2 code implementations20 Nov 2019 Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, Tie-Yan Liu

Non-autoregressive translation (NAT) models remove the dependence on previous target tokens and generate all target tokens in parallel, resulting in significant inference speedup but at the cost of inferior translation accuracy compared to autoregressive translation (AT) models.

Machine Translation Translation

Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent

no code implementations28 Jun 2019 Shuheng Shen, Linli Xu, Jingchang Liu, Xianfeng Liang, Yifei Cheng

Nevertheless, although distributed stochastic gradient descent (SGD) algorithms can achieve a linear iteration speedup, in practice they are significantly limited by the communication cost, making it difficult to achieve a linear time speedup.

Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input

no code implementations23 Dec 2018 Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, Tie-Yan Liu

Non-autoregressive translation (NAT) models, which remove the dependence on previous target tokens from the inputs of the decoder, achieve a significant inference speedup but at the cost of inferior accuracy compared to autoregressive translation (AT) models.

Machine Translation Sentence +2

Asynchronous Stochastic Composition Optimization with Variance Reduction

no code implementations15 Nov 2018 Shuheng Shen, Linli Xu, Jingchang Liu, Junliang Guo, Qing Ling

Composition optimization has drawn a lot of attention in a wide variety of machine learning domains from risk management to reinforcement learning.

Management

Accelerating Stochastic Gradient Descent Using Antithetic Sampling

no code implementations7 Oct 2018 Jingchang Liu, Linli Xu

(Mini-batch) Stochastic Gradient Descent is a popular optimization method that has been applied in many machine learning settings.

Binary Classification General Classification
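Antithetic sampling reduces estimator variance by drawing negatively correlated sample pairs. The sketch below shows the classic Monte-Carlo version with pairs (u, 1-u); it illustrates the principle the paper applies to mini-batch sampling in SGD, not the paper's sampling scheme itself.

```python
import random

def antithetic_estimate(f, n_pairs, rng):
    """Monte-Carlo mean of f over Uniform(0, 1) using antithetic pairs
    (u, 1 - u).  When f is monotone, f(u) and f(1 - u) are negatively
    correlated, so the paired average has lower variance than the same
    number of independent draws."""
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += (f(u) + f(1.0 - u)) / 2.0
    return total / n_pairs
```

For a gradient estimator, the same idea means pairing samples whose gradients tend to point in opposing directions so their noise partially cancels in the mini-batch average.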

How Images Inspire Poems: Generating Classical Chinese Poetry from Images with Memory Networks

no code implementations8 Mar 2018 Linli Xu, Liang Jiang, Chuan Qin, Zhe Wang, Dongfang Du

Generating poetry from images is much more challenging than generating poetry from text, since images contain very rich visual information which cannot be described completely using several keywords, and a good poem should convey the image accurately.

Enhancing Network Embedding with Auxiliary Information: An Explicit Matrix Factorization Perspective

2 code implementations11 Nov 2017 Junliang Guo, Linli Xu, Xunpeng Huang, Enhong Chen

In this paper, we take a matrix factorization perspective of network embedding, and incorporate structure, content and label information of the network simultaneously.

Link Prediction Network Embedding +1
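The matrix factorization view of network embedding treats node vectors as factors of a structure matrix. As a toy illustration of the principle only, the leading eigenvector of the adjacency matrix (via power iteration) gives a one-dimensional embedding; the paper factorizes a richer matrix that also encodes content and label information.

```python
def rank1_embedding(adj, iters=100):
    """Power iteration on a symmetric adjacency matrix: the dominant
    eigenvector serves as a 1-dimensional node embedding, with
    well-connected nodes receiving larger values."""
    n = len(adj)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

On a triangle with one pendant node, the hub (highest-degree node) gets the largest embedding value and the two symmetric triangle nodes get equal values, showing how the factorization reflects structure.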

Make Workers Work Harder: Decoupled Asynchronous Proximal Stochastic Gradient Descent

no code implementations21 May 2016 Yitan Li, Linli Xu, Xiaowei Zhong, Qing Ling

Asynchronous parallel optimization algorithms for solving large-scale machine learning problems have drawn significant attention from academia to industry recently.

Relaxed Clipping: A Global Training Method for Robust Regression and Classification

no code implementations NeurIPS 2010 Min Yang, Linli Xu, Martha White, Dale Schuurmans, Yao-Liang Yu

We present a generic procedure that can be applied to standard loss functions and demonstrate improved robustness in regression and classification problems.

Classification General Classification +1
