Search Results for author: Longyue Wang

Found 44 papers, 23 papers with code

Tencent Translation System for the WMT21 News Translation Task

no code implementations WMT (EMNLP) 2021 Longyue Wang, Mu Li, Fangxu Liu, Shuming Shi, Zhaopeng Tu, Xing Wang, Shuangzhi Wu, Jiali Zeng, Wen Zhang

Building on our success in the last WMT, we continued to employ advanced techniques such as large-batch training, data selection, and data filtering.

Data Augmentation · Translation

Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation

1 code implementation ACL 2022 Liang Ding, Longyue Wang, Shuming Shi, DaCheng Tao, Zhaopeng Tu

In this work, we provide an appealing alternative for NAT: monolingual KD, which trains the NAT student on external monolingual data with an AT teacher trained on the original bilingual data.

Knowledge Distillation · Translation +1
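
For context, the monolingual KD recipe the snippet above describes amounts to a simple data-construction step: an autoregressive (AT) teacher translates external monolingual source text, and the NAT student trains on the resulting synthetic pairs. Below is a minimal sketch of that step; `at_teacher_translate` is a hypothetical stand-in for a trained AT model, not an artifact of the paper.

```python
from typing import Callable, List, Tuple

def build_monolingual_kd_corpus(
    mono_src: List[str],
    at_teacher_translate: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Sequence-level KD: pair each monolingual source sentence with the
    AT teacher's translation; the NAT student then trains on these
    (source, distilled target) pairs instead of the raw bitext."""
    return [(s, at_teacher_translate(s)) for s in mono_src]

# Toy usage with a stub "teacher"; a real setup would load a trained AT model.
teacher = lambda s: s.upper()  # placeholder translation function
corpus = build_monolingual_kd_corpus(["ein beispiel", "noch ein satz"], teacher)
print(corpus)
```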

Tencent AI Lab Machine Translation Systems for the WMT20 Biomedical Translation Task

1 code implementation WMT (EMNLP) 2020 Xing Wang, Zhaopeng Tu, Longyue Wang, Shuming Shi

This paper describes the Tencent AI Lab submission to the WMT2020 shared task on biomedical translation in four language directions: German<->English, English<->German, Chinese<->English and English<->Chinese.

Machine Translation · Translation

Instruction Multi-Constraint Molecular Generation Using a Teacher-Student Large Language Model

1 code implementation 20 Mar 2024 Peng Zhou, Jianmin Wang, Chunyan Li, Zixu Wang, Yiping Liu, Siqi Sun, Jianxin Lin, Longyue Wang, Xiangxiang Zeng

While various models and computational tools have been proposed for structure and property analysis of molecules, generating molecules that conform to all desired structures and properties remains a challenge.

Drug Discovery · Knowledge Distillation +2

Anchor-based Large Language Models

no code implementations 12 Feb 2024 Jianhui Pang, Fanghua Ye, Derek F. Wong, Longyue Wang

Large language models (LLMs) predominantly employ decoder-only transformer architectures, necessitating the retention of key/value (KV) information for historical tokens to provide contextual information and avoid redundant computation.

Computational Efficiency · Question Answering
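
The key/value retention described above is the standard KV cache of decoder-only transformers. The numpy sketch below illustrates that background mechanism (it is not the paper's anchor-based method): each decoding step appends the new token's key/value to a cache, so attention over historical tokens is never recomputed.

```python
import numpy as np

def attend(q, K, V):
    """Single-head scaled dot-product attention for one query vector."""
    scores = K @ q / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

d = 8
rng = np.random.default_rng(0)
K_cache = np.empty((0, d))  # keys of historical tokens
V_cache = np.empty((0, d))  # values of historical tokens

for step in range(5):                  # incremental decoding loop
    x = rng.normal(size=d)             # current token's hidden state
    q, k, v = x, x, x                  # identity projections for brevity
    K_cache = np.vstack([K_cache, k])  # retain keys/values instead of
    V_cache = np.vstack([V_cache, v])  # recomputing them at every step
    out = attend(q, K_cache, V_cache)
```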

Benchmarking LLMs via Uncertainty Quantification

1 code implementation 23 Jan 2024 Fanghua Ye, Mingming Yang, Jianhui Pang, Longyue Wang, Derek F. Wong, Emine Yilmaz, Shuming Shi, Zhaopeng Tu

The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods.

Benchmarking · Uncertainty Quantification

Salute the Classic: Revisiting Challenges of Machine Translation in the Age of Large Language Models

1 code implementation 16 Jan 2024 Jianhui Pang, Fanghua Ye, Longyue Wang, Dian Yu, Derek F. Wong, Shuming Shi, Zhaopeng Tu

This study revisits these challenges, offering insights into their ongoing relevance in the context of advanced Large Language Models (LLMs): domain mismatch, amount of parallel data, rare word prediction, translation of long sentences, attention model as word alignment, and sub-optimal beam search.

Machine Translation · NMT +2

DrugAssist: A Large Language Model for Molecule Optimization

1 code implementation 28 Dec 2023 Geyan Ye, Xibao Cai, Houtim Lai, Xing Wang, Junhong Huang, Longyue Wang, Wei Liu, Xiangxiang Zeng

Recently, the impressive performance of large language models (LLMs) on a wide range of tasks has attracted increasing interest in applying LLMs to drug discovery.

Drug Discovery · Language Modelling +1

On Diversified Preferences of Large Language Model Alignment

1 code implementation 12 Dec 2023 Dun Zeng, Yong Dai, Pengyu Cheng, Longyue Wang, Tianhao Hu, Wanshun Chen, Nan Du, Zenglin Xu

Our analysis reveals a correlation between the calibration performance of reward models (RMs) and the alignment performance of LLMs.

Language Modelling · Large Language Model
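
Calibration of a reward model is commonly quantified with expected calibration error (ECE) over its preference probabilities. The sketch below shows that generic measurement under this assumption; it is not the paper's exact analysis protocol.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence, then average |accuracy - confidence|.
    probs  : RM's predicted probability that response A beats response B
    labels : 1 if A was actually preferred by annotators, else 0"""
    probs, labels = np.asarray(probs, float), np.asarray(labels, float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            ece += mask.mean() * abs(labels[mask].mean() - probs[mask].mean())
    return ece

print(expected_calibration_error([0.9, 0.8, 0.6, 0.3], [1, 1, 0, 0]))
```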

Retrieval-augmented Multi-modal Chain-of-Thoughts Reasoning for Large Language Models

no code implementations 4 Dec 2023 Bingshuai Liu, Chenyang Lyu, Zijun Min, Zhanyu Wang, Jinsong Su, Longyue Wang

The advancement of Large Language Models (LLMs) has brought substantial attention to the Chain of Thought (CoT) approach, primarily due to its ability to enhance the capability of LLMs on complex reasoning tasks.

Question Answering · Retrieval

Alternate Diverse Teaching for Semi-supervised Medical Image Segmentation

1 code implementation 29 Nov 2023 Zhen Zhao, Zicheng Wang, Longyue Wang, Yixuan Yuan, Luping Zhou

To mitigate the confirmation bias from the diverse supervision, the core of AD-MT lies in two proposed modules: the Random Periodic Alternate (RPA) Updating Module and the Conflict-Combating Module (CCM).

Data Augmentation · Image Segmentation +2

GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation

no code implementations 25 Nov 2023 Zhanyu Wang, Longyue Wang, Zhen Zhao, Minghao Wu, Chenyang Lyu, Huayang Li, Deng Cai, Luping Zhou, Shuming Shi, Zhaopeng Tu

While the recent advances in Multimodal Large Language Models (MLLMs) constitute a significant leap forward in the field, these models are predominantly confined to the realm of input-side multimodal comprehension, lacking the capacity for multimodal content generation.

Instruction Following · Language Modelling +7

A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering

no code implementations 13 Nov 2023 Yunxin Li, Longyue Wang, Baotian Hu, Xinyu Chen, Wanqi Zhong, Chenyang Lyu, Wei Wang, Min Zhang

The emergence of multimodal large models (MLMs) has significantly advanced the field of visual understanding, offering remarkable capabilities in the realm of visual question answering (VQA).

Decision Making · General Knowledge +3

A Systematic Evaluation of GPT-4V's Multimodal Capability for Medical Image Analysis

no code implementations 31 Oct 2023 Yingshu Li, Yunyi Liu, Zhanyu Wang, Xinyu Liang, Lei Wang, Lingqiao Liu, Leyang Cui, Zhaopeng Tu, Longyue Wang, Luping Zhou

This work conducts an evaluation of GPT-4V's multimodal capability for medical image analysis, with a focus on three representative tasks of radiology report generation, medical visual question answering, and medical visual grounding.

Descriptive · Medical Visual Question Answering +3

A Benchmark for Text Expansion: Datasets, Metrics, and Baselines

no code implementations 17 Sep 2023 Yi Chen, Haiyun Jiang, Wei Bi, Rui Wang, Longyue Wang, Shuming Shi, Ruifeng Xu

This work presents a new task of Text Expansion (TE), which aims to insert fine-grained modifiers into proper locations of the plain text to concretize or vivify human writings.

2k · Informativeness +1

Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models

1 code implementation 3 Sep 2023 Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi

While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.

Hallucination · World Knowledge

Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling

no code implementations 16 Jul 2023 Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu

Modeling discourse, the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP).

Language Modelling · Sentence

On the Cultural Gap in Text-to-Image Generation

no code implementations 6 Jul 2023 Bingshuai Liu, Longyue Wang, Chenyang Lyu, Yong Zhang, Jinsong Su, Shuming Shi, Zhaopeng Tu

Accordingly, we propose a novel multi-modal metric that considers object-text alignment to filter the fine-tuning data in the target culture; the filtered data is then used to fine-tune a T2I model to improve cross-cultural generation.

Text-to-Image Generation
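
The filtering step can be pictured as scoring every candidate image-caption pair for object-text alignment and keeping pairs above a threshold. In the sketch below, `alignment_score` is a hypothetical placeholder (in practice something like a CLIP-style scorer), not the paper's released metric.

```python
from typing import Callable, List, Tuple

def filter_finetune_data(
    pairs: List[Tuple[str, str]],                 # (image_path, caption)
    alignment_score: Callable[[str, str], float], # assumed external scorer
    threshold: float = 0.5,
) -> List[Tuple[str, str]]:
    """Keep only pairs whose object-text alignment clears the threshold,
    yielding a cleaner target-culture fine-tuning set for the T2I model."""
    return [p for p in pairs if alignment_score(*p) >= threshold]

# Toy usage with a stub scorer; a real scorer would inspect the image.
score = lambda img, cap: 0.9 if "temple" in cap else 0.2
kept = filter_finetune_data([("a.jpg", "a temple festival"), ("b.jpg", "a dog")], score)
print(kept)
```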

Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration

1 code implementation 15 Jun 2023 Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu

Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied.

Language Modelling

How Does Pretraining Improve Discourse-Aware Translation?

no code implementations 31 May 2023 Zhihong Huang, Longyue Wang, Siyou Liu, Derek F. Wong

To bridge this gap, we introduce a probing task to interpret the ability of PLMs to capture discourse relation knowledge.

Machine Translation · NMT +1

TaleCrafter: Interactive Story Visualization with Multiple Characters

1 code implementation 29 May 2023 Yuan Gong, Youxin Pang, Xiaodong Cun, Menghan Xia, Yingqing He, Haoxin Chen, Longyue Wang, Yong Zhang, Xintao Wang, Ying Shan, Yujiu Yang

Accurate story visualization requires several necessary elements, such as identity consistency across frames, the alignment between plain text and visual content, and a reasonable layout of objects in images.

Story Visualization · Text-to-Image Generation

Revisiting Non-Autoregressive Translation at Scale

1 code implementation 25 May 2023 Zhihao Wang, Longyue Wang, Jinsong Su, Junfeng Yao, Zhaopeng Tu

Experimental results on the large-scale WMT20 En-De show that the asymmetric architecture (e.g. a bigger encoder and a smaller decoder) can achieve performance comparable to the scaled model, while maintaining the decoding-speed advantage of standard NAT models.

Translation
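
The asymmetric design is straightforward to express in standard toolkits. The PyTorch sketch below instantiates a deep encoder with a shallow decoder; the layer counts are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Bigger encoder, smaller decoder: most capacity goes to encoding the
# source, while the shallow decoder keeps NAT-style decoding fast.
model = nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=12,  # deep encoder (illustrative)
    num_decoder_layers=1,   # shallow decoder (illustrative)
    batch_first=True,
)

src = torch.randn(2, 20, 512)  # (batch, src_len, d_model)
tgt = torch.randn(2, 18, 512)  # (batch, tgt_len, d_model)
out = model(src, tgt)
print(out.shape)  # torch.Size([2, 18, 512])
```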

Deepfake Text Detection in the Wild

1 code implementation 22 May 2023 Yafu Li, Qintong Li, Leyang Cui, Wei Bi, Longyue Wang, Linyi Yang, Shuming Shi, Yue Zhang

In practical scenarios, the detector faces texts from various domains or LLMs without knowing their sources.

Face Swapping · Story Generation +1

A Survey on Zero Pronoun Translation

no code implementations 17 May 2023 Longyue Wang, Siyou Liu, Mingzhou Xu, Linfeng Song, Shuming Shi, Zhaopeng Tu

Zero pronouns (ZPs) are frequently omitted in pro-drop languages (e.g. Chinese, Hungarian, and Hindi), but should be recalled in non-pro-drop languages (e.g. English).

Language Modelling · Large Language Model +2

A Paradigm Shift: The Future of Machine Translation Lies with Large Language Models

no code implementations 2 May 2023 Chenyang Lyu, Zefeng Du, Jitao Xu, Yitao Duan, Minghao Wu, Teresa Lynn, Alham Fikri Aji, Derek F. Wong, Siyou Liu, Longyue Wang

We conclude by emphasizing the critical role of LLMs in guiding the future evolution of MT and offer a roadmap for future exploration in the sector.

Document Translation · Machine Translation +2

Prompt-Learning for Cross-Lingual Relation Extraction

1 code implementation 20 Apr 2023 Chiaming Hsu, Changtong Zan, Liang Ding, Longyue Wang, Xiaoting Wang, Weifeng Liu, Fu Lin, Wenbin Hu

Experiments on WMT17-EnZh XRE also show the effectiveness of our Prompt-XRE against other competitive baselines.

Relation · Relation Extraction +1

Document-Level Machine Translation with Large Language Models

1 code implementation 5 Apr 2023 Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, Zhaopeng Tu

Large language models (LLMs) such as ChatGPT can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks.

Document Level Machine Translation · Machine Translation +1

Search-Engine-augmented Dialogue Response Generation with Cheaply Supervised Query Production

1 code implementation 16 Feb 2023 Ante Wang, Linfeng Song, Qi Liu, Haitao Mi, Longyue Wang, Zhaopeng Tu, Jinsong Su, Dong Yu

We propose a dialogue model that can access the vast and dynamic information from any search engine for response generation.

Chatbot · Response Generation
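
The pipeline implied above has three stages: produce a search query from the dialogue context, retrieve evidence, and condition the response generator on it. The sketch below wires those stages together; `produce_query`, `search`, and `generate` are hypothetical stubs, not the paper's components.

```python
from typing import List

def produce_query(dialogue_history: List[str]) -> str:
    """Stub query producer (the paper trains this with cheap supervision)."""
    return dialogue_history[-1]  # naive: reuse the latest user turn

def search(query: str) -> List[str]:
    """Stub search-engine call returning text snippets."""
    return [f"snippet about: {query}"]

def generate(dialogue_history: List[str], evidence: List[str]) -> str:
    """Stub generator conditioning the reply on retrieved evidence."""
    return f"Based on {evidence[0]!r}: here is an answer."

history = ["Who won the match last night?"]
reply = generate(history, search(produce_query(history)))
print(reply)
```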

ngram-OAXE: Phrase-Based Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation

no code implementations COLING 2022 Cunxiao Du, Zhaopeng Tu, Longyue Wang, Jing Jiang

Recently, a new training objective, the order-agnostic cross-entropy (OAXE) loss, has proven effective at ameliorating the effect of multimodality in non-autoregressive translation (NAT) by removing the penalty on word-order errors in the standard cross-entropy loss.

Machine Translation · Sentence +1
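
Concretely, order-agnostic cross entropy scores the best token-to-position assignment rather than the monotonic one. Below is a minimal sketch of the sentence-level OAXE idea via Hungarian matching; the paper's ngram variant generalizes this matching to phrases.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def oaxe_loss(log_probs: np.ndarray, target_ids: np.ndarray) -> float:
    """log_probs : (positions, vocab) predicted log-probabilities.
    target_ids  : (positions,) reference token ids.
    OAXE uses the token-to-position assignment with minimal negative
    log-likelihood, so pure word-order errors are not penalized."""
    # cost[i, j] = -log P(target token j at position i)
    cost = -log_probs[:, target_ids]
    rows, cols = linear_sum_assignment(cost)  # Hungarian matching
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5))
log_p = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
print(oaxe_loss(log_p, np.array([4, 0, 2])))
```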

On the Complementarity between Pre-Training and Back-Translation for Neural Machine Translation

1 code implementation Findings (EMNLP) 2021 Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, Zhaopeng Tu

Pre-training (PT) and back-translation (BT) are two simple and powerful methods to utilize monolingual data for improving the model performance of neural machine translation (NMT).

Machine Translation · NMT +2
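
As background, back-translation turns target-side monolingual text into synthetic bitext by running a reverse-direction model over it. The sketch below shows that data flow; `reverse_translate` is a hypothetical stand-in for a trained target-to-source model.

```python
from typing import Callable, List, Tuple

def back_translate(
    mono_tgt: List[str],
    reverse_translate: Callable[[str], str],  # target -> source model (assumed)
) -> List[Tuple[str, str]]:
    """Pair each monolingual target sentence with a synthetic source,
    producing extra (source, target) bitext to mix with the genuine
    parallel data when training the forward NMT model."""
    return [(reverse_translate(t), t) for t in mono_tgt]

reverse = lambda t: t[::-1]  # placeholder "model" for demonstration
synthetic = back_translate(["the cat sat", "hello world"], reverse)
print(synthetic)
```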

Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation

1 code implementation ACL 2021 Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, DaCheng Tao, Zhaopeng Tu

Results demonstrate that the proposed approach can significantly and universally improve translation quality by reducing translation errors on low-frequency words.

Knowledge Distillation · Translation

Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning

1 code implementation ICLR 2021 Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Zhaopeng Tu

Encoder layer fusion (EncoderFusion) is a technique to fuse all the encoder layers (instead of the uppermost layer) for sequence-to-sequence (Seq2Seq) models, which has proven effective on various NLP tasks.

Grammatical Error Correction · Machine Translation +3

Understanding and Improving Lexical Choice in Non-Autoregressive Translation

no code implementations ICLR 2021 Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, DaCheng Tao, Zhaopeng Tu

To this end, we introduce an extra Kullback-Leibler divergence term derived by comparing the lexical choice of the NAT model with that embedded in the raw data.

Knowledge Distillation · Translation
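
Such a term can be pictured as a KL divergence between the lexical-translation distribution estimated from the raw bilingual data and the one the NAT model predicts. The numpy sketch below computes this for a single source word, as an illustration rather than the paper's exact formulation.

```python
import numpy as np

def lexical_kl(p_raw: np.ndarray, p_model: np.ndarray, eps: float = 1e-9) -> float:
    """KL(P_raw || P_model) over candidate translations of one source word.
    A large value means the NAT model's lexical choice has drifted from
    the distribution embedded in the raw (non-distilled) data."""
    p, q = p_raw + eps, p_model + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Raw data spreads mass over two valid translations; the model has
# collapsed onto one (a typical effect of distilled training data).
print(lexical_kl(np.array([0.6, 0.4, 0.0]), np.array([0.98, 0.01, 0.01])))
```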

Context-Aware Cross-Attention for Non-Autoregressive Translation

1 code implementation COLING 2020 Liang Ding, Longyue Wang, Di Wu, DaCheng Tao, Zhaopeng Tu

Non-autoregressive translation (NAT) significantly accelerates the inference process by predicting all tokens of the target sequence in parallel.

Translation

On the Sub-Layer Functionalities of Transformer Decoder

no code implementations Findings (EMNLP) 2020 Yilin Yang, Longyue Wang, Shuming Shi, Prasad Tadepalli, Stefan Lee, Zhaopeng Tu

There have been significant efforts to interpret the encoder of Transformer-based encoder-decoder architectures for neural machine translation (NMT); meanwhile, the decoder remains largely unexamined despite its critical role.

Machine Translation · NMT +1

On the Sparsity of Neural Machine Translation Models

no code implementations EMNLP 2020 Yong Wang, Longyue Wang, Victor O. K. Li, Zhaopeng Tu

Modern neural machine translation (NMT) models employ a large number of parameters, which leads to serious over-parameterization and typically causes the underutilization of computational resources.

Machine Translation · NMT +1
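
A standard way to probe such over-parameterization is magnitude pruning: zero out the smallest-magnitude weights and measure how much translation quality survives. The sketch below implements that generic procedure; it illustrates the sparsity phenomenon, not the paper's specific method.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the `sparsity` fraction of weights with the smallest magnitude."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_sparse = magnitude_prune(W, sparsity=0.5)
print((W_sparse == 0).mean())  # roughly half the entries are pruned
```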
