Search Results for author: Tao Ge

Found 56 papers, 24 papers with code

LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models

no code implementations1 Apr 2024 Yadong Zhang, Shaoguang Mao, Tao Ge, Xun Wang, Adrian de Wynter, Yan Xia, Wenshan Wu, Ting Song, Man Lan, Furu Wei

This paper presents a comprehensive survey of the current status and opportunities for Large Language Models (LLMs) in strategic reasoning, a sophisticated form of reasoning that necessitates understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.

Decision Making

K-Level Reasoning with Large Language Models

no code implementations2 Feb 2024 Yadong Zhang, Shaoguang Mao, Tao Ge, Xun Wang, Yan Xia, Man Lan, Furu Wei

While Large Language Models (LLMs) have demonstrated their proficiency in complex reasoning tasks, their performance in dynamic, interactive, and competitive scenarios - such as business strategy and stock market analysis - remains underexplored.

Decision Making

Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding

1 code implementation15 Jan 2024 Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, Zhifang Sui

To mitigate the high inference latency stemming from autoregressive decoding in Large Language Models (LLMs), Speculative Decoding has emerged as a novel decoding paradigm for LLM inference.

Language Modelling Large Language Model
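
Concretely, the paradigm the survey covers follows a draft-then-verify loop. The sketch below is a minimal, illustrative version (greedy decoding only; `draft_next` and `target_next` are hypothetical callables standing in for the small drafter and the large target model), not any specific paper's algorithm:

```python
# Minimal draft-then-verify loop (illustrative sketch, not a specific
# paper's algorithm). `draft_next` / `target_next` are hypothetical
# greedy next-token functions for the small and large models.
def speculative_decode(prefix, draft_next, target_next, k=4, max_len=20):
    tokens = list(prefix)
    while len(tokens) < max_len:
        # 1) Cheaply draft k candidate tokens with the small model.
        draft = []
        for _ in range(k):
            draft.append(draft_next(tokens + draft))
        # 2) Verify the draft with the target model and accept the
        #    longest prefix it agrees with. (Real implementations check
        #    all k positions in one parallel forward pass.)
        accepted = []
        for i, tok in enumerate(draft):
            if target_next(tokens + draft[:i]) == tok:
                accepted.append(tok)
            else:
                break
        # 3) On the first disagreement, take the target model's token,
        #    so the output matches plain autoregressive decoding.
        if len(accepted) < len(draft):
            accepted.append(target_next(tokens + accepted))
        tokens += accepted
    return tokens
```

Each loop iteration emits at least one token, and up to k+1 when the drafter guesses well, which is where the latency savings come from.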

ALYMPICS: LLM Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents

1 code implementation6 Nov 2023 Shaoguang Mao, Yuzhe Cai, Yan Xia, Wenshan Wu, Xun Wang, Fengyi Wang, Tao Ge, Furu Wei

This paper introduces Alympics (Olympics for Agents), a systematic simulation framework utilizing Large Language Model (LLM) agents for game theory research.

Decision Making Language Modelling +1

SCALE: Synergized Collaboration of Asymmetric Language Translation Engines

1 code implementation29 Sep 2023 Xin Cheng, Xun Wang, Tao Ge, Si-Qing Chen, Furu Wei, Dongyan Zhao, Rui Yan

In this paper, we introduce SCALE, a collaborative framework that connects compact Specialized Translation Models (STMs) and general-purpose Large Language Models (LLMs) as one unified translation engine.

Continual Learning Translation
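
The snippet above does not spell out how the two engines are wired together, so the following is only a hedged guess at the general pattern of such collaborations (all names are hypothetical placeholders): the compact STM drafts a translation and the LLM refines it in context.

```python
# Hypothetical coupling of an STM and an LLM (a sketch of the general
# draft-and-refine pattern; SCALE's actual interface may differ).
def collaborative_translate(source, stm_translate, llm_complete):
    draft = stm_translate(source)   # fast draft from the specialized model
    prompt = (
        "Refine the draft translation of the source sentence.\n"
        f"Source: {source}\n"
        f"Draft: {draft}\n"
        "Refined:"
    )
    return llm_complete(prompt)     # general-purpose polishing
```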

In-context Autoencoder for Context Compression in a Large Language Model

1 code implementation13 Jul 2023 Tao Ge, Jing Hu, Lei Wang, Xun Wang, Si-Qing Chen, Furu Wei


We propose the In-context Autoencoder (ICAE), leveraging the power of a large language model (LLM) to compress a long context into short, compact memory slots that can be directly conditioned on by the LLM for various purposes.

Language Modelling Large Language Model +3
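
The core idea, learnable memory tokens whose encoder outputs become the compact slots, can be sketched as follows. This is an assumption-laden toy (shapes and module sizes are made up; the real ICAE adapts a pretrained LLM rather than training a small encoder from scratch):

```python
import torch
import torch.nn as nn

# Toy sketch of the memory-slot idea (all sizes are assumptions).
class ContextCompressor(nn.Module):
    def __init__(self, d_model=256, n_slots=4):
        super().__init__()
        # Learnable memory embeddings appended after the long context.
        self.memory = nn.Parameter(torch.randn(n_slots, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, context):  # context: (batch, ctx_len, d_model)
        mem = self.memory.expand(context.size(0), -1, -1)
        h = self.encoder(torch.cat([context, mem], dim=1))
        # The hidden states at the memory positions are the short,
        # compact slots a decoder LLM could condition on instead of
        # re-reading the full context.
        return h[:, -self.memory.size(0):, :]  # (batch, n_slots, d_model)
```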

Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration

2 code implementations11 Jul 2023 Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji

In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas.

Hallucination Logic Grid Puzzle
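
As a flavor of the setup, a single prompt can instruct one LLM to simulate several personas plus a leader who synthesizes their turns. The template below is purely illustrative; the persona list and wording are assumptions, not the paper's exact prompt:

```python
# Illustrative multi-persona prompt builder (wording is assumed,
# not the paper's actual SPP template).
def spp_prompt(task, personas):
    lines = [
        "You will solve the task by simulating a multi-turn discussion",
        "among the following participants, then give a final answer.",
        "Participants: " + ", ".join(personas),
        f"Task: {task}",
        "Begin the collaboration:",
    ]
    return "\n".join(lines)

print(spp_prompt(
    "Write a short poem that hides the word 'synergy' as an acrostic.",
    ["AI Assistant (leader)", "Poet", "Puzzle Expert"],
))
```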

Smart Word Suggestions for Writing Assistance

1 code implementation17 May 2023 Chenshuo Wang, Shaoguang Mao, Tao Ge, Wenshan Wu, Xun Wang, Yan Xia, Jonathan Tien, Dongyan Zhao

The training dataset comprises over 3.7 million sentences and 12.7 million suggestions generated through rules.

Low-code LLM: Graphical User Interface over Large Language Models

2 code implementations17 Apr 2023 Yuzhe Cai, Shaoguang Mao, Wenshan Wu, Zehua Wang, Yaobo Liang, Tao Ge, Chenfei Wu, Wang You, Ting Song, Yan Xia, Jonathan Tien, Nan Duan, Furu Wei

By introducing this framework, we aim to bridge the gap between humans and LLMs, enabling more effective and efficient utilization of LLMs for complex tasks.

Prompt Engineering

Semiparametric Language Models Are Scalable Continual Learners

no code implementations2 Mar 2023 Guangyue Peng, Tao Ge, Si-Qing Chen, Furu Wei, Houfeng Wang

We demonstrate that SeMem improves the scalability of semiparametric LMs for continual learning over streaming data in two ways: (1) data-wise scalability: as the model becomes stronger through continual learning, it will encounter fewer difficult cases that need to be memorized, causing the growth of the non-parametric memory to slow down over time rather than growing at a linear rate with the size of training data; (2) model-wise scalability: SeMem allows a larger model to memorize fewer samples than its smaller counterpart because it is rarer for a larger model to encounter incomprehensible cases, resulting in a non-parametric memory that does not scale linearly with model size.

Continual Learning Language Modelling +1
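
The data-wise scalability above follows directly from a write policy that stores only the cases the current model finds difficult. A minimal sketch, assuming a loss threshold as the difficulty signal (the paper's actual criterion may differ):

```python
# Difficulty-gated memory write (sketch; the threshold-on-loss gate is
# an assumption standing in for SeMem's actual criterion).
def continual_step(example, model_loss, memory, threshold=2.0):
    loss = model_loss(example)  # how hard the current model finds the case
    if loss > threshold:        # memorize only the difficult cases
        memory.append(example)  # so memory growth slows as the model improves
    return memory
```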

MB-DECTNet: A Model-Based Unrolled Network for Accurate 3D DECT Reconstruction

no code implementations1 Feb 2023 Tao Ge, Maria Medrano, Rui Liao, David G. Politte, Jeffrey F. Williamson, Bruce R. Whiting, Joseph A. O'Sullivan

Therefore, to improve its convergence, we have embedded DECT SIR into a deep learning model-based unrolled network for 3D DECT reconstruction (MB-DECTNet) that can be trained in an end-to-end fashion.
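
For readers unfamiliar with unrolled networks, the generic pattern is to turn N iterations of a model-based update into N trainable stages. The sketch below shows only that generic pattern; MB-DECTNet's actual update rule, operators, and architecture differ:

```python
import torch
import torch.nn as nn

# Generic unrolled-iteration pattern (a sketch under assumed shapes;
# not MB-DECTNet itself): each stage applies a data-consistency step
# plus a learned refinement.
class UnrolledRecon(nn.Module):
    def __init__(self, n_stages=5, channels=2):
        super().__init__()
        self.steps = nn.Parameter(torch.full((n_stages,), 0.1))
        self.refiners = nn.ModuleList(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1)
            for _ in range(n_stages)
        )

    def forward(self, x, data_grad):
        # x: current image estimate, shape (batch, channels, D, H, W);
        # data_grad(x): gradient of the data-fidelity term supplied by
        # the physics/forward-projection model.
        for step, refine in zip(self.steps, self.refiners):
            x = x - step * data_grad(x) + refine(x)
        return x
```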

Pay Attention to Your Tone: Introducing a New Dataset for Polite Language Rewrite

no code implementations20 Dec 2022 Xun Wang, Tao Ge, Allen Mao, Yuki Li, Furu Wei, Si-Qing Chen

We introduce PoliteRewrite -- a dataset for polite language rewrite, a novel sentence rewrite task.

Sentence Style Transfer +1

DL-Corrector-Remapper: A grid-free bias-correction deep learning methodology for data-driven high-resolution global weather forecasting

no code implementations21 Oct 2022 Tao Ge, Jaideep Pathak, Akshay Subramaniam, Karthik Kashinath

The improvement in DLCR's performance over the baseline, measured against the gold-standard ground truth, shows its potential to correct, remap, and fine-tune the mesh-gridded forecasts under the supervision of observations.

Weather Forecasting

Lossless Acceleration for Seq2seq Generation with Aggressive Decoding

2 code implementations20 May 2022 Tao Ge, Heming Xia, Xin Sun, Si-Qing Chen, Furu Wei

We study lossless acceleration for seq2seq generation with a novel decoding algorithm -- Aggressive Decoding.

Abstractive Text Summarization Grammatical Error Correction +4
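
For tasks such as GEC, where the output overlaps heavily with the input, the input itself can serve as the draft. The sketch below illustrates that input-guided idea in a simplified form (real aggressive decoding verifies positions in one parallel pass and re-aligns the remaining draft more cleverly after a mismatch):

```python
# Simplified input-guided aggressive decoding sketch: the source
# sentence is the draft; decoding only intervenes where the model
# disagrees with it.
def aggressive_decode(source, model_next):
    out, draft = [], list(source)
    while draft:
        for i, tok in enumerate(draft):
            pred = model_next(out + draft[:i])
            if pred != tok:
                # First disagreement: take the model's token instead.
                out.append(pred)
                # Real aggressive decoding re-aligns the remaining draft
                # against the new output; we naively skip one token here.
                draft = draft[i + 1:]
                break
        else:
            out.extend(draft)  # the whole draft was verified as-is
            draft = []
    return out
```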

Text Revision by On-the-Fly Representation Optimization

1 code implementation In2Writing (ACL) 2022 Jingjing Li, Zichao Li, Tao Ge, Irwin King, Michael R. Lyu

In this approach, we simply fine-tune a pre-trained Transformer with masked language modeling and attribute classification.

Attribute Language Modelling +3

Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation

2 code implementations30 Mar 2022 Heming Xia, Tao Ge, Peiyi Wang, Si-Qing Chen, Furu Wei, Zhifang Sui

We propose Speculative Decoding (SpecDec) to formally study, for the first time, exploiting the idea of speculative execution to accelerate autoregressive (AR) decoding.

Abstractive Text Summarization Machine Translation +1

EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation

1 code implementation16 Feb 2022 Tao Ge, Si-Qing Chen, Furu Wei

We introduce EdgeFormer -- a parameter-efficient Transformer for on-device seq2seq generation under strict computation and memory constraints.

Grammatical Error Correction Knowledge Distillation +2

A Metal Artifact Reduction Scheme For Accurate Iterative Dual-Energy CT Algorithms

no code implementations31 Jan 2022 Tao Ge, Maria Medrano, Rui Liao, Jeffrey F. Williamson, David G. Politte, Bruce R. Whiting, Joseph A. O'Sullivan

We compared DEAM combined with the proposed method against the original DEAM and against vendor reconstructions with and without metal artifact reduction for orthopedic implants (O-MAR).

Metal Artifact Reduction

A Unified Strategy for Multilingual Grammatical Error Correction with Pre-trained Cross-Lingual Language Model

no code implementations26 Jan 2022 Xin Sun, Tao Ge, Shuming Ma, Jingjing Li, Furu Wei, Houfeng Wang

Synthetic data construction of Grammatical Error Correction (GEC) for non-English languages relies heavily on human-designed and language-specific rules, which produce limited error-corrected patterns.

Grammatical Error Correction Language Modelling +3

Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression

1 code implementation EMNLP 2021 Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian McAuley, Furu Wei

Recent studies on compression of pretrained language models (e.g., BERT) usually use preserved accuracy as the metric for evaluation.

Knowledge Distillation Quantization
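
A minimal sketch of a loyalty-style metric in the spirit of the paper (simplified; the paper also considers probability-level agreement): the score ignores correctness and only asks whether the compressed model behaves like the original.

```python
# Simplified "label loyalty" sketch: fraction of inputs on which the
# compressed model predicts the same label as the original model,
# regardless of whether either is right.
def label_loyalty(orig_preds, compressed_preds):
    assert len(orig_preds) == len(compressed_preds)
    agree = sum(o == c for o, c in zip(orig_preds, compressed_preds))
    return agree / len(orig_preds)

print(label_loyalty([1, 0, 2, 1], [1, 0, 1, 1]))  # 0.75
```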

Instantaneous Grammatical Error Correction with Shallow Aggressive Decoding

1 code implementation ACL 2021 Xin Sun, Tao Ge, Furu Wei, Houfeng Wang

In this paper, we propose Shallow Aggressive Decoding (SAD) to improve the online inference efficiency of the Transformer for instantaneous Grammatical Error Correction (GEC).

Grammatical Error Correction

Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting

1 code implementation EMNLP 2021 Wangchunshu Zhou, Tao Ge, Canwen Xu, Ke Xu, Furu Wei

In this paper, we generalize text infilling (e.g., masked language models) by proposing Sequence Span Rewriting (SSR) as a self-supervised sequence-to-sequence (seq2seq) pre-training objective.

Sentence Text Infilling

Improving the Efficiency of Grammatical Error Correction with Erroneous Span Detection and Correction

no code implementations EMNLP 2020 Mengyun Chen, Tao Ge, Xingxing Zhang, Furu Wei, Ming Zhou

We propose a novel language-independent approach to improve the efficiency for Grammatical Error Correction (GEC) by dividing the task into two subtasks: Erroneous Span Detection (ESD) and Erroneous Span Correction (ESC).

Grammatical Error Correction Sentence
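
The division of labor can be sketched as a simple pipeline (function names are placeholders): a cheap detector marks spans, and the corrector rewrites only those spans rather than re-generating the whole sentence.

```python
# Detect-then-correct pipeline sketch (placeholder callables):
# `detect_spans` returns character offsets like [(5, 9), ...],
# `correct_span` rewrites a single erroneous span.
def correct(sentence, detect_spans, correct_span):
    out, last = [], 0
    for start, end in detect_spans(sentence):
        out.append(sentence[last:start])          # copy error-free text
        out.append(correct_span(sentence[start:end]))
        last = end
    out.append(sentence[last:])
    return "".join(out)
```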

BERT Loses Patience: Fast and Robust Inference with Early Exit

1 code implementation NeurIPS 2020 Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, Furu Wei

In this paper, we propose Patience-based Early Exit, a straightforward yet effective inference method that can be used as a plug-and-play technique to simultaneously improve the efficiency and robustness of a pretrained language model (PLM).

Language Modelling
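
The mechanism is analogous to early stopping: a classifier sits on every layer, and inference halts once the prediction stops changing for a number of consecutive layers. A simplified sketch (the list-of-logits interface is assumed for clarity):

```python
# Patience-based early exit sketch: stop as soon as the top prediction
# has stayed the same for `patience` consecutive layers.
def early_exit(layer_logits, patience=2):
    prev, streak = None, 0
    for depth, logits in enumerate(layer_logits):   # shallow -> deep
        pred = max(range(len(logits)), key=logits.__getitem__)  # argmax
        streak = streak + 1 if pred == prev else 1
        prev = pred
        if streak >= patience:
            return pred, depth + 1   # label and layers actually used
    return prev, len(layer_logits)   # fell through to the final layer
```

Easy inputs exit at shallow layers (saving compute), while the patience requirement keeps a single unstable layer from triggering a premature exit.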

Parallel Data Augmentation for Formality Style Transfer

1 code implementation ACL 2020 Yi Zhang, Tao Ge, Xu Sun

The main barrier to progress in the task of Formality Style Transfer is the inadequacy of training data.

Data Augmentation Formality Style Transfer +2

Scheduled DropHead: A Regularization Method for Transformer Models

1 code implementation Findings of the Association for Computational Linguistics 2020 Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou

In this paper, we introduce DropHead, a structured dropout method specifically designed for regularizing the multi-head attention mechanism, which is a key component of the Transformer, a state-of-the-art model for various NLP tasks.

Machine Translation text-classification +2
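
In essence, DropHead drops whole attention heads rather than individual attention weights. A minimal functional sketch (the tensor layout is assumed, and the paper's scheduled drop rate is omitted):

```python
import torch

# DropHead-style structured dropout sketch: zero out entire attention
# heads during training and rescale survivors (standard dropout-style
# rescaling is an assumption here).
def drop_head(attn_output, p=0.2, training=True):
    # attn_output: (batch, n_heads, seq_len, d_head)
    if not training or p == 0.0:
        return attn_output
    b, h = attn_output.shape[:2]
    keep = (torch.rand(b, h, 1, 1, device=attn_output.device) > p).float()
    return attn_output * keep / (1.0 - p)
```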

Pseudo-Bidirectional Decoding for Local Sequence Transduction

no code implementations Findings of the Association for Computational Linguistics 2020 Wangchunshu Zhou, Tao Ge, Ke Xu

PBD copies the corresponding representations of source tokens to the decoder as pseudo future context, enabling the decoder to attend to its bi-directional context.

Grammatical Error Correction Inductive Bias +1

Self-Adversarial Learning with Comparative Discrimination for Text Generation

no code implementations ICLR 2020 Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou

Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples.

Sentence Text Generation

Fact-aware Sentence Split and Rephrase with Permutation Invariant Training

no code implementations16 Jan 2020 Yinuo Guo, Tao Ge, Furu Wei

To overcome the challenges, we first propose the Fact-aware Sentence Encoding, which enables the model to learn facts from the long sentence and thus improves the precision of sentence split; then we introduce Permutation Invariant Training to alleviate the effects of order variance in seq2seq learning for this task.

Sentence Split and Rephrase
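
Permutation Invariant Training, applied here to sentence order, scores the prediction against the best-matching ordering of the references, so the model is not penalized for emitting the split sentences in a different order. A toy sketch:

```python
from itertools import permutations

# PIT sketch for split-and-rephrase (simplified): the loss is the
# minimum over all orderings of the gold sentences; assumes equal
# numbers of predicted and gold sentences.
def pit_loss(pred_sents, gold_sents, sent_loss):
    return min(
        sum(sent_loss(p, g) for p, g in zip(pred_sents, perm))
        for perm in permutations(gold_sents)
    )

def toy_loss(p, g):          # 0/1 toy per-sentence loss for illustration
    return 0.0 if p == g else 1.0

print(pit_loss(["b", "a"], ["a", "b"], toy_loss))   # 0.0, order ignored
```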

Improving Grammatical Error Correction with Machine Translation Pairs

1 code implementation Findings of the Association for Computational Linguistics 2020 Wangchunshu Zhou, Tao Ge, Chang Mu, Ke Xu, Furu Wei, Ming Zhou

The poor translation model resembles the ESL (English as a second language) learner and tends to generate translations of low quality in terms of fluency and grammatical correctness, while the good translation model generally generates fluent and grammatically correct translations.

Grammatical Error Correction Language Modelling +3
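
The construction described above can be wired up in a few lines (illustrative only; the callables are placeholders): both translators see the same foreign sentence, and their outputs form an (errorful, fluent) training pair for GEC.

```python
# Synthetic GEC pair from a translation-pair sketch: a deliberately
# weak translator plays the ESL-learner role, a strong one supplies
# the fluent reference.
def make_gec_pair(foreign_sentence, poor_translate, good_translate):
    errorful = poor_translate(foreign_sentence)   # learner-like English
    fluent = good_translate(foreign_sentence)     # fluent English
    return errorful, fluent                       # (source, target) for GEC
```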

BERT-based Lexical Substitution

1 code implementation ACL 2019 Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou

Our approach first applies dropout to the target word's embedding for partially masking the word, allowing BERT to take balanced consideration of the target word's semantics and contexts for proposing substitute candidates, and then validates the candidates based on their substitution's influence on the global contextualized representation of the sentence.

Sentence
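
The partial-masking step can be sketched with ordinary dropout on the target word's embedding (simplified; the real method intervenes inside BERT's embedding layer and is followed by the candidate-validation stage described above):

```python
import torch
import torch.nn.functional as F

# Partial masking sketch: dropout zeroes a random subset of the target
# word's embedding dimensions (and rescales the rest), so the word is
# only partially visible when BERT proposes substitutes.
def partially_mask(target_embedding, p=0.3):
    return F.dropout(target_embedding, p=p, training=True)

emb = torch.randn(768)          # stand-in for a target-word embedding
print(partially_mask(emb)[:5])  # some dimensions zeroed, rest scaled
```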

Fluency Boost Learning and Inference for Neural Grammatical Error Correction

no code implementations ACL 2018 Tao Ge, Furu Wei, Ming Zhou

Most of the neural sequence-to-sequence (seq2seq) models for grammatical error correction (GEC) have two limitations: (1) a seq2seq model may not be well generalized with only limited error-corrected data; (2) a seq2seq model may fail to completely correct a sentence with multiple errors through normal seq2seq inference.

Grammatical Error Correction Sentence

Exploiting Task-Oriented Resources to Learn Word Embeddings for Clinical Abbreviation Expansion

no code implementations WS 2015 Yue Liu, Tao Ge, Kusum S. Mathews, Heng Ji, Deborah L. McGuinness

In the medical domain, identifying and expanding abbreviations in clinical texts is a vital task for both better human and machine understanding.

Word Embeddings

Towards Time-Aware Knowledge Graph Completion

no code implementations COLING 2016 Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Baobao Chang, Sujian Li, Zhifang Sui

In this paper, we present a novel time-aware knowledge graph completion model that is able to predict links in a KG using both the existing facts and the temporal information of the facts.

Question Answering Relation Extraction +1

Event Detection with Burst Information Networks

no code implementations COLING 2016 Tao Ge, Lei Cui, Baobao Chang, Zhifang Sui, Ming Zhou

Retrospective event detection is an important task for discovering previously unidentified events in a text stream.

Clustering Event Detection

Aligning Coordinated Text Streams through Burst Information Network Construction and Decipherment

no code implementations27 Sep 2016 Tao Ge, Qing Dou, Xiaoman Pan, Heng Ji, Lei Cui, Baobao Chang, Zhifang Sui, Ming Zhou

We introduce a novel Burst Information Network (BINet) representation that can display the most important information and illustrate the connections among bursty entities, events and keywords in the corpus.

Decipherment Translation
