no code implementations • NAACL (AutoSimTrans) 2021 • Shaolei Zhang, Yang Feng
To improve the robustness of ST, we first propose character-level simultaneous translation and apply the wait-k policy to it.
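As background, the wait-k policy follows a simple fixed schedule: read k source tokens first, then alternate between emitting one target token and reading one more source token. The sketch below is illustrative only; `translate_step` is a hypothetical decoder call, not this paper's char-level model.

```python
def wait_k_policy(source_tokens, k, translate_step):
    """Minimal sketch of the wait-k read/write schedule (illustrative;
    `translate_step` is a hypothetical decoder call)."""
    target = []
    read = min(k, len(source_tokens))        # first, read k source tokens
    while True:
        token = translate_step(source_tokens[:read], target)
        if token == "<eos>":                 # decoder signals completion
            break
        target.append(token)                 # WRITE one target token
        if read < len(source_tokens):        # then READ one more source token
            read += 1
    return target
```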
1 code implementation • 12 Mar 2024 • Tian Yu, Shaolei Zhang, Yang Feng
Although large language models (LLMs) have demonstrated impressive text generation capabilities, they are easily misled by untruthful context provided by users or knowledge-augmentation tools, and thereby produce hallucinations.
1 code implementation • 27 Feb 2024 • Shaolei Zhang, Tian Yu, Yang Feng
During inference, by editing the LLM's internal representations in the truthful space, TruthX effectively enhances the truthfulness of LLMs.
Ranked #2 on Question Answering on TruthfulQA
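The general shape of such an intervention can be pictured as a small representation-editing step; the toy sketch below is only a hedged illustration, and `encoder`, `decoder`, and `truthful_direction` are hypothetical stand-ins rather than the paper's actual components.

```python
def truthful_edit(hidden, encoder, decoder, truthful_direction, strength=1.0):
    """Toy representation-editing step: project a hidden state into a
    latent space, nudge it along a learned 'truthful' direction, and
    project it back (hypothetical components, not the TruthX release)."""
    latent = encoder(hidden)                          # into the latent space
    latent = latent + strength * truthful_direction   # steer toward truthfulness
    return decoder(latent)                            # back to the model's space
```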
1 code implementation • 20 Feb 2024 • Shoutao Guo, Shaolei Zhang, Zhengrui Ma, Min Zhang, Yang Feng
We propose SiLLM, which delegates the two sub-tasks to separate agents, thereby incorporating LLM into SiMT.
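In spirit, the two agents interact in a simple read/write loop: a policy agent decides whether to read or write, and the translation agent (an LLM) generates a token when asked. A hedged sketch, with `policy_agent` and `translation_agent` as hypothetical callables rather than the SiLLM interfaces:

```python
def simultaneous_translate(source_stream, policy_agent, translation_agent):
    """Toy two-agent SiMT loop (illustrative only)."""
    source, target = [], []
    for word in source_stream:
        source.append(word)                            # READ one source word
        while policy_agent(source, target) == "WRITE": # policy agent decides
            target.append(translation_agent(source, target))
    while True:                                        # source finished: flush
        token = translation_agent(source, target)
        if token == "<eos>":
            break
        target.append(token)
    return target
```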
no code implementations • NeurIPS 2023 • Shaolei Zhang, Yang Feng
To accomplish this, Seg2Seg introduces a latent segment as the pivot between source and target, and explores all potential source-target mappings via the proposed expectation training, thereby learning the optimal moments to generate.
1 code implementation • 23 Oct 2023 • Zhengrui Ma, Shaolei Zhang, Shoutao Guo, Chenze Shao, Min Zhang, Yang Feng
Simultaneous machine translation (SiMT) models are trained to strike a balance between latency and translation quality.
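The latency side of this balance is commonly quantified with Average Lagging (AL; Ma et al., 2019). A sketch of the metric, where g[t] is the number of source tokens that had been read when the (t+1)-th target token was emitted (0-indexed):

```python
def average_lagging(g, src_len, tgt_len):
    """Average Lagging: mean number of source tokens the model lags
    behind an ideal fully-synchronous translator."""
    gamma = tgt_len / src_len                 # target-to-source length ratio
    # first target position emitted after the full source has been read
    tau = next(t for t, read in enumerate(g) if read >= src_len)
    return sum(g[t] - t / gamma for t in range(tau + 1)) / (tau + 1)

# e.g. wait-2 on a 5-token source with 5 target tokens: g = [2, 3, 4, 5, 5]
# gives AL = ((2-0) + (3-1) + (4-2) + (5-3)) / 4 = 2.0
```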
no code implementations • 20 Oct 2023 • Shoutao Guo, Shaolei Zhang, Yang Feng
Training the model with the ground-truth reference at low latency may introduce forced anticipations, whereas using a reference consistent with the source word order at high latency degrades performance.
1 code implementation • 12 Sep 2023 • Shoutao Guo, Shaolei Zhang, Yang Feng
Simultaneous machine translation (SiMT) outputs translation while reading the source sentence.
1 code implementation • 19 Jun 2023 • Shaolei Zhang, Qingkai Fang, Zhuocheng Zhang, Zhengrui Ma, Yan Zhou, Langlin Huang, Mengyu Bu, Shangtong Gui, Yunji Chen, Xilin Chen, Yang Feng
To minimize human workload, we propose to transfer the capabilities of language generation and instruction following from English to other languages through an interactive translation task.
1 code implementation • 25 May 2023 • Shaolei Zhang, Yang Feng
Therefore, learning to segment the speech input at moments that help the translation model produce high-quality translations is the key to SimulST.
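A simplified, threshold-based view of the problem (not the paper's differentiable segmentation) is that a boundary predictor scores the incoming speech stream and each completed segment is handed to the translation model; `boundary_prob` below is a hypothetical scorer.

```python
def stream_segments(frames, boundary_prob, threshold=0.5):
    """Toy streaming segmenter: emit a segment whenever a hypothetical
    boundary predictor judges the current position to be a good split."""
    segment = []
    for frame in frames:
        segment.append(frame)
        if boundary_prob(segment) > threshold:  # translation-friendly boundary
            yield segment
            segment = []
    if segment:                                 # flush the trailing segment
        yield segment
```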
1 code implementation • 22 May 2023 • Shoutao Guo, Shaolei Zhang, Yang Feng
Simultaneous machine translation (SiMT) starts to output translation while reading the source sentence and needs a precise policy to decide when to output the generated translation.
1 code implementation • 1 Mar 2023 • Shaolei Zhang, Yang Feng
Simultaneous machine translation (SiMT) outputs the target sequence while receiving the source sequence, and hence learning when to start translating each target token is the core challenge of the SiMT task.
no code implementations • 14 Nov 2022 • Baoshun Shi, Ke Jiang, Shaolei Zhang, Qiusheng Lian, Yanwei Qin
Recent deep learning-based methods have achieved promising performance for computed tomography metal artifact reduction (CTMAR).
1 code implementation • 22 Oct 2022 • Shaolei Zhang, Yang Feng
Simultaneous translation (ST) outputs translation while receiving the source inputs, and hence requires a policy to determine whether to translate a target token or wait for the next source token.
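One family of such policies quantifies how much source information each target token has received and writes only once enough has accumulated. The sketch below is a hedged abstraction of that idea, with `transport_weight` a hypothetical estimate of how much source token j contributes to target token t, not the paper's exact formulation.

```python
def ready_to_write(t, num_read, transport_weight, threshold=0.8):
    """Write target token t only once enough source information has
    been transported to it (illustrative abstraction)."""
    received = sum(transport_weight(t, j) for j in range(num_read))
    return received >= threshold
```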
1 code implementation • 21 Oct 2022 • Shoutao Guo, Shaolei Zhang, Yang Feng
Compared to the fixed policy, the adaptive policy achieves better latency-quality tradeoffs by adopting a flexible translation policy.
1 code implementation • 20 Oct 2022 • Shaolei Zhang, Shoutao Guo, Yang Feng
In this paper, we propose a Wait-info Policy to balance source and target at the information level.
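Conceptually, each token is assigned an information amount, and the model writes only while the accumulated source information stays sufficiently ahead of the accumulated target information. A minimal sketch under that reading, with `info` a hypothetical per-token information scorer:

```python
def should_write(source_read, target_written, info, k=2.0):
    """Information-level variant of wait-k (illustrative): write when the
    source's accumulated info leads the target's by at least k units."""
    src_info = sum(info(tok) for tok in source_read)
    tgt_info = sum(info(tok) for tok in target_written)
    return src_info - tgt_info >= k
```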
no code implementations • ACL 2022 • Shaolei Zhang, Yang Feng
Simultaneous machine translation (SiMT) starts translating while receiving the streaming source inputs, and hence the source sentence is always incomplete during translation.
1 code implementation • Findings (ACL) 2022 • Shaolei Zhang, Yang Feng
For SiMT policy, GMA models the aligned source position of each target word, and accordingly waits until its aligned position to start translating.
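The monotonic-alignment idea can be pictured as a Gaussian prior centered on each target word's predicted aligned source position; the sketch below illustrates that picture, with `center` and `sigma` standing in for quantities GMA learns inside the model.

```python
import math

def gaussian_alignment_prior(num_src, center, sigma):
    """Toy Gaussian prior over source positions for one target word
    (illustrative; GMA learns the alignment within the model)."""
    weights = [math.exp(-((j - center) ** 2) / (2 * sigma ** 2))
               for j in range(num_src)]
    total = sum(weights)
    return [w / total for w in weights]       # normalized distribution
```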
1 code implementation • ACL 2022 • Shaolei Zhang, Yang Feng
According to duality constraints, the read/write path in source-to-target and target-to-source SiMT models can be mapped to each other.
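One natural formalization of this mapping (a toy construction, not necessarily the paper's exact one) treats the forward path as a monotone step function and takes its conjugate: if g[j] is the number of source tokens read before writing target token j, the dual path counts how many target tokens precede each source token.

```python
def dual_read_write_path(g, src_len):
    """Conjugate of a source-to-target read/write path g, usable as a
    target-to-source path (toy construction for illustration)."""
    return [sum(1 for gj in g if gj < i) for i in range(1, src_len + 1)]

# e.g. a wait-2 forward path g = [2, 3, 4, 5] over a 5-token source maps
# to [0, 0, 1, 2, 3]: the reverse direction waits for mirrored context.
```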
no code implementations • Findings (EMNLP) 2021 • Shaolei Zhang, Yang Feng
Cross-attention is an important component of neural machine translation (NMT), which in previous methods is always realized as dot-product attention.
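For reference, the standard scaled dot-product formulation that the paper contrasts with is Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, the textbook baseline rather than the proposed method:

```python
import torch
import torch.nn.functional as F

def dot_product_attention(query, key, value):
    """Standard scaled dot-product (cross-)attention."""
    d_k = query.size(-1)
    scores = query @ key.transpose(-2, -1) / d_k ** 0.5  # similarity scores
    weights = F.softmax(scores, dim=-1)                   # attention distribution
    return weights @ value                                # weighted value sum
```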
1 code implementation • EMNLP 2021 • Shaolei Zhang, Yang Feng
Simultaneous machine translation (SiMT) generates translation before reading the entire source sentence, and hence must trade off between translation quality and latency.
no code implementations • 23 Dec 2020 • Shaolei Zhang, Yang Feng, Liangyou Li
Simultaneous translation (ST) starts translating synchronously while reading the source sentence, and is used in many online scenarios.