Search Results for author: Hongkun Hao

Found 9 papers, 2 papers with code

WavLLM: Towards Robust and Adaptive Speech Large Language Model

no code implementations • 31 Mar 2024 • Shujie Hu, Long Zhou, Shujie Liu, Sanyuan Chen, Hongkun Hao, Jing Pan, Xunying Liu, Jinyu Li, Sunit Sivasankaran, Linquan Liu, Furu Wei

In this work, we introduce WavLLM, a robust and adaptive speech large language model with dual encoders and a prompt-aware LoRA weight adapter, optimized by a two-stage curriculum learning approach (a generic LoRA sketch follows this entry).

Language Modelling • Large Language Model
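For readers unfamiliar with LoRA, below is a minimal sketch of a plain LoRA adapter wrapped around a frozen linear layer. It only illustrates the low-rank update W x + (alpha/r) * B A x; the prompt-aware variant described in the paper additionally conditions the adapter on the prompt, which is not shown here, and all module names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)              # freeze the pre-trained weights
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)       # update starts at zero, so behaviour is unchanged at init
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

# Usage: wrap an existing projection; only the A/B matrices receive gradients.
layer = LoRALinear(nn.Linear(1024, 1024))
out = layer(torch.randn(2, 16, 1024))
print(out.shape)  # torch.Size([2, 16, 1024])
```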

Improving Open-Ended Text Generation via Adaptive Decoding

1 code implementation • 28 Feb 2024 • Wenhong Zhu, Hongkun Hao, Zhiwei He, Yiming Ai, Rui Wang

Current language models decode text token by token according to a probability distribution, and determining the appropriate candidate set for the next token is crucial for generation quality (an illustrative sketch of per-step candidate selection follows this entry).

Story Generation
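To make the idea of per-step candidate selection concrete, here is a small sketch that keeps the smallest set of tokens whose cumulative probability exceeds a threshold (nucleus-style truncation). This only illustrates how the candidate set can adapt to the shape of the next-token distribution; it is not the adaptive decoding rule proposed in the paper.

```python
import torch

def adaptive_candidates(logits: torch.Tensor, p: float = 0.9) -> torch.Tensor:
    """Return indices of the smallest candidate set whose cumulative probability exceeds p.

    Illustrative only: the candidate set grows for flat distributions and shrinks
    for peaked ones, but this is NOT the paper's adaptive decoding algorithm.
    """
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # keep a token if the mass accumulated before it is still below the threshold
    keep = cumulative - sorted_probs < p
    keep[..., 0] = True                      # always keep at least one candidate
    return sorted_ids[keep]

# A peaked distribution keeps few candidates, a flat one keeps many.
peaked = torch.tensor([5.0, 1.0, 0.5, 0.1, 0.1])
flat = torch.tensor([1.0, 1.0, 0.9, 0.9, 0.8])
print(adaptive_candidates(peaked, p=0.9))   # tensor([0])
print(adaptive_candidates(flat, p=0.9))     # tensor([0, 1, 2, 3, 4])
```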

Is Cognition and Action Consistent or Not: Investigating Large Language Model's Personality

no code implementations • 22 Feb 2024 • Yiming Ai, Zhiwei He, Ziyin Zhang, Wenhong Zhu, Hongkun Hao, Kai Yu, Lingjun Chen, Rui Wang

In this study, we investigate the reliability of Large Language Models (LLMs) in professing human-like personality traits through responses to personality questionnaires.

Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models

no code implementations • 21 Feb 2024 • Zhiwei He, Binglin Zhou, Hongkun Hao, Aiwei Liu, Xing Wang, Zhaopeng Tu, Zhuosheng Zhang, Rui Wang

Furthermore, we analyze two key factors that contribute to the cross-lingual consistency in text watermarking and propose a defense method that increases the AUC from 0.67 to 0.88 under CWRA.


Q-Refine: A Perceptual Quality Refiner for AI-Generated Image

no code implementations • 2 Jan 2024 • Chunyi Li, HaoNing Wu, ZiCheng Zhang, Hongkun Hao, Kaiwei Zhang, Lei Bai, Xiaohong Liu, Xiongkuo Min, Weisi Lin, Guangtao Zhai

With the rapid evolution of Text-to-Image (T2I) models in recent years, their unsatisfactory generation results have become a challenge.

Image Quality Assessment

Boosting Large Language Model for Speech Synthesis: An Empirical Study

no code implementations • 30 Dec 2023 • Hongkun Hao, Long Zhou, Shujie Liu, Jinyu Li, Shujie Hu, Rui Wang, Furu Wei

In this paper, we conduct a comprehensive empirical exploration of boosting LLMs with the ability to generate speech, by combining the pre-trained LLMs LLaMA/OPT with the text-to-speech synthesis model VALL-E. We compare three integration methods between LLMs and speech synthesis models: directly fine-tuned LLMs, superposed layers of LLMs and VALL-E, and coupled LLMs and VALL-E using the LLM as a powerful text encoder (a toy sketch of the coupled setup follows this entry).

Language Modelling • Large Language Model • +2
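The toy sketch below illustrates only the general shape of the "coupled" integration: frozen LLM hidden states serve as the text-encoder memory for a small decoder that predicts discrete audio-codec tokens, VALL-E style. It is not the paper's implementation; the class, dimensions, and layer counts are assumptions, and random tensors stand in for real LLM states and codec tokens.

```python
import torch
import torch.nn as nn

class CoupledTTS(nn.Module):
    """Toy 'coupled' setup: LLM hidden states condition a codec-token decoder."""

    def __init__(self, llm_dim=4096, codec_vocab=1024, dec_dim=1024):
        super().__init__()
        self.bridge = nn.Linear(llm_dim, dec_dim)        # project LLM states into decoder space
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dec_dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.codec_emb = nn.Embedding(codec_vocab, dec_dim)
        self.head = nn.Linear(dec_dim, codec_vocab)      # predicts discrete audio codec tokens

    def forward(self, llm_hidden, codec_tokens):
        memory = self.bridge(llm_hidden)                 # (B, T_text, dec_dim)
        tgt = self.codec_emb(codec_tokens)               # (B, T_audio, dec_dim)
        return self.head(self.decoder(tgt, memory))      # logits over the codec vocabulary

# Usage with random tensors standing in for LLM hidden states and codec tokens.
llm_hidden = torch.randn(2, 16, 4096)
codec_tokens = torch.randint(0, 1024, (2, 32))
logits = CoupledTTS()(llm_hidden, codec_tokens)
print(logits.shape)  # torch.Size([2, 32, 1024])
```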

Penalty Decoding: Well Suppress the Self-Reinforcement Effect in Open-Ended Text Generation

1 code implementation • 23 Oct 2023 • Wenhong Zhu, Hongkun Hao, Rui Wang

This paper investigates the self-reinforcement effect in text generation and the effectiveness of a repetition penalty in mitigating it (a generic repetition-penalty sketch follows this entry).

Text Generation
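For context, here is a minimal sketch of the kind of repetition penalty commonly applied at decoding time (CTRL-style penalized sampling, where logits of already-generated tokens are scaled down). The paper's penalty decoding variants are not reproduced here; this only shows the baseline mechanism the paper builds on.

```python
import torch

def apply_repetition_penalty(logits: torch.Tensor,
                             generated_ids: torch.Tensor,
                             penalty: float = 1.2) -> torch.Tensor:
    """Penalize tokens that already appear in the generated prefix.

    Positive logits are divided by `penalty`, negative logits are multiplied by it,
    so previously emitted tokens become less likely to be sampled again.
    This is the common CTRL-style penalty, not the paper's exact formulation.
    """
    logits = logits.clone()
    prev = logits[generated_ids]
    logits[generated_ids] = torch.where(prev > 0, prev / penalty, prev * penalty)
    return logits

# Usage on a toy vocabulary of 6 tokens, where tokens 2 and 4 were already generated.
logits = torch.tensor([1.0, -0.5, 2.0, 0.3, -1.0, 0.7])
print(apply_repetition_penalty(logits, torch.tensor([2, 4])))
```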

Rethinking Translation Memory Augmented Neural Machine Translation

no code implementations • 12 Jun 2023 • Hongkun Hao, Guoping Huang, Lemao Liu, Zhirui Zhang, Shuming Shi, Rui Wang

The finding demonstrates that TM-augmented NMT is better at fitting the data (i.e., lower bias) but more sensitive to fluctuations in the training data (i.e., higher variance), which explains a recently reported contradictory phenomenon on the same translation task: TM-augmented NMT substantially advances vanilla NMT in the high-resource scenario, whereas it fails in the low-resource scenario (the standard bias-variance identity behind this analysis is recalled after this entry).

Machine Translation • NMT • +2
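For reference, the bias and variance terms mentioned above come from the standard bias-variance decomposition of expected squared error; this is a textbook identity, not a formula taken from the paper.

```latex
% Standard bias-variance decomposition (textbook identity, not from the paper):
% expected squared error of a model \hat{f}_D trained on dataset D, at input x.
\mathbb{E}_{D,\varepsilon}\!\big[(y - \hat{f}_D(x))^2\big]
  = \underbrace{\big(\mathbb{E}_D[\hat{f}_D(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\!\big[(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)])^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}}
```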
