Search Results for author: Haoqin Tu

Found 8 papers, 7 papers with code

Tuning LayerNorm in Attention: Towards Efficient Multi-Modal LLM Finetuning

no code implementations • 18 Dec 2023 • Bingchen Zhao, Haoqin Tu, Chen Wei, Jieru Mei, Cihang Xie

This paper introduces an efficient strategy to transform Large Language Models (LLMs) into Multi-Modal Large Language Models (MLLMs).

Domain Adaptation
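The entry above describes a parameter-efficient recipe: freeze the pretrained backbone and update only LayerNorm weights. Below is a minimal sketch of that idea, assuming a standard Hugging Face transformer; the paper targets the LayerNorm modules inside attention blocks specifically, whereas this sketch unfreezes every LayerNorm for brevity, and "gpt2" is a placeholder backbone rather than the model used in the paper.

```python
# A minimal sketch of LayerNorm-only finetuning (assumptions noted above).
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder backbone

# Freeze the entire backbone...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze only LayerNorm parameters. (The paper restricts this to
# LayerNorm inside attention blocks; unfreezing all of them keeps the
# sketch short.)
for module in model.modules():
    if isinstance(module, nn.LayerNorm):
        for param in module.parameters():
            param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.4f}%)")
```

Printing the trainable-parameter ratio makes the efficiency claim concrete: LayerNorm weights are a tiny fraction of the full parameter count.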

How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs

1 code implementation • 27 Nov 2023 • Haoqin Tu, Chenhang Cui, Zijun Wang, Yiyang Zhou, Bingchen Zhao, Junlin Han, Wangchunshu Zhou, Huaxiu Yao, Cihang Xie

Different from prior studies, we shift our focus from evaluating standard performance to introducing a comprehensive safety evaluation suite, covering both out-of-distribution (OOD) generalization and adversarial robustness.

Adversarial Robustness • Visual Question Answering (VQA) • +1

Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics

1 code implementation • 13 Sep 2023 • Haoqin Tu, Bingchen Zhao, Chen Wei, Cihang Xie

Multi-modal large language models (MLLMs) are trained on top of large language models (LLMs), with an enhanced capability to comprehend multi-modal inputs and generate textual responses.

Ethics

ZeroGen: Zero-shot Multimodal Controllable Text Generation with Multiple Oracles

1 code implementation • 29 Jun 2023 • Haoqin Tu, Bowen Yang, Xianfeng Zhao

Automatically generating textual content with desired attributes is an ambitious task that people have long pursued.

News Generation • Sentence

ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue

1 code implementation • 23 May 2023 • Haoqin Tu, Yitong Li, Fei Mi, Zhongliang Yang

To demonstrate the superiority and universality of the provided visual knowledge, we propose a simple but effective framework, ReSee, which adds visual representations to vanilla dialogue models via modality concatenation, as sketched below.
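A minimal sketch of that modality concatenation follows; the dimensions, the linear projection, and the choice to prepend visual tokens are illustrative assumptions, not ReSee's exact design (which is in the paper's repository).

```python
# A minimal sketch of modality concatenation for a dialogue model.
import torch
import torch.nn as nn

text_dim, vis_dim = 768, 512
proj = nn.Linear(vis_dim, text_dim)  # map visual features into the text embedding space

token_embeds = torch.randn(1, 20, text_dim)  # embedded dialogue-history tokens
visual_feats = torch.randn(1, 4, vis_dim)    # fine-grained visual features (e.g. region crops)

# Prepend the projected visual "tokens" to the text sequence, then feed the
# fused sequence to an unmodified (vanilla) dialogue transformer.
fused = torch.cat([proj(visual_feats), token_embeds], dim=1)
print(fused.shape)  # torch.Size([1, 24, 768])
```

The appeal of this design is that the dialogue backbone itself needs no architectural change: visual knowledge enters purely through extra positions in the input sequence.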

An Overview on Controllable Text Generation via Variational Auto-Encoders

1 code implementation • 15 Nov 2022 • Haoqin Tu, Yitong Li

Recent advances in neural generative modeling have reignited hopes of building computer systems that can converse with humans and understand natural language.

Text Generation

PCAE: A Framework of Plug-in Conditional Auto-Encoder for Controllable Text Generation

1 code implementation • 7 Oct 2022 • Haoqin Tu, Zhongliang Yang, Jinshuai Yang, Siyu Zhang, Yongfeng Huang

Visualization of the local latent prior confirms the proposed model's main contribution in the hidden space.

Text Generation

AdaVAE: Exploring Adaptive GPT-2s in Variational Auto-Encoders for Language Modeling

1 code implementation • 12 May 2022 • Haoqin Tu, Zhongliang Yang, Jinshuai Yang, Yongfeng Huang

The Variational Auto-Encoder (VAE) has become the de facto paradigm for achieving representation learning and generation for natural language at the same time.

Conditional Text Generation • Language Modelling • +1
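For readers unfamiliar with how a VAE combines representation learning and generation, here is a minimal sketch of the standard text-VAE objective (the ELBO with a Gaussian prior); this is generic textbook material, not AdaVAE's GPT-2-based architecture, and all names are illustrative.

```python
# A minimal sketch of the VAE objective for text, under the stated assumptions.
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps sampling differentiable w.r.t. the encoder.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def vae_loss(recon_logits, targets, mu, logvar):
    # Reconstruction term: token-level cross-entropy of the decoded text.
    recon = F.cross_entropy(
        recon_logits.reshape(-1, recon_logits.size(-1)), targets.reshape(-1)
    )
    # KL term: pulls the posterior q(z|x) toward the prior N(0, I).
    kl = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    )
    return recon + kl
```

The latent code z serves as the learned representation, while the decoder's reconstruction provides generation, which is why the two goals come "at the same time" in the VAE framing.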
