no code implementations • 14 Mar 2024 • Sipeng Zheng, Bohan Zhou, Yicheng Feng, Ye Wang, Zongqing Lu
In this paper, we propose UniCode, a novel approach within the domain of multimodal large language models (MLLMs) that learns a unified codebook to efficiently tokenize visual, text, and potentially other types of signals.
no code implementations • 20 Oct 2023 • Sipeng Zheng, Jiazheng Liu, Yicheng Feng, Zongqing Lu
Steve-Eye integrates the LLM with a visual encoder, enabling it to process visual-text inputs and generate multimodal feedback.
no code implementations • 13 Oct 2023 • Yicheng Feng, Yuxuan Wang, Jiazheng Liu, Sipeng Zheng, Zongqing Lu
Recently, various studies have leveraged Large Language Models (LLMs) to aid decision-making and planning in environments, attempting to align the LLMs' knowledge with the conditions of the world.
no code implementations • 16 Feb 2023 • Yicheng Feng, Boshi An, Zongqing Lu
The study of emergent communication has long been dedicated to advancing interactive artificial intelligence.
no code implementations • 29 Sep 2021 • Yicheng Feng, Zongqing Lu
We find that symbolic mapping learned in simple referential games can notably promote language learning in difficult tasks.