Search Results for author: Yicheng Feng

Found 5 papers, 0 papers with code

UniCode: Learning a Unified Codebook for Multimodal Large Language Models

no code implementations • 14 Mar 2024 • Sipeng Zheng, Bohan Zhou, Yicheng Feng, Ye Wang, Zongqing Lu

In this paper, we propose UniCode, a novel approach within the domain of multimodal large language models (MLLMs) that learns a unified codebook to efficiently tokenize visual, text, and potentially other types of signals.

Quantization • Visual Question Answering (VQA)

Steve-Eye: Equipping LLM-based Embodied Agents with Visual Perception in Open Worlds

no code implementations • 20 Oct 2023 • Sipeng Zheng, Jiazheng Liu, Yicheng Feng, Zongqing Lu

Steve-Eye integrates the LLM with a visual encoder which enables it to process visual-text inputs and generate multimodal feedback.

LLaMA Rider: Spurring Large Language Models to Explore the Open World

no code implementations • 13 Oct 2023 • Yicheng Feng, Yuxuan Wang, Jiazheng Liu, Sipeng Zheng, Zongqing Lu

Recently, various studies have leveraged Large Language Models (LLMs) for decision-making and planning in environments, attempting to align the LLMs' knowledge with the conditions of the world.

Decision Making

Learning Multi-Object Positional Relationships via Emergent Communication

no code implementations • 16 Feb 2023 • Yicheng Feng, Boshi An, Zongqing Lu

The study of emergent communication has long been dedicated to developing interactive artificial intelligence.

Object

Multi-Agent Language Learning: Symbolic Mapping

no code implementations • 29 Sep 2021 • Yicheng Feng, Zongqing Lu

We find that symbolic mapping learned in simple referential games can notably promote language learning in difficult tasks.
