Search Results for author: Ronghao Dang

Found 9 papers, 4 papers with code

Vision-and-Language Navigation via Causal Learning

1 code implementation • 16 Apr 2024 • Liuyi Wang, Zongtao He, Ronghao Dang, Mengjiao Shen, Chengju Liu, Qijun Chen

In the pursuit of robust and generalizable environment perception and language understanding, the ubiquitous challenge of dataset bias continues to plague vision-and-language navigation (VLN) agents, hindering their performance in unseen environments.

Causal Inference • Contrastive Learning • +1

Causality-based Cross-Modal Representation Learning for Vision-and-Language Navigation

no code implementations • 6 Mar 2024 • Liuyi Wang, Zongtao He, Ronghao Dang, Huiyi Chen, Chengju Liu, Qijun Chen

Vision-and-Language Navigation (VLN) has gained significant research interest in recent years due to its potential applications in real-world scenarios.

Representation Learning • Vision and Language Navigation

CLIPose: Category-Level Object Pose Estimation with Pre-trained Vision-Language Knowledge

no code implementations • 24 Feb 2024 • Xiao Lin, Minghao Zhu, Ronghao Dang, Guangliang Zhou, Shaolong Shu, Feng Lin, Chengju Liu, Qijun Chen

Inspired by this motivation, we propose CLIPose, a novel 6D pose framework that employs a pre-trained vision-language model to learn object category information more effectively, fully leveraging the abundant semantic knowledge in the image and text modalities.

Contrastive Learning • Language Modelling • +2
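The abstract above describes aligning object features with category-level text knowledge from a pre-trained vision-language model. The snippet below is a minimal, hypothetical sketch of that general idea (an InfoNCE-style contrastive head matching object features to category text embeddings); the encoders, dimensions, and class names are illustrative placeholders, not CLIPose's actual implementation.

# Hedged sketch (not the paper's code): contrastive alignment of object
# features with category text embeddings, in the spirit of the CLIPose
# abstract above. Encoders and dimensions are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CategoryContrastiveHead(nn.Module):
    def __init__(self, obj_dim=256, text_dim=512, embed_dim=128, temperature=0.07):
        super().__init__()
        self.obj_proj = nn.Linear(obj_dim, embed_dim)    # projects visual/point-cloud features
        self.text_proj = nn.Linear(text_dim, embed_dim)  # projects category text features
        self.temperature = temperature

    def forward(self, obj_feats, text_feats, category_ids):
        # obj_feats:  (B, obj_dim)  per-object features from an image/point-cloud encoder
        # text_feats: (C, text_dim) one embedding per category name (e.g., from a text encoder)
        # category_ids: (B,) ground-truth category index of each object
        z_obj = F.normalize(self.obj_proj(obj_feats), dim=-1)
        z_txt = F.normalize(self.text_proj(text_feats), dim=-1)
        logits = z_obj @ z_txt.t() / self.temperature     # (B, C) similarity to every category
        return F.cross_entropy(logits, category_ids)      # InfoNCE-style classification loss

# Toy usage with random tensors standing in for real encoder outputs.
head = CategoryContrastiveHead()
loss = head(torch.randn(8, 256), torch.randn(10, 512), torch.randint(0, 10, (8,)))
loss.backward()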

InstructDET: Diversifying Referring Object Detection with Generalized Instructions

1 code implementation • 8 Oct 2023 • Ronghao Dang, Jiangyan Feng, Haodong Zhang, Chongjian Ge, Lin Song, Lijun Gong, Chengju Liu, Qijun Chen, Feng Zhu, Rui Zhao, Yibing Song

In order to encompass common detection expressions, we involve an emerging vision-language model (VLM) and a large language model (LLM) to generate instructions guided by text prompts and object bounding boxes (bbxs), as the generalizations of foundation models are effective at producing human-like expressions (e.g., describing object property, category, and relationship).

Language Modelling • Large Language Model • +4
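The InstructDET abstract describes generating diversified referring instructions from object bounding boxes with a VLM and an LLM. Below is a hedged sketch of such a data-generation loop, assuming hypothetical vlm_describe and llm_rewrite callables standing in for whichever foundation models are used; it is not the released pipeline.

# Hedged sketch (not the released InstructDET pipeline): use a VLM to
# describe a boxed object, then an LLM to diversify that description into
# multiple referring instructions. `vlm_describe` and `llm_rewrite` are
# hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ReferringSample:
    image_path: str
    bbox: tuple          # (x1, y1, x2, y2) in pixels
    instructions: List[str]

def build_samples(image_path: str,
                  bboxes: List[tuple],
                  vlm_describe: Callable[[str, tuple], str],
                  llm_rewrite: Callable[[str], List[str]]) -> List[ReferringSample]:
    samples = []
    for bbox in bboxes:
        # 1) Ask the VLM for a grounded description of the boxed object.
        caption = vlm_describe(image_path, bbox)
        # 2) Ask the LLM to rewrite it into diverse human-like instructions
        #    (property-, category-, and relationship-style expressions).
        variants = llm_rewrite(caption)
        samples.append(ReferringSample(image_path, bbox, [caption] + variants))
    return samples

# Toy usage with stub models in place of real VLM/LLM calls.
samples = build_samples(
    "street.jpg",
    [(10, 20, 120, 240)],
    vlm_describe=lambda img, box: "the person in a red coat",
    llm_rewrite=lambda cap: [f"find {cap}", f"point to {cap} on the left"],
)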

Fine-Grained Spatiotemporal Motion Alignment for Contrastive Video Representation Learning

1 code implementation • 1 Sep 2023 • Minghao Zhu, Xiao Lin, Ronghao Dang, Chengju Liu, Qijun Chen

As the most essential property in a video, motion information is critical to a robust and generalized video representation.

Contrastive Learning • Representation Learning
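The paper above builds on contrastive video representation learning. As context, the snippet below sketches only the standard clip-level InfoNCE objective such methods start from, with random features as stand-ins for real clip encodings; the paper's fine-grained spatiotemporal motion alignment itself is not reproduced here.

# Hedged sketch: clip-level InfoNCE between two augmented clips of the same videos.
import torch
import torch.nn.functional as F

def clip_info_nce(feats_a: torch.Tensor, feats_b: torch.Tensor, temperature: float = 0.1):
    # feats_a, feats_b: (B, D) features of two augmented clips from the same videos;
    # row i of feats_a and row i of feats_b form the positive pair.
    a = F.normalize(feats_a, dim=-1)
    b = F.normalize(feats_b, dim=-1)
    logits = a @ b.t() / temperature                    # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric loss: each clip must retrieve its counterpart within the batch.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = clip_info_nce(torch.randn(16, 128), torch.randn(16, 128))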

A Dual Semantic-Aware Recurrent Global-Adaptive Network For Vision-and-Language Navigation

1 code implementation • 5 May 2023 • Liuyi Wang, Zongtao He, Jiagui Tang, Ronghao Dang, Naijia Wang, Chengju Liu, Qijun Chen

Vision-and-Language Navigation (VLN) is a realistic but challenging task that requires an agent to locate the target region using verbal and visual cues.

Vision and Language Navigation

Multiple Thinking Achieving Meta-Ability Decoupling for Object Navigation

no code implementations • 3 Feb 2023 • Ronghao Dang, Lu Chen, Liuyi Wang, Zongtao He, Chengju Liu, Qijun Chen

We propose a meta-ability decoupling (MAD) paradigm, which brings together various object navigation methods within a unified architecture, allowing them to enhance each other and evolve together.

Object

Unbiased Directed Object Attention Graph for Object Navigation

no code implementations • 9 Apr 2022 • Ronghao Dang, Zhuofan Shi, Liuyi Wang, Zongtao He, Chengju Liu, Qijun Chen

Thus, in this paper, we propose a directed object attention (DOA) graph to guide the agent in explicitly learning the attention relationships between objects, thereby reducing the object attention bias.

Object
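The abstract above proposes a directed object attention (DOA) graph so the agent explicitly learns asymmetric attention relationships between objects. The snippet below is an illustrative, hypothetical sketch of directed (non-symmetric) attention over detected-object features; the module name and dimensions are placeholders rather than the paper's design.

# Hedged sketch (not the paper's DOA module): directed, asymmetric attention
# between detected-object features, so "object i attends to object j" can
# differ from "object j attends to object i".
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectedObjectAttention(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.query = nn.Linear(dim, dim)  # separate query/key projections make
        self.key = nn.Linear(dim, dim)    # the attention matrix directed (non-symmetric)
        self.value = nn.Linear(dim, dim)

    def forward(self, obj_feats: torch.Tensor) -> torch.Tensor:
        # obj_feats: (N, dim) features of N detected objects in the current view
        q, k, v = self.query(obj_feats), self.key(obj_feats), self.value(obj_feats)
        attn = F.softmax(q @ k.t() / k.size(-1) ** 0.5, dim=-1)  # (N, N) directed weights
        return obj_feats + attn @ v   # residual update of each object by the objects it attends to

feats = torch.randn(12, 64)          # e.g., 12 detected objects
updated = DirectedObjectAttention()(feats)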
