Search Results for author: Pengguang Chen

Found 16 papers, 10 papers with code

VLPose: Bridging the Domain Gap in Pose Estimation with Language-Vision Tuning

no code implementations · 22 Feb 2024 · Jingyao Li, Pengguang Chen, Xuan Ju, Hong Xu, Jiaya Jia

Our research aims to bridge the domain gap between natural and artificial scenarios with efficient tuning strategies.

Pose Estimation

MOODv2: Masked Image Modeling for Out-of-Distribution Detection

no code implementations · 5 Jan 2024 · Jingyao Li, Pengguang Chen, Shaozuo Yu, Shu Liu, Jiaya Jia

The crux of effective out-of-distribution (OOD) detection lies in acquiring a robust in-distribution (ID) representation, distinct from OOD samples.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection

MR-GSM8K: A Meta-Reasoning Revolution in Large Language Model Evaluation

2 code implementations · 28 Dec 2023 · Zhongshen Zeng, Pengguang Chen, Shu Liu, Haiyun Jiang, Jiaya Jia

In this work, we introduce a novel evaluation paradigm for Large Language Models, one that challenges them to engage in meta-reasoning.

GSM8K · Language Modelling +2

BAL: Balancing Diversity and Novelty for Active Learning

1 code implementation · 26 Dec 2023 · Jingyao Li, Pengguang Chen, Shaozuo Yu, Shu Liu, Jiaya Jia

Experimental results demonstrate that, when labeling 80% of the samples, the performance of the current SOTA method declines by 0.74%, whereas our proposed BAL achieves performance comparable to the full dataset.

Active Learning · Self-Supervised Learning
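The blurb above contrasts BAL's diversity/novelty trade-off against labeling the full dataset. As a rough illustration of what balancing those two signals can look like, here is a generic greedy acquisition sketch, not BAL's actual scoring: `alpha` weights model uncertainty (novelty) against distance to the already-selected pool (diversity); all names here are hypothetical.

```python
import numpy as np

def select_batch(feats, labeled_idx, uncertainty, k, alpha=0.5):
    """Greedily pick k unlabeled samples by a mixed score:
    alpha * uncertainty (novelty) + (1 - alpha) * distance to the
    already-chosen pool (diversity)."""
    chosen = list(labeled_idx)
    picked = []
    for _ in range(k):
        # diversity term: distance from each candidate to its nearest chosen sample
        dist = np.array([min(np.linalg.norm(feats[i] - feats[j]) for j in chosen)
                         for i in range(len(feats))])
        score = alpha * np.asarray(uncertainty, dtype=float) + (1 - alpha) * dist
        score[chosen] = -np.inf          # never re-pick labeled/selected samples
        best = int(np.argmax(score))
        picked.append(best)
        chosen.append(best)
    return picked
```

With `alpha=0` this degenerates to pure farthest-point (diversity-only) sampling; with `alpha=1` it is pure uncertainty sampling.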

MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks

1 code implementation · 26 Dec 2023 · Jingyao Li, Pengguang Chen, Jiaya Jia

Large Language Models (LLMs) have showcased impressive capabilities in handling straightforward programming tasks.

Ranked #1 on Code Generation on CodeContests (Test Set pass@1 metric)

Code Generation

Dual-Balancing for Multi-Task Learning

1 code implementation · 23 Aug 2023 · Baijiong Lin, Weisen Jiang, Feiyang Ye, Yu Zhang, Pengguang Chen, Ying-Cong Chen, Shu Liu, James T. Kwok

Multi-task learning (MTL), a learning paradigm to learn multiple related tasks simultaneously, has achieved great success in various fields.

Multi-Task Learning
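A core difficulty the dual-balancing work addresses is that tasks in MTL can have very different loss and gradient scales. As a minimal sketch of one common remedy (gradient-magnitude balancing in general, not this paper's specific algorithm), the helper below rescales each task's gradient to a shared norm before averaging; the function name is hypothetical.

```python
import numpy as np

def balanced_update(task_grads):
    """Rescale every task's gradient to a common L2 norm (the largest
    across tasks) before averaging, so no single task dominates the
    shared parameters by sheer gradient magnitude."""
    norms = [np.linalg.norm(g) for g in task_grads]
    target = max(norms)
    scaled = [g * (target / (n + 1e-12)) for g, n in zip(task_grads, norms)]
    return sum(scaled) / len(scaled)
```

After rescaling, only the gradient directions differ across tasks, so the averaged update is no longer dominated by whichever task happens to produce the largest raw loss.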

TagCLIP: Improving Discrimination Ability of Open-Vocabulary Semantic Segmentation

no code implementations · 15 Apr 2023 · Jingyao Li, Pengguang Chen, Shengju Qian, Jiaya Jia

However, existing models easily misidentify input pixels from unseen classes, thus confusing novel classes with semantically-similar ones.

Language Modelling · Open Vocabulary Semantic Segmentation +2

Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need

1 code implementation · CVPR 2023 · Jingyao Li, Pengguang Chen, Shaozuo Yu, Zexin He, Shu Liu, Jiaya Jia

The core of out-of-distribution (OOD) detection is to learn the in-distribution (ID) representation, which is distinguishable from OOD samples.

Out-of-Distribution Detection
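The abstract above frames OOD detection as learning an ID representation that is distinguishable from OOD samples. A common way to turn such a representation into a detection score (a generic sketch, not this paper's exact metric) is a feature-distance test against the ID class means; `ood_score` and its inputs are illustrative names.

```python
import numpy as np

def ood_score(feature, id_class_means):
    """1 minus the max cosine similarity between a test feature and the
    per-class mean features of the ID training set; higher means more
    likely out-of-distribution."""
    f = feature / np.linalg.norm(feature)
    m = id_class_means / np.linalg.norm(id_class_means, axis=1, keepdims=True)
    return 1.0 - float((m @ f).max())
```

Thresholding this score then separates ID inputs (close to some class mean) from OOD inputs (far from all of them); the quality of the score rests entirely on how well the pretrained features cluster ID data.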

SEA: Bridging the Gap Between One- and Two-stage Detector Distillation via SEmantic-aware Alignment

no code implementations · 2 Mar 2022 · Yixin Chen, Zhuotao Tian, Pengguang Chen, Shu Liu, Jiaya Jia

We revisit the one- and two-stage detector distillation tasks and present a simple and efficient semantic-aware framework to fill the gap between them.

Instance Segmentation · object-detection +2

Deep Structured Instance Graph for Distilling Object Detectors

1 code implementation · ICCV 2021 · Yixin Chen, Pengguang Chen, Shu Liu, LiWei Wang, Jiaya Jia

Effectively structuring deep knowledge plays a pivotal role in transfer from teacher to student, especially in semantic vision tasks.

Instance Segmentation · Knowledge Distillation +5

Exploring and Improving Mobile Level Vision Transformers

no code implementations · 30 Aug 2021 · Pengguang Chen, Yixin Chen, Shu Liu, MingChang Yang, Jiaya Jia

We analyze the reason behind this phenomenon, and propose a novel irregular patch embedding module and adaptive patch fusion module to improve the performance.

Distilling Knowledge via Knowledge Review

7 code implementations · CVPR 2021 · Pengguang Chen, Shu Liu, Hengshuang Zhao, Jiaya Jia

Knowledge distillation transfers knowledge from the teacher network to the student one, with the goal of greatly improving the performance of the student network.

Instance Segmentation · Knowledge Distillation +3
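The abstract above describes the standard teacher-to-student transfer objective. For context, here is the classic soft-label distillation loss in NumPy (Hinton-style KD, shown only as background; the paper's contribution is a knowledge-review mechanism over multi-level features, which this sketch does not implement):

```python
import numpy as np

def softmax(logits, T):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in classic soft-label distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

The temperature `T` softens both distributions so the student also learns from the teacher's relative confidences over wrong classes, not just the argmax.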

GridMask Data Augmentation

7 code implementations · 13 Jan 2020 · Pengguang Chen, Shu Liu, Hengshuang Zhao, Xingquan Wang, Jiaya Jia

Then we show the limitations of existing information-dropping algorithms and propose our structured method, which is simple yet very effective.

Data Augmentation · object-detection +4
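The core idea of GridMask's structured information dropping can be sketched in a few lines of NumPy: zero out one square per cell of a regular grid. This is a simplified illustration with fixed parameters; the released implementation also randomizes quantities such as the grid period and offsets.

```python
import numpy as np

def gridmask(image, d=32, ratio=0.5, fill=0):
    """Zero out one square of side d * ratio in every d x d grid cell,
    deleting information in a uniform, structured pattern."""
    h, w = image.shape[:2]
    drop = int(d * ratio)                    # side of the dropped square per cell
    mask = np.ones((h, w), dtype=bool)
    for y in range(0, h, d):
        for x in range(0, w, d):
            mask[y:y + drop, x:x + drop] = False
    out = image.copy()
    out[~mask] = fill
    return out
```

Because the dropped squares are spread evenly rather than concentrated in one region (as in random-erasing-style methods), the augmentation removes information everywhere without ever occluding an entire object.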
