Search Results for author: Junkun Yuan

Found 16 papers, 6 papers with code

HAP: Structure-Aware Masked Image Modeling for Human-Centric Perception

1 code implementation · NeurIPS 2023 · Junkun Yuan, Xinyu Zhang, Hao Zhou, Jian Wang, Zhongwei Qiu, Zhiyin Shao, Shaofeng Zhang, Sifan Long, Kun Kuang, Kun Yao, Junyu Han, Errui Ding, Lanfen Lin, Fei Wu, Jingdong Wang

To further capture human characteristics, we propose a structure-invariant alignment loss that enforces differently masked views of the same image, guided by the human part prior, to be closely aligned.

Tasks: 2D Pose Estimation, Attribute (+3 more)
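A generic sketch of such a view-alignment loss — negative mean cosine similarity between the features of two masked views. This is an illustrative assumption, not the exact HAP objective, and the human-part-guided masking itself is not modelled:

```python
import numpy as np

def alignment_loss(z1, z2, eps=1e-8):
    """Negative mean cosine similarity between the features of two
    differently masked views of the same images (rows correspond).
    Minimising it pulls the two views' representations together."""
    z1 = z1 / (np.linalg.norm(z1, axis=1, keepdims=True) + eps)
    z2 = z2 / (np.linalg.norm(z2, axis=1, keepdims=True) + eps)
    return float(-np.mean(np.sum(z1 * z2, axis=1)))
```

Identical views reach the minimum of roughly -1, while opposed features give a loss near +1.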

Understanding Prompt Tuning for V-L Models Through the Lens of Neural Collapse

no code implementations · 28 Jun 2023 · Didi Zhu, Zexi Li, Min Zhang, Junkun Yuan, Yunfeng Shao, Jiashuo Liu, Kun Kuang, Yinchuan Li, Chao Wu

We find that the NC optimality of text-to-image representations correlates positively with downstream generalizability, an effect that is more pronounced under class-imbalance settings.
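One common way to quantify neural collapse is the ratio of within-class to between-class feature variability; values near zero mean samples have collapsed onto their class means. This is a generic sketch, not necessarily the exact metric used in the paper:

```python
import numpy as np

def nc_ratio(features, labels):
    """Within-class feature variability divided by between-class
    variability; values near zero indicate (near-)neural collapse,
    where samples concentrate tightly around their class means."""
    features = np.asarray(features, float)
    labels = np.asarray(labels)
    global_mean = features.mean(axis=0)
    within = between = 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        mu = fc.mean(axis=0)
        within += np.mean(np.sum((fc - mu) ** 2, axis=1))
        between += np.sum((mu - global_mean) ** 2)
    return within / (between + 1e-12)
```

Perfectly collapsed features give a ratio of zero; adding within-class noise raises it.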

Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization

no code implementations · 25 May 2023 · Yunze Tong, Junkun Yuan, Min Zhang, Didi Zhu, Keli Zhang, Fei Wu, Kun Kuang

With contrastive learning, we propose a learning-potential-guided metric for domain heterogeneity that promotes the learning of variant features.

Tasks: Contrastive Learning, Domain Generalization
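The contrastive building block behind such a metric can be sketched with a standard InfoNCE-style loss; the learning-potential guidance itself is not modelled here, and the temperature `tau` is an illustrative choice:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one anchor: low when the anchor is close to
    its positive and far from all negatives (cosine similarity with
    temperature tau)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    sims = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits = sims / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))              # positive sits at index 0
```

The loss grows as negatives become more similar to the anchor, which is what makes it usable as a similarity-sensitive signal.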

Universal Domain Adaptation via Compressive Attention Matching

no code implementations · ICCV 2023 · Didi Zhu, Yinchuan Li, Junkun Yuan, Zexi Li, Kun Kuang, Chao Wu

To address this issue, we propose a Universal Attention Matching (UniAM) framework that exploits the self-attention mechanism in vision transformers to capture the crucial object information.

Tasks: Universal Domain Adaptation
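A minimal way to compare a vision transformer's attention over the same token positions for source and target samples is a symmetric KL divergence between the normalised maps. This is a generic attention-comparison sketch, not UniAM's compressive matching:

```python
import numpy as np

def attention_gap(att_src, att_tgt, eps=1e-8):
    """Symmetrised KL divergence between two attention maps after
    normalising each into a probability distribution over tokens."""
    p = np.asarray(att_src, float)
    p = p / (p.sum() + eps)
    q = np.asarray(att_tgt, float)
    q = q / (q.sum() + eps)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * (kl(p, q) + kl(q, p))
```

Identical maps give a gap of zero; the gap grows as the two models attend to different tokens.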

MAP: Towards Balanced Generalization of IID and OOD through Model-Agnostic Adapters

1 code implementation · ICCV 2023 · Min Zhang, Junkun Yuan, Yue He, Wenbin Li, Zhengyu Chen, Kun Kuang

To achieve this goal, we apply a bilevel optimization to explicitly model and optimize the coupling relationship between the OOD model and auxiliary adapter layers.

Tasks: Bilevel Optimization, Inductive Bias
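A toy first-order bilevel loop: the inner step adapts auxiliary parameters to the current model, and the outer step updates the model through the adapted parameters. Everything here (the quadratic losses, learning rates, `target`) is an illustrative assumption, not MAP's actual objective:

```python
def bilevel_fit(steps=500, lr_in=0.1, lr_out=0.05, target=2.0):
    """Inner problem: adapter a tracks model weight w, min_a (a - w)^2.
    Outer problem: model w minimises (w + a - target)^2 using the
    adapted a (first-order approximation: a is treated as constant
    during the outer gradient step)."""
    w = a = 0.0
    for _ in range(steps):
        a -= lr_in * 2 * (a - w)             # inner (adapter) update
        w -= lr_out * 2 * (w + a - target)   # outer (model) update
    return w, a
```

At the fixed point a = w and w + a = target, so both parameters converge to target / 2.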

Domain Generalization via Contrastive Causal Learning

no code implementations · 6 Oct 2022 · Qiaowei Miao, Junkun Yuan, Kun Kuang

Specifically, CCM is composed of three components: (i) domain-conditioned supervised learning, which teaches CCM the correlation between images and labels; (ii) causal effect learning, which helps CCM measure the true causal effect of images on labels; and (iii) contrastive similarity learning, which clusters the features of images belonging to the same class and provides a quantification of similarity.

Tasks: Domain Generalization
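Component (iii) can be sketched as a simple quantification of similarity: mean cosine similarity between same-class pairs minus that between different-class pairs. This is an illustrative stand-in, not the paper's exact formulation:

```python
import numpy as np

def class_similarity_gap(features, labels):
    """Mean cosine similarity between same-class pairs minus that
    between different-class pairs; maximising it clusters features
    of the same class together."""
    f = np.asarray(features, float)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    labels = np.asarray(labels)
    sim = f @ f.T
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)            # drop self-pairs
    diff = labels[:, None] != labels[None, :]
    return float(sim[same].mean() - sim[diff].mean())
```

Perfectly clustered, orthogonal classes give the maximal gap of 1.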

Label-Efficient Domain Generalization via Collaborative Exploration and Generalization

no code implementations · 7 Aug 2022 · Junkun Yuan, Xu Ma, Defang Chen, Kun Kuang, Fei Wu, Lanfen Lin

To escape the dilemma between domain generalization and annotation costs, in this paper we introduce a novel task named label-efficient domain generalization (LEDG), which enables model generalization with label-limited source domains.

Tasks: Domain Generalization

Attention-based Cross-Layer Domain Alignment for Unsupervised Domain Adaptation

no code implementations · 27 Feb 2022 · Xu Ma, Junkun Yuan, Yen-Wei Chen, Ruofeng Tong, Lanfen Lin

To further boost model adaptation performance, we propose a novel method called Attention-based Cross-layer Domain Alignment (ACDA), which captures the semantic relationship between the source and target domains across model layers and calibrates each level of semantic information automatically through a dynamic attention mechanism.

Tasks: Semantic Similarity, Semantic Textual Similarity (+1 more)
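The dynamic-attention idea — letting learned relevance scores decide how much each layer's source/target discrepancy contributes to the alignment loss — can be sketched as a softmax-weighted sum. The discrepancy values and scores here are placeholders, not ACDA's actual quantities:

```python
import numpy as np

def weighted_layer_alignment(layer_gaps, relevance_scores):
    """Softmax the per-layer relevance scores into attention weights,
    then return the attention-weighted sum of per-layer source/target
    discrepancies, so more relevant layers dominate the loss."""
    s = np.asarray(relevance_scores, float)
    e = np.exp(s - s.max())
    attn = e / e.sum()
    return float(np.sum(attn * np.asarray(layer_gaps, float)))
```

With equal scores this reduces to a plain average; raising one layer's score shifts the loss toward that layer's discrepancy.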

Collaborative Semantic Aggregation and Calibration for Federated Domain Generalization

1 code implementation · 13 Oct 2021 · Junkun Yuan, Xu Ma, Defang Chen, Fei Wu, Lanfen Lin, Kun Kuang

Domain generalization (DG) aims to learn from multiple known source domains a model that can generalize well to unknown target domains.

Tasks: Domain Generalization
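A minimal server-side sketch of semantic aggregation under the federated constraint: clients share only per-class feature prototypes, never raw data. The size-weighted average is an illustrative choice, and the paper's calibration step is omitted:

```python
import numpy as np

def aggregate_prototypes(client_protos, client_sizes):
    """Aggregate per-client class prototypes (each a (C, D) array)
    into global prototypes via a client-size-weighted average."""
    w = np.asarray(client_sizes, float)
    w = w / w.sum()
    stacked = np.stack([np.asarray(p, float) for p in client_protos])  # (K, C, D)
    return np.tensordot(w, stacked, axes=1)                            # (C, D)
```

The server only ever sees the (C, D) prototype arrays, which is what keeps the scheme compatible with the federated setting.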

Instrumental Variable-Driven Domain Generalization with Unobserved Confounders

no code implementations · 4 Oct 2021 · Junkun Yuan, Xu Ma, Ruoxuan Xiong, Mingming Gong, Xiangyu Liu, Fei Wu, Lanfen Lin, Kun Kuang

Meanwhile, the existence of unobserved confounders, which affect the input features and labels simultaneously, causes spurious correlations and hinders the learning of the invariant relationship contained in the conditional distribution.

Tasks: Domain Generalization, valid
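The classical identification idea behind instrumental variables — two-stage least squares — recovers the causal coefficient even when an unobserved confounder biases ordinary regression. This sketch shows textbook 2SLS on simulated data, not IV-DG's learned representations:

```python
import numpy as np

def two_stage_least_squares(z, x, y):
    """Stage 1: regress treatment x on instrument z; Stage 2: regress
    outcome y on the fitted x_hat. Returns the slope (causal effect)."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    return float(np.linalg.lstsq(X, y, rcond=None)[0][1])

# Unobserved confounder u affects both x and y; the true effect of x is 2.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                      # instrument
u = rng.normal(size=n)                      # unobserved confounder
x = z + u + 0.1 * rng.normal(size=n)
y = 2.0 * x + 3.0 * u + 0.1 * rng.normal(size=n)
```

Plain least squares of y on x is pulled well away from 2 by the confounder, while the 2SLS estimate stays close to 2.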

Domain-Specific Bias Filtering for Single Labeled Domain Generalization

1 code implementation · 2 Oct 2021 · Junkun Yuan, Xu Ma, Defang Chen, Kun Kuang, Fei Wu, Lanfen Lin

In this paper, we investigate a Single Labeled Domain Generalization (SLDG) task with only one source domain being labeled, which is more practical and challenging than the CDG task.

Tasks: Domain Generalization

Auto IV: Counterfactual Prediction via Automatic Instrumental Variable Decomposition

1 code implementation · 13 Jul 2021 · Junkun Yuan, Anpeng Wu, Kun Kuang, Bo Li, Runze Wu, Fei Wu, Lanfen Lin

We also learn confounder representations by encouraging them to be relevant to both the treatment and the outcome.

Tasks: Causal Inference, counterfactual (+1 more)

Learning Decomposed Representation for Counterfactual Inference

1 code implementation · 12 Jun 2020 · Anpeng Wu, Kun Kuang, Junkun Yuan, Bo Li, Runze Wu, Qiang Zhu, Yueting Zhuang, Fei Wu

The fundamental problem in treatment effect estimation from observational data is confounder identification and balancing.

Tasks: counterfactual, Counterfactual Inference
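Confounder balancing is often illustrated with inverse-propensity weighting: reweight samples so treated and control groups look alike, then compare outcomes. A minimal sketch of that classical baseline (DeR-CFR instead learns decomposed representations; the function and variable names are illustrative):

```python
import numpy as np

def ipw_ate(treat, outcome, propensity):
    """Inverse-propensity-weighted average treatment effect:
    E[t*y/p] - E[(1-t)*y/(1-p)], with p the treatment probability."""
    t = np.asarray(treat, float)
    y = np.asarray(outcome, float)
    p = np.asarray(propensity, float)
    return float(np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p)))
```

With a known propensity of 0.5 (a randomised trial), the estimator reduces to the difference of group means.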

Subgraph Networks with Application to Structural Feature Space Expansion

no code implementations · 21 Mar 2019 · Qi Xuan, Jinhuan Wang, Minghao Zhao, Junkun Yuan, Chenbo Fu, Zhongyuan Ruan, Guanrong Chen

In other words, the structural features of SGNs can complement those of the original network for better network classification, regardless of the feature-extraction method used, such as handcrafted, network-embedding, and kernel-based methods.

Tasks: General Classification, Graph Classification (+1 more)
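The first-order subgraph network (SGN-1) is the line graph of the original network: each original edge becomes a node, and two such nodes are connected when the edges share an endpoint. A minimal edge-list construction:

```python
from itertools import combinations

def sgn1(edges):
    """Build the first-order subgraph network (line graph) from an
    undirected edge list; returns the SGN-1 edge list."""
    nodes = [tuple(sorted(e)) for e in edges]        # SGN-1 nodes = original edges
    return [(a, b) for a, b in combinations(nodes, 2)
            if set(a) & set(b)]                      # adjacent iff they share an endpoint

# A triangle's SGN-1 is again a triangle: its 3 edges are mutually adjacent.
triangle = [(0, 1), (1, 2), (0, 2)]
print(len(sgn1(triangle)))  # → 3
```

Structural statistics (degrees, motif counts) computed on SGN-1 can then be appended to the original network's features, which is the feature-space expansion the paper describes.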
