Search Results for author: Guang-Yuan Hao

Found 7 papers, 5 papers with code

Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees

1 code implementation • 3 Feb 2024 • Guang-Yuan Hao, Hengguan Huang, Haotian Wang, Jie Gao, Hao Wang

In this paper, we propose the first general method, dubbed Composite Active Learning (CAL), for multi-domain active learning (AL). Our approach explicitly considers both domain-level and instance-level information in the problem: CAL first assigns domain-level budgets according to domain-level importance, which is estimated by optimizing an upper error bound that we develop; with these domain-level budgets, CAL then uses an instance-level query strategy to select samples to label from each domain.

Active Learning
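A minimal sketch of the two-stage budgeting idea described in the abstract above, assuming a simple proportional budget split and an entropy-based query strategy; the helper names (`assign_domain_budgets`, `select_by_entropy`) are hypothetical, and in the paper the domain importances come from optimizing its error bound rather than being given directly.

```python
# Hypothetical two-stage sketch: split the labeling budget across domains,
# then pick the most uncertain samples within each domain. Names and the
# entropy heuristic are illustrative assumptions, not the paper's procedure.
import numpy as np

def assign_domain_budgets(domain_importance, total_budget):
    """Split a total labeling budget across domains in proportion to importance."""
    weights = np.asarray(domain_importance, dtype=float)
    weights = weights / weights.sum()
    return np.floor(weights * total_budget).astype(int)

def select_by_entropy(probs, budget):
    """Instance-level query strategy: pick the `budget` most uncertain samples."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:budget]

# Toy usage: 3 domains, importances estimated elsewhere, 30 labels to spend.
budgets = assign_domain_budgets([0.5, 0.3, 0.2], total_budget=30)
rng = np.random.default_rng(0)
for d, b in enumerate(budgets):
    probs = rng.dirichlet(np.ones(5), size=100)   # stand-in for model predictions
    picked = select_by_entropy(probs, b)
    print(f"domain {d}: label {len(picked)} samples")
```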

Natural Counterfactuals With Necessary Backtracking

no code implementations • 2 Feb 2024 • Guang-Yuan Hao, Jiji Zhang, Biwei Huang, Hao Wang, Kun Zhang

Counterfactual reasoning is pivotal in human cognition and especially important for providing explanations and making decisions.

counterfactual • Counterfactual Reasoning

Taxonomy-Structured Domain Adaptation

2 code implementations • 13 Jun 2023 • Tianyi Liu, Zihao Xu, Hao He, Guang-Yuan Hao, Guang-He Lee, Hao Wang

Domain adaptation aims to mitigate distribution shifts among different domains.

Domain Adaptation

Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation

4 code implementations • 6 Feb 2023 • Zihao Xu, Guang-Yuan Hao, Hao He, Hao Wang

To address this challenge, we first provide a formal definition of domain index from a probabilistic perspective, and then propose an adversarial variational Bayesian framework that infers domain indices from multi-domain data, thereby providing additional insight into domain relations and improving domain adaptation performance.

Domain Adaptation
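A hedged sketch of variationally inferring a per-sample domain index, loosely following the description above; the layer sizes, variable names, and the omission of the adversarial and reconstruction terms are illustrative assumptions, not the paper's actual model.

```python
# Minimal variational encoder for a continuous domain index u (assumed sizes).
import torch
import torch.nn as nn

class DomainIndexEncoder(nn.Module):
    """Variational posterior q(u | x) over a per-sample domain index u."""
    def __init__(self, x_dim=16, u_dim=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU())
        self.mu = nn.Linear(32, u_dim)
        self.logvar = nn.Linear(32, u_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        u = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return u, mu, logvar

encoder = DomainIndexEncoder()
x = torch.randn(8, 16)                       # toy batch of multi-domain data
u, mu, logvar = encoder(x)

# KL(q(u|x) || N(0, I)) term of the variational objective; in the full
# framework this would be combined with reconstruction and adversarial losses.
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
print(u.shape, float(kl))
```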

DSRGAN: Explicitly Learning Disentangled Representation of Underlying Structure and Rendering for Image Generation without Tuple Supervision

no code implementations • 30 Sep 2019 • Guang-Yuan Hao, Hong-Xing Yu, Wei-Shi Zheng

We focus on explicitly learning disentangled representations for natural image generation, where the underlying spatial structure and the rendering on that structure can be controlled independently, without any tuple supervision.

Image Generation
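An illustrative sketch (assumed architecture and names, not the paper's DSRGAN) of a generator with separate structure and rendering latent codes; holding the structure code fixed while varying the rendering code mimics the independent control described above.

```python
# Two-branch generator: each latent code feeds its own branch before fusion,
# so structure features depend only on the structure code. Illustrative only.
import torch
import torch.nn as nn

class TwoBranchGenerator(nn.Module):
    def __init__(self, structure_dim=8, render_dim=8, out_dim=28 * 28):
        super().__init__()
        self.structure_branch = nn.Sequential(nn.Linear(structure_dim, 64), nn.ReLU())
        self.render_branch = nn.Sequential(nn.Linear(render_dim, 64), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(128, out_dim), nn.Tanh())

    def forward(self, z_structure, z_render):
        s = self.structure_branch(z_structure)   # depends only on structure code
        r = self.render_branch(z_render)         # depends only on rendering code
        return self.decoder(torch.cat([s, r], dim=1))

gen = TwoBranchGenerator()
z_s = torch.randn(1, 8)                          # fixed structure code
imgs = [gen(z_s, torch.randn(1, 8)) for _ in range(3)]  # varied rendering codes
print([img.shape for img in imgs])
```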

MIXGAN: Learning Concepts from Different Domains for Mixture Generation

1 code implementation • 4 Jul 2018 • Guang-Yuan Hao, Hong-Xing Yu, Wei-Shi Zheng

In this work, we present an interesting attempt at mixture generation: absorbing different image concepts (e.g., content and style) from different domains and thus generating a new domain with the learned concepts.

Generative Adversarial Network • Translation
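A hedged sketch of the content/style mixing idea above: codes extracted from images of two different domains are combined by a single generator. The encoder and generator definitions, names, and dimensions are assumptions for illustration, not MIXGAN's actual design.

```python
# Combine a content code from domain A with a style code from domain B
# to sample images from a "new" mixed domain. Illustrative modules only.
import torch
import torch.nn as nn

content_encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 8))   # domain A
style_encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 8))     # domain B
generator = nn.Sequential(nn.Linear(16, 128), nn.ReLU(),
                          nn.Linear(128, 28 * 28), nn.Tanh())

x_a = torch.rand(4, 1, 28, 28)   # batch of images from domain A (content source)
x_b = torch.rand(4, 1, 28, 28)   # batch of images from domain B (style source)

codes = torch.cat([content_encoder(x_a), style_encoder(x_b)], dim=1)
mixed = generator(codes)         # images carrying A's content and B's style
print(mixed.shape)               # torch.Size([4, 784])
```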
