no code implementations • 31 Dec 2023 • Vardaan Pahuja, Weidi Luo, Yu Gu, Cheng-Hao Tu, Hong-You Chen, Tanya Berger-Wolf, Charles Stewart, Song Gao, Wei-Lun Chao, Yu Su
In this work, we leverage the structured context associated with camera-trap images to improve out-of-distribution generalization for species identification.
1 code implementation • 14 Mar 2023 • Cheng-Hao Tu, Hong-You Chen, David Carlyn, Wei-Lun Chao
Fractals are geometric shapes that can display complex and self-similar patterns found in nature (e.g., clouds and plants).
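To make the self-similarity concrete, here is a minimal sketch that renders a fractal with an iterated function system (IFS) via the "chaos game"; the affine maps are illustrative (they draw the Sierpinski triangle) and are not the paper's learned fractal parameters.

```python
# Render an IFS attractor by repeatedly applying randomly chosen affine maps.
import numpy as np

def chaos_game(maps, n_points=10000, seed=0):
    """Sample points of an IFS attractor; each map is a (matrix, offset) pair."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    points = []
    for i in range(n_points):
        A, b = maps[rng.integers(len(maps))]
        x = A @ x + b
        if i > 20:  # discard burn-in before the orbit settles on the attractor
            points.append(x.copy())
    return np.array(points)

# Three contractive maps, each shrinking the plane toward one triangle vertex.
half = 0.5 * np.eye(2)
sierpinski = [(half, np.array([0.0, 0.0])),
              (half, np.array([0.5, 0.0])),
              (half, np.array([0.25, 0.5]))]
pts = chaos_game(sierpinski)  # pts trace a self-similar point pattern
```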
1 code implementation • CVPR 2023 • Cheng-Hao Tu, Zheda Mai, Wei-Lun Chao
By introducing a handful of learnable "query" tokens to each layer, VQT leverages the inner workings of Transformers to "summarize" the rich intermediate features of each layer, which can then be used to train the prediction heads of downstream tasks.
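As a rough illustration of the query-token idea, the sketch below attaches learnable queries that attend, read-only, to a frozen layer's token features and concatenates their outputs across layers for a linear head; the module names, dimensions, and single-query setup are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class QuerySummarizer(nn.Module):
    """Per-layer learnable queries that summarize a frozen layer's tokens."""
    def __init__(self, dim, n_queries=1, n_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.zeros(1, n_queries, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, tokens):  # tokens: (B, N, dim) from a frozen layer
        q = self.queries.expand(tokens.size(0), -1, -1)
        summary, _ = self.attn(q, tokens, tokens)  # queries read, never write back
        return summary  # (B, n_queries, dim)

# Train only the per-layer summarizers and the head; the backbone stays frozen.
dim, n_layers, B, N = 768, 12, 4, 197
summarizers = nn.ModuleList(QuerySummarizer(dim) for _ in range(n_layers))
layer_feats = [torch.randn(B, N, dim) for _ in range(n_layers)]  # stand-in features
summaries = torch.cat([s(f).flatten(1) for s, f in zip(summarizers, layer_feats)], 1)
head = nn.Linear(n_layers * dim, 10)  # prediction head on concatenated summaries
logits = head(summaries)
```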
1 code implementation • 23 Jun 2022 • Hong-You Chen, Cheng-Hao Tu, Ziwei Li, Han-Wei Shen, Wei-Lun Chao
To make our findings applicable to situations where pre-trained models are not directly available, we explore pre-training with synthetic data, or even with clients' data in a decentralized manner, and find that both can already improve FL notably.
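To make the role of pre-training concrete, here is a minimal FedAvg-style sketch where the only change is initializing the global model from pre-trained weights before the federated rounds begin; the toy model, client loaders, and the "pretrained.pt" checkpoint are hypothetical stand-ins for the paper's setup.

```python
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, lr=0.01):
    model = copy.deepcopy(global_model)  # each client starts from the global model
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model.state_dict()

def fedavg(global_model, client_loaders, rounds=2):
    for _ in range(rounds):
        states = [local_update(global_model, dl) for dl in client_loaders]
        avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
        global_model.load_state_dict(avg)  # server averages client weights
    return global_model

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
# The key step: start from pre-trained weights (supervised, synthetic-data,
# or decentralized pre-training) instead of a random initialization.
# model.load_state_dict(torch.load("pretrained.pt"))  # hypothetical checkpoint
clients = [[(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))] for _ in range(3)]
model = fedavg(model, clients)
```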
1 code implementation • 7 Dec 2020 • Cheng-Hao Tu, Cheng-En Wu, Chu-Song Chen
Although CondConv effectively enhances the performance of a deep model, it has so far been applied only to individual tasks (see the sketch after this entry).
Ranked #1 on Continual Learning on Flowers (Fine-grained 6 Tasks)
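For readers unfamiliar with CondConv, the sketch below shows the conditionally parameterized convolution idea in simplified form: per-example routing weights mix several expert kernels into one kernel, which is applied via grouped convolution. Layer names and sizes are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, n_experts=4):
        super().__init__()
        self.experts = nn.Parameter(torch.randn(n_experts, out_ch, in_ch, k, k) * 0.02)
        self.route = nn.Linear(in_ch, n_experts)  # input-dependent routing
        self.k = k

    def forward(self, x):  # x: (B, in_ch, H, W)
        B, C, H, W = x.shape
        r = torch.sigmoid(self.route(x.mean(dim=(2, 3))))       # (B, n_experts)
        w = torch.einsum('be,eoikl->boikl', r, self.experts)    # mix expert kernels
        w = w.reshape(B * w.size(1), C, self.k, self.k)
        x = x.reshape(1, B * C, H, W)                           # fold batch into channels
        out = F.conv2d(x, w, padding=self.k // 2, groups=B)     # per-example kernels
        return out.reshape(B, -1, H, W)

layer = CondConv2d(16, 32)
y = layer(torch.randn(2, 16, 8, 8))  # (2, 32, 8, 8)
```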
1 code implementation • NeurIPS 2019 • Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, Chu-Song Chen
First, it can avoid forgetting (i.e., learn new tasks while remembering all previous tasks).
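As a simplified illustration of how forgetting can be avoided, the sketch below freezes previously learned weights and lets each new task learn a soft mask that picks which frozen weights to reuse, plus task-specific new weights; this is a generic parameter-isolation sketch inspired by the pick-and-grow idea, not the paper's exact CPG procedure.

```python
import torch
import torch.nn as nn

class PickLayer(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.frozen = nn.Parameter(torch.randn(out_f, in_f) * 0.02,
                                   requires_grad=False)          # weights from old tasks
        self.new = nn.Parameter(torch.zeros(out_f, in_f))        # task-specific growth
        self.mask_logits = nn.Parameter(torch.zeros(out_f, in_f))  # learnable "pick"

    def forward(self, x):
        mask = torch.sigmoid(self.mask_logits)  # soft mask; binarized at test time
        w = mask * self.frozen + self.new       # reuse picked old weights plus new ones
        return x @ w.t()

layer = PickLayer(8, 4)
y = layer(torch.randn(2, 8))
# Each task stores its own mask and `new` weights, and `frozen` never receives
# gradients, so predictions for previously learned tasks stay intact.
```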