no code implementations • 11 Apr 2024 • Haotian Zhang, Haoxuan You, Philipp Dufter, BoWen Zhang, Chen Chen, Hong-You Chen, Tsu-Jui Fu, William Yang Wang, Shih-Fu Chang, Zhe Gan, Yinfei Yang
While Ferret seamlessly integrates regional understanding into the Large Language Model (LLM) to facilitate its referring and grounding capability, it has certain limitations: it is constrained by the pre-trained, fixed visual encoder and fails to perform well on broader tasks.
Ranked #60 on Visual Question Answering on MM-Vet
no code implementations • 31 Dec 2023 • Vardaan Pahuja, Weidi Luo, Yu Gu, Cheng-Hao Tu, Hong-You Chen, Tanya Berger-Wolf, Charles Stewart, Song Gao, Wei-Lun Chao, Yu Su
In this work, we leverage the structured context associated with the camera trap images to improve out-of-distribution generalization for the task of species identification in camera traps.
no code implementations • 16 Apr 2023 • Hong-You Chen, Jike Zhong, Mingda Zhang, Xuhui Jia, Hang Qi, Boqing Gong, Wei-Lun Chao, Li Zhang
FedBasis learns a small set of shareable "basis" models, which can be linearly combined to form personalized models for clients.
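The linear-combination idea above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the function name and the softmax parameterization of the mixing coefficients are assumptions.

```python
import numpy as np

def combine_bases(basis_weights, coeffs):
    """Form a personalized parameter vector as a convex combination
    of shared basis models (one flat parameter vector per basis).
    Softmax over coeffs keeps the combination convex (an assumption)."""
    coeffs = np.exp(coeffs - np.max(coeffs))
    coeffs = coeffs / coeffs.sum()
    return sum(c * w for c, w in zip(coeffs, basis_weights))

# Three shared bases; each client would learn only its 3 mixing coefficients.
bases = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
personalized = combine_bases(bases, np.array([0.0, 0.0, 0.0]))
```

With equal coefficients the personalized model is simply the mean of the bases; a client adapts by learning only the few coefficients rather than a full model.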
1 code implementation • 14 Mar 2023 • Cheng-Hao Tu, Hong-You Chen, David Carlyn, Wei-Lun Chao
Fractals are geometric shapes that can display complex and self-similar patterns found in nature (e.g., clouds and plants).
no code implementations • 12 Mar 2023 • Jike Zhong, Hong-You Chen, Wei-Lun Chao
We reinvestigate factors that are believed to cause this problem, including the mismatch of BN statistics across clients and the deviation of gradients during local training.
no code implementations • CVPR 2023 • Hong-You Chen, Yandong Li, Yin Cui, Mingda Zhang, Wei-Lun Chao, Li Zhang
We study the problem of how to train a "personalization-friendly" model such that given only the task descriptions, the model can be adapted to different end-users' needs, e.g., for accurately classifying different subsets of objects.
1 code implementation • NeurIPS 2021 • Hong-You Chen, Wei-Lun Chao
This coarse domain sequence then undergoes a fine indexing step via a novel cycle-consistency loss, which encourages the next intermediate domain to preserve sufficient discriminative knowledge of the current intermediate domain.
1 code implementation • 23 Jun 2022 • Hong-You Chen, Cheng-Hao Tu, Ziwei Li, Han-Wei Shen, Wei-Lun Chao
To make our findings applicable to situations where pre-trained models are not directly available, we explore pre-training with synthetic data or even with clients' data in a decentralized manner, and found that they can already improve FL notably.
3 code implementations • ICLR 2022 • Hong-You Chen, Wei-Lun Chao
On the one hand, we introduce a family of losses that are robust to non-identical class distributions, enabling clients to train a generic predictor with a consistent objective across them.
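One well-known member of this family of losses robust to non-identical class distributions is the balanced softmax, which shifts each logit by the log of its class frequency so that locally over-represented classes are discounted. The sketch below is an illustration of that idea under stated assumptions; the paper's exact loss may differ.

```python
import numpy as np

def balanced_softmax_loss(logits, label, class_counts):
    """Cross-entropy on logits shifted by log class frequencies,
    discounting classes this client sees often (balanced-softmax style)."""
    adjusted = logits + np.log(np.asarray(class_counts, dtype=float))
    adjusted -= adjusted.max()                       # numerical stability
    log_probs = adjusted - np.log(np.exp(adjusted).sum())
    return -log_probs[label]

# With uniform class counts this reduces to ordinary softmax cross-entropy.
loss = balanced_softmax_loss(np.array([0.0, 0.0]), 0, [1, 1])
```

Because the per-client class counts enter only as an additive logit shift, every client optimizes the same underlying objective despite holding differently skewed data.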
2 code implementations • ICLR 2021 • Hong-You Chen, Wei-Lun Chao
Federated learning aims to collaboratively train a strong global model by accessing users' locally trained models but not their own data.
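The standard way to build such a global model from locally trained ones is parameter averaging weighted by local dataset size (the FedAvg scheme). The sketch below illustrates that baseline aggregation only; it is not this paper's contribution, which studies how to aggregate client models more effectively.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors,
    weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Second client has 3x the data, so it contributes 3x the weight.
clients = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
global_model = fedavg(clients, [10, 30])
```

Note that the server only ever sees model parameters and dataset sizes, never the raw local data, which is the privacy premise of federated learning.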
no code implementations • ACL 2020 • Hong-You Chen, Sz-Han Yu, Shou-De Lin
Chinese NLP applications that rely on large text corpora often involve huge vocabularies whose words appear only sparsely in the corpus.
1 code implementation • 6 Jan 2020 • Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, Wei-Lun Chao
Classifiers trained with class-imbalanced data are known to perform poorly on test data of the "minor" classes, of which we have insufficient training data.
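A common baseline for the imbalance problem described here is to re-weight each class's loss inversely to its training frequency, so minor classes are not drowned out. This is a generic sketch of that baseline, not necessarily the approach taken in the paper.

```python
import numpy as np

def inverse_freq_weights(labels, n_classes):
    """Per-class loss weights inversely proportional to class frequency,
    normalized so the weights average to 1 across classes."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    w = 1.0 / np.maximum(counts, 1.0)     # guard against empty classes
    return w * n_classes / w.sum()

# Class 0 is the "major" class (3 examples), class 1 the "minor" one (1).
labels = np.array([0, 0, 0, 1])
weights = inverse_freq_weights(labels, 2)
```

Here the minor class receives three times the per-example weight of the major class, compensating for its scarcity in the training set.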
no code implementations • IJCNLP 2019 • Chih-Te Lai, Yi-Te Hong, Hong-You Chen, Chi-Jen Lu, Shou-De Lin
The objective of non-parallel text style transfer, or controllable text generation, is to alter specific attributes (e.g., sentiment, mood, tense, politeness) of a given text while preserving its remaining attributes and content.
no code implementations • NAACL 2019 • Hong-You Chen, Chin-Hua Hu, Leila Wehbe, Shou-De Lin
Unsupervised document representation learning is an important task that provides pre-trained features for NLP applications.
no code implementations • ICLR 2019 • Chih-Kuan Yeh, Ian E. H. Yen, Hong-You Chen, Chun-Pei Yang, Shou-De Lin, Pradeep Ravikumar
State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones.
no code implementations • EMNLP 2018 • Hong-You Chen, Cheng-Syuan Lee, Keng-Te Liao, Shou-De Lin
Lexicon relation extraction given distributional representations of words is an important topic in NLP.