Search Results for author: Ting Hua

Found 8 papers, 0 papers with code

Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA

no code implementations 12 Apr 2023 James Seale Smith, Yen-Chang Hsu, Lingyu Zhang, Ting Hua, Zsolt Kira, Yilin Shen, Hongxia Jin

We show that C-LoRA not only outperforms several baselines for our proposed setting of text-to-image continual customization, which we refer to as Continual Diffusion, but also achieves a new state-of-the-art in the well-established rehearsal-free continual learning setting for image classification.

Continual Learning Image Classification

Numerical Optimizations for Weighted Low-rank Estimation on Language Model

no code implementations 2 Nov 2022 Ting Hua, Yen-Chang Hsu, Felicity Wang, Qian Lou, Yilin Shen, Hongxia Jin

However, standard SVD treats all parameters within the matrix as equally important, which is a simple but unrealistic assumption.

Language Modelling
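The excerpt above points out that standard SVD weights every parameter equally, which is the assumption weighted low-rank estimation relaxes. The sketch below illustrates that general idea only: each row of the matrix is scaled by an importance estimate before a truncated SVD, so heavily weighted rows are reconstructed more faithfully. The weighting scheme, names, and shapes here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def truncated_svd(W, rank):
    """Plain truncated SVD: every entry of W is treated as equally important."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank]

def row_weighted_svd(W, row_importance, rank):
    """Illustrative weighted variant (a sketch, not the paper's method):
    scale rows by an importance estimate, factorize, then undo the scaling."""
    d = np.sqrt(row_importance)[:, None]          # per-row weights (hypothetical)
    U, S, Vt = np.linalg.svd(d * W, full_matrices=False)
    W_hat = (U[:, :rank] * S[:rank]) @ Vt[:rank]
    return W_hat / d                              # map back to the original scale

# toy usage: rows with larger importance end up with smaller reconstruction error
W = np.random.randn(64, 32)
importance = np.random.rand(64) + 1e-3
approx = row_weighted_svd(W, importance, rank=8)
```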

Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding

no code implementations NAACL 2021 Ting Hua, Yilin Shen, Changsheng Zhao, Yen-Chang Hsu, Hongxia Jin

Most existing continual learning approaches suffer from low accuracy and performance fluctuation, especially when the distributions of old and new data are significantly different.

Continual Learning Domain Classification +1

Lite-MDETR: A Lightweight Multi-Modal Detector

no code implementations CVPR 2022 Qian Lou, Yen-Chang Hsu, Burak Uzkent, Ting Hua, Yilin Shen, Hongxia Jin

The key primitive is Dictionary-Lookup Transformations (DLT), proposed to replace Linear Transformations (LT) in multi-modal detectors, where each LT weight is approximately factorized into a smaller dictionary, index, and coefficient.

Object Detection +3
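To make the DLT factorization described above concrete, here is a small NumPy sketch of the general idea: an approximate weight matrix is rebuilt from a small dictionary of atoms, integer indices selecting atoms per output row, and scalar coefficients. All names, shapes, and sizes are illustrative assumptions; the paper's actual construction and training procedure may differ.

```python
import numpy as np

def dlt_reconstruct(dictionary, indices, coefficients):
    """Rebuild an approximate LT weight from dictionary + index + coefficient.
    dictionary:   (n_atoms, in_dim)  -- small set of shared atoms
    indices:      (out_dim, k)       -- which atoms each output row uses
    coefficients: (out_dim, k)       -- how strongly each selected atom contributes
    """
    # each row of the approximate weight is a small weighted sum of atoms
    return np.einsum("ok,oki->oi", coefficients, dictionary[indices])

# toy shapes (hypothetical, for illustration only)
n_atoms, in_dim, out_dim, k = 16, 32, 64, 3
D = np.random.randn(n_atoms, in_dim)
idx = np.random.randint(0, n_atoms, size=(out_dim, k))
coef = np.random.randn(out_dim, k)

W_approx = dlt_reconstruct(D, idx, coef)      # stands in for a dense LT weight
y = np.random.randn(8, in_dim) @ W_approx.T   # use it like an ordinary linear layer
```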

Automatic Mixed-Precision Quantization Search of BERT

no code implementations 30 Dec 2021 Changsheng Zhao, Ting Hua, Yilin Shen, Qian Lou, Hongxia Jin

Knowledge distillation, weight pruning, and quantization are known to be the main directions in model compression.

Knowledge Distillation Model Compression +2

DictFormer: Tiny Transformer with Shared Dictionary

no code implementations ICLR 2022 Qian Lou, Ting Hua, Yen-Chang Hsu, Yilin Shen, Hongxia Jin

DictFormer significantly reduces the redundancy in the transformer's parameters by replacing them with a compact shared dictionary, a few unshared coefficients, and indices.

Abstractive Text Summarization Language Modelling +2
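The DictFormer excerpt above describes one compact dictionary shared across layers, with only small unshared coefficients and indices kept per layer. The toy sketch below illustrates that parameter-sharing idea and the resulting parameter-count saving; it is a schematic assumption, not the paper's implementation, and all sizes are made up.

```python
import numpy as np

class SharedDictionaryLayers:
    """Schematic sketch: one dictionary shared by all layers, with per-layer
    unshared coefficients and indices instead of full weight matrices."""

    def __init__(self, n_layers, n_atoms=64, in_dim=256, out_dim=256, k=4):
        self.dictionary = np.random.randn(n_atoms, in_dim)            # shared
        self.indices = [np.random.randint(0, n_atoms, (out_dim, k))
                        for _ in range(n_layers)]                     # unshared
        self.coefficients = [np.random.randn(out_dim, k) * 0.1
                             for _ in range(n_layers)]                # unshared

    def layer_weight(self, layer):
        idx, coef = self.indices[layer], self.coefficients[layer]
        return np.einsum("ok,oki->oi", coef, self.dictionary[idx])

    def num_parameters(self):
        shared = self.dictionary.size
        unshared = sum(c.size for c in self.coefficients)
        return shared + unshared   # index arrays are small integers, omitted here

model = SharedDictionaryLayers(n_layers=12)
dense_params = 12 * 256 * 256      # what 12 dense projections would cost instead
print(model.num_parameters(), "shared-dictionary params vs", dense_params, "dense params")
```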
