1 code implementation • 11 Aug 2022 • Zejiang Hou, Fei Sun, Yen-Kuang Chen, Yuan Xie, Sun-Yuan Kung
When the masked autoencoder is pretrained and finetuned on the ImageNet-1K dataset with an input resolution of 224x224, MILAN achieves a top-1 accuracy of 85.4% on ViT-Base, surpassing the previous state of the art by 1%.
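Masked-autoencoder pretraining hides most image patches from the encoder and trains the model to reconstruct them. As a rough illustration of the generic recipe (random masking; MILAN's actual semantics-aware masking strategy differs), a minimal PyTorch sketch:

```python
import torch

def random_mask_patches(patches, mask_ratio=0.75):
    """Split a batch of patch tokens into visible and masked sets.

    patches: (batch, num_patches, dim) tensor of patch embeddings.
    Returns the visible tokens fed to the encoder and the indices
    of the masked tokens the decoder must reconstruct.
    """
    b, n, d = patches.shape
    num_keep = int(n * (1 - mask_ratio))
    # Random permutation per image; keep the first num_keep patches.
    noise = torch.rand(b, n)
    ids_shuffle = noise.argsort(dim=1)
    ids_keep = ids_shuffle[:, :num_keep]
    ids_masked = ids_shuffle[:, num_keep:]
    visible = torch.gather(
        patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    return visible, ids_masked

# Example: a 224x224 image with 16x16 patches gives 196 tokens of dim 768.
tokens = torch.randn(2, 196, 768)
visible, masked_ids = random_mask_patches(tokens)
print(visible.shape)  # torch.Size([2, 49, 768])
```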
1 code implementation • 7 Jul 2022 • Zejiang Hou, Julian Salazar, George Polovets
Large pretrained language models (PLMs) are often domain- or task-adapted via fine-tuning or prompting.
1 code implementation • CVPR 2022 • Zejiang Hou, Minghai Qin, Fei Sun, Xiaolong Ma, Kun Yuan, Yi Xu, Yen-Kuang Chen, Rong Jin, Yuan Xie, Sun-Yuan Kung
However, conventional pruning methods are limited: they are restricted to the pruning process alone, and they require a fully pre-trained large model.
1 code implementation • 31 Dec 2021 • Zejiang Hou, Sun-Yuan Kung
In contrast, we advocate a multi-dimensional ViT compression paradigm, and propose to jointly reduce redundancy along the attention-head, neuron, and sequence dimensions.
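Each of the three dimensions has a natural pruning granularity. As a loose illustration only (fixed keep-ratios stand in for the paper's learned importance-based selection), a sketch that slims a transformer block along all three at once:

```python
import torch
import torch.nn as nn

def slim_vit_block(embed_dim=768, num_heads=12, ffn_dim=3072,
                   num_tokens=196, head_keep=0.5, neuron_keep=0.5,
                   token_keep=0.5):
    """Build a smaller block by shrinking heads, FFN neurons, and tokens.

    The keep-ratios here are hypothetical; the paper decides what to
    keep per dimension rather than using fixed fractions.
    """
    heads = max(1, int(num_heads * head_keep))
    neurons = max(1, int(ffn_dim * neuron_keep))
    tokens = max(1, int(num_tokens * token_keep))
    attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
    ffn = nn.Sequential(nn.Linear(embed_dim, neurons), nn.GELU(),
                        nn.Linear(neurons, embed_dim))
    return attn, ffn, tokens

attn, ffn, tokens = slim_vit_block()
x = torch.randn(1, tokens, 768)  # sequence dimension already shortened
y, _ = attn(x, x, x)
print(ffn(y).shape)              # torch.Size([1, 98, 768])
```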
no code implementations • 7 Sep 2021 • Zejiang Hou, Sun-Yuan Kung
We study the few-shot learning (FSL) problem, where a model learns to recognize new objects from extremely few labeled training examples per category.
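FSL is commonly evaluated with N-way K-shot episodes. A minimal episode sampler for the generic protocol (this is the standard setup, not the paper's specific method; the toy data is hypothetical):

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=15):
    """Sample an N-way K-shot episode from (label, example) pairs.

    Returns a support set the model adapts on and a query set it is
    evaluated on, drawn from n_way randomly chosen classes.
    """
    by_class = defaultdict(list)
    for label, example in dataset:
        by_class[label].append(example)
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for label in classes:
        picks = random.sample(by_class[label], k_shot + q_queries)
        support += [(label, x) for x in picks[:k_shot]]
        query += [(label, x) for x in picks[k_shot:]]
    return support, query

# Toy usage: 20 classes with 30 examples each.
toy = [(c, f"img_{c}_{i}") for c in range(20) for i in range(30)]
support, query = sample_episode(toy)
print(len(support), len(query))  # 5 75
```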
1 code implementation • ICLR 2022 • Xiaolong Ma, Minghai Qin, Fei Sun, Zejiang Hou, Kun Yuan, Yi Xu, Yanzhi Wang, Yen-Kuang Chen, Rong Jin, Yuan Xie
It addresses the shortcomings of prior work by repeatedly growing a subset of layers to dense and then pruning them back to sparse after some training.
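A rough sketch of one grow-then-prune round, using global weight-magnitude pruning as the prune step (the simplest choice; the paper's exact criterion, grow rule, and schedule may differ):

```python
import torch

def prune_by_magnitude(weight, sparsity=0.8):
    """Zero the smallest-magnitude entries, keeping a (1 - sparsity) fraction."""
    k = int(weight.numel() * sparsity)
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

def grow_and_prune_round(layers, grow_idx, sparsity=0.8,
                         train_step=None, steps=100):
    """One round: densify one layer, train everything, re-sparsify it.

    layers: list of weight tensors; grow_idx: which layer trains dense
    this round. train_step is a hypothetical stand-in for ordinary
    SGD updates applied during the round.
    """
    # Grow phase: the chosen layer trains dense (its mask is lifted);
    # in a real setup its previously pruned weights are revived here.
    for _ in range(steps):
        if train_step is not None:
            train_step(layers)
    # Prune phase: shrink the grown layer back to the target sparsity.
    layers[grow_idx] = prune_by_magnitude(layers[grow_idx], sparsity)
    return layers

layers = [torch.randn(64, 64) for _ in range(4)]
layers = grow_and_prune_round(layers, grow_idx=0)
print((layers[0] == 0).float().mean())  # ~0.8
```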
no code implementations • 28 May 2020 • Zejiang Hou, Sun-Yuan Kung
Network pruning has become the de facto tool to accelerate deep neural networks for mobile and edge applications.
no code implementations • 23 Sep 2019 • Mert Al, Zejiang Hou, Sun-Yuan Kung
Kernel approximation methods create explicit, low-dimensional kernel feature maps to deal with the high computational and memory complexity of standard techniques.
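A classic instance of such explicit, low-dimensional feature maps is random Fourier features for the RBF kernel (a standard baseline, not necessarily the method this paper proposes); a minimal NumPy sketch:

```python
import numpy as np

def rff_map(X, dim=256, gamma=1.0, seed=0):
    """Random Fourier features: z(x) . z(y) ~= exp(-gamma * ||x - y||^2).

    Maps X of shape (n, d) to an explicit (n, dim) feature space so that
    linear methods on z(X) approximate RBF-kernel methods on X.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from the kernel's spectral distribution.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, dim))
    b = rng.uniform(0, 2 * np.pi, size=dim)
    return np.sqrt(2.0 / dim) * np.cos(X @ W + b)

X = np.random.randn(5, 10)
Z = rff_map(X, dim=2048)
approx = Z @ Z.T
exact = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))
print(np.max(np.abs(approx - exact)))  # small; shrinks as dim grows
```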