no code implementations • 18 Mar 2024 • Mingkui Tan, Guohao Chen, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Peilin Zhao, Shuaicheng Niu
To tackle this, we further propose EATA with Calibration (EATA-C) to separately exploit the reducible model uncertainty and the inherent data uncertainty for calibrated TTA.
1 code implementation • 27 Feb 2024 • Yaofo Chen, Shuaicheng Niu, Shoukai Xu, Hengjie Song, YaoWei Wang, Mingkui Tan
Moreover, as increasing amounts of data are collected at the edge, this paradigm also fails to further adapt the cloud model for better performance.
1 code implementation • 24 Feb 2023 • Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Zhiquan Wen, Yaofo Chen, Peilin Zhao, Mingkui Tan
In this paper, we investigate the reasons behind this instability and find that the batch norm layer is a crucial factor hindering TTA stability.
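A minimal numpy sketch (not the paper's method; the shift magnitude, batch size, and statistics are made up for illustration) of why batch norm is fragile at test time: re-estimating normalization statistics from a tiny, distribution-shifted test batch gives noisy, batch-dependent outputs, whereas frozen training statistics are deterministic but leave the shift uncorrected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training-time statistics learned on the source distribution.
train_mean, train_var = 0.0, 1.0

def batchnorm(x, mean, var, eps=1e-5):
    """Normalize activations with the given statistics."""
    return (x - mean) / np.sqrt(var + eps)

# A tiny, distribution-shifted test batch (mean shifted from 0 to 3).
test_batch = rng.normal(loc=3.0, scale=1.0, size=2)

# Re-estimating statistics from a 2-sample batch: the output is forced
# to zero mean, but the estimates swing wildly from batch to batch.
batch_mean, batch_var = test_batch.mean(), test_batch.var()
out_batch_stats = batchnorm(test_batch, batch_mean, batch_var)

# Frozen training statistics are stable but leave the shift intact:
# the normalized outputs still sit near the shifted mean of 3.
out_train_stats = batchnorm(test_batch, train_mean, train_var)
```

Neither choice is satisfactory on its own, which is one way to see why batch-norm layers are a weak point for stable TTA.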
1 code implementation • 31 Oct 2022 • Yaofo Chen, Yong Guo, Daihai Liao, Fanbing Lv, Hengjie Song, Mingkui Tan
Then, we perform a local search within the evoked subspace to find an effective architecture.
Ranked #2 on Neural Architecture Search on NAS-Bench-201, ImageNet-16-120 (Accuracy (Val) metric)
1 code implementation • 14 Oct 2022 • Yong Guo, Yaofo Chen, Yin Zheng, Qi Chen, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
More critically, these independent search processes cannot share their learned knowledge (i.e., the distribution of good architectures) with each other and thus often yield suboptimal results.
1 code implementation • 6 Apr 2022 • Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, Mingkui Tan
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and testing data by adapting a given model w.r.t.
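A common instantiation of TTA, which this line of work builds on, is entropy minimization on unlabeled test batches. The sketch below is an illustrative toy, not the paper's method: it adapts a full linear classifier with a hand-derived gradient (the gradient of softmax entropy w.r.t. the logits is -p * (log p + H)), whereas TTA methods typically update only normalization-layer parameters.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(p, eps=1e-12):
    """Per-sample Shannon entropy of predicted class probabilities."""
    return -(p * np.log(p + eps)).sum(axis=1)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # toy linear classifier: 4 features -> 3 classes
x = rng.normal(size=(8, 4))   # unlabeled test batch

def adapt_step(W, x, lr=0.05):
    """One entropy-minimization step on an unlabeled test batch."""
    z = x @ W
    p = softmax(z)
    h = entropy(p)
    # d(entropy)/d(logits) = -p * (log p + H), per sample.
    g_logits = -p * (np.log(p + 1e-12) + h[:, None])
    g_W = x.T @ g_logits / len(x)       # backprop through z = x @ W
    return W - lr * g_W, h.mean()

W1, h_before = adapt_step(W, x)
_, h_after = adapt_step(W1, x)
# The mean prediction entropy on the batch decreases after the update.
```

One gradient step sharpens the model's predictions on the test batch without any labels, which is the basic mechanism the stability and efficiency questions in these papers are about.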
1 code implementation • 30 Jun 2021 • Yong Guo, Yaofo Chen, Mingkui Tan, Kui Jia, Jian Chen, Jingdong Wang
In practice, the convolutional operation on some of the windows (e.g., smooth windows that contain very similar pixels) can be highly redundant and may introduce noise into the computation.
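The redundancy claim can be checked directly: on a piecewise-flat image, most 3x3 windows are near-constant, so a full dot product with the kernel adds little beyond a cheap summary of the window. The sketch below only measures the fraction of such smooth windows (the image, window size, and variance threshold are illustrative choices, not the paper's method).

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# A 32x32 image that is flat except for one sharp 16x16 square.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

# All 3x3 windows, shape (30, 30, 3, 3), and their per-window variance.
windows = sliding_window_view(img, (3, 3))
var = windows.var(axis=(-2, -1))

# "Smooth" windows are near-constant: convolution over them is redundant.
smooth = var < 1e-6
frac_smooth = smooth.mean()
```

Only the windows straddling the square's boundary carry real structure; the large majority are smooth, which motivates skipping or simplifying their computation.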
1 code implementation • CVPR 2021 • Yaofo Chen, Yong Guo, Qi Chen, Minli Li, Wei Zeng, YaoWei Wang, Mingkui Tan
One of the key steps in Neural Architecture Search (NAS) is to estimate the performance of candidate architectures.
no code implementations • 27 Feb 2021 • Yong Guo, Yaofo Chen, Yin Zheng, Qi Chen, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
To this end, we propose a Pareto-Frontier-aware Neural Architecture Generator (NAG) which takes an arbitrary budget as input and produces the Pareto optimal architecture for the target budget.
no code implementations • 1 Jan 2021 • Yong Guo, Yaofo Chen, Yin Zheng, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
To find promising architectures under different budgets, existing methods may have to perform an independent search for each budget, which is very inefficient and unnecessary.
1 code implementation • ICML 2020 • Yong Guo, Yaofo Chen, Yin Zheng, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
With the proposed search strategy, our Curriculum Neural Architecture Search (CNAS) method significantly improves the search efficiency and finds better architectures than existing NAS methods.