2 code implementations • 29 Mar 2024 • Xu Ma, Xiyang Dai, Yue Bai, Yizhou Wang, Yun Fu
Recent studies have drawn attention to the untapped potential of the "star operation" (element-wise multiplication) in network design.
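For readers unfamiliar with the term, the "star operation" is simply the element-wise product of two learned branches. Below is a minimal PyTorch sketch of such a block; the module name, branch widths, and output projection are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class StarBlock(nn.Module):
    """Minimal sketch of the star operation: element-wise multiplication
    of two learned branches (illustrative; not the paper's exact block)."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.f1 = nn.Linear(dim, hidden_dim)    # first branch
        self.f2 = nn.Linear(dim, hidden_dim)    # second branch
        self.proj = nn.Linear(hidden_dim, dim)  # project back to input width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # the "star": multiply the two branch outputs element-wise
        return self.proj(self.f1(x) * self.f2(x))

x = torch.randn(4, 64)
print(StarBlock(64, 128)(x).shape)  # torch.Size([4, 64])
```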
1 code implementation • 14 Mar 2024 • Yitian Zhang, Yue Bai, Huan Wang, Yizhou Wang, Yun Fu
Current object recognition training pipelines neglect hue jittering during data augmentation, both because it introduces appearance changes considered detrimental to classification and because its implementation is inefficient in practice.
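For context, hue jittering is conventionally implemented via an RGB-to-HSV round trip, which is the inefficiency alluded to above. A minimal sketch using torchvision's standard transform (the file name and jitter strength below are placeholders):

```python
from PIL import Image
from torchvision import transforms

# Conventional hue jittering: internally converts RGB -> HSV, shifts the hue
# channel, and converts back -- the costly round trip noted above.
hue_jitter = transforms.ColorJitter(hue=0.25)  # shift sampled from [-0.25, 0.25]

img = Image.open("frame.jpg")  # placeholder input frame
augmented = hue_jitter(img)
```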
no code implementations • NeurIPS 2023 • Jianglin Lu, Yi Xu, Huan Wang, Yue Bai, Yun Fu
We begin by defining the pivotal nodes as $k$-hop starved nodes, which can be identified based on a given adjacency matrix.
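As a rough illustration of the identification step, one plausible reading (an assumption on our part, not necessarily the paper's exact criterion) is that a node is $k$-hop starved when no labeled node lies within its $k$-hop neighborhood, which can be checked with powers of the adjacency matrix:

```python
import numpy as np

def k_hop_starved(adj: np.ndarray, labeled: np.ndarray, k: int) -> np.ndarray:
    """Flag nodes with no labeled node within k hops (one plausible reading
    of 'k-hop starved'; the paper's exact definition may differ).
    adj: (n, n) binary adjacency matrix; labeled: (n,) boolean mask."""
    n = adj.shape[0]
    step = (adj > 0).astype(int)
    reach = np.eye(n, dtype=int)                 # 0-hop reachability (self)
    for _ in range(k):                           # grow to <= k-hop reachability
        reach = np.clip(reach + reach @ step, 0, 1)
    return (reach @ labeled.astype(int)) == 0    # no labeled node reached
```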
2 code implementations • CVPR 2023 • Yitian Zhang, Yue Bai, Chang Liu, Huan Wang, Sheng Li, Yun Fu
To fix this issue, we propose a general framework, named Frame Flexible Network (FFN), which not only enables the model to be evaluated at different numbers of frames to adjust its computation, but also significantly reduces the memory cost of storing multiple models.
1 code implementation • 28 Jan 2023 • Yizhou Wang, Can Qin, Yue Bai, Yi Xu, Xu Ma, Yun Fu
With the same perturbation magnitude, the test-time reconstruction error of normal frames decreases more than that of abnormal frames, which helps mitigate the overfitting problem of reconstruction.
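A sketch of how such a perturbation-based score might look; the gradient-sign scheme and step size here are our assumptions, illustrating the error-gap intuition rather than the paper's exact method:

```python
import torch

def perturbed_recon_error(ae, x, eps: float = 0.05):
    """Nudge the input along the direction that decreases reconstruction
    error; normal frames recover more than abnormal ones, widening the gap.
    (Assumed scheme for illustration; ae is any autoencoder module.)"""
    x = x.clone().requires_grad_(True)
    err = ((ae(x) - x) ** 2).mean()
    grad, = torch.autograd.grad(err, x)
    x_pert = x - eps * grad.sign()  # descend the reconstruction error
    with torch.no_grad():
        # per-frame error on the perturbed input serves as the anomaly score
        return ((ae(x_pert) - x_pert) ** 2).flatten(1).mean(dim=1)
```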
2 code implementations • 12 Jan 2023 • Huan Wang, Can Qin, Yue Bai, Yun Fu
The state of neural network pruning has been noticed to be unclear and even confusing for a while, largely due to "a lack of standardized benchmarks and metrics" [3].
1 code implementation • 18 Nov 2022 • Yitian Zhang, Yue Bai, Huan Wang, Yi Xu, Yun Fu
To tackle this problem, we propose the Ample and Focal Network (AFNet), which is composed of two branches to utilize more frames with less computation.
1 code implementation • 13 Oct 2022 • Yue Bai, Huan Wang, Xu Ma, Yitian Zhang, Zhiqiang Tao, Yun Fu
We validate the potential of PEMN by learning masks on random weights with a limited number of unique values, and test its effectiveness in a new compression paradigm based on different network architectures.
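To make the mask-learning idea concrete, here is a minimal sketch in the spirit of training masks over frozen random weights; the straight-through parameterization and sparsity handling are our assumptions, not PEMN's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Frozen random weights; only per-weight mask scores are trained.
    A straight-through estimator passes gradients to the scores."""
    def __init__(self, in_f: int, out_f: int, sparsity: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f), requires_grad=False)
        self.score = nn.Parameter(torch.randn(out_f, in_f))  # learnable scores
        self.sparsity = sparsity

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = max(1, int(self.score.numel() * self.sparsity))
        thresh = self.score.flatten().kthvalue(k).values
        hard = (self.score > thresh).float()
        # straight-through: binary mask in the forward pass,
        # identity gradient to the scores in the backward pass
        mask = hard + self.score - self.score.detach()
        return F.linear(x, self.weight * mask)
```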
1 code implementation • ICLR 2022 • Yue Bai, Huan Wang, Zhiqiang Tao, Kunpeng Li, Yun Fu
In this work, we regard the winning ticket from LTH as a subnetwork in a trainable condition, take its performance as our benchmark, and proceed from a complementary direction to articulate the Dual Lottery Ticket Hypothesis (DLTH): randomly selected subnetworks from a randomly initialized dense network can be transformed into a trainable condition and achieve performance comparable to LTH winning tickets; in other words, random tickets in a given lottery pool can be transformed into winning tickets.
1 code implementation • 25 Nov 2021 • Yizhou Wang, Can Qin, Rongzhe Wei, Yi Xu, Yue Bai, Yun Fu
Next, we add adversarial perturbations to the transformed features to decrease the softmax scores of their predicted labels, and design anomaly scores based on the classifier's predictive uncertainty on these perturbed features.
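A compact sketch of this scoring scheme; the gradient-sign step and the entropy-based uncertainty measure are our assumptions:

```python
import torch
import torch.nn.functional as F

def anomaly_score(classifier, feats, eps: float = 0.01):
    """Perturb features to lower the softmax score of the predicted label,
    then score anomalies by the classifier's uncertainty (entropy) on the
    perturbed features. (Assumed details, illustrating the described idea.)"""
    feats = feats.clone().requires_grad_(True)
    logits = classifier(feats)
    pred = logits.argmax(dim=1)
    # ascending the predicted-class cross-entropy lowers its softmax score
    loss = F.cross_entropy(logits, pred)
    grad, = torch.autograd.grad(loss, feats)
    perturbed = feats + eps * grad.sign()
    with torch.no_grad():
        probs = F.softmax(classifier(perturbed), dim=1)
        # predictive entropy as the anomaly score
        return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
```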
2 code implementations • 12 Oct 2021 • Songyao Jiang, Bin Sun, Lichen Wang, Yue Bai, Kunpeng Li, Yun Fu
Current Sign Language Recognition (SLR) methods usually extract features via deep neural networks and suffer from overfitting due to limited and noisy data.
no code implementations • 29 Sep 2021 • Huan Wang, Can Qin, Yue Bai, Yun Fu
Several recent works have questioned the value of inheriting weights in structured neural network pruning, because they empirically found that training from scratch can match or even outperform fine-tuning a pruned model.
no code implementations • 12 May 2021 • Huan Wang, Can Qin, Yue Bai, Yun Fu
This paper is meant to explain it through the lens of dynamical isometry [42].
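For context, a network satisfies dynamical isometry when the singular values of its input-output Jacobian concentrate around 1, so signals neither explode nor vanish through depth. A small sketch to measure this for a toy network (the architecture here is arbitrary):

```python
import torch
import torch.nn as nn

def jacobian_singular_values(net: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Singular values of the input-output Jacobian at x; dynamical
    isometry means these concentrate near 1."""
    J = torch.autograd.functional.jacobian(net, x)  # (out_dim, in_dim) for 1-D x
    return torch.linalg.svdvals(J)

net = nn.Sequential(nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 64))
print(jacobian_singular_values(net, torch.randn(64))[:5])
```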
3 code implementations • 16 Mar 2021 • Songyao Jiang, Bin Sun, Lichen Wang, Yue Bai, Kunpeng Li, Yun Fu
Sign language is commonly used by deaf or speech-impaired people to communicate, but requires significant effort to master.
Ranked #2 on Sign Language Recognition on WLASL-2000
2 code implementations • 11 Mar 2021 • Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu
Neural network pruning typically removes connections or neurons from a pretrained, converged model, while a new pruning paradigm, pruning at initialization (PaI), attempts to prune a randomly initialized network.
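The contrast can be stated in a few lines with PyTorch's built-in pruning utilities; here we prune a layer that is still at its random initialization (the layer and sparsity level are chosen arbitrarily for illustration):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Pruning at initialization: remove weights *before* any training,
# rather than from a pretrained, converged model.
layer = nn.Linear(128, 10)                               # still randomly initialized
prune.l1_unstructured(layer, name="weight", amount=0.9)  # zero 90% of weights
print(float((layer.weight == 0).float().mean()))         # ~0.9
```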
no code implementations • 7 Dec 2020 • Yu Yin, Joseph P. Robinson, Songyao Jiang, Yue Bai, Can Qin, Yun Fu
Even as impressive milestones are achieved in synthesizing faces, preserving identity remains essential in practice and should not be overlooked.
no code implementations • 14 Sep 2020 • Yue Bai, Zhiqiang Tao, Lichen Wang, Sheng Li, Yu Yin, Yun Fu
Extensive experiments on four action datasets demonstrate that the proposed CAM achieves better results on each view and also boosts multi-view performance.
no code implementations • 24 Nov 2019 • Yue Bai, Lichen Wang, Zhiqiang Tao, Sheng Li, Yun Fu
Multi-view time series classification (MVTSC) aims to improve performance by fusing distinctive temporal information from multiple views.
no code implementations • 27 Jun 2019 • Yue Bai, Leo L. Duan
In representation learning and non-linear dimension reduction, there is great interest in learning 'disentangled' latent variables, where each sub-coordinate almost uniquely controls a facet of the observed data.
3 code implementations • 1 Oct 2018 • Yi Zhou, Yue Bai, Shuvra S. Bhattacharyya, Heikki Huttunen
In this work, we propose a framework for improving the performance of any deep neural network that may suffer from vanishing gradients.
no code implementations • 2 Jul 2018 • Yue Bai, Shuvra S. Bhattacharyya, Antti P. Happonen, Heikki Huttunen
We propose a new framework for image classification with deep neural networks.