3 code implementations • 20 Sep 2021 • Jia Bi, Jonathon Hare, Geoff V. Merrett
When compared to GhostNet, inference latency on the Jetson Nano is improved by 1.3x and 2x on the GPU and CPU, respectively.
no code implementations • 17 Jul 2021 • Hishan Parry, Lei Xun, Amin Sabet, Jia Bi, Jonathon Hare, Geoff V. Merrett
The new reduced design space results in a BLEU score increase of approximately 1% for sub-optimal models from the original design space, with performance scaling across a wide latency range: 0.356s-1.526s on the GPU and 2.9s-7.31s on the CPU.
1 code implementation • 8 May 2021 • Wei Lou, Lei Xun, Amin Sabet, Jia Bi, Jonathon Hare, Geoff V. Merrett
However, the training process of such dynamic DNNs can be costly, since a platform-aware model must be retrained for each deployment scenario to become dynamic.
no code implementations • 19 Feb 2021 • Jia Bi, Steve R. Gunn
In this paper, we propose a new technique, variance-controlled stochastic gradient (VCSG), to improve the performance of the stochastic variance reduced gradient (SVRG) algorithm.
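For context, the baseline SVRG estimator that VCSG builds on can be sketched as follows. This is a minimal illustration of standard SVRG, not the proposed VCSG; the function names and the toy setup are assumptions of this sketch.

```python
import numpy as np

def svrg(grad_f, x0, n, lr=0.01, epochs=10, inner_steps=None, rng=None):
    """Minimal SVRG sketch (illustrative, not the paper's VCSG).

    grad_f(x, i) returns the gradient of the i-th component function at x;
    n is the number of component functions in the finite sum.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    inner_steps = n if inner_steps is None else inner_steps
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        snapshot = x.copy()
        # Full gradient at the snapshot: the variance-reduction anchor.
        full_grad = np.mean([grad_f(snapshot, i) for i in range(n)], axis=0)
        for _ in range(inner_steps):
            i = rng.integers(n)
            # Unbiased, variance-reduced estimator:
            # g = grad_f_i(x) - grad_f_i(snapshot) + full_grad
            g = grad_f(x, i) - grad_f(snapshot, i) + full_grad
            x -= lr * g
    return x
```

On a toy least-squares objective (each component f_i(x) = (x - a_i)^2), the iterates contract toward the mean of the a_i, since the snapshot correction cancels the per-sample noise.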
no code implementations • 13 May 2019 • Jia Bi, Steve R. Gunn
This paper proposes an integrated approach that controls the stochastic element in the optimizer and uses a hyper-parameter to balance the estimator's trade-off between bias and variance reduction.
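One way such a hyper-parameter can trade bias against variance is to blend the per-sample stochastic gradient with a stale full gradient. The sketch below is only an illustration of that general idea under assumed notation (the name `blended_gradient` and the parameter `lam` are mine), not necessarily the estimator used in the paper.

```python
import numpy as np

def blended_gradient(grad_f, x, full_grad_snapshot, i, lam):
    """Hypothetical blended estimator (illustrative assumption):

    lam = 1 recovers the plain, unbiased stochastic gradient grad_f(x, i);
    lam -> 0 gives a low-variance but biased estimate anchored at a stale
    full gradient computed at an earlier snapshot point.
    """
    return lam * grad_f(x, i) + (1.0 - lam) * full_grad_snapshot
```

For a least-squares toy problem one can check numerically that lowering `lam` shrinks the variance of the estimator across sample indices while introducing bias whenever the iterate has moved away from the snapshot.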
no code implementations • ICLR 2018 • Jia Bi
Deep learning is becoming increasingly widespread in application due to its power in solving complex classification problems.