no code implementations • 13 Aug 2022 • Yiheng Lu, Ziyu Guan, Yaming Yang, Maoguo Gong, Wei Zhao, Kaiyuan Feng
By leveraging the proposed AFIE, the framework yields a stable importance evaluation for each filter, regardless of whether the original model is fully trained.
no code implementations • 9 Aug 2022 • Yiheng Lu, Maoguo Gong, Wei Zhao, Kaiyuan Feng, Hao Li
Therefore, we propose a sensitivity-based method that evaluates the importance of each layer from the perspective of inference accuracy by adding extra damage to the original model.
no code implementations • 20 Oct 2021 • Yunxiao Guo, Han Long, Xiaojun Duan, Kaiyuan Feng, Maochu Li, Xiaying Ma
As a deep reinforcement learning algorithm, Proximal Policy Optimization (PPO) performs well on many complex tasks and has become one of the most popular RL algorithms in recent years.
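For context, the core of standard PPO is the clipped surrogate objective, which limits how far a single update can move the new policy from the old one. A minimal NumPy sketch of that objective (not code from the paper above; `eps` is the usual clipping hyperparameter, commonly 0.2):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss from standard PPO (to be minimized).

    ratio: probability ratio pi_new(a|s) / pi_old(a|s), per sample
    advantage: estimated advantage A_t, per sample
    """
    unclipped = ratio * advantage
    # Clipping the ratio removes the incentive to push the policy
    # outside the trust region [1 - eps, 1 + eps].
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.minimum(unclipped, clipped)

# A ratio of 1.5 with a positive advantage is treated as if it were 1.2,
# so gradients stop encouraging larger policy steps in that direction.
loss = ppo_clip_loss(np.array([1.5]), np.array([1.0]))
```

In practice this per-sample loss is averaged over a minibatch and combined with a value-function loss and an entropy bonus.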