no code implementations • 6 Nov 2020 • Hock Hung Chieng, Noorhaniza Wahid, Pauline Ong
However, ReLU has several shortcomings that can lead to inefficient training of deep neural networks: 1) its negative cancellation property treats negative inputs as unimportant information for learning, resulting in performance degradation; 2) its inherently predefined nature offers no additional flexibility, expressivity, or robustness to the networks; 3) its mean activation is highly positive, leading to a bias shift effect in network layers; and 4) its multilinear structure restricts the non-linear approximation power of the networks.
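As a minimal NumPy sketch (not from the paper) of the first and third shortcomings, the snippet below shows how ReLU cancels negative inputs to zero and how the resulting mean activation becomes strictly positive even for zero-centred inputs, which is the source of the bias shift effect. The helper name `relu` and the sample sizes are illustrative assumptions.

```python
import numpy as np

def relu(x):
    """Standard rectified linear unit: max(0, x)."""
    return np.maximum(0.0, x)

# Zero-centred pre-activations, as a hidden layer might produce them.
rng = np.random.default_rng(0)
pre_activations = rng.normal(loc=0.0, scale=1.0, size=10_000)
post_activations = relu(pre_activations)

# 1) Negative cancellation: roughly half of the inputs are mapped to exactly 0,
#    so whatever information they carried is discarded.
print("fraction of inputs cancelled:", np.mean(post_activations == 0.0))

# 3) Bias shift: the inputs have near-zero mean, but the outputs have a clearly
#    positive mean, shifting the input statistics of the following layer.
print("mean before ReLU:", pre_activations.mean())
print("mean after  ReLU:", post_activations.mean())
```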
2 code implementations • 15 Dec 2018 • Hock Hung Chieng, Noorhaniza Wahid, Pauline Ong, Sai Raj Kishore Perla
To verify its performance, this study evaluates FTS against ReLU and several recent activation functions.
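For intuition about what such a comparison involves, here is a hedged sketch placing FTS next to ReLU on a few sample inputs; the FTS form used (x·sigmoid(x) plus a threshold T for non-negative inputs, the constant T otherwise, with T = -0.20) and the function names are assumptions for illustration, not a definitive restatement of the paper's definition.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def fts(x, T=-0.20):
    """Flatten-T Swish sketch: x * sigmoid(x) shifted by a threshold T for
    non-negative inputs, and the constant T for negative inputs.
    The exact form and the default T = -0.20 are assumptions here."""
    return np.where(x >= 0.0, x / (1.0 + np.exp(-x)) + T, T)

# Compare the two activations on a small range of inputs.
x = np.linspace(-5.0, 5.0, 11)
print("x    :", np.round(x, 2))
print("ReLU :", np.round(relu(x), 2))
print("FTS  :", np.round(fts(x), 2))
```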