no code implementations • ICLR 2018 • Ricky Fok, Aijun An, Xiaogang Wang
In the layer decoupling limit applicable to residual networks (He et al., 2015), we show empirically that the remnant symmetries surviving the non-linear layers are spontaneously broken.
ICLR 2018 • Ricky Fok, Aijun An, Zana Rashidi, Xiaogang Wang
We propose the Warped Residual Network (WarpNet), which uses a parallelizable warp operator for forward and backward propagation to distant layers and trains faster than the original residual network.
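To illustrate the idea of propagating to a distant layer with a parallelizable operator, here is a minimal sketch. It is not the paper's architecture: `F1` and `F2` are toy stand-ins for residual-block functions, and the warp step uses a first-order Taylor expansion so that both blocks are evaluated at the same input (and could therefore run in parallel), with a Jacobian-vector product correcting for the skipped intermediate activation.

```python
import numpy as np

# Toy residual-block functions (illustrative, not WarpNet's blocks).
def F1(x):
    return np.tanh(x)

def F2(x):
    return 0.5 * np.tanh(x)

def residual_two_steps(x):
    # Sequential propagation through two residual blocks:
    # x1 = x + F1(x);  x2 = x1 + F2(x1)
    x1 = x + F1(x)
    return x1 + F2(x1)

def warp_two_steps(x, eps=1e-6):
    # Warp approximation: F1(x) and F2(x) are both evaluated at the
    # SAME input x (parallelizable), and a Jacobian-vector product
    # J_F2(x) @ F1(x) corrects, to first order, for having skipped
    # the intermediate activation x1.
    f1 = F1(x)
    f2 = F2(x)
    jvp = (F2(x + eps * f1) - f2) / eps  # finite-difference JVP
    return x + f1 + f2 + jvp

x = np.array([0.3, -0.7, 1.2])
print(residual_two_steps(x))
print(warp_two_steps(x))  # agrees with the sequential result to first order
```

The approximation error is second order in the block output `F1(x)`, which is the sense in which the warp operator can stand in for sequential propagation across layers.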
17 Oct 2017 • Ricky Fok, Aijun An, Xiaogang Wang
We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory.
9 Sep 2017 • Ricky Fok, Aijun An, Xiaogang Wang
The global optimization method first reduces the high-dimensional search to a one-dimensional geodesic, finding a starting point close to a local mode.
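The two-stage idea — a one-dimensional search to locate a good starting point, then local refinement — can be sketched as follows. The paper's geodesic construction is not reproduced here; as the simplest stand-in, the one-dimensional path is a straight line through the origin along a chosen direction, the objective `f` is a toy function with multiple local modes, and the local step is plain gradient descent with a numerical gradient. All names and parameters are illustrative assumptions.

```python
import numpy as np

# Toy multimodal objective (illustrative, not the authors' benchmark).
def f(x):
    return np.sum(x**2) + 2.0 * np.sin(3.0 * np.linalg.norm(x))

def line_search_start(d, t_grid):
    # Stage 1: reduce the search to one scalar t along the path
    # x(t) = t * d and grid-search it. (A straight line stands in
    # for the paper's geodesic.)
    vals = [f(t * d) for t in t_grid]
    return t_grid[int(np.argmin(vals))] * d

def local_descent(x0, lr=0.01, steps=500, eps=1e-5):
    # Stage 2: refine locally from the starting point using gradient
    # descent with a central-difference numerical gradient.
    x = x0.astype(float).copy()
    for _ in range(steps):
        g = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                      for e in np.eye(len(x))])
        x -= lr * g
    return x

d = np.array([1.0, 1.0]) / np.sqrt(2.0)       # search direction
x0 = line_search_start(d, np.linspace(-5.0, 5.0, 201))
x_star = local_descent(x0)
print("start:", x0, f(x0))
print("mode: ", x_star, f(x_star))
```

The one-dimensional stage is cheap because only a scalar is searched; the local stage then only needs to converge from a point already near a mode.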