no code implementations • 27 Dec 2023 • Woochul Kang
In this paper, we present an architectural pattern and training method for adaptive depth networks that can provide flexible accuracy-efficiency trade-offs in a single network.
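The general idea can be illustrated with a minimal sketch: in a residual network, each block refines its input, so later blocks can be skipped at inference time to trade accuracy for speed within a single set of weights. All names, shapes, and the toy block design below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_block(dim):
    # Each block is a residual refinement: y = x + g(x).
    w = rng.standard_normal((dim, dim)) * 0.1
    def block(x):
        return x + np.tanh(x @ w)
    return block

dim = 8
blocks = [make_block(dim) for _ in range(6)]

def forward(x, depth):
    """Run only the first `depth` blocks of the single shared network."""
    for b in blocks[:depth]:
        x = b(x)
    return x

x = rng.standard_normal((1, dim))
full = forward(x, depth=6)   # full-depth prediction, slowest
fast = forward(x, depth=3)   # cheaper sub-network, same weights
```

Because every sub-network is a prefix of the same model, no extra parameters are stored for each accuracy-efficiency operating point.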
no code implementations • 1 Jan 2021 • Woochul Kang, Daeyeon Kim
In the proposed ConvNet architecture, convolution layers are decomposed into a filter basis, which can be shared recursively, and layer-specific parts.
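A filter-basis decomposition of this kind can be sketched as follows: each layer's convolution filters are composed from a small basis shared across layers plus layer-specific mixing coefficients. The sizes and variable names below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

r, k = 4, 3                               # basis size, kernel size
basis = rng.standard_normal((r, k, k))    # shared across layers

def layer_filters(coeffs):
    """Compose one layer's filters from the shared basis.
    coeffs: (out_channels, r) layer-specific part."""
    return np.tensordot(coeffs, basis, axes=([1], [0]))  # (out, k, k)

coeffs_l1 = rng.standard_normal((16, r))  # layer 1's own parameters
coeffs_l2 = rng.standard_normal((16, r))  # layer 2 reuses the same basis

w1 = layer_filters(coeffs_l1)
w2 = layer_filters(coeffs_l2)
print(w1.shape)  # (16, 3, 3)
```

The shared basis stores only r*k*k weights, so each additional layer costs only its small coefficient matrix rather than a full filter bank.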
no code implementations • NeurIPS 2021 • Woochul Kang, Daeyeon Kim
In this paper, we present a recursive convolution block design and training method, in which a recursively shareable part, or a filter basis, is separated and learned while effectively avoiding the vanishing/exploding gradients problem during training.
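One common way such a recursive block can be made trainable is a residual formulation: the shareable part is applied repeatedly while small per-step parameters stay unshared, and the residual connection keeps activation magnitudes stable across recursions, which mitigates vanishing/exploding gradients. The sketch below assumes this residual design and uses a dense weight matrix as a stand-in for the filter basis; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, steps = 8, 4

shared = rng.standard_normal((dim, dim)) * 0.1        # recursively shared part
step_bias = rng.standard_normal((steps, dim)) * 0.1   # per-step, unshared parts

def recursive_block(x):
    # The same `shared` weights are reused at every recursion step;
    # the residual form y = x + f(x) keeps the signal well-scaled.
    for t in range(steps):
        x = x + np.tanh(x @ shared + step_bias[t])
    return x

x = rng.standard_normal((1, dim))
y = recursive_block(x)
print(y.shape)  # (1, 8)
```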