Search Results for author: Hisahiro Suganuma

Found 2 papers, 0 papers with code

MRL: Learning to Mix with Attention and Convolutions

no code implementations · 30 Aug 2022 · Shlok Mohta, Hisahiro Suganuma, Yoshiki Tanaka

To achieve an efficient mix, we exploit the domain-wide receptive field of self-attention for regional-scale mixing, and convolutional kernels, whose receptive field is restricted to a local neighbourhood, for local-scale mixing.

Tasks: Histopathological Segmentation · Inductive Bias · +3
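
The abstract above pairs a global mixing path (self-attention, domain-wide receptive field) with a local mixing path (small convolutional kernels). The sketch below illustrates that split in a generic form; it is not the MRL architecture from the paper, and the block name `MixBlock` and the residual-sum combination are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MixBlock(nn.Module):
    """Illustrative mixing block (hypothetical, not the paper's MRL design):
    self-attention supplies regional-scale mixing, a depthwise convolution
    supplies local-scale mixing, and both are added back residually."""

    def __init__(self, dim: int, num_heads: int = 4, kernel_size: int = 3):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Self-attention: every token attends to every other token,
        # giving a domain-wide (regional-scale) receptive field.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Depthwise conv: receptive field limited to kernel_size,
        # so it mixes only spatially local neighbours.
        self.local = nn.Conv2d(dim, dim, kernel_size,
                               padding=kernel_size // 2, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature map
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (b, h*w, c)
        regional, _ = self.attn(tokens, tokens, tokens)
        regional = regional.transpose(1, 2).reshape(b, c, h, w)
        local = self.local(x)
        return x + regional + local  # residual sum of both mixing scales

x = torch.randn(2, 64, 16, 16)
print(MixBlock(64)(x).shape)  # torch.Size([2, 64, 16, 16])
```

How the two paths are fused (sum, gating, concatenation) is a design choice; the residual sum here is only one plausible option.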

Massively Distributed SGD: ImageNet/ResNet-50 Training in a Flash

no code implementations · 13 Nov 2018 · Hiroaki Mikami, Hisahiro Suganuma, Pongsakorn U-chupala, Yoshiki Tanaka, Yuichi Kageyama

Scaling distributed deep learning to a massive GPU cluster is challenging due to the instability of large mini-batch training and the overhead of gradient synchronization.
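
The gradient-synchronization overhead the abstract mentions comes from the all-reduce step in data-parallel SGD: after each local backward pass, every worker must exchange and average its gradients before the optimizer step. The sketch below shows that generic step with `torch.distributed`; it assumes an already-initialized process group and is not the paper's optimized communication scheme.

```python
import torch
import torch.distributed as dist

def allreduce_gradients(model: torch.nn.Module) -> None:
    """Average gradients across all workers after the local backward pass.

    Generic data-parallel synchronization sketch (assumes
    dist.init_process_group has already been called). Each all_reduce is a
    blocking collective over the cluster; at massive GPU counts this
    communication is the overhead the paper targets.
    """
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum this parameter's gradient over all workers ...
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            # ... then divide to obtain the mean gradient.
            param.grad /= world_size
```

In practice this loop is fused into fewer, larger collectives (e.g., by bucketing parameters) to amortize per-call latency, which is one common way the synchronization cost is reduced.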
