Search Results for author: Hiroshi Inoue

Found 4 papers, 1 paper with code

Multi-Sample Dropout for Accelerated Training and Better Generalization

7 code implementations • 23 May 2019 • Hiroshi Inoue

The additional computation cost due to the duplicated operations is not significant for deep convolutional networks because most of the computation time is consumed in the convolution layers before the dropout layer, which are not duplicated.

Image Classification
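The snippet above notes that multi-sample dropout duplicates only the cheap layers after dropout, so the expensive convolutional trunk runs once. A minimal numpy sketch of that idea (the function name, shapes, and the simple linear head are illustrative, not the paper's code):

```python
import numpy as np

def multi_sample_dropout(features, weights, num_samples=4, drop_rate=0.5, rng=None):
    """Apply dropout to the same shared features several times with
    independent masks, run the small head on each, and average the outputs.
    The shared `features` (e.g. conv-layer output) are computed only once;
    only the part after dropout is duplicated."""
    if rng is None:
        rng = np.random.default_rng(0)
    outputs = []
    for _ in range(num_samples):
        mask = rng.random(features.shape) >= drop_rate   # independent mask per sample
        dropped = features * mask / (1.0 - drop_rate)    # inverted-dropout scaling
        outputs.append(dropped @ weights)                # cheap head, duplicated per sample
    return np.mean(outputs, axis=0)                      # average over dropout samples
```

In training one would average the per-sample losses instead of the outputs, but the cost structure is the same: only the lines inside the loop are repeated.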

Data Augmentation by Pairing Samples for Images Classification

no code implementations • ICLR 2018 • Hiroshi Inoue

This simple data augmentation technique significantly improved classification accuracy for all the tested datasets; for example, the top-1 error rate was reduced from 33.5% to 29.0% for the ILSVRC 2012 dataset with GoogLeNet and from 8.22% to 6.93% on the CIFAR-10 dataset.

Classification Data Augmentation +2
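The augmentation above pairs training samples; a minimal sketch of the core operation, assuming the pairing is a pixel-wise average of two images while the first image's label is kept (function and variable names are mine, not the paper's):

```python
import numpy as np

def sample_pairing(image_a, image_b):
    """Create a synthetic training image by averaging two images pixel-wise.
    The label of `image_a` is used for the mixed image; `image_b` is a
    randomly chosen training image whose label is discarded."""
    return (image_a.astype(np.float32) + image_b.astype(np.float32)) / 2.0
```

Because each mixed image reuses existing samples, the effective number of distinct training images grows roughly quadratically with the dataset size.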

Fast and Accurate Inference with Adaptive Ensemble Prediction for Deep Networks

no code implementations • ICLR 2018 • Hiroshi Inoue

One obvious drawback of the ensembling technique is its higher execution cost during inference. If we average 100 local predictions, the execution cost will be 100 times as high as the cost without the ensemble.

Image Classification

Adaptive Ensemble Prediction for Deep Neural Networks based on Confidence Level

no code implementations • 27 Feb 2017 • Hiroshi Inoue

In this paper, we first describe our insights on the relationship between prediction probability and the effect of ensembling in current deep neural networks: for inputs already predicted with high probability, ensembling does not fix mispredictions, even though a non-negligible number of such inputs are mispredicted.
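The insight above motivates stopping the ensemble early once a prediction is already confident, since further averaging would not change it. A minimal sketch of that adaptive loop (the function names and the simple max-probability confidence test are illustrative assumptions, not the paper's exact criterion):

```python
import numpy as np

def adaptive_ensemble(predict_fns, x, confidence_threshold=0.9):
    """Average per-model probability vectors one model at a time and stop
    early once the running average's top probability exceeds the threshold,
    skipping the remaining (expensive) models for confident inputs."""
    total = None
    for i, predict in enumerate(predict_fns, start=1):
        probs = predict(x)                       # one model's probability vector
        total = probs if total is None else total + probs
        avg = total / i                          # running ensemble average
        if avg.max() >= confidence_threshold:    # confident enough: stop early
            break
    return avg
```

For easy, high-confidence inputs this evaluates only one or two models, so the average inference cost falls well below that of always running the full ensemble.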
