Quantization techniques can reduce the size of deep neural networks and improve inference latency and throughput by taking advantage of high-throughput integer instructions.
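As a rough illustration (a minimal sketch under simplified assumptions, not any particular library's scheme), affine int8 quantization maps a float32 tensor onto 256 integer levels, shrinking storage 4x at the cost of a bounded rounding error:

```python
import numpy as np

def quantize_int8(x):
    """Affine (asymmetric) quantization: map the float range of x onto [-128, 127]."""
    scale = (x.max() - x.min()) / 255.0
    zero_point = np.round(-128.0 - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 codes."""
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.randn(256, 256).astype(np.float32)
q, s, zp = quantize_int8(weights)
recon = dequantize(q, s, zp)
# int8 storage is 4x smaller than float32; reconstruction error stays within one step
```

Inference engines additionally replace float matrix multiplies with int8 ones, which is where the latency and throughput gains come from; the sketch above only shows the storage side.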
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains.
In this paper, to relieve the overfitting effect of ResNet and its improvements (i.e., Wide ResNet, PyramidNet, and ResNeXt), we propose a new regularization method called ShakeDrop regularization.
Deep residual nets formed the foundation of our submissions to the ILSVRC & COCO 2015 competitions, where we also won first place on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution).
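The appeal of the depthwise separable factorization is easy to see from a parameter count; a minimal sketch (channel and kernel sizes are illustrative choices, not from the paper):

```python
def conv_params(c_in, c_out, k):
    """Parameters in a regular k x k convolution (bias terms omitted)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) followed by a 1x1 pointwise conv."""
    return c_in * k * k + c_in * c_out

regular = conv_params(128, 256, 3)        # 294912 parameters
separable = separable_params(128, 256, 3) # 1152 + 32768 = 33920 parameters
print(regular / separable)                # roughly 8.7x fewer parameters
```

The same factorization cuts multiply-accumulate operations by a similar factor, since each parameter is applied once per output spatial location.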
With the advent of deep learning, neural network-based recommendation models have emerged as an important tool for tackling personalization and recommendation tasks.
We propose a novel approach for instance-level image retrieval.
The popular Q-learning algorithm is known to overestimate action values under certain conditions.
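The overestimation comes from taking a max over noisy value estimates. A toy numpy sketch (an illustrative setup, not the paper's experiment) shows the bias, and how a double-estimator scheme avoids it by selecting the action with one estimate and evaluating it with an independent second one:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_actions = 10_000, 10

# All 10 actions are truly worth 0; estimates carry unbiased zero-mean noise.
noise = rng.normal(0.0, 1.0, size=(n_trials, n_actions))

# Q-learning-style target: max over the noisy estimates.
single_max = noise.max(axis=1).mean()  # biased well above the true max of 0

# Double estimator: argmax under one estimate, value under an independent one.
noise_b = rng.normal(0.0, 1.0, size=(n_trials, n_actions))
best = noise.argmax(axis=1)
double_est = noise_b[np.arange(n_trials), best].mean()  # close to 0, unbiased
```

Because the action selected by the first estimator is just one fixed index from the second estimator's point of view, its independently drawn value has zero mean, which is the core idea behind Double Q-learning.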
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.
We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning.