Handwritten Digit Recognition

22 papers with code • 1 benchmark • 5 datasets


Most implemented papers

LipschitzLR: Using theoretically computed adaptive learning rates for fast convergence

yrahul3910/adaptive-lr-dnn 20 Feb 2019

In this paper, we propose a novel method to compute the learning rate for training deep neural networks with stochastic gradient descent.
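The abstract does not state the formula, but the general idea behind Lipschitz-based step sizes can be sketched for a simple convex case: for gradient descent, a learning rate of 1/L (with L the Lipschitz constant of the gradient) guarantees monotone descent. The least-squares setting below is an illustration, not the paper's method for deep networks.

```python
import numpy as np

def lipschitz_lr(X):
    # For the least-squares loss (1/2n)||Xw - y||^2, the gradient
    # (1/n) X^T (Xw - y) is Lipschitz with constant equal to the
    # largest eigenvalue of (1/n) X^T X. Step size 1/L then gives
    # guaranteed descent for gradient descent.
    n = X.shape[0]
    L = np.linalg.eigvalsh(X.T @ X / n).max()
    return 1.0 / L

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
lr = lipschitz_lr(X)  # a theoretically safe step size for this loss
```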

How Important is Weight Symmetry in Backpropagation?

jsalbert/biotorch 17 Oct 2015

Gradient backpropagation (BP) requires symmetric feedforward and feedback connections -- the same weights must be used for forward and backward passes.
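One way to break this symmetry requirement is feedback alignment: the backward pass uses a fixed random matrix instead of the transposed forward weights. The tiny network below is an illustrative sketch of that idea (sizes, rates, and targets are made up, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny one-hidden-layer network trained on a single toy example.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
# Feedback alignment: a fixed random matrix B replaces W2.T in the
# backward pass, so forward and backward weights need not be symmetric.
B = rng.normal(scale=0.5, size=(n_hid, n_out))

x = rng.normal(size=(n_in,))
t = np.array([1.0, 0.0])

losses = []
for _ in range(200):
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    e = y - t
    losses.append(0.5 * np.sum(e ** 2))
    d_out = e * y * (1 - y)
    dW2 = np.outer(d_out, h)
    # Exact BP would use W2.T @ d_out here; FA uses B instead.
    dh = (B @ d_out) * h * (1 - h)
    dW1 = np.outer(dh, x)
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1
```

Despite the random feedback path, the loss still decreases, which is the empirical point such experiments probe.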

Compressing deep neural networks on FPGAs to binary and ternary precision with HLS4ML

hls-fpga-machine-learning/hls4ml 11 Mar 2020

We discuss the trade-off between model accuracy and resource consumption.
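Ternary precision maps each weight to {-s, 0, +s}. One common ternarization rule (threshold small weights to zero, give the rest a shared scale, in the style of ternary weight networks) is sketched below; this is an illustration, not necessarily hls4ml's exact quantization procedure.

```python
import numpy as np

def ternarize(w, thresh_ratio=0.7):
    # Zero out weights below a threshold proportional to the mean
    # magnitude; the survivors share one scale s (their mean magnitude),
    # so each weight becomes -s, 0, or +s.
    delta = thresh_ratio * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    scale = np.abs(w[mask]).mean() if mask.any() else 0.0
    return scale * np.sign(w) * mask

w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
q = ternarize(w)  # small weights collapse to 0, the rest to +/- s
```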

MNIST-MIX: A Multi-language Handwritten Digit Recognition Dataset

jwwthu/MNIST-MIX 8 Apr 2020

In this letter, we contribute a multi-language handwritten digit recognition dataset named MNIST-MIX, which is the largest dataset of the same type in terms of both languages and data samples.

Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition

KyotoSunshine/CNN-for-handwritten-kanji 1 Mar 2010

Good old on-line back-propagation for plain multi-layer perceptrons yields a very low 0.35% error rate on the famous MNIST handwritten digits benchmark.


A neuromorphic hardware architecture using the Neural Engineering Framework for pattern recognition

Brain-Inspired-Computing/Final-Project 21 Jul 2015

The architecture is not limited to handwriting recognition, but is generally applicable as an extremely fast pattern recognition processor for various kinds of patterns such as speech and images.

Large-scale Artificial Neural Network: MapReduce-based Deep Learning

sunkairan/MapReduce-Based-Deep-Learning 9 Oct 2015

Faced with the continuously increasing scale of data, the original back-propagation neural network learning algorithm presents two non-trivial challenges: the huge amount of data makes it difficult to maintain both efficiency and accuracy, and redundant data aggravates the system workload.
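The data-parallel idea behind MapReduce-based training can be sketched as a map step that computes the gradient over each data shard and a reduce step that combines the partials into the full-batch gradient. The toy below uses a least-squares loss and plain function calls; it illustrates the decomposition, not the paper's actual MapReduce implementation.

```python
import numpy as np

def map_gradient(w, shard_X, shard_y):
    # "Map" task: unnormalized least-squares gradient over one shard.
    return shard_X.T @ (shard_X @ w - shard_y)

def reduce_gradients(partials, n):
    # "Reduce" task: summing per-shard gradients and normalizing by n
    # recovers exactly the full-batch mean gradient.
    return sum(partials) / n

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w = np.zeros(3)

shards = np.array_split(np.arange(120), 4)
parts = [map_gradient(w, X[s], y[s]) for s in shards]
grad = reduce_gradients(parts, len(y))
```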

Group Sparse Regularization for Deep Neural Networks

ispamm/group-lasso-deep-networks 2 Jul 2016

In this paper, we consider the joint task of simultaneously optimizing (i) the weights of a deep neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection).
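The mechanism is a group lasso penalty: weights are grouped (e.g. all outgoing weights of one neuron form a row), and penalizing each group's L2 norm drives whole rows to zero, removing the neuron or input feature. The sketch below shows the penalty and its proximal operator (block soft-thresholding); the grouping and scaling are illustrative, not the paper's exact formulation.

```python
import numpy as np

def group_lasso_penalty(W, lam=0.01):
    # Sum of per-row L2 norms, weighted by sqrt(group size), scaled by
    # lam. Zeroing an entire row prunes that neuron/feature.
    g = np.sqrt(W.shape[1])
    return lam * g * np.linalg.norm(W, axis=1).sum()

def prox_group(W, step_lam):
    # Block soft-thresholding: shrink each row's norm by step_lam,
    # zeroing rows whose norm falls below the threshold.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - step_lam / np.maximum(norms, 1e-12))
    return scale * W

W = np.array([[0.01, 0.0],    # weak neuron: pruned entirely
              [1.0, 1.0]])    # strong neuron: merely shrunk
W_sparse = prox_group(W, 0.1)
```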

Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks

progirep/planet 3 May 2017

We present a specialized verification algorithm that employs this approximation in a search process in which it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving.
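Phase inference for ReLU nodes can be illustrated with simple interval bound propagation, a coarser relaxation than the paper's approach: any node whose pre-activation lower bound is nonnegative is fixed "active", any node whose upper bound is nonpositive is fixed "inactive", and the rest stay undecided, like unassigned literals in SAT. This sketch is an assumption-laden analogy, not the Planet algorithm.

```python
import numpy as np

def infer_phases(W_list, b_list, lo, hi):
    # Propagate an input box [lo, hi] through ReLU layers, recording
    # each node's phase: 1 = provably active, -1 = provably inactive,
    # 0 = undecided (would need further case splitting).
    phases = []
    for W, b in zip(W_list, b_list):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        new_lo = Wp @ lo + Wn @ hi + b
        new_hi = Wp @ hi + Wn @ lo + b
        phases.append(np.where(new_lo >= 0, 1,
                      np.where(new_hi <= 0, -1, 0)))
        lo, hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
    return phases

# One layer, input x in [0.5, 1.0]: node 0 computes x (always active),
# node 1 computes -x (always inactive).
phases = infer_phases([np.array([[1.0], [-1.0]])], [np.zeros(2)],
                      np.array([0.5]), np.array([1.0]))
```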