We use spatially-sparse two-, three- and four-dimensional convolutional autoencoder networks to model sparse structures in 2D space, 3D space, and (3+1)-dimensional space-time.
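For concreteness, the sketch below shows a minimal dense 2D convolutional autoencoder in PyTorch with illustrative layer sizes. It is not the paper's implementation: the spatially-sparse variants follow the same encoder/decoder pattern but apply convolutions only to the occupied sites of 2D, 3D, or 4D grids.

```python
# Minimal sketch of a 2D convolutional autoencoder (dense convolutions,
# illustrative layer sizes). A spatially-sparse version would replace the
# dense convolutions with sparse ones operating only on occupied grid sites.
import torch
import torch.nn as nn

class ConvAutoencoder2D(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Encoder: strided convolutions downsample the input.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder: transposed convolutions upsample back to the input size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, in_channels, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Example: reconstruct a batch of 32x32 single-channel inputs.
model = ConvAutoencoder2D()
x = torch.rand(8, 1, 32, 32)
reconstruction = model(x)                        # shape: (8, 1, 32, 32)
loss = nn.functional.mse_loss(reconstruction, x)
```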
Convolutional neural networks (CNNs) perform well on problems such as handwriting recognition and image classification.
In this work, we demonstrate how to train an HTR system with only a small amount of labeled data.
Recurrent neural networks (RNNs) are a powerful model for sequential data.
Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition.
Handwritten text recognition (HTR) is a particularly demanding case: each author has a unique style, unlike printed text, where the variation is smaller by design.
Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995.
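For reference, the sketch below implements one step of the standard ("vanilla") LSTM cell in plain NumPy, the baseline against which such architectural variants (peephole connections, coupled gates, and so on) are usually compared. Parameter names and dimensions are illustrative, not taken from any specific paper.

```python
# One time step of the standard LSTM cell: input, forget, and output gates
# plus a tanh candidate state. Weight names and sizes are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """x: input (d,); h_prev, c_prev: previous states (n,);
    W[k]: (n, d), U[k]: (n, n), b[k]: (n,) for k in 'i', 'f', 'o', 'g'."""
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])   # input gate
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])   # forget gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])   # output gate
    g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])   # candidate cell state
    c = f * c_prev + i * g                               # new cell state
    h = o * np.tanh(c)                                   # new hidden state
    return h, c

# Example with random parameters: 4-dimensional input, 3-dimensional state.
rng = np.random.default_rng(0)
d, n = 4, 3
W = {k: rng.standard_normal((n, d)) for k in 'ifog'}
U = {k: rng.standard_normal((n, n)) for k in 'ifog'}
b = {k: np.zeros(n) for k in 'ifog'}
h, c = np.zeros(n), np.zeros(n)
for x in rng.standard_normal((5, d)):   # run over a length-5 sequence
    h, c = lstm_step(x, h, c, W, U, b)
```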
Text recognition is a major computer vision task with a wide range of associated challenges.
Unconstrained text recognition is an important computer vision task, featuring a wide variety of different sub-tasks, each with its own set of challenges.
Despite decades of research, offline handwriting recognition (HWR) of degraded historical documents remains a challenging problem, which, if solved, could greatly improve the searchability of online cultural heritage archives.