Small Data Image Classification
57 papers with code • 12 benchmarks • 9 datasets
Supervised image classification with tens to hundreds of labeled training examples.
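In this regime, low-variance baselines often remain competitive with high-capacity models. A minimal sketch (not from any listed paper; all names and data are illustrative) of a nearest-centroid classifier trained on tens of labeled examples per class:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny labeled set: 20 examples per class, 64-dim feature vectors
# standing in for image features (synthetic, for illustration only).
n_per_class, dim, n_classes = 20, 64, 3
class_means = rng.normal(size=(n_classes, dim))
X_train = np.vstack([m + 0.5 * rng.normal(size=(n_per_class, dim))
                     for m in class_means])
y_train = np.repeat(np.arange(n_classes), n_per_class)

# One centroid per class; with so few labels, this low-variance
# estimator is a standard small-data baseline.
centroids = np.stack([X_train[y_train == c].mean(axis=0)
                      for c in range(n_classes)])

def predict(X):
    # Assign each sample to the class with the nearest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

X_test = np.vstack([m + 0.5 * rng.normal(size=(10, dim))
                    for m in class_means])
y_test = np.repeat(np.arange(n_classes), 10)
accuracy = (predict(X_test) == y_test).mean()
```

Several of the papers below start from exactly this tension: richer models need more labels, so they add structure (augmentation, physics constraints, probabilistic embeddings) instead.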
Most implemented papers
An Infinite Parade of Giraffes: Expressive Augmentation and Complexity Layers for Cartoon Drawing
In this paper, we explore creative image generation constrained by small data.
Physics-Constrained Deep Learning for High-dimensional Surrogate Modeling and Uncertainty Quantification without Labeled Data
Surrogate modeling and uncertainty quantification tasks for PDE systems are most often considered as supervised learning problems where input and output data pairs are used for training.
A physics-aware, probabilistic machine learning framework for coarse-graining high-dimensional systems in the Small Data regime
The automated construction of coarse-grained models represents a pivotal component in computer simulation of physical systems and is a key enabler in various analysis and design tasks related to uncertainty quantification.
MIRA: A Computational Neuro-Based Cognitive Architecture Applied to Movie Recommender Systems
This project is inspired by the LIDA model and applies it to movie recommendation. The resulting model, MIRA (Movie Intelligent Recommender Agent), achieved precision comparable to a traditional model under the same assay conditions.
SSIM - A Deep Learning Approach for Recovering Missing Time Series Sensor Data
SSIM uses the state-of-the-art sequence-to-sequence deep learning architecture, with a Long Short-Term Memory network chosen to exploit both past and future information at a given time.
Guided Source Separation Meets a Strong ASR Backend: Hitachi/Paderborn University Joint Investigation for Dinner Party ASR
In this paper, we present Hitachi and Paderborn University's joint effort for automatic speech recognition (ASR) in a dinner party scenario.
Data-efficient Neural Text Compression with Interactive Learning
Neural sequence-to-sequence models have been successfully applied to text compression.
Global Autoregressive Models for Data-Efficient Sequence Learning
In the second step, we use this GAM to train (by distillation) a second autoregressive model that approximates the normalized distribution associated with the GAM and can be used for fast inference and evaluation.
Deep learning for Chemometric and non-translational data
We propose a novel method to train deep convolutional neural networks which learn from multiple data sets of varying input sizes through weight sharing.
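The key idea — the same convolutional weights applied to inputs of different spatial sizes, reduced to a fixed-size vector by global pooling — can be sketched as follows (a minimal illustration, not the paper's implementation; filter count and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# One shared bank of 3x3 filters; convolution plus global average
# pooling lets identical weights handle any input size.
filters = rng.normal(size=(4, 3, 3))  # 4 filters (illustrative)

def features(img):
    """Map a 2-D input of any size to a fixed 4-dim feature vector."""
    h, w = img.shape
    out = np.empty(len(filters))
    for k, f in enumerate(filters):
        # Valid convolution: slide the shared filter over the input.
        resp = np.array([(img[i:i + 3, j:j + 3] * f).sum()
                         for i in range(h - 2) for j in range(w - 2)])
        out[k] = resp.mean()  # global average pooling
    return out

# Two inputs with different spatial sizes, one set of weights,
# and outputs of identical dimension.
a = features(rng.normal(size=(16, 16)))
b = features(rng.normal(size=(28, 40)))
```

Because the pooled output dimension is independent of input size, datasets with different resolutions can share the same downstream classifier head.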
Deep Kernels with Probabilistic Embeddings for Small-Data Learning
Experiments on a variety of datasets show that our approach outperforms the state-of-the-art in GP kernel learning in both supervised and semi-supervised settings.