Active Learning
754 papers with code • 1 benchmark • 15 datasets
Active Learning is a supervised machine learning paradigm that reaches strong performance with fewer labeled training examples. It iteratively trains a predictor and, at each iteration, uses that predictor to choose the training examples to label next — those most likely to improve the model's configuration and prediction accuracy.
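The loop described above can be sketched as pool-based active learning with uncertainty sampling — one common instance of the paradigm, not the only one. This is a minimal illustration assuming scikit-learn is available; the dataset, model, and query budget are arbitrary choices for the sketch.

```python
# Minimal pool-based active learning loop with uncertainty sampling.
# Dataset, model, and budgets are illustrative choices, not prescriptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Start with a small random labeled seed set; the rest is the unlabeled pool.
labeled = list(rng.choice(len(X), size=10, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(5):                       # 5 query rounds
    model.fit(X[labeled], y[labeled])    # retrain on the current labeled set
    proba = model.predict_proba(X[pool])
    # Uncertainty sampling: query the pool point whose top-class
    # probability is lowest, i.e. where the model is least confident.
    query = pool[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)                # an "oracle" reveals y[query]
    pool.remove(query)

print(len(labeled))                      # 10 seeds + 5 queried points = 15
```

Swapping the `argmin` line for a different acquisition function (margin, entropy, BALD, …) changes the strategy without touching the rest of the loop.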
Source: Polystore++: Accelerated Polystore System for Heterogeneous Workloads
Libraries
Use these libraries to find Active Learning models and implementations.
Datasets
Most implemented papers
Synbols: Probing Learning Algorithms with Synthetic Datasets
Progress in the field of machine learning has been fueled by the introduction of benchmark datasets pushing the limits of existing algorithms.
Deep Deterministic Uncertainty: A Simple Baseline
Reliable uncertainty from deterministic single-forward pass models is sought after because conventional methods of uncertainty quantification are computationally expensive.
Cost-Effective Active Learning for Deep Image Classification
In this paper, we propose a novel active learning framework, which is capable of building a competitive classifier with optimal feature representation via a limited amount of labeled training instances in an incremental learning manner.
Less is more: sampling chemical space with active learning
In this work, we present a fully automated approach for the generation of datasets with the intent of training universal ML potentials.
An Overview and a Benchmark of Active Learning for Outlier Detection with One-Class Classifiers
This article starts with a categorization of the various methods.
ALiPy: Active Learning in Python
Supervised machine learning methods usually require a large set of labeled examples for model training.
BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning
We develop BatchBALD, a tractable approximation to the mutual information between a batch of points and model parameters, which we use as an acquisition function to select multiple informative points jointly for the task of deep Bayesian active learning.
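The pointwise BALD score that BatchBALD generalizes to batches is the mutual information between a point's prediction and the model parameters, commonly estimated from Monte Carlo (e.g. dropout) samples of the predictive distribution. A hedged sketch of that estimator, using random synthetic probabilities in place of real network samples (the naive top-k batch at the end is exactly what BatchBALD improves on; it is not BatchBALD itself):

```python
# Pointwise BALD score from MC samples of the predictive distribution.
# probs has shape (n_mc, n_points, n_classes); here the samples are
# synthetic stand-ins for dropout samples from a Bayesian network.
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=axis)

def bald_scores(probs):
    mean_p = probs.mean(axis=0)            # marginal predictive per point
    h_mean = entropy(mean_p)               # H[y | x, D]
    mean_h = entropy(probs).mean(axis=0)   # E_theta H[y | x, theta]
    return h_mean - mean_h                 # mutual information I[y; theta]

rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 100, 5))     # 20 MC samples, 100 points, 5 classes
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
scores = bald_scores(probs)
batch = np.argsort(scores)[-8:]            # naive top-k batch selection
```

Naive top-k tends to pick redundant, near-duplicate points; BatchBALD instead scores the batch jointly so that selected points are informative *together*.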
Deep Active Learning for Axon-Myelin Segmentation on Histology Data
In this paper we provide a framework for Deep Active Learning applied to a real-world scenario.
Bayesian Force Fields from Active Learning for Simulation of Inter-Dimensional Transformation of Stanene
We present a way to dramatically accelerate Gaussian process models for interatomic force fields based on many-body kernels by mapping both forces and uncertainties onto functions of low-dimensional features.
Can Active Learning Preemptively Mitigate Fairness Issues?
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.