Active Learning
760 papers with code • 1 benchmark • 15 datasets
Active Learning is a paradigm in supervised machine learning in which a predictor is trained iteratively: in each iteration, the current predictor is used to select the training examples most likely to improve the model, so that higher accuracy can be reached with fewer labeled examples.
Source: Polystore++: Accelerated Polystore System for Heterogeneous Workloads
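The iterative selection loop in the definition above can be sketched as pool-based uncertainty sampling, one of the most common active learning strategies. This is a minimal illustration, not a method from any of the listed papers: the logistic regression, the query rule (pick the unlabeled point whose predicted probability is closest to 0.5), and all names are assumptions for the sketch.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, steps=200):
    """Fit a logistic regression classifier with plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def uncertainty_sampling(X_pool, y_pool, n_init=4, n_queries=10, seed=0):
    """Pool-based active learning loop: start from a few random labels,
    then repeatedly query the most uncertain unlabeled point."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    for _ in range(n_queries):
        w, b = train_logreg(X_pool[labeled], y_pool[labeled])
        p = 1.0 / (1.0 + np.exp(-(X_pool @ w + b)))
        unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
        # Query the point whose prediction is closest to 0.5 (max uncertainty);
        # in practice an oracle (human annotator) would supply its label here.
        query = min(unlabeled, key=lambda i: abs(p[i] - 0.5))
        labeled.append(query)
    w, b = train_logreg(X_pool[labeled], y_pool[labeled])
    return w, b, labeled
```

Many of the papers below replace the acquisition rule (the `min` over `|p - 0.5|`) with other scores, such as expected gradient length, Bayesian uncertainty, or feature-space inconsistency, while keeping this overall loop.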
Libraries
Use these libraries to find Active Learning models and implementations.
Most implemented papers
Bayesian Uncertainty and Expected Gradient Length -- Regression: Two Sides Of The Same Coin?
Subsequently, we show that expected gradient length in regression is equivalent to Bayesian uncertainty.
Active learning with MaskAL reduces annotation effort for training Mask R-CNN
In our study, MaskAL was compared to a random sampling method on a broccoli dataset with five visually similar classes.
Towards General and Efficient Active Learning
Existing work follows a cumbersome pipeline that repeats the time-consuming model training and batch data selection multiple times.
Active Learning by Feature Mixing
We identify unlabelled instances with sufficiently-distinct features by seeking inconsistencies in predictions resulting from interventions on their representations.
Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning (Replicability Study)
Test Input Prioritizers (TIP) for Deep Neural Networks (DNN) are an important technique to handle the typically very large test datasets efficiently, saving computation and labeling costs.
Creating Custom Event Data Without Dictionaries: A Bag-of-Tricks
Event data, or structured records of "who did what to whom" that are automatically extracted from text, is an important source of data for scholars of international politics.
Let's Verify Step by Step
We conduct our own investigation, finding that process supervision significantly outperforms outcome supervision for training models to solve problems from the challenging MATH dataset.
Bayesian Active Learning for Classification and Preference Learning
Information theoretic active learning has been widely studied for probabilistic models.
Cooperative Inverse Reinforcement Learning
For an autonomous system to be helpful to humans and to pose no unwarranted risks, it needs to align its values with those of the humans in its environment in such a way that its actions contribute to the maximization of value for the humans.
A Tutorial on Thompson Sampling
Thompson sampling is an algorithm for online decision problems where actions are taken sequentially in a manner that must balance between exploiting what is known to maximize immediate performance and investing to accumulate new information that may improve future performance.
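The exploration-exploitation balance described above can be sketched for the simplest setting, a Bernoulli multi-armed bandit, where each arm keeps a Beta posterior over its reward probability and each round pulls the arm with the highest posterior sample. This is a generic textbook sketch, not code from the tutorial; the function name and parameters are illustrative.

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Thompson sampling for a Bernoulli multi-armed bandit.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
    reward probability. Each round, one sample is drawn from every
    posterior and the arm with the highest sample is pulled, so
    exploration and exploitation both arise from posterior uncertainty.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [0] * n_arms
    failures = [0] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Sample a plausible reward probability for each arm.
        samples = [rng.betavariate(successes[a] + 1, failures[a] + 1)
                   for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        # Simulated environment: Bernoulli reward with the arm's true probability.
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return successes, failures, total_reward
```

As the posteriors concentrate, samples for clearly inferior arms rarely win, so pulls shift toward the best arm without any explicit exploration schedule.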