Active Learning
772 papers with code • 1 benchmark • 15 datasets
Active Learning is a supervised machine learning paradigm that aims to reach a given level of predictive accuracy with fewer labeled training examples. A predictor is trained iteratively, and in each iteration it is used to choose the unlabeled examples whose labels are most likely to improve the model; these examples are then annotated and added to the training set.
Source: Polystore++: Accelerated Polystore System for Heterogeneous Workloads
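In its pool-based form, this loop is often implemented with uncertainty sampling. The sketch below is illustrative only, assuming a scikit-learn classifier, a synthetic dataset, and a fixed query budget; it is not tied to any specific paper listed on this page.

```python
# A minimal sketch of pool-based active learning with uncertainty sampling.
# The dataset, model, seed-set size, and query budget are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))   # small labeled seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]   # unlabeled pool

model = LogisticRegression(max_iter=1000)
for _ in range(10):                                # 10 query rounds (assumed budget)
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Uncertainty sampling: query the pool point whose top-class
    # probability is lowest, i.e. where the model is least confident.
    query = pool[int(np.argmin(probs.max(axis=1)))]
    labeled.append(query)                          # the "oracle" reveals its label
    pool.remove(query)

print("labeled set size:", len(labeled))
```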
Libraries
Use these libraries to find Active Learning models and implementations.
Latest papers
Decomposition for Enhancing Attention: Improving LLM-based Text-to-SQL through Workflow Paradigm
To improve the contextual learning capabilities of LLMs in text-to-SQL, a workflow paradigm method is proposed, aiming to enhance the attention and problem-solving scope of LLMs through decomposition.
Video Annotator: A framework for efficiently building video classifiers using vision-language models and active learning
High-quality and consistent annotations are fundamental to the successful development of robust machine learning models.
ActiveAnno3D - An Active Learning Framework for Multi-Modal 3D Object Detection
We propose ActiveAnno3D, an active learning framework to select data samples for labeling that are of maximum informativeness for training.
Foundation Model Makes Clustering A Better Initialization For Cold-Start Active Learning
In this work, we propose to integrate foundation models with clustering methods to select samples for cold-start active learning initialization.
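As an illustration of this idea (not the paper's code), one can embed the unlabeled pool with a pretrained encoder, cluster the embeddings, and label the sample nearest each cluster centre as the cold-start seed set. In the sketch below the embeddings are random placeholders for foundation-model features, and the budget is an assumed parameter.

```python
# Illustrative sketch: cluster pretrained embeddings and label one representative
# per cluster as a cold-start seed set. The random vectors stand in for any
# foundation-model encoder output; this is not the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans

def cold_start_selection(embeddings: np.ndarray, budget: int) -> list[int]:
    """Pick `budget` pool indices, one per k-means cluster."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(embeddings)
    chosen = []
    for c in range(budget):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        chosen.append(int(members[np.argmin(dists)]))  # sample closest to the centre
    return chosen

# Example with placeholder features for a 500-sample unlabeled pool.
pool_embeddings = np.random.default_rng(0).normal(size=(500, 128))
seed_indices = cold_start_selection(pool_embeddings, budget=16)
```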
Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees
In this paper, we propose the first general method, dubbed composite active learning (CAL), for multi-domain AL. Our approach explicitly considers the domain-level and instance-level information in the problem; CAL first assigns domain-level budgets according to domain-level importance, which is estimated by optimizing an upper error bound that we develop; with the domain-level budgets, CAL then leverages a certain instance-level query strategy to select samples to label from each domain.
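The two-level structure described above can be sketched as follows. The proportional importance weights stand in for CAL's bound-based domain-importance estimates (not reproduced here), and the per-domain scores stand in for whichever instance-level acquisition function is plugged in.

```python
# Hedged sketch of multi-domain budget allocation followed by per-domain querying.
# Importance weights and scores are toy placeholders, not the paper's estimates.
import numpy as np

def allocate_budgets(importance: dict[str, float], total_budget: int) -> dict[str, int]:
    """Split the global labeling budget across domains proportionally to importance.
    (Rounding may shift the total by a sample or two.)"""
    z = sum(importance.values())
    return {d: int(round(total_budget * w / z)) for d, w in importance.items()}

def select_per_domain(scores: dict[str, np.ndarray], budgets: dict[str, int]) -> dict[str, np.ndarray]:
    """Within each domain, take the budgeted number of highest-scoring samples."""
    return {d: np.argsort(-scores[d])[: budgets[d]] for d in scores}

# Toy example: three domains with assumed importance weights and uncertainty scores.
importance = {"web": 0.5, "news": 0.3, "social": 0.2}
scores = {d: np.random.default_rng(i).random(100) for i, d in enumerate(importance)}
queries = select_per_domain(scores, allocate_budgets(importance, total_budget=50))
```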
Conditional Normalizing Flows for Active Learning of Coarse-Grained Molecular Representations
Recently, instead of generating long molecular dynamics simulations, generative machine learning methods such as normalizing flows have been used to learn the Boltzmann distribution directly, without samples.
Automatic Segmentation of the Spinal Cord Nerve Rootlets
Precise identification of spinal nerve rootlets is relevant to delineate spinal levels for the study of functional activity in the spinal cord.
SelectLLM: Can LLMs Select Important Instructions to Annotate?
However, how to select unlabelled instructions is not well-explored, especially in the context of LLMs.
Breaking the Barrier: Selective Uncertainty-based Active Learning for Medical Image Segmentation
This resolves the aforementioned disregard for target areas and redundancy.
A Study of Acquisition Functions for Medical Imaging Deep Active Learning
In this work, we show how active learning can be very effective in data-scarce situations, where obtaining labeled data is difficult or the annotation budget is very limited.
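For context, acquisition functions of the kind such a study compares typically map a model's softmax probabilities to a per-sample informativeness score; the exact set used in the paper is not reproduced here. Below are three standard examples (least confidence, margin, and entropy).

```python
# Three common acquisition functions, shown only as background for the study above.
# Each maps probabilities of shape (n_samples, n_classes) to a score where
# higher means "more informative to label".
import numpy as np

def least_confidence(probs: np.ndarray) -> np.ndarray:
    return 1.0 - probs.max(axis=1)

def margin(probs: np.ndarray) -> np.ndarray:
    part = np.sort(probs, axis=1)
    return -(part[:, -1] - part[:, -2])          # smaller margin => higher score

def entropy(probs: np.ndarray) -> np.ndarray:
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

# Querying then amounts to taking the argmax of the chosen score over the
# unlabeled pool, e.g. pool_idx[np.argmax(entropy(model_probs))].
```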