Active Learning

772 papers with code • 1 benchmark • 15 datasets

Active Learning is a paradigm in supervised machine learning that achieves strong performance with fewer training examples. A predictor is trained iteratively, and at each iteration the current predictor is used to choose the training examples most likely to both improve its configuration and raise the accuracy of the prediction model.

Source: Polystore++: Accelerated Polystore System for Heterogeneous Workloads
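The iterative loop described above can be sketched in a few lines. Everything below is an illustrative stand-in, not a reference implementation: synthetic two-blob data, a nearest-centroid classifier, and least-confidence (uncertainty) sampling as the query rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pool: two Gaussian blobs (hypothetical data, for illustration only)
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

labeled = [0, 1, 200, 201]  # tiny seed set with both classes
pool = [i for i in range(len(X)) if i not in labeled]

def fit_predict_proba(Xl, yl, Xq):
    """Nearest-centroid 'probability': softmax over negative centroid distances."""
    c0, c1 = Xl[yl == 0].mean(0), Xl[yl == 1].mean(0)
    d = np.stack([np.linalg.norm(Xq - c0, axis=1),
                  np.linalg.norm(Xq - c1, axis=1)], axis=1)
    e = np.exp(-d)
    return e / e.sum(1, keepdims=True)

for _ in range(10):  # 10 rounds, querying one label per round
    proba = fit_predict_proba(X[labeled], y[labeled], X[pool])
    # Uncertainty sampling: query the pool point closest to the decision boundary
    query = pool[int(np.argmin(np.abs(proba[:, 0] - 0.5)))]
    labeled.append(query)
    pool.remove(query)

proba = fit_predict_proba(X[labeled], y[labeled], X)
acc = ((proba[:, 1] > 0.5).astype(int) == y).mean()
print(f"accuracy with {len(labeled)} labels: {acc:.2f}")
```

In a real setting the classifier and query strategy would be swapped for the task-specific model and acquisition function; the loop structure stays the same.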

Libraries

Use these libraries to find Active Learning models and implementations

Decomposition for Enhancing Attention: Improving LLM-based Text-to-SQL through Workflow Paradigm

flyingfeather/dea-sql 16 Feb 2024

To improve the contextual learning capabilities of LLMs in text-to-SQL, a workflow paradigm method is proposed, aiming to enhance the attention and problem-solving scope of LLMs through decomposition.


Video Annotator: A framework for efficiently building video classifiers using vision-language models and active learning

netflix/videoannotator 9 Feb 2024

High-quality and consistent annotations are fundamental to the successful development of robust machine learning models.


ActiveAnno3D - An Active Learning Framework for Multi-Modal 3D Object Detection

walzimmer/3d-bat 5 Feb 2024

We propose ActiveAnno3D, an active learning framework to select data samples for labeling that are of maximum informativeness for training.


Foundation Model Makes Clustering A Better Initialization For Cold-Start Active Learning

han-yuan-med/foundation-model 4 Feb 2024

In this work, we propose to integrate foundation models with clustering methods to select samples for cold-start active learning initialization.

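A minimal sketch of the clustering-for-cold-start idea, with random vectors standing in for foundation-model embeddings and a tiny inline k-means (both are assumptions for illustration; the paper's actual pipeline may differ): cluster the unlabeled pool, then request labels for the sample nearest each centroid.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for foundation-model embeddings of an unlabeled pool (hypothetical):
# three well-separated 8-dimensional blobs of 100 points each
emb = np.vstack([rng.normal(m, 0.5, (100, 8)) for m in (-3.0, 0.0, 3.0)])

def kmeans(X, init_idx, iters=20):
    """Tiny k-means with fixed initial indices (for a reproducible sketch)."""
    centers = X[init_idx].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(1)
        for j in range(len(centers)):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return centers, assign

centers, assign = kmeans(emb, init_idx=[0, 100, 200])
# Cold-start query set: the sample nearest each centroid, one per cluster
d = np.linalg.norm(emb[:, None] - centers[None], axis=2)
queries = [int(d[:, j].argmin()) for j in range(len(centers))]
print("initial samples to label:", sorted(queries))
```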

Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees

wang-ml-lab/multi-domain-active-learning 3 Feb 2024

In this paper, we propose the first general method, dubbed composite active learning (CAL), for multi-domain AL. Our approach explicitly considers both domain-level and instance-level information in the problem: CAL first assigns domain-level budgets according to domain-level importance, which is estimated by optimizing an upper error bound that we develop; with these budgets, CAL then applies an instance-level query strategy to select samples to label from each domain.

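The two-stage structure (domain-level budgets, then instance-level queries) can be sketched as follows. The domain names, importance weights, and uncertainty scores are made-up placeholders: in CAL the weights come from optimizing the upper error bound, not from constants.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-domain pools with model uncertainty scores in [0, 1]
pools = {"web": rng.random(50), "news": rng.random(30), "wiki": rng.random(20)}

# Domain-level importance weights (illustrative constants; CAL estimates these
# by optimizing a domain-level upper error bound)
importance = {"web": 0.5, "news": 0.3, "wiki": 0.2}

total_budget = 10
# Step 1: split the budget across domains proportionally to importance
budgets = {d: round(total_budget * w) for d, w in importance.items()}

# Step 2: within each domain, query the most uncertain samples
selected = {d: np.argsort(pools[d])[::-1][: budgets[d]].tolist() for d in pools}
for d in selected:
    print(d, "budget:", budgets[d], "queries:", selected[d])
```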

Conditional Normalizing Flows for Active Learning of Coarse-Grained Molecular Representations

aimat-lab/coarse-graining-al 2 Feb 2024

Recently, instead of generating long molecular dynamics simulations, generative machine learning methods such as normalizing flows have been used to learn the Boltzmann distribution directly, without samples.


Automatic Segmentation of the Spinal Cord Nerve Rootlets

ivadomed/model-spinal-rootlets 1 Feb 2024

Precise identification of spinal nerve rootlets is relevant to delineate spinal levels for the study of functional activity in the spinal cord.


SelectLLM: Can LLMs Select Important Instructions to Annotate?

minnesotanlp/select-llm 29 Jan 2024

However, how to select unlabelled instructions is not well-explored, especially in the context of LLMs.


Breaking the Barrier: Selective Uncertainty-based Active Learning for Medical Image Segmentation

HelenMa9998/Selective_Uncertainty_AL 29 Jan 2024

This resolves the aforementioned disregard for target areas and redundancy.


A Study of Acquisition Functions for Medical Imaging Deep Active Learning

bonaventuredossou/ece526_course_project 28 Jan 2024

In this work, we show how active learning can be very effective in data-scarcity situations, where labeled data (or the annotation budget) is very limited.
