AutoML

240 papers with code • 2 benchmarks • 7 datasets

Automated Machine Learning (AutoML) is a general concept covering diverse techniques for automated model learning, including automatic data preprocessing, architecture search, and model selection. Source: Evaluating recommender systems for AI-driven data science (1905.09205)

Source: CHOPT: Automated Hyperparameter Optimization Framework for Cloud-Based Machine Learning Platforms
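The definition above can be made concrete in a few lines: at its simplest, AutoML automates the loop of trying model families and hyperparameters and keeping the best. Below is a hand-rolled sketch using scikit-learn; the search space and models are illustrative, not taken from any paper on this page.

```python
# Minimal sketch of the model-selection + hyperparameter-search loop that
# AutoML systems automate. The candidate models and grids are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Candidate (model, hyperparameter grid) pairs -- a tiny "search space".
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [10, 50]}),
]

best_score, best_model = -1.0, None
for model, grid in candidates:
    search = GridSearchCV(model, grid, cv=3)  # inner hyperparameter search
    search.fit(X_tr, y_tr)
    if search.best_score_ > best_score:       # outer model selection
        best_score, best_model = search.best_score_, search.best_estimator_

print(type(best_model).__name__, round(best_model.score(X_te, y_te), 2))
```

Real AutoML systems replace the exhaustive grid with smarter search (Bayesian optimization, bandits) and extend the space to preprocessing and architectures, but the structure of the loop is the same.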


Fix Fairness, Don't Ruin Accuracy: Performance Aware Fairness Repair using AutoML

tess022095/fair-automl 15 Jun 2023

To demonstrate the effectiveness of our approach, we evaluated it on four fairness problems and 16 different ML models; our results show a significant improvement over the baseline and existing bias-mitigation techniques.


Hyperparameters in Reinforcement Learning and How To Tune Them

facebookresearch/how-to-autorl 2 Jun 2023

In order to improve reproducibility, deep reinforcement learning (RL) has been adopting better scientific practices such as standardized evaluation metrics and reporting.
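The tuning loop this paper studies can be sketched in miniature: sample a configuration, evaluate it, keep the best. In the sketch below the "evaluation" is a cheap stand-in for an RL training run (real RL evaluations are noisy and expensive, which is exactly what makes RL tuning hard); the objective and search space are illustrative assumptions.

```python
# Random search over a learning rate -- the simplest hyperparameter tuner.
# evaluate() is a stand-in for "train an RL agent and return its score".
import random

def evaluate(lr, steps=50):
    # Gradient descent on f(x) = x^2 from x = 5; diverges when the
    # learning rate is too large, mimicking an unstable training run.
    x = 5.0
    for _ in range(steps):
        x -= lr * 2 * x
    return -x * x            # higher return = lower final loss

random.seed(0)
best_lr, best_ret = None, float("-inf")
for _ in range(20):
    lr = 10 ** random.uniform(-3, 0.5)   # log-uniform search space
    ret = evaluate(lr)
    if ret > best_ret:
        best_lr, best_ret = lr, ret

print(round(best_lr, 4), best_ret)
```

Tools like the paper's how-to-autorl build on this loop with multi-fidelity scheduling and repeated seeds to cope with evaluation noise.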


PFNs4BO: In-Context Learning for Bayesian Optimization

automl/pfns4bo 27 May 2023

In this paper, we use Prior-data Fitted Networks (PFNs) as a flexible surrogate for Bayesian Optimization (BO).
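The surrogate is the model that BO fits to past evaluations in order to decide where to evaluate next; this paper swaps the conventional Gaussian process for a PFN. The sketch below illustrates only the surrogate's role in the BO loop, using a GP stand-in (not a PFN) on a toy 1-D objective with a generic lower-confidence-bound acquisition rule.

```python
# Bayesian optimization loop on a toy objective: fit a surrogate to past
# evaluations, then evaluate where the surrogate is most promising.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                    # toy 1-D black box to minimize
    return (x - 0.3) ** 2

rng = np.random.default_rng(0)
X = list(rng.uniform(0, 1, 3))       # a few initial random evaluations
Y = [objective(x) for x in X]
grid = np.linspace(0, 1, 201)        # candidate points

for _ in range(10):
    # Fit the surrogate to all evaluations so far (alpha adds jitter so
    # repeated points do not make the kernel matrix singular).
    gp = GaussianProcessRegressor(alpha=1e-6)
    gp.fit(np.array(X).reshape(-1, 1), Y)
    mu, sigma = gp.predict(grid.reshape(-1, 1), return_std=True)
    # Lower confidence bound: prefer low predicted value or high uncertainty.
    x_next = grid[int(np.argmin(mu - 1.96 * sigma))]
    X.append(x_next)
    Y.append(objective(x_next))

print(round(float(X[int(np.argmin(Y))]), 2))  # best point found so far
```

A PFN surrogate would replace the `GaussianProcessRegressor` fit/predict step with a single forward pass of a pretrained transformer, which is the flexibility the paper exploits.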


Deep Pipeline Embeddings for AutoML

releaunifreiburg/DeepPipe 23 May 2023

As a remedy, this paper proposes a novel neural architecture that captures the deep interaction between the components of a Machine Learning pipeline.


Learning Activation Functions for Sparse Neural Networks

automl/safs 18 May 2023

By conducting experiments on popular DNN models (LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0) trained on MNIST, CIFAR-10, and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search (SAFS), results in up to 15.53%, 8.88%, and 6.33% absolute improvement in accuracy for LeNet-5, VGG-16, and ResNet-18, respectively, over the default training protocols, especially at high pruning ratios.
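A toy version of the underlying idea: treat the activation function as a searchable design choice and keep whichever one fits best. Everything below is an illustrative stand-in, not the SAFS method: the task is synthetic, and instead of training networks we fit a random-feature layer with a least-squares readout so the candidates can be compared cheaply.

```python
# Compare candidate activation functions by how well a small random-feature
# model using each one fits an XOR-like task. Illustrative stand-in only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # XOR-like toy labels

candidates = {
    "relu": lambda z: np.maximum(z, 0.0),
    "tanh": np.tanh,
    "identity": lambda z: z,                # no nonlinearity, as a baseline
}

def score(act):
    # Random hidden layer with a least-squares readout ("extreme learning
    # machine" style): cheap to fit, so activations compare directly.
    W = rng.standard_normal((2, 64))
    b = rng.standard_normal(64)
    H = act(X @ W + b)
    w, *_ = np.linalg.lstsq(H, 2 * y - 1, rcond=None)
    return float(((H @ w > 0) == (y > 0)).mean())

scores = {name: score(fn) for name, fn in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

The linear baseline cannot separate the XOR-like labels, so the search prefers a nonlinear activation; SAFS applies this select-by-performance principle to sparse networks with a real search strategy and full training.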


XTab: Cross-table Pretraining for Tabular Transformers

bingzhaozhu/xtab 10 May 2023

The success of self-supervised learning in computer vision and natural language processing has motivated pretraining methods on tabular data.


EA-HAS-Bench: Energy-Aware Hyperparameter and Architecture Search Benchmark

microsoft/EA-HAS-Bench The Eleventh International Conference on Learning Representations 2023

The energy consumption for training deep learning models is increasing at an alarming rate due to the growth of training data and model scale, resulting in a negative impact on carbon neutrality.

01 May 2023

MLCopilot: Unleashing the Power of Large Language Models in Solving Machine Learning Tasks

microsoft/CoML 28 Apr 2023

In contrast, though human engineers have the incredible ability to understand tasks and reason about solutions, their experience and knowledge are often sparse and difficult for quantitative approaches to utilize.


Deep Fast Vision: Accelerated Deep Transfer Learning Vision Prototyping and Beyond

fabprezja/deep-fast-vision Zenodo 2023

Deep Fast Vision is a versatile Python library for rapid prototyping of deep transfer learning vision models.

26 Apr 2023

Optimizing Neural Networks through Activation Function Discovery and Automatic Weight Initialization

cognizant-ai-labs/aquasurf 6 Apr 2023

While present methods focus on hyperparameters and neural network topologies, other aspects of neural network design can be optimized as well.
