AutoML
240 papers with code • 2 benchmarks • 7 datasets
Automated Machine Learning (AutoML) is a general concept covering diverse techniques for automated model learning, including automatic data preprocessing, architecture search, and model selection. Source: Evaluating recommender systems for AI-driven data science (1905.09205)
Source: CHOPT: Automated Hyperparameter Optimization Framework for Cloud-Based Machine Learning Platforms
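The definition above spans several search problems at once (preprocessing, model choice, hyperparameters). A minimal sketch of that idea, assuming scikit-learn and a plain random-search budget, is a loop that samples a preprocessing step and a model together and keeps the best cross-validated pipeline; the search space and budget here are illustrative, not from any specific AutoML library:

```python
# Minimal AutoML sketch: random search over (preprocessor, model) pairs.
# The candidate lists and budget are illustrative assumptions.
import random

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_iris(return_X_y=True)

preprocessors = [StandardScaler, MinMaxScaler]
models = [
    lambda: LogisticRegression(max_iter=500),
    lambda: RandomForestClassifier(n_estimators=50, random_state=0),
]

random.seed(0)
best_score, best_pipe = -1.0, None
for _ in range(8):  # small search budget
    prep, make_model = random.choice(preprocessors), random.choice(models)
    pipe = make_pipeline(prep(), make_model())
    score = cross_val_score(pipe, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_pipe = score, pipe

print(f"best CV accuracy: {best_score:.3f}")
```

Real AutoML systems replace the random sampler with smarter strategies (Bayesian optimization, evolutionary search) and much larger spaces, but the evaluate-and-select loop is the same.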
Libraries
Use these libraries to find AutoML models and implementations.

Latest papers
Fix Fairness, Don't Ruin Accuracy: Performance Aware Fairness Repair using AutoML
To demonstrate its effectiveness, we evaluated our approach on four fairness problems and 16 different ML models; the results show a significant improvement over the baseline and existing bias-mitigation techniques.
Hyperparameters in Reinforcement Learning and How To Tune Them
In order to improve reproducibility, deep reinforcement learning (RL) has been adopting better scientific practices such as standardized evaluation metrics and reporting.
PFNs4BO: In-Context Learning for Bayesian Optimization
In this paper, we use Prior-data Fitted Networks (PFNs) as a flexible surrogate for Bayesian Optimization (BO).
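PFNs act as a drop-in surrogate inside a standard Bayesian Optimization loop. As a point of comparison, a conventional BO loop with a Gaussian-process surrogate and an Expected Improvement acquisition, which is the component a PFN would replace, can be sketched roughly as follows (the toy objective and grid are assumptions for illustration):

```python
# Conventional BO loop with a GP surrogate; a PFN-based method would
# swap the GP below for a pretrained network predicting the posterior
# in-context. Objective and search grid are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    return np.sin(3 * x) + 0.5 * x  # toy 1-D function to maximize

rng = np.random.default_rng(0)
X_grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
X_obs = rng.uniform(0.0, 2.0, size=(3, 1))  # initial random design
y_obs = objective(X_obs).ravel()

for _ in range(10):
    # Fit the surrogate on all observations so far.
    gp = GaussianProcessRegressor().fit(X_obs, y_obs)
    mu, sigma = gp.predict(X_grid, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    # Expected Improvement over the current best observation.
    best = y_obs.max()
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    # Evaluate the most promising candidate and append it.
    x_next = X_grid[np.argmax(ei)].reshape(1, 1)
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, objective(x_next).ravel())

print(f"best value found: {y_obs.max():.3f}")
```

The appeal of a PFN surrogate is that the posterior prediction is a single forward pass conditioned on the observed points, avoiding the per-step GP refit in the loop above.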
Deep Pipeline Embeddings for AutoML
As a remedy, this paper proposes a novel neural architecture that captures the deep interaction between the components of a Machine Learning pipeline.
Learning Activation Functions for Sparse Neural Networks
By conducting experiments on popular DNN models (LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0) trained on the MNIST, CIFAR-10, and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search (SAFS), results in up to 15.53%, 8.88%, and 6.33% absolute improvement in accuracy for LeNet-5, VGG-16, and ResNet-18 over the default training protocols, especially at high pruning ratios.
XTab: Cross-table Pretraining for Tabular Transformers
The success of self-supervised learning in computer vision and natural language processing has motivated pretraining methods on tabular data.
EA-HAS-Bench: Energy-Aware Hyperparameter and Architecture Search Benchmark
The energy consumption for training deep learning models is increasing at an alarming rate due to the growth of training data and model scale, resulting in a negative impact on carbon neutrality.
MLCopilot: Unleashing the Power of Large Language Models in Solving Machine Learning Tasks
In contrast, though human engineers have the incredible ability to understand tasks and reason about solutions, their experience and knowledge are often sparse and difficult to utilize by quantitative approaches.
Deep Fast Vision: Accelerated Deep Transfer Learning Vision Prototyping and Beyond
Deep Fast Vision is a versatile Python library for rapid prototyping of deep transfer learning vision models.
Optimizing Neural Networks through Activation Function Discovery and Automatic Weight Initialization
While present methods focus on hyperparameters and neural network topologies, other aspects of neural network design can be optimized as well.