Search Results for author: Sebastian Pineda Arango

Found 9 papers, 7 papers with code

Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How

1 code implementation • 6 Jun 2023 • Sebastian Pineda Arango, Fabio Ferreira, Arlind Kadra, Frank Hutter, Josif Grabocka

With the ever-increasing number of pretrained models, machine learning practitioners continuously face the question of which pretrained model to use, and how to finetune it for a new dataset.

Hyperparameter Optimization · Image Classification

Deep Pipeline Embeddings for AutoML

1 code implementation • 23 May 2023 • Sebastian Pineda Arango, Josif Grabocka

As a remedy, this paper proposes a novel neural architecture that captures the deep interaction between the components of a Machine Learning pipeline.

Automatic Machine Learning · Model Selection · Bayesian Optimization +2

Breaking the Paradox of Explainable Deep Learning

1 code implementation • 22 May 2023 • Arlind Kadra, Sebastian Pineda Arango, Josif Grabocka

Through extensive experiments, we demonstrate that our explainable deep networks are as accurate as state-of-the-art classifiers on tabular data.

Deep Ranking Ensembles for Hyperparameter Optimization

1 code implementation • 27 Mar 2023 • Abdus Salam Khazi, Sebastian Pineda Arango, Josif Grabocka

Automatically optimizing the hyperparameters of Machine Learning algorithms is one of the primary open questions in AI.

Hyperparameter Optimization · Learning-To-Rank

Transformers Can Do Bayesian Inference

1 code implementation • ICLR 2022 • Samuel Müller, Noah Hollmann, Sebastian Pineda Arango, Josif Grabocka, Frank Hutter

Our method restates the objective of posterior approximation as a supervised classification problem with a set-valued input: it repeatedly draws a task (or function) from the prior, draws a set of data points and their labels from it, masks one of the labels, and learns to make probabilistic predictions for it based on the rest of the data points.

AutoML · Bayesian Inference +2
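The prior-fitting scheme described in the abstract can be sketched as a data-generation loop: sample a task from a prior, sample labeled points from the task, mask one label, and treat its prediction as a supervised problem. Below is a minimal illustrative sketch in plain NumPy, assuming a toy prior over linear classifiers; the names and the prior are hypothetical and not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Draw a function from a toy prior: y = 1 if w . x > 0, else 0."""
    w = rng.normal(size=2)
    return lambda x: (x @ w > 0).astype(int)

def sample_training_example(n_points=8):
    """Draw a dataset from a sampled task and mask one label as the target."""
    f = sample_task()
    x = rng.normal(size=(n_points, 2))
    y = f(x)
    i = int(rng.integers(n_points))  # index of the label to mask
    # The model would receive the context pairs plus the query input,
    # and be trained to predict the masked label probabilistically.
    context = [(x[j], y[j]) for j in range(n_points) if j != i]
    query, target = x[i], y[i]
    return context, query, target

context, query, target = sample_training_example()
```

A transformer trained on endless examples generated this way learns to approximate the posterior predictive distribution induced by the prior, with the context set as its set-valued input.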

Transfer Learning for Bayesian HPO with End-to-End Meta-Features

no code implementations • 29 Sep 2021 • Hadi Samer Jomaa, Sebastian Pineda Arango, Lars Schmidt-Thieme, Josif Grabocka

As a result, our novel DKLM can learn contextualized dataset-specific similarity representations for hyperparameter configurations.

Hyperparameter Optimization · Transfer Learning

Multimodal Meta-Learning for Time Series Regression

no code implementations • 5 Aug 2021 • Sebastian Pineda Arango, Felix Heinrich, Kiran Madhusudhanan, Lars Schmidt-Thieme

Recent work has shown the effectiveness of deep learning models such as Fully Convolutional Networks (FCN) or Recurrent Neural Networks (RNN) for Time Series Regression (TSR) problems.

Meta-Learning · regression +2

HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO based on OpenML

1 code implementation • 11 Jun 2021 • Sebastian Pineda Arango, Hadi S. Jomaa, Martin Wistuba, Josif Grabocka

Hyperparameter optimization (HPO) is a core problem for the machine learning community and remains largely unsolved due to the significant computational resources required to evaluate hyperparameter configurations.

Hyperparameter Optimization · Transfer Learning
