no code implementations • 4 Apr 2024 • Susanne Dandl, Kristin Blesch, Timo Freiesleben, Gunnar König, Jan Kapar, Bernd Bischl, Marvin Wright
Counterfactual explanations elucidate algorithmic decisions by pointing to scenarios that would have led to an alternative, desired outcome.
1 code implementation • 3 Apr 2024 • Vasilis Gkolemis, Christos Diou, Eirini Ntoutsi, Theodore Dalamagas, Bernd Bischl, Julia Herbinger, Giuseppe Casalicchio
Effector implements well-established global effect methods, assesses the heterogeneity of each method and, based on that, provides regional effects.
no code implementations • 19 Mar 2024 • Philipp Kopper, David Rügamer, Raphael Sonabend, Bernd Bischl, Andreas Bender
Survival Analysis provides critical insights for partially incomplete time-to-event data in various domains.
no code implementations • 7 Mar 2024 • Julian Rodemann, Federico Croppi, Philipp Arens, Yusuf Sale, Julia Herbinger, Bernd Bischl, Eyke Hüllermeier, Thomas Augustin, Conor J. Walsh, Giuseppe Casalicchio
We address this issue by proposing ShapleyBO, a framework for interpreting BO's proposals by game-theoretic Shapley values. They quantify each parameter's contribution to BO's acquisition function.
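To illustrate the game-theoretic idea behind attributing an acquisition value to individual parameters, the sketch below computes exact Shapley values for a toy payoff over three parameters. The payoff `acq` and all names are hypothetical stand-ins for illustration, not the actual ShapleyBO interface:

```python
import itertools
import math

# Hypothetical payoff: the acquisition value as a function of which
# parameters take the proposed value (True) vs. a baseline (False).
def acq(active):
    a, b, c = active
    return 2.0 * a + 1.0 * b + 0.5 * (a and c)  # additive terms plus one interaction

def shapley(payoff, n):
    """Exact Shapley values of n players, averaging marginal
    contributions over all n! orderings."""
    values = [0.0] * n
    for perm in itertools.permutations(range(n)):
        active = [False] * n
        prev = payoff(active)
        for p in perm:
            active[p] = True
            cur = payoff(active)
            values[p] += cur - prev
            prev = cur
    fact = math.factorial(n)
    return [v / fact for v in values]

phi = shapley(acq, 3)  # the interaction is split equally between a and c
```

By the efficiency property, the values sum to the payoff of the full coalition, which makes the attribution easy to sanity-check.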
no code implementations • 2 Feb 2024 • Emanuel Sommer, Lisa Wimmer, Theodore Papamarkou, Ludwig Bothmann, Bernd Bischl, David Rügamer
A major challenge in sample-based inference (SBI) for Bayesian neural networks is the size and structure of the networks' parameter space.
no code implementations • 20 Dec 2023 • Christian A. Scholbeck, Julia Moosbauer, Giuseppe Casalicchio, Hoshin Gupta, Bernd Bischl, Christian Heumann
We argue that interpretations of machine learning (ML) models or the model-building process can be seen as a form of sensitivity analysis (SA), a general methodology used to explain complex systems in many fields such as environmental modeling, engineering, or economics.
1 code implementation • 26 Nov 2023 • Jann Goschenhofer, Bernd Bischl, Zsolt Kira
Constrained clustering allows the training of classification models using pairwise constraints only, which are weak and relatively easy to mine, while still yielding full-supervision-level model performance.
no code implementations • 2 Nov 2023 • Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer
Materials and Methods: An orthogonalization is utilized to remove the influence of protected features (e.g., age, sex, race) in chest radiograph embeddings, ensuring feature-independent results.
no code implementations • 23 Oct 2023 • Roman Hornung, Malte Nalenz, Lennart Schneider, Andreas Bender, Ludwig Bothmann, Bernd Bischl, Thomas Augustin, Anne-Laure Boulesteix
Our findings corroborate the concern that standard resampling methods often yield biased GE estimates in non-standard settings, underscoring the importance of tailored GE estimation.
no code implementations • 10 Oct 2023 • Yang Zhang, Yawei Li, Hannah Brown, Mina Rezaei, Bernd Bischl, Philip Torr, Ashkan Khakzar, Kenji Kawaguchi
Feature attribution explains neural network outputs by identifying relevant input features.
no code implementations • 3 Oct 2023 • Holger Löwe, Christian A. Scholbeck, Christian Heumann, Bernd Bischl, Giuseppe Casalicchio
Forward marginal effects (FMEs) have recently been introduced as a versatile and effective model-agnostic interpretation method.
no code implementations • 5 Sep 2023 • Amirhossein Vahidi, Simon Schoßer, Lisa Wimmer, Yawei Li, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei
In this paper, we propose a novel probabilistic self-supervised learning method via Scoring Rule Minimization (ProSMIN), which leverages the power of probabilistic models to enhance representation quality and mitigate collapsing representations.
no code implementations • 28 Aug 2023 • Amirhossein Vahidi, Lisa Wimmer, Hüseyin Anil Gündüz, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei
Ensembling a neural network is a widely recognized approach to enhance model performance, estimate uncertainty, and improve robustness in deep supervised learning.
no code implementations • 17 Aug 2023 • Yawei Li, Yang Zhang, Kenji Kawaguchi, Ashkan Khakzar, Bernd Bischl, Mina Rezaei
We apply these metrics to mainstream attribution methods, offering a novel lens through which to analyze and compare feature attribution methods.
no code implementations • 17 Jul 2023 • Lennart Purucker, Lennart Schneider, Marie Anastacio, Joeran Beel, Bernd Bischl, Holger Hoos
Automated machine learning (AutoML) systems commonly ensemble models post hoc to improve predictive performance, typically via greedy ensemble selection (GES).
1 code implementation • 17 Jul 2023 • Lennart Schneider, Bernd Bischl, Janek Thomas
Efficient optimization is achieved via augmentation of the search space of the learning algorithm by incorporating feature selection, interaction and monotonicity constraints into the hyperparameter search space.
1 code implementation • 14 Jul 2023 • Ibrahim Tolga Öztürk, Rostislav Nedelchev, Christian Heumann, Esteban Garces Arias, Marius Roger, Bernd Bischl, Matthias Aßenmacher
Recent studies have demonstrated how to assess the stereotypical bias in pre-trained English language models.
no code implementations • 7 Jul 2023 • Chris Kolb, Christian L. Müller, Bernd Bischl, David Rügamer
This is particularly useful in non-convex regularization, where finding global solutions is NP-hard and local minima often generalize well.
1 code implementation • 16 Jun 2023 • Lukas Rauch, Matthias Aßenmacher, Denis Huseljic, Moritz Wirth, Bernd Bischl, Bernhard Sick
Deep active learning (DAL) seeks to reduce annotation costs by enabling the model to actively query instance annotations from which it expects to learn the most.
2 code implementations • 1 Jun 2023 • Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio
We formally introduce generalized additive decomposition of global effects (GADGET), which is a new framework based on recursive partitioning to find interpretable regions in the feature space such that the interaction-related heterogeneity of local feature effects is minimized.
no code implementations • 25 May 2023 • Daniel Saggau, Mina Rezaei, Bernd Bischl, Ilias Chalkidis
Learning quality document embeddings is a fundamental problem in natural language processing (NLP), information retrieval (IR), recommendation systems, and search engines.
1 code implementation • 25 May 2023 • Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer
Undersampling is a common method in Magnetic Resonance Imaging (MRI) to subsample the number of data points in k-space, reducing acquisition times at the cost of decreased image quality.
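The kind of k-space undersampling described here can be mimicked on toy data with a binary sampling mask followed by a zero-filled reconstruction. Everything below (sizes, mask pattern, sampling rate) is purely illustrative, not a clinically used scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" and its k-space representation (2D Fourier transform).
img = rng.standard_normal((64, 64))
kspace = np.fft.fft2(img)

# Binary column mask: sample ~25% of k-space columns at random, plus a
# fixed band of columns (illustrative stand-in for always-kept lines).
mask = rng.random(64) < 0.25
mask[28:36] = True
undersampled = kspace * mask[np.newaxis, :]

# Zero-filled reconstruction: inverse FFT of the masked k-space.
recon = np.real(np.fft.ifft2(undersampled))
```

The fraction of `True` entries in `mask` is the acceleration trade-off: fewer sampled columns mean shorter acquisition but stronger aliasing artifacts in `recon`.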
1 code implementation • 24 May 2023 • Simon Wiegrebe, Philipp Kopper, Raphael Sonabend, Bernd Bischl, Andreas Bender
The influx of deep learning (DL) techniques into the field of survival analysis in recent years has led to substantial methodological progress; for instance, learning from unstructured or high-dimensional data such as images, text or omics data.
no code implementations • 4 May 2023 • Susanne Dandl, Giuseppe Casalicchio, Bernd Bischl, Ludwig Bothmann
This work introduces interpretable regional descriptors, or IRDs, for local, model-agnostic interpretations.
no code implementations • 14 Apr 2023 • Felix Ott, Lucas Heublein, David Rügamer, Bernd Bischl, Christopher Mutschler
In this work, we propose recurrent fusion networks to optimally align absolute and relative pose predictions to improve the absolute pose prediction.
no code implementations • 13 Apr 2023 • Susanne Dandl, Andreas Hofheinz, Martin Binder, Bernd Bischl, Giuseppe Casalicchio
Counterfactual explanation methods provide information on how feature values of individual observations must be changed to obtain a desired prediction.
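A deliberately naive way to find such a counterfactual is a brute-force search for the closest input (here in L1 distance) that flips a toy classifier's prediction. The classifier, grid, and distance choice below are hypothetical stand-ins, not any of the paper's methods:

```python
import itertools
import numpy as np

# Hypothetical binary classifier: predicts 1 if the feature sum exceeds 3.
def predict(x):
    return int(x[0] + x[1] > 3.0)

x_orig = np.array([1.0, 1.0])  # currently predicted as class 0
desired = 1

# Brute-force search over a small grid for the closest point (L1 norm)
# that attains the desired prediction.
grid = np.arange(0.0, 4.1, 0.5)
best, best_dist = None, np.inf
for cand in itertools.product(grid, grid):
    cand = np.array(cand)
    if predict(cand) == desired:
        dist = np.abs(cand - x_orig).sum()
        if dist < best_dist:
            best, best_dist = cand, dist
```

Real counterfactual methods replace the grid enumeration with an optimizer and add further objectives (sparsity, plausibility), but the target is the same: a nearby input with the desired prediction.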
no code implementations • 6 Apr 2023 • Jonas Gregor Wiese, Lisa Wimmer, Theodore Papamarkou, Bernd Bischl, Stephan Günnemann, David Rügamer
Bayesian inference in deep neural networks is challenging due to the high-dimensional, strongly multi-modal parameter posterior density landscape.
2 code implementations • 20 Mar 2023 • Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer
While recent advances in large-scale foundational models show promising results, their application to the medical domain has not yet been explored in detail.
no code implementations • 15 Mar 2023 • Hilde Weerts, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, Frank Hutter
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices.
no code implementations • 16 Jan 2023 • Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler
The goal of domain adaptation (DA) is to mitigate this domain shift problem by searching for an optimal feature transformation to learn a domain-invariant representation.
1 code implementation • 8 Dec 2022 • Matthias Feurer, Katharina Eggensperger, Edward Bergman, Florian Pfisterer, Bernd Bischl, Frank Hutter
Modern machine learning models are often constructed taking into account multiple objectives, e.g., minimizing inference time while also maximizing accuracy.
1 code implementation • 24 Oct 2022 • Ingo Ziegler, Bolei Ma, Ercong Nie, Bernd Bischl, David Rügamer, Benjamin Schubert, Emilio Dorigatti
While direct identification of proteasomal cleavage \emph{in vitro} is cumbersome and low throughput, it is possible to implicitly infer cleavage events from the termini of MHC-presented epitopes, which can be detected in large amounts thanks to recent advances in high-throughput MHC ligandomics.
1 code implementation • 14 Oct 2022 • Daniel Schalk, Bernd Bischl, David Rügamer
In this paper, we propose an algorithm for a distributed, privacy-preserving, and lossless estimation of generalized additive mixed models (GAMM) using component-wise gradient boosting (CWB).
1 code implementation • 14 Sep 2022 • Emilio Dorigatti, Bernd Bischl, Benjamin Schubert
Accurate in silico modeling of the antigen processing pathway is crucial to enable personalized epitope vaccine design for cancer.
no code implementations • 14 Sep 2022 • Shunjie-Fabian Zheng, JaeEun Nam, Emilio Dorigatti, Bernd Bischl, Shekoofeh Azizi, Mina Rezaei
However, existing methods for joint clustering and contrastive learning do not perform well on long-tailed data distributions, as majority classes overwhelm and distort the loss of minority classes, thus preventing meaningful representations from being learned.
1 code implementation • 6 Sep 2022 • Emilio Dorigatti, Jonas Schweisthal, Bernd Bischl, Mina Rezaei
Learning from positive and unlabeled (PU) data is a setting where the learner only has access to positive and unlabeled samples while having no information on negative examples.
no code implementations • 1 Aug 2022 • Felix Ott, Nisha Lakshmana Raichur, David Rügamer, Tobias Feigl, Heiko Neumann, Bernd Bischl, Christopher Mutschler
We show accuracy improvements for the APR-RPR task and for the RPR-RPR task for aerial vehicles and hand-held devices.
1 code implementation • 30 Jul 2022 • Lennart Schneider, Florian Pfisterer, Paul Kent, Juergen Branke, Bernd Bischl, Janek Thomas
Although considerable progress has been made in the field of multi-objective NAS, we argue that there is some discrepancy between the actual optimization problem of practical interest and the optimization problem that multi-objective NAS tries to solve.
1 code implementation • 30 Jul 2022 • Lennart Schneider, Lennart Schäpermeier, Raphael Patrick Prager, Bernd Bischl, Heike Trautmann, Pascal Kerschke
We identify a subset of BBOB problems that are close to the HPO problems in ELA feature space and show that optimizer performance is comparably similar on these two sets of benchmark problems.
2 code implementations • 25 Jul 2022 • Pieter Gijsbers, Marcos L. P. Bueno, Stefan Coors, Erin LeDell, Sébastien Poirier, Janek Thomas, Bernd Bischl, Joaquin Vanschoren
Comparing different AutoML frameworks is notoriously challenging and often done incorrectly.
no code implementations • 17 Jun 2022 • Andreas Klaß, Sven M. Lorenz, Martin W. Lauer-Schmaltz, David Rügamer, Bernd Bischl, Christopher Mutschler, Felix Ott
For many applications, analyzing the uncertainty of a machine learning model is indispensable.
no code implementations • 15 Jun 2022 • Florian Karl, Tobias Pielok, Julia Moosbauer, Florian Pfisterer, Stefan Coors, Martin Binder, Lennart Schneider, Janek Thomas, Jakob Richter, Michel Lang, Eduardo C. Garrido-Merchán, Juergen Branke, Bernd Bischl
Hyperparameter optimization constitutes a large part of typical modern machine learning workflows.
1 code implementation • 11 Jun 2022 • Julia Moosbauer, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Despite all the benefits of automated hyperparameter optimization (HPO), most modern HPO algorithms are black-boxes themselves.
1 code implementation • 31 May 2022 • Mehmet Ozgur Turkoglu, Alexander Becker, Hüseyin Anil Gündüz, Mina Rezaei, Bernd Bischl, Rodrigo Caye Daudt, Stefano D'Aronco, Jan Dirk Wegner, Konrad Schindler
We show that the idea can be extended to uncertainty quantification: by modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity, and consequently well-calibrated estimates of epistemic uncertainty, with low computational overhead in comparison.
no code implementations • 25 May 2022 • David Rügamer, Andreas Bender, Simon Wiegrebe, Daniel Racek, Bernd Bischl, Christian L. Müller, Clemens Stachl
Here, we propose Factorized Structured Regression (FaStR) for scalable varying coefficient models.
no code implementations • 19 May 2022 • Ludwig Bothmann, Kristina Peters, Bernd Bischl
A growing body of literature in fairness-aware ML (fairML) aspires to mitigate machine learning (ML)-related unfairness in automated decision-making (ADM) by defining metrics that measure fairness of an ML model and by proposing methods that ensure that trained ML models achieve low values in those metrics.
1 code implementation • 11 May 2022 • Difan Deng, Florian Karl, Frank Hutter, Bernd Bischl, Marius Lindauer
In contrast to common NAS search spaces, we designed a novel neural architecture search space covering various state-of-the-art architectures, allowing for an efficient macro-search over different DL approaches.
1 code implementation • 28 Apr 2022 • Lennart Schneider, Florian Pfisterer, Janek Thomas, Bernd Bischl
The goal of Quality Diversity Optimization is to generate a collection of diverse yet high-performing solutions to a given problem at hand.
1 code implementation • 7 Apr 2022 • Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler
To mitigate this domain shift problem, domain adaptation (DA) techniques search for an optimal transformation that converts the (current) input data from a source domain to a target domain to learn a domain-invariant representation that reduces domain discrepancy.
no code implementations • 4 Apr 2022 • Ashkan Khakzar, Yawei Li, Yang Zhang, Mirac Sanisoglu, Seong Tae Kim, Mina Rezaei, Bernd Bischl, Nassir Navab
One challenging property lurking in medical datasets is the imbalanced data distribution, where the frequencies of samples across the different classes are not balanced.
no code implementations • 16 Feb 2022 • Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler
We perform extensive evaluations on synthetic image and time-series data, and on data for offline handwriting recognition (HWR) and on online HWR from sensor-enhanced pens for classifying written words.
1 code implementation • 15 Feb 2022 • Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio
Machine learning models can automatically learn complex relationships, such as non-linear and interaction effects.
no code implementations • 14 Feb 2022 • Felix Ott, David Rügamer, Lucas Heublein, Tim Hamann, Jens Barth, Bernd Bischl, Christopher Mutschler
While there exist many offline HWR datasets, there is only little data available for the development of OnHWR methods on paper as it requires hardware-integrated pens.
no code implementations • 12 Feb 2022 • Philipp Kopper, Simon Wiegrebe, Bernd Bischl, Andreas Bender, David Rügamer
Survival analysis (SA) is an active field of research that is concerned with time-to-event outcomes and is prevalent in many domains, particularly biomedical applications.
no code implementations • 31 Jan 2022 • Emilio Dorigatti, Jann Goschenhofer, Benjamin Schubert, Mina Rezaei, Bernd Bischl
In this work, we thus propose to tackle the issues of imbalanced datasets and model calibration in a PUL setting through an uncertainty-aware pseudo-labeling procedure (PUUPL): by boosting the signal from the minority class, pseudo-labeling expands the labeled dataset with new samples from the unlabeled set, while explicit uncertainty quantification prevents the emergence of harmful confirmation bias, leading to increased predictive performance.
no code implementations • 21 Jan 2022 • Christian A. Scholbeck, Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl, Christian Heumann
Hence, marginal effects are typically used as approximations for feature effects, either in the form of derivatives of the prediction function or of forward differences in prediction due to a change in a feature value.
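Such a forward difference can be sketched in a few lines. The prediction function `f` below is a hypothetical stand-in for any fitted model:

```python
import numpy as np

# Hypothetical fitted prediction function (stands in for any ML model).
def f(X):
    return X[:, 0] ** 2 + 0.5 * X[:, 1]

def forward_marginal_effect(f, X, j, h):
    """Per-observation change in prediction when feature j is
    shifted by step size h (a forward difference)."""
    X_shifted = X.copy()
    X_shifted[:, j] += h
    return f(X_shifted) - f(X)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
fme = forward_marginal_effect(f, X, j=0, h=1.0)
```

For this toy quadratic, the effect is 2*x0*h + h^2 per observation, so the marginal effect varies with the point of evaluation, which is exactly why per-observation forward differences are informative for non-linear models.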
1 code implementation • 29 Nov 2021 • Julia Moosbauer, Martin Binder, Lennart Schneider, Florian Pfisterer, Marc Becker, Michel Lang, Lars Kotthoff, Bernd Bischl
Automated hyperparameter optimization (HPO) has gained great popularity and is an important ingredient of most automated machine learning frameworks.
1 code implementation • NeurIPS 2021 • Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Automated hyperparameter optimization (HPO) can support practitioners to obtain peak performance in machine learning models.
no code implementations • 21 Oct 2021 • Tobias Weber, Michael Ingrisch, Matthias Fabritius, Bernd Bischl, David Rügamer
We propose a hazard-regularized variational autoencoder that supports straightforward interpretation of deep neural architectures in the context of survival analysis, a field highly relevant in healthcare.
no code implementations • 21 Oct 2021 • Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer
The application of deep learning in survival analysis (SA) allows utilizing unstructured and high-dimensional data types uncommon in traditional survival methods.
no code implementations • 7 Oct 2021 • Daniel Schalk, Bernd Bischl, David Rügamer
Componentwise boosting (CWB), also known as model-based boosting, is a variant of gradient boosting that builds on additive models as base learners to ensure interpretability.
no code implementations • 29 Sep 2021 • Hüseyin Anil Gündüz, Martin Binder, Xiao-Yin To, René Mreches, Philipp C. Münch, Alice C. McHardy, Bernd Bischl, Mina Rezaei
We introduce Self-GenomeNet, a novel contrastive self-supervised learning method for nucleotide-level genomic data, which substantially improves the quality of the learned representations and performance compared to the current state-of-the-art deep learning frameworks.
1 code implementation • 22 Sep 2021 • Farzin Soleymani, Mohammad Eslami, Tobias Elze, Bernd Bischl, Mina Rezaei
We propose a Deep Variational Clustering (DVC) framework for unsupervised representation learning and clustering of large-scale medical images.
no code implementations • 15 Sep 2021 • Mina Rezaei, Farzin Soleymani, Bernd Bischl, Shekoofeh Azizi
In this paper, we propose deep Bregman divergences for contrastive learning of visual representations, where we aim to enhance the contrastive loss used in self-supervised learning by training additional networks based on functional Bregman divergence.
no code implementations • 12 Sep 2021 • Stefan Coors, Daniel Schalk, Bernd Bischl, David Rügamer
Despite its restriction to an interpretable model space, our system is competitive in terms of predictive performance on most data sets while being more user-friendly and transparent.
no code implementations • 11 Sep 2021 • Mina Rezaei, Emilio Dorigatti, David Ruegamer, Bernd Bischl
We simultaneously train two deep learning models, a deep representation network that captures the data distribution, and a deep clustering network that learns embedded features and performs clustering.
1 code implementation • 8 Sep 2021 • Florian Pfisterer, Lennart Schneider, Julia Moosbauer, Martin Binder, Bernd Bischl
When developing and analyzing new hyperparameter optimization methods, it is vital to empirically evaluate and compare them on well-curated benchmark suites.
no code implementations • 3 Sep 2021 • Christoph Molnar, Timo Freiesleben, Gunnar König, Giuseppe Casalicchio, Marvin N. Wright, Bernd Bischl
Scientists and practitioners increasingly rely on machine learning to model data and draw conclusions.
no code implementations • 28 Jul 2021 • Ludwig Bothmann, Sven Strickroth, Giuseppe Casalicchio, David Rügamer, Marius Lindauer, Fabian Scheipl, Bernd Bischl
It should be openly accessible to everyone, with as few barriers as possible; even more so for key technologies such as Machine Learning (ML) and Data Science (DS).
no code implementations • 13 Jul 2021 • Bernd Bischl, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, Stefan Coors, Janek Thomas, Theresa Ullmann, Marc Becker, Anne-Laure Boulesteix, Difan Deng, Marius Lindauer
Most machine learning algorithms are configured by one or several hyperparameters that must be carefully chosen and often considerably impact performance.
no code implementations • ICML Workshop AutoML 2021 • Lennart Schneider, Florian Pfisterer, Martin Binder, Bernd Bischl
Neural architecture search (NAS) promises to make deep learning accessible to non-experts by automating architecture engineering of deep neural networks.
1 code implementation • 15 Jun 2021 • Gunnar König, Timo Freiesleben, Bernd Bischl, Giuseppe Casalicchio, Moritz Grosse-Wentrup
Direct importance provides causal insight into the model's mechanism, yet it fails to expose the leakage of information from associated but not directly used variables.
1 code implementation • 10 Jun 2021 • Pieter Gijsbers, Florian Pfisterer, Jan N. van Rijn, Bernd Bischl, Joaquin Vanschoren
Hyperparameter optimization in machine learning (ML) deals with the problem of empirically learning an optimal algorithm configuration from data, usually formulated as a black-box optimization problem.
no code implementations • ICML Workshop AutoML 2021 • Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Automated hyperparameter optimization (HPO) can support practitioners to obtain peak performance in machine learning models.
1 code implementation • 23 Apr 2021 • Quay Au, Julia Herbinger, Clemens Stachl, Bernd Bischl, Giuseppe Casalicchio
However, for researchers and practitioners, it is often equally important to quantify the importance or visualize the effect of feature groups.
2 code implementations • 6 Apr 2021 • David Rügamer, Chris Kolb, Cornelius Fritz, Florian Pfisterer, Philipp Kopper, Bernd Bischl, Ruolin Shen, Christina Bukas, Lisa Barros de Andrade e Sousa, Dominik Thalmeier, Philipp Baumann, Lucas Kook, Nadja Klein, Christian L. Müller
In this paper we describe the implementation of semi-structured deep distributional regression, a flexible framework to learn conditional distributions based on the combination of additive regression models and deep networks.
2 code implementations • 1 Apr 2021 • Florian Pargent, Florian Pfisterer, Janek Thomas, Bernd Bischl
Since most machine learning (ML) algorithms are designed for numerical inputs, efficiently encoding categorical variables is a crucial aspect in data analysis.
1 code implementation • 6 Feb 2021 • Jann Goschenhofer, Rasmus Hvingelby, David Rügamer, Janek Thomas, Moritz Wagner, Bernd Bischl
Based on these adaptations, we explore the potential of deep semi-supervised learning in the context of time series classification by evaluating our methods on large public time series classification problems with varying amounts of labelled samples.
no code implementations • 11 Nov 2020 • Philipp Kopper, Sebastian Pölsterl, Christian Wachinger, Bernd Bischl, Andreas Bender, David Rügamer
We propose a versatile framework for survival analysis that combines advanced concepts from statistics with deep learning.
no code implementations • 4 Nov 2020 • Ashrya Agrawal, Florian Pfisterer, Bernd Bischl, Francois Buet-Golfouse, Srijan Sood, Jiahao Chen, Sameena Shah, Sebastian Vollmer
We present an empirical study of debiasing methods for classifiers, showing that debiasers often fail in practice to generalize out-of-sample, and can in fact make fairness worse rather than better.
no code implementations • 19 Oct 2020 • Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl
To address the challenges and advance the field, we urge to recall our roots of interpretable, data-driven modeling in statistics and (rule-based) ML, but also to consider other areas such as sensitivity analysis, causal inference, and the social sciences.
no code implementations • 14 Oct 2020 • David Rügamer, Florian Pfisterer, Bernd Bischl
We present neural mixture distributional regression (NMDR), a holistic framework to estimate complex finite mixtures of distributional regressions defined by flexible additive predictors.
no code implementations • 11 Sep 2020 • Katharina Rath, Christopher G. Albert, Bernd Bischl, Udo von Toussaint
In the limit of small mapping times, the Hamiltonian function can be identified with a part of the generating function and thereby learned from observed time-series data of the system's evolution.
no code implementations • 18 Aug 2020 • Raphael Sonabend, Franz J. Király, Andreas Bender, Bernd Bischl, Michel Lang
As machine learning has become increasingly popular over the last few decades, so too has the number of machine learning interfaces for implementing these models.
2 code implementations • 16 Jul 2020 • Gunnar König, Christoph Molnar, Bernd Bischl, Moritz Grosse-Wentrup
Interpretable Machine Learning (IML) methods are used to gain insight into the relevance of a feature of interest for the performance of a model.
1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl
An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.
1 code implementation • 27 Jun 2020 • Andreas Bender, David Rügamer, Fabian Scheipl, Bernd Bischl
The modeling of time-to-event data, also known as survival analysis, requires specialized methods that can deal with censoring and truncation, time-varying features and effects, and that extend to settings with multiple competing events.
1 code implementation • 8 Jun 2020 • Christoph Molnar, Gunnar König, Bernd Bischl, Giuseppe Casalicchio
In addition, we apply the conditional subgroups approach to partial dependence plots (PDP), a popular method for describing feature effects that can also suffer from extrapolation when features are dependent and interactions are present in the model.
1 code implementation • 23 Apr 2020 • Susanne Dandl, Christoph Molnar, Martin Binder, Bernd Bischl
We show the usefulness of MOC in concrete cases and compare our approach with state-of-the-art methods for counterfactual explanations.
no code implementations • 30 Dec 2019 • Martin Binder, Julia Moosbauer, Janek Thomas, Bernd Bischl
While model-based optimization needs fewer objective evaluations to achieve good performance, it incurs computational overhead compared to the NSGA-II, so the preferred choice depends on the cost of evaluating a model on given data.
1 code implementation • 18 Nov 2019 • Florian Pfisterer, Laura Beggel, Xudong Sun, Fabian Scheipl, Bernd Bischl
In order to assess the methods and implementations, we run a benchmark on a wide variety of representative (time series) data sets, with in-depth analysis of empirical results, and strive to provide a reference ranking for which method(s) to use for non-expert practitioners.
no code implementations • 6 Nov 2019 • Florian Pfisterer, Janek Thomas, Bernd Bischl
Building models from data is an integral part of the majority of data science workflows.
no code implementations • 28 Aug 2019 • Florian Pfisterer, Stefan Coors, Janek Thomas, Bernd Bischl
AutoML systems are currently rising in popularity, as they can build powerful models without human oversight.
no code implementations • 25 Aug 2019 • Xudong Sun, Bernd Bischl
Aiming at a comprehensive and concise tutorial survey, recaps of variational inference and of reinforcement learning with probabilistic graphical models are given with detailed derivations.
no code implementations • 1 Jul 2019 • Pieter Gijsbers, Erin LeDell, Janek Thomas, Sébastien Poirier, Bernd Bischl, Joaquin Vanschoren
In recent years, an active field of research has developed around automated machine learning (AutoML).
1 code implementation • 7 Jun 2019 • Xudong Sun, Alexej Gossmann, Yu Wang, Bernd Bischl
A novel variational inference based resampling framework is proposed to evaluate the robustness and generalization capability of deep learning models with respect to distribution shift.
no code implementations • 24 Apr 2019 • Jann Goschenhofer, Franz MJ Pfister, Kamer Ali Yuksel, Bernd Bischl, Urban Fietzek, Janek Thomas
To solve the problem of limited availability of high quality training data, we propose a transfer learning technique which helps to improve model performance substantially.
1 code implementation • 10 Apr 2019 • Xudong Sun, Jiali Lin, Bernd Bischl
A machine learning pipeline potentially consists of several stages of operations, such as data preprocessing, feature engineering, and machine learning model training.
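Such a staged pipeline can be sketched as plain function composition; the stage names and implementations below are hypothetical toy examples, not the paper's system:

```python
# Data preprocessing stage: replace missing values with a constant 0.0.
def impute(X):
    return [[0.0 if v is None else v for v in row] for row in X]

# Feature engineering stage: append the square of the first feature.
def add_square(X):
    return [row + [row[0] ** 2] for row in X]

# "Training" stage: a trivial model that predicts the mean of y.
def fit_mean_model(X, y):
    mean_y = sum(y) / len(y)
    return lambda X_new: [mean_y for _ in X_new]

pipeline_stages = [impute, add_square]

def run_pipeline(X, y):
    for stage in pipeline_stages:
        X = stage(X)
    return fit_mean_model(X, y)

model = run_pipeline([[1.0], [None], [3.0]], [1.0, 2.0, 3.0])
preds = model([[5.0]])
```

Each stage, as well as its hyperparameters, is a choice to be configured, which is what turns pipeline construction into a joint optimization problem.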
2 code implementations • 8 Apr 2019 • Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl
Post-hoc model-agnostic interpretation methods such as partial dependence plots can be employed to interpret complex machine learning models.
no code implementations • 8 Apr 2019 • Quay Au, Daniel Schalk, Giuseppe Casalicchio, Ramona Schoedel, Clemens Stachl, Bernd Bischl
One way to address this problem is the so called problem transformation method.
2 code implementations • 8 Apr 2019 • Christian A. Scholbeck, Christoph Molnar, Christian Heumann, Bernd Bischl, Giuseppe Casalicchio
Model-agnostic interpretation techniques allow us to explain the behavior of any predictive model.
1 code implementation • 24 Feb 2019 • Xudong Sun, Andrea Bommert, Florian Pfisterer, Jörg Rahnenführer, Michel Lang, Bernd Bischl
To carry out clinical research in this scenario, an analyst could train a machine learning model only on the local data site, but it is still possible to execute a statistical query at a certain cost, in the form of sending a machine learning model to some of the remote data sites and receiving the performance measures as feedback, possibly because prediction is usually much cheaper than training.
no code implementations • 18 Jan 2019 • Laura Beggel, Michael Pfeiffer, Bernd Bischl
Reliably detecting anomalies in a given set of images is a task of high practical relevance for visual quality inspection, surveillance, or medical image analysis.
no code implementations • 23 Nov 2018 • Florian Pfisterer, Jan N. van Rijn, Philipp Probst, Andreas Müller, Bernd Bischl
The performance of modern machine learning methods highly depends on their hyperparameter configurations.
3 code implementations • 10 Jul 2018 • Janek Thomas, Stefan Coors, Bernd Bischl
Automatic machine learning performs predictive modeling with high-performing machine learning tools without human interference.
no code implementations • 28 Jun 2018 • Daniel Kühn, Philipp Probst, Janek Thomas, Bernd Bischl
Understanding the influence of hyperparameters on the performance of a machine learning algorithm is an important scientific topic in itself and can help to improve automatic hyperparameter tuning procedures.
1 code implementation • 18 Apr 2018 • Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl
Based on local feature importance, we propose two visual tools: partial importance (PI) and individual conditional importance (ICI) plots, which visualize how changes in a feature affect the model performance on average, as well as for individual observations.
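The idea can be sketched as follows: for each observation and each grid value of the feature of interest, record the model's loss when that feature is set to the grid value. One row of the resulting matrix is (a simplified stand-in for) an ICI curve; the column-wise average is the PI curve. This Python sketch uses squared-error loss on toy data and simplifies the paper's exact estimator:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy data: feature 0 carries most of the signal.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=100)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

grid = np.linspace(-2, 2, 15)
ici = np.empty((len(X), len(grid)))
for k, v in enumerate(grid):
    X_mod = X.copy()
    X_mod[:, 0] = v                               # replace feature 0 everywhere
    ici[:, k] = (y - model.predict(X_mod)) ** 2   # per-observation loss
pi = ici.mean(axis=0)                             # partial importance curve
```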
2 code implementations • 26 Feb 2018 • Philipp Probst, Bernd Bischl, Anne-Laure Boulesteix
Firstly, we formalize the problem of tuning from a statistical point of view, define data-based defaults and suggest general measures quantifying the tunability of hyperparameters of algorithms.
4 code implementations • 11 Aug 2017 • Bernd Bischl, Giuseppe Casalicchio, Matthias Feurer, Pieter Gijsbers, Frank Hutter, Michel Lang, Rafael G. Mantovani, Jan N. van Rijn, Joaquin Vanschoren
Machine learning research depends on objectively interpretable, comparable, and reproducible algorithm benchmarks.
1 code implementation • 27 Mar 2017 • Philipp Probst, Quay Au, Giuseppe Casalicchio, Clemens Stachl, Bernd Bischl
We implemented several multilabel classification algorithms in the machine learning package mlr.
4 code implementations • 9 Mar 2017 • Bernd Bischl, Jakob Richter, Jakob Bossek, Daniel Horn, Janek Thomas, Michel Lang
We present mlrMBO, a flexible and comprehensive R toolbox for model-based optimization (MBO), also known as Bayesian optimization, which addresses the problem of expensive black-box optimization by approximating the given objective function through a surrogate regression model.
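The core MBO loop can be illustrated in a few lines (mlrMBO itself is an R toolbox; this is a hedged Python analogue with a Gaussian-process surrogate, expected improvement as the infill criterion, and a toy stand-in for the expensive black box):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                       # toy stand-in for an expensive black box
    return np.sin(3 * x) + 0.5 * x ** 2

X_obs = np.array([[-1.5], [0.0], [1.5]])
y_obs = objective(X_obs).ravel()
candidates = np.linspace(-2, 2, 200).reshape(-1, 1)

for _ in range(10):
    # Fit the surrogate regression model to all evaluated points.
    gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)
    # Expected improvement over the best observed value (minimization).
    best = y_obs.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # Evaluate the objective at the most promising candidate.
    x_next = candidates[np.argmax(ei)]
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, objective(x_next[0]))

x_best = X_obs[np.argmin(y_obs), 0]
```

In practice the candidate grid is replaced by an inner optimization of the infill criterion, and the surrogate and criterion are configurable, which is exactly the flexibility mlrMBO provides.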
no code implementations • 15 Feb 2017 • Janek Thomas, Tobias Hepp, Andreas Mayr, Bernd Bischl
We present a new variable selection method based on model-based gradient boosting and randomly permuted variables.
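The shadow-variable idea behind this can be sketched as follows (the paper uses model-based boosting in R; this Python analogue with scikit-learn's gradient boosting only illustrates the principle): append randomly permuted copies of all features, fit the boosting model, and keep only variables whose importance exceeds that of the best permuted copy.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy data: only features 0 and 1 are informative.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=300)

# Shadow variables: each column permuted independently, destroying any signal.
shadows = rng.permuted(X, axis=0)
X_aug = np.hstack([X, shadows])
model = GradientBoostingRegressor(random_state=0).fit(X_aug, y)

imp = model.feature_importances_
threshold = imp[X.shape[1]:].max()            # importance of the best shadow
selected = np.where(imp[:X.shape[1]] > threshold)[0]
```

Because the shadows are pure noise by construction, they provide a data-driven importance threshold instead of an arbitrary cutoff.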
1 code implementation • 5 Jan 2017 • Giuseppe Casalicchio, Jakob Bossek, Michel Lang, Dominik Kirchhoff, Pascal Kerschke, Benjamin Hofner, Heidi Seibold, Joaquin Vanschoren, Bernd Bischl
We show how the OpenML package allows R users to easily search, download and upload data sets and machine learning tasks.
1 code implementation • 30 Nov 2016 • Janek Thomas, Andreas Mayr, Bernd Bischl, Matthias Schmid, Adam Smith, Benjamin Hofner
We apply this new algorithm to a study to estimate abundance of common eider in Massachusetts, USA, featuring excess zeros, overdispersion, non-linearity and spatio-temporal structures.
no code implementations • 18 Sep 2016 • Julia Schiffner, Bernd Bischl, Michel Lang, Jakob Richter, Zachary M. Jones, Philipp Probst, Florian Pfisterer, Mason Gallo, Dominik Kirchhoff, Tobias Kühn, Janek Thomas, Lars Kotthoff
This document provides an in-depth introduction to the mlr framework for machine learning experiments in R.
no code implementations • 10 Feb 2016 • Aydin Demircioglu, Daniel Horn, Tobias Glasmachers, Bernd Bischl, Claus Weihs
Kernelized Support Vector Machines (SVMs) are among the best performing supervised learning methods.
2 code implementations • 8 Jun 2015 • Bernd Bischl, Pascal Kerschke, Lars Kotthoff, Marius Lindauer, Yuri Malitsky, Alexandre Frechette, Holger Hoos, Frank Hutter, Kevin Leyton-Brown, Kevin Tierney, Joaquin Vanschoren
To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature.
1 code implementation • 29 Jul 2014 • Joaquin Vanschoren, Jan N. van Rijn, Bernd Bischl, Luis Torgo
Many sciences have made significant breakthroughs by adopting online tools that help organize, structure and mine information that is too detailed to be printed in journals.