Search Results for author: Fabio Sigrist

Found 8 papers, 7 with code

Iterative Methods for Vecchia-Laplace Approximations for Latent Gaussian Process Models

1 code implementation · 18 Oct 2023 · Pascal Kündig, Fabio Sigrist

Latent Gaussian process (GP) models are flexible probabilistic non-parametric function models.

A Comparison of Machine Learning Methods for Data with High-Cardinality Categorical Variables

1 code implementation · 5 Jul 2023 · Fabio Sigrist

High-cardinality categorical variables are variables for which the number of different levels is large relative to the sample size of a data set, or in other words, there are few data points per level.
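As a rough illustration of this setting (the data, the shrinkage estimator, and the variance components below are invented for the sketch, not taken from the paper), a toy simulation shows why plain per-level means break down when there are only a few data points per level, and how a random-effects style shrinkage toward the grand mean helps:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 samples spread over 80 levels: only a few data points per level,
# so plain per-level sample means are very noisy.
n, n_levels = 200, 80
level = rng.integers(0, n_levels, size=n)
effect_true = rng.normal(0.0, 1.0, size=n_levels)   # latent per-level effect
y = effect_true[level] + rng.normal(0.0, 1.0, size=n)

grand_mean = y.mean()
est_naive = np.full(n_levels, grand_mean)
est_shrunk = np.full(n_levels, grand_mean)
sigma2_b, sigma2_e = 1.0, 1.0   # variance components, assumed known here

for g in range(n_levels):
    yg = y[level == g]
    if yg.size == 0:
        continue                 # unobserved level: fall back to grand mean
    est_naive[g] = yg.mean()
    # Random-effects style shrinkage of the level mean toward the grand mean.
    w = sigma2_b / (sigma2_b + sigma2_e / yg.size)
    est_shrunk[g] = grand_mean + w * (yg.mean() - grand_mean)

mse_naive = np.mean((est_naive - effect_true) ** 2)
mse_shrunk = np.mean((est_shrunk - effect_true) ** 2)
print(mse_naive, mse_shrunk)
```

With few points per level, the shrunken estimates are typically much closer to the true per-level effects than the raw level means.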

Latent Gaussian Model Boosting

1 code implementation · 19 May 2021 · Fabio Sigrist

Latent Gaussian models and boosting are widely used techniques in statistics and machine learning.

Joint Variable Selection of both Fixed and Random Effects for Gaussian Process-based Spatially Varying Coefficient Models

no code implementations · 6 Jan 2021 · Jakob A. Dambon, Fabio Sigrist, Reinhard Furrer

It relies on penalized maximum likelihood estimation (PMLE) and allows variable selection with respect to both fixed effects and Gaussian process random effects.

Tasks: Variable Selection, Methodology
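The core mechanism behind PMLE-based variable selection can be sketched with a lasso-type penalty on a Gaussian likelihood, which sets inactive coefficients exactly to zero; the coordinate-descent solver and all data below are a generic illustration, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy design: 5 candidate covariates, only the first two truly active.
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            rho = X[:, j] @ r / n
            # Soft-thresholding: small partial correlations give b[j] = 0.
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

b_hat = lasso_cd(X, y, lam=0.3)
print(np.round(b_hat, 2))  # inactive coefficients are typically exactly zero
```

The soft-thresholding step is what performs selection: coefficients whose partial correlation with the residual falls below the penalty level are set exactly to zero rather than merely shrunk.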

Gaussian Process Boosting

1 code implementation · 6 Apr 2020 · Fabio Sigrist

We introduce a novel way to combine boosting with Gaussian process and mixed effects models.
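A minimal sketch of this idea (grouped random effects rather than a full Gaussian process, decision stumps as base learners, and variance components assumed known; all choices here are illustrative, not the paper's algorithm) alternates between a boosting step on the fixed effects and re-estimating the random effects:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: nonlinear fixed effect sin(2x) plus a grouped random effect.
n, n_groups = 400, 20
x = rng.uniform(-2, 2, size=n)
group = rng.integers(0, n_groups, size=n)
b_true = rng.normal(0.0, 1.0, size=n_groups)
y = np.sin(2 * x) + b_true[group] + rng.normal(0.0, 0.3, size=n)

def fit_stump(x, r):
    """Best single-split regression tree (stump) for residuals r."""
    best = (np.inf, 0.0, 0.0, 0.0)
    for s in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left = x <= s
        ml, mr = r[left].mean(), r[~left].mean()
        sse = ((r[left] - ml) ** 2).sum() + ((r[~left] - mr) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, ml, mr)
    return best[1:]

sigma2_b, sigma2_e = 1.0, 0.3 ** 2   # variance components, assumed known here
F = np.zeros(n)                      # boosted fixed-effect predictor
b_hat = np.zeros(n_groups)           # estimated random effects
lr = 0.3

for _ in range(100):
    # Boosting step on residuals after removing the current random effects.
    s, ml, mr = fit_stump(x, y - b_hat[group] - F)
    F += lr * np.where(x <= s, ml, mr)
    # Re-estimate the random effects as shrunken group means of y - F,
    # as in a linear mixed model.
    r2 = y - F
    for g in range(n_groups):
        rg = r2[group == g]
        if rg.size:
            w = sigma2_b / (sigma2_b + sigma2_e / rg.size)
            b_hat[g] = w * rg.mean()

pred = F + b_hat[group]
mse_tot = np.mean((pred - (np.sin(2 * x) + b_true[group])) ** 2)
print(mse_tot)
```

Neither component alone fits this data well: trees split only on x and miss the group structure, while the mixed-model part cannot represent sin(2x); alternating between them recovers both.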

KTBoost: Combined Kernel and Tree Boosting

1 code implementation · 11 Feb 2019 · Fabio Sigrist

We introduce a novel boosting algorithm called 'KTBoost' which combines kernel boosting and tree boosting.

Tasks: Regression
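A conceptual sketch of combining the two base learners (kernel ridge regression and a decision stump, with the greedy selection rule, kernel bandwidth, and toy data all chosen for illustration rather than taken from KTBoost): at each boosting iteration, fit both learners to the residuals and keep whichever reduces the training loss more.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D regression: a smooth trend plus a jump, so both learners are useful.
n = 150
x = np.sort(rng.uniform(-2, 2, size=n))
y = np.sin(3 * x) + (x > 0.5) * 1.5 + rng.normal(0.0, 0.2, size=n)

# RBF Gram matrix for the kernel base learner.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.3 ** 2)

def kernel_update(r, lam=1.0):
    """Kernel ridge fit to residuals r (smooth base learner)."""
    alpha = np.linalg.solve(K + lam * np.eye(n), r)
    return K @ alpha

def stump_update(r):
    """Best single-split tree fit to residuals r (discontinuous learner)."""
    best, pred = np.inf, None
    for s in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left = x <= s
        p = np.where(left, r[left].mean(), r[~left].mean())
        sse = ((r - p) ** 2).sum()
        if sse < best:
            best, pred = sse, p
    return pred

F, lr = np.zeros(n), 0.2
for _ in range(50):
    r = y - F
    cand_k, cand_t = kernel_update(r), stump_update(r)
    # Greedily keep whichever base learner reduces training loss more.
    pick = cand_k if ((r - cand_k) ** 2).sum() < ((r - cand_t) ** 2).sum() else cand_t
    F += lr * pick

mse_train = np.mean((F - y) ** 2)
print(mse_train)
```

The kernel learner captures the smooth sinusoid while the stump handles the jump at x = 0.5, which is the kind of complementarity motivating a combined ensemble.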

Gradient and Newton Boosting for Classification and Regression

2 code implementations · 9 Aug 2018 · Fabio Sigrist

In addition, we introduce a novel tuning parameter for tree-based Newton boosting which is interpretable and important for predictive accuracy.

Tasks: Classification, General Classification, +1
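The difference between the two update rules can be illustrated for the logistic loss and a single tree leaf (the samples and scores below are made up for the sketch): gradient boosting fits the leaf value to the negative gradients by least squares, i.e. their plain mean, while Newton boosting takes a hessian-weighted Newton step.

```python
import numpy as np

# Samples falling into one tree leaf, logistic loss, labels in {0, 1}.
y = np.array([1, 1, 1, 0, 1])
f = np.array([-1.0, 0.5, 0.0, -0.5, 2.0])   # current boosting scores (logits)
p = 1.0 / (1.0 + np.exp(-f))                # current probabilities

g = p - y            # first derivative of the loss w.r.t. f
h = p * (1.0 - p)    # second derivative

# Gradient boosting: least-squares fit to the negative gradients.
leaf_gradient = -g.mean()
# Newton boosting: one Newton step for the leaf, weighting by the hessian.
leaf_newton = -g.sum() / h.sum()

def leaf_loss(v):
    """Total logistic loss of this leaf after adding leaf value v."""
    return np.sum(np.log1p(np.exp(f + v)) - y * (f + v))

print(leaf_gradient, leaf_newton)
print(leaf_loss(0.0), leaf_loss(leaf_gradient), leaf_loss(leaf_newton))
```

On this leaf the Newton value is considerably larger than the gradient value because the second derivatives are small, and it yields the bigger loss reduction; the gradient step would need a separate step-length tuning to catch up.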

Grabit: Gradient Tree Boosted Tobit Models for Default Prediction

2 code implementations · 23 Nov 2017 · Fabio Sigrist, Christoph Hirnschall

A frequent problem in binary classification is class imbalance between a minority and a majority class, such as defaults and non-defaults in default prediction.

Tasks: Methodology
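The Tobit likelihood underlying a Tobit model can be sketched directly (the censoring bounds, noise scale, and demo data below are illustrative assumptions, and this is a generic Tobit loss rather than the Grabit implementation): observations at the bounds contribute a censored (CDF) term, interior observations a density term.

```python
import math
import numpy as np

_erf = np.vectorize(math.erf)

def norm_pdf(z):
    return np.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + _erf(z / math.sqrt(2.0)))

def tobit_nll(y, f, sigma=1.0, lo=0.0, hi=1.0):
    """Negative log-likelihood of a Tobit model: latent score ~ N(f, sigma^2),
    observed y is censored at lo and hi."""
    ll_lo = np.log(norm_cdf((lo - f) / sigma) + 1e-12)          # y at lower bound
    ll_hi = np.log(1.0 - norm_cdf((hi - f) / sigma) + 1e-12)    # y at upper bound
    ll_mid = np.log(norm_pdf((y - f) / sigma) / sigma + 1e-12)  # uncensored y
    ll = np.where(y <= lo, ll_lo, np.where(y >= hi, ll_hi, ll_mid))
    return -ll.sum()

# Tiny demo: scores that respect the censoring pattern get a lower loss.
y_demo = np.array([0.0, 0.0, 0.3, 1.0])
f_good = np.array([-0.5, -0.2, 0.3, 1.4])
f_bad = np.array([0.5, 0.5, 0.5, 0.5])
print(tobit_nll(y_demo, f_good), tobit_nll(y_demo, f_bad))
```

Because censored observations only need the latent score to be on the correct side of the bound, such a loss can exploit "how far" a majority-class case is from the boundary, which is one way to soften hard class imbalance.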
