no code implementations • 31 Jan 2024 • Rafael Blanquero, Emilio Carrizosa, Pepa Ramírez-Cobo, M. Remedios Sillero-Denamiel
The Lasso has become a benchmark data analysis procedure, and numerous variants have been proposed in the literature.
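As a reference point, the standard Lasso can be sketched in a few lines (synthetic, illustrative data; the paper's variants are not reproduced here):

```python
# Standard Lasso on synthetic data: the L1 penalty selects a sparse model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.zeros(10)
beta[:3] = [2.0, -1.5, 1.0]                  # only three informative features
y = X @ beta + 0.1 * rng.normal(size=200)

model = Lasso(alpha=0.1).fit(X, y)
# Coefficients of the irrelevant features are driven to exactly zero.
```

The L1 penalty both shrinks coefficients and performs variable selection, which is the property the many proposed variants build on.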
no code implementations • 31 Jan 2024 • Rafael Blanquero, Emilio Carrizosa, Pepa Ramírez-Cobo, M. Remedios Sillero-Denamiel
However, features are usually correlated, a fact that violates the Naïve Bayes assumption of conditional independence and may degrade the method's performance.
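The effect of violating conditional independence can be seen with a toy experiment (illustrative only, not the paper's method): duplicating a feature double-counts the same evidence and inflates the classifier's confidence.

```python
# Duplicating a feature (perfect correlation) makes Gaussian Naive Bayes'
# posterior probabilities overconfident: the same evidence is counted twice.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=500)
X = rng.normal(size=(500, 1)) + 2.0 * y[:, None]   # one informative feature

p_single = GaussianNB().fit(X, y).predict_proba(X).max(axis=1).mean()
X_dup = np.hstack([X, X])                          # perfectly correlated copy
p_dup = GaussianNB().fit(X_dup, y).predict_proba(X_dup).max(axis=1).mean()
# p_dup exceeds p_single: independence is violated, confidence is inflated.
```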
no code implementations • 25 Jan 2024 • Pepa Ramírez-Cobo, Emilio Carrizosa, Rosa Elvira Lillo
Given a real operational risk database, the aggregate loss model is estimated by separately fitting the inter-loss times and the severities.
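A toy version of the two-part fit, with assumed parametric families (exponential inter-loss times, log-normal severities) chosen purely for illustration; the paper works with a real database and its own models:

```python
# Two-part aggregate-loss sketch: fit frequency and severity separately.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
inter_times = rng.exponential(scale=3.0, size=1000)          # days between losses
severities = rng.lognormal(mean=8.0, sigma=1.2, size=1000)   # loss amounts

_, scale_t = stats.expon.fit(inter_times, floc=0)            # frequency part
shape_s, _, scale_s = stats.lognorm.fit(severities, floc=0)  # severity part

# Expected aggregate loss per unit time = loss frequency x mean severity.
loss_rate = (1.0 / scale_t) * stats.lognorm.mean(shape_s, scale=scale_s)
```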
no code implementations • 15 Jan 2024 • Sandra Benítez-Peña, Rafael Blanquero, Emilio Carrizosa, Pepa Ramírez-Cobo
The relevance of features in a classification procedure is linked to the fact that misclassification costs are frequently asymmetric, since false positives and false negatives may have very different consequences.
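One standard device for encoding asymmetric costs is class weighting; the sketch below is illustrative and is not the paper's feature-selection procedure:

```python
# Weighting the positive class 5x shifts the fitted rule toward predicting
# positives, raising recall at the cost of more false positives.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(1.5, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

plain = LogisticRegression().fit(X, y)
costly_fn = LogisticRegression(class_weight={0: 1.0, 1: 5.0}).fit(X, y)

def recall(model):
    # Fraction of true positives that the model recovers.
    return (model.predict(X)[y == 1] == 1).mean()
```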
no code implementations • 22 Dec 2023 • Sandra Benítez-Peña, Rafael Blanquero, Emilio Carrizosa, Pepa Ramírez-Cobo
Such a maximal-margin hyperplane is obtained by solving a convex quadratic problem with linear constraints and integer variables.
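The convex-QP core of such a formulation can be sketched on a tiny toy set (here the hard-margin SVM dual, solved with SciPy's general-purpose solver; the integer variables and the extra linear constraints of the paper's model are omitted and would require an MIQP solver):

```python
# Hard-margin SVM dual: min 0.5 a'Qa - sum(a)  s.t.  a >= 0,  y'a = 0.
import numpy as np
from scipy.optimize import minimize

X = np.array([[1.0, 1.0], [2.0, 2.5], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

Q = (y[:, None] * X) @ (y[:, None] * X).T
res = minimize(lambda a: 0.5 * a @ Q @ a - a.sum(),
               np.zeros(4),
               jac=lambda a: Q @ a - 1.0,
               bounds=[(0.0, None)] * 4,
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])
a = res.x
w = ((a * y)[:, None] * X).sum(axis=0)      # maximal-margin normal vector
b = y[np.argmax(a)] - X[np.argmax(a)] @ w   # offset from a support vector
```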
1 code implementation • 19 Oct 2023 • Emilio Carrizosa, Jasone Ramírez-Ayerbe, Dolores Romero Morales
By means of novel Mathematical Optimization models, we provide a counterfactual explanation for each instance in a group of interest, so that the total cost of the perturbations is minimized under some linking constraints.
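For a single instance and a linear classifier, the minimal-perturbation counterfactual has a closed form, which conveys the core idea; the paper's optimization models additionally handle groups of instances and linking constraints, which this sketch omits:

```python
# Minimal L2 counterfactual for a linear score f(x) = w.x + b: project the
# instance onto the decision boundary along the normal direction w.
import numpy as np

w = np.array([1.0, -2.0, 0.5])
b = -0.3
x = np.array([0.2, 0.4, 1.0])        # instance currently classified negative

score = w @ x + b                    # -0.4: on the negative side
delta = -(score / (w @ w)) * w       # smallest perturbation reaching f = 0
x_cf = x + delta                     # counterfactual lies on the boundary
```

The perturbation cost is |f(x)| / ||w||, the point-to-hyperplane distance; in practice one would push slightly past the boundary to flip the label strictly.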
no code implementations • 9 Oct 2023 • Sandra Benítez-Peña, Rafael Blanquero, Emilio Carrizosa, Pepa Ramírez-Cobo
Classification in SVM is based on a scoring procedure that yields a deterministic classification rule; this rule can be transformed into a probabilistic one (as implemented in off-the-shelf SVM libraries), but it is not probabilistic in nature.
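The score-versus-probability distinction is visible directly in the standard scikit-learn API: `decision_function` returns the native deterministic score, while `predict_proba` is a post hoc sigmoid (Platt-type) calibration fitted on top of it.

```python
# SVM scores are deterministic; "probabilities" are a separate, fitted add-on.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", probability=True, random_state=0).fit(X, y)
scores = clf.decision_function(X)    # signed distances: the native rule
probs = clf.predict_proba(X)[:, 1]   # post hoc calibration, not intrinsic
```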
no code implementations • 19 Oct 2021 • Emilio Carrizosa, Marcela Galvis Restrepo, Dolores Romero Morales
We propose a method to reduce the complexity of Generalized Linear Models in the presence of categorical predictors.
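A simplified version of the idea, with an assumed greedy grouping rule (merging levels with similar outcome rates; the paper's actual reduction method is not reproduced here):

```python
# Merge categorical levels with similar outcome rates before fitting a GLM,
# shrinking 8 dummy variables down to 3 groups.
import numpy as np

rng = np.random.default_rng(4)
levels = np.array(list("ABCDEFGH"))
true_rate = dict(zip(levels, [0.1, 0.12, 0.5, 0.52, 0.9, 0.88, 0.11, 0.51]))
cat = rng.choice(levels, size=4000)
y = rng.random(4000) < np.vectorize(true_rate.get)(cat)

# Estimate each level's rate, sort, and split into k contiguous groups.
rates = {lv: y[cat == lv].mean() for lv in levels}
order = sorted(levels, key=rates.get)
groups = np.array_split(order, 3)
mapping = {lv: g for g, grp in enumerate(groups) for lv in grp}
```

The reduced predictor `mapping[cat]` then replaces the original categorical variable in the GLM, with far fewer parameters to estimate.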
no code implementations • 19 Oct 2021 • Rafael Blanquero, Emilio Carrizosa, Cristina Molero-Río, Dolores Romero Morales
Our classifier can be seen as a randomized tree, since at each node of the decision tree a random decision is made.
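A single randomized node can be sketched as follows (an assumed sigmoid-gated split for illustration; the function name and parameterization are ours, not the paper's):

```python
# A soft split: route an observation left with probability sigma(a.x + b)
# instead of applying a hard threshold test.
import numpy as np

def soft_route(x, a, b, rng):
    p_left = 1.0 / (1.0 + np.exp(-(a @ x + b)))   # smooth, differentiable split
    branch = "left" if rng.random() < p_left else "right"
    return branch, p_left

rng = np.random.default_rng(5)
a, b = np.array([2.0, -1.0]), 0.0
branch, p_left = soft_route(np.array([3.0, 0.5]), a, b, rng)
# a.x + b = 5.5, so p_left = sigma(5.5) ~ 0.996: almost always routed left.
```

Because the routing probability is smooth in the split parameters, the whole tree can be trained by continuous optimization rather than combinatorial search.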
no code implementations • 21 Feb 2020 • Rafael Blanquero, Emilio Carrizosa, Cristina Molero-Río, Dolores Romero Morales
In this paper, we propose a continuous optimization approach to build sparse optimal classification trees, based on oblique cuts, with the aim of using fewer predictor variables in the cuts as well as along the whole tree.
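As a stand-in for one sparse oblique cut, an L1-penalized logistic split illustrates the sparsity goal (the paper optimizes all cuts of the tree jointly; this sketch trains a single node, and the penalty strength is our choice):

```python
# An L1-penalized oblique "cut": irrelevant coefficients shrink to zero,
# so the cut uses few predictor variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only two of six variables matter

cut = LogisticRegression(penalty="l1", C=0.3, solver="liblinear").fit(X, y)
```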