Feature Importance
246 papers with code • 6 benchmarks • 5 datasets
Latest papers with no code
Capturing Momentum: Tennis Match Analysis Using Machine Learning and Time Series Theory
This paper presents an analysis of momentum in tennis matches.
Explainable AI for Fair Sepsis Mortality Predictive Model
Focusing on the predictive modeling of sepsis-related mortality, we propose a method that first learns a performance-optimized predictive model and then employs transfer learning to produce a model with better fairness.
Using a Local Surrogate Model to Interpret Temporal Shifts in Global Annual Data
This paper focuses on explaining changes over time in globally sourced annual data, with the specific objective of identifying the pivotal factors that contribute to these temporal shifts.
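A local surrogate explanation in the LIME style fits a simple, weighted linear model on perturbations around one instance, so its coefficients approximate the black-box model's local sensitivities. The sketch below uses a hypothetical black-box function and a Gaussian proximity kernel for illustration; the paper's actual method, model, and data differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black-box model we want to explain locally (illustrative only).
black_box = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])                         # instance to explain
Z = x0 + rng.normal(scale=0.3, size=(200, 2))     # perturbed neighborhood
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)  # proximity kernel

# Weighted least squares on [features, intercept]; the fitted coefficients
# approximate the local partial derivatives of the black box at x0.
A = np.hstack([Z, np.ones((200, 1))])
W = np.sqrt(weights)[:, None]
coef, *_ = np.linalg.lstsq(A * W, black_box(Z) * W.ravel(), rcond=None)
```

Here `coef[0]` should land near cos(0.5) ≈ 0.88 and `coef[1]` near 2, the true local gradients, showing how the surrogate's weights serve as a local explanation.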
CAGE: Causality-Aware Shapley Value for Global Explanations
One way to explain AI models is to elucidate the predictive importance of the input features for the AI model in general, also referred to as global explanations.
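Shapley values make such global attributions by averaging each feature's marginal contribution to a set-valued performance function over all feature subsets. A minimal exact computation can be sketched as follows; the toy value function and subset scores are illustrative, not taken from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values for a set-valued function over n_features players."""
    players = list(range(n_features))
    phi = [0.0] * n_features
    for i in players:
        others = [p for p in players if p != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S|.
                w = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                     / factorial(n_features))
                phi[i] += w * (value_fn(frozenset(S) | {i}) - value_fn(frozenset(S)))
    return phi

# Toy value function: model "accuracy" achieved by each feature subset.
V = {frozenset(): 0.0, frozenset({0}): 0.6,
     frozenset({1}): 0.4, frozenset({0, 1}): 0.9}
phi = shapley_values(lambda s: V[frozenset(s)], 2)
```

By the efficiency axiom the attributions sum to the full-model score (0.9 here), which is what makes Shapley values attractive for globally apportioning predictive importance. Exact computation is exponential in the number of features, so practical tools rely on sampling or model-specific approximations.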
Explainable Machine Learning System for Predicting Chronic Kidney Disease in High-Risk Cardiovascular Patients
After conducting a bias inspection, it was found that the initial eGFR values and CKD predictions exhibited some bias, but no significant gender bias was identified.
Interaction as Explanation: A User Interaction-based Method for Explaining Image Classification Models
In computer vision, explainable AI (xAI) methods seek to mitigate the 'black-box' problem by making the decision-making process of deep learning models more interpretable and transparent.
Machine learning-based identification of Gaia astrometric exoplanet orbits
The third Gaia data release (DR3) contains $\sim$170 000 astrometric orbit solutions of two-body systems located within $\sim$500 pc of the Sun.
SemHARQ: Semantic-Aware HARQ for Multi-task Semantic Communications
Intelligent task-oriented semantic communications (SemComs) have witnessed great progress with the development of deep learning (DL).
Accurate estimation of feature importance faithfulness for tree models
In this paper, we consider a perturbation-based metric of predictive faithfulness of feature rankings (or attributions) that we call PGI squared.
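A perturbation-based faithfulness check in this spirit permutes one feature at a time and measures how much the model's predictions change; features the model truly relies on should produce large changes. This is a minimal sketch of the general idea, not the paper's PGI² metric, using a least-squares linear model and synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Only features 0 and 1 matter; 2 and 3 are irrelevant.
y = 3 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit a linear model by least squares; predictions are X @ w.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ w

def perturbation_effect(feature):
    """Mean absolute prediction change when one feature column is permuted."""
    Xp = X.copy()
    Xp[:, feature] = rng.permutation(Xp[:, feature])
    return float(np.mean(np.abs(predict(X) - predict(Xp))))

effects = [perturbation_effect(j) for j in range(4)]
```

A feature ranking is faithful to this model if it orders features the same way as `effects`: feature 0 largest, then feature 1, with features 2 and 3 near zero.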
Fair MP-BOOST: Fair and Interpretable Minipatch Boosting
Ensemble methods, particularly boosting, have established themselves as highly effective and widely embraced machine learning techniques for tabular data.