Feature Importance
249 papers with code • 6 benchmarks • 5 datasets
Latest papers with no code
Using a Local Surrogate Model to Interpret Temporal Shifts in Global Annual Data
This paper focuses on explaining changes over time in globally-sourced, annual temporal data, with the specific objective of identifying pivotal factors that contribute to these temporal shifts.
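The paper's own surrogate construction is not reproduced here, but the general local-surrogate idea (as popularized by LIME) can be sketched: perturb the input around an instance, query the black-box model, and fit a simple linear model whose coefficients serve as local feature importances. The toy `black_box` function, the perturbation radius, and the sample count below are all illustrative assumptions, not details from the paper.

```python
import random

random.seed(0)

def black_box(x0, x1):
    # Toy opaque model; locally around (2, 1) its gradient is (4, -3).
    return x0 ** 2 - 3 * x1

x = (2.0, 1.0)
base = black_box(*x)

# Sample small perturbations around x and record the response change.
d0s, d1s, ys = [], [], []
for _ in range(500):
    d0 = random.uniform(-0.1, 0.1)
    d1 = random.uniform(-0.1, 0.1)
    d0s.append(d0)
    d1s.append(d1)
    ys.append(black_box(x[0] + d0, x[1] + d1) - base)

# Fit y ~ coef0*d0 + coef1*d1 via the 2x2 normal equations (Cramer's rule).
s00 = sum(d * d for d in d0s)
s11 = sum(d * d for d in d1s)
s01 = sum(a * b for a, b in zip(d0s, d1s))
b0 = sum(a * y for a, y in zip(d0s, ys))
b1 = sum(a * y for a, y in zip(d1s, ys))
det = s00 * s11 - s01 * s01
coef0 = (b0 * s11 - b1 * s01) / det
coef1 = (s00 * b1 - s01 * b0) / det
# The surrogate's coefficients recover the local sensitivities (~4 and ~-3).
```

The same recipe applied at different time points is one way to compare which features drive predictions as the data shifts.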
CAGE: Causality-Aware Shapley Value for Global Explanations
One way to explain AI models is to elucidate the predictive importance of the input features for the AI model in general, also referred to as global explanations.
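CAGE's causality-aware formulation is not reproduced here; as background, plain Shapley values attribute a prediction by averaging each feature's marginal contribution over all feature orderings, and a global explanation then aggregates these per-feature attributions (e.g. their mean absolute value) over a dataset. A minimal exact computation on a 3-feature toy model, with all function and value choices illustrative:

```python
from itertools import permutations

def f(x):
    # Toy model with an interaction between features 0 and 1.
    return x[0] * x[1] + x[2]

def shapley(f, x, ref):
    """Exact Shapley values: average marginal contribution over all orderings."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(ref)
        prev = f(current)
        for i in order:
            current[i] = x[i]  # switch feature i from its reference to its value
            val = f(current)
            phi[i] += val - prev
            prev = val
    return [p / len(perms) for p in phi]

phi = shapley(f, x=[1.0, 2.0, 3.0], ref=[0.0, 0.0, 0.0])
# The interaction x0*x1 = 2 is split evenly (phi = [1.0, 1.0, 3.0]), and the
# attributions sum to f(x) - f(ref) = 5, the efficiency property.
```

Enumeration over all n! orderings is only tractable for tiny n; practical tools approximate this by sampling.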
Explainable Machine Learning System for Predicting Chronic Kidney Disease in High-Risk Cardiovascular Patients
After conducting a bias inspection, it was found that the initial eGFR values and CKD predictions exhibited some bias, but no significant gender bias was identified.
Interaction as Explanation: A User Interaction-based Method for Explaining Image Classification Models
In computer vision, explainable AI (xAI) methods seek to mitigate the 'black-box' problem by making the decision-making process of deep learning models more interpretable and transparent.
Machine learning-based identification of Gaia astrometric exoplanet orbits
The third Gaia data release (DR3) contains $\sim$170 000 astrometric orbit solutions of two-body systems located within $\sim$500 pc of the Sun.
SemHARQ: Semantic-Aware HARQ for Multi-task Semantic Communications
Intelligent task-oriented semantic communications (SemComs) have witnessed great progress with the development of deep learning (DL).
Accurate estimation of feature importance faithfulness for tree models
In this paper, we consider a perturbation-based metric of predictive faithfulness of feature rankings (or attributions) that we call PGI squared.
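PGI squared itself is defined in the paper; the underlying perturbation idea can be sketched as follows: if a feature ranking is faithful, perturbing highly ranked features should move the model's output more than perturbing lowly ranked ones. The toy model, noise scale, and trial counts below are illustrative assumptions.

```python
import random

random.seed(1)

def model(x):
    # Toy predictor: feature 0 dominates, feature 1 is nearly irrelevant.
    return 5.0 * x[0] + 0.1 * x[1]

def prediction_gap(model, xs, feature, noise=0.5, trials=200):
    """Mean |f(x) - f(x with one feature perturbed)| over the dataset."""
    gap = 0.0
    for x in xs:
        for _ in range(trials):
            xp = list(x)
            xp[feature] += random.gauss(0.0, noise)
            gap += abs(model(x) - model(xp))
    return gap / (len(xs) * trials)

data = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
gap_top = prediction_gap(model, data, feature=0)  # highly ranked feature
gap_low = prediction_gap(model, data, feature=1)  # lowly ranked feature
# A faithful ranking (feature 0 above feature 1) yields gap_top >> gap_low.
```

A faithfulness metric then compares such gaps against what the ranking claims, rewarding attributions whose top features produce the largest prediction changes.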
Fair MP-BOOST: Fair and Interpretable Minipatch Boosting
Ensemble methods, particularly boosting, have established themselves as highly effective and widely embraced machine learning techniques for tabular data.
Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences
Using SHAP feature importance, we show that analytical insights are consistent across retraining iterations.
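The paper's models and data are not shown here, but the kind of consistency check it describes can be sketched on a toy linear model, for which SHAP values have the closed form phi_ij = w_j * (x_ij - mean_j) under a feature-independence assumption. The two coefficient vectors below are hypothetical stand-ins for successive retraining iterations; the check compares the global importance ranking (mean |phi| per feature) between runs.

```python
def global_importance(weights, rows):
    """Mean |SHAP| per feature for a linear model: phi_ij = w_j * (x_ij - mean_j)."""
    n = len(rows)
    means = [sum(r[j] for r in rows) / n for j in range(len(weights))]
    return [
        sum(abs(w * (r[j] - means[j])) for r in rows) / n
        for j, w in enumerate(weights)
    ]

rows = [[0.0, 1.0, 2.0], [1.0, 3.0, 2.5], [2.0, 5.0, 1.5], [3.0, 7.0, 2.0]]

# Hypothetical coefficients from two retraining iterations of the same model.
w_run1 = [3.0, -1.0, 0.2]
w_run2 = [2.8, -1.2, 0.3]

rank1 = sorted(range(3), key=lambda j: -global_importance(w_run1, rows)[j])
rank2 = sorted(range(3), key=lambda j: -global_importance(w_run2, rows)[j])
# Identical rankings across runs indicate the analytical insight is stable.
```

In practice a softer comparison (e.g. rank correlation rather than exact equality) is more robust to small coefficient fluctuations between retrains.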
Semi-Supervised Graph Representation Learning with Human-centric Explanation for Predicting Fatty Liver Disease
This study addresses the challenge of limited labeled data in clinical settings, particularly in predicting fatty liver disease, by exploring graph representation learning within a semi-supervised learning framework.