Search Results for author: Francesca Randone

Found 2 papers, 2 papers with code

Model Learning with Personalized Interpretability Estimation (ML-PIE)

1 code implementation • 13 Apr 2021 • Marco Virgolin, Andrea De Lorenzo, Francesca Randone, Eric Medvet, Mattias Wahde

The latter is estimated by a neural network that is trained concurrently with the evolution, using feedback from the user collected via uncertainty-based active learning.

Active Learning
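The abstract above describes querying the user for the candidates the estimator is least sure about. A minimal sketch of that uncertainty-based query strategy follows; it is not the authors' code — the bootstrap ensemble of nearest-neighbour predictors stands in for the paper's neural estimator, and the feature tuples (formula size, operator count) are hypothetical descriptors chosen only for illustration.

```python
import random
import statistics

# Labeled pool: (hypothetical formula descriptors, user interpretability score).
labeled = [((3, 1), 0.9), ((7, 3), 0.6), ((15, 8), 0.2), ((10, 5), 0.4)]

# Unlabeled candidates, e.g. produced by an evolutionary search (made up here).
candidates = [(4, 2), (12, 6), (30, 20), (8, 4)]

def nn_predict(x, data):
    """1-nearest-neighbour prediction of the interpretability score."""
    return min(data, key=lambda d: sum((a - b) ** 2 for a, b in zip(d[0], x)))[1]

def query_most_uncertain(candidates, labeled, n_models=20, seed=0):
    """Return the candidate whose predicted score varies most across a
    bootstrap ensemble -- the most 'uncertain' one to show to the user."""
    rng = random.Random(seed)

    def variance(x):
        preds = [nn_predict(x, rng.choices(labeled, k=len(labeled)))
                 for _ in range(n_models)]
        return statistics.pvariance(preds)

    return max(candidates, key=variance)

most_uncertain = query_most_uncertain(candidates, labeled)
print(most_uncertain)
```

In an interactive loop, the returned candidate would be rated by the user, appended to the labeled pool, and the estimator retrained — concentrating the user's limited feedback where it reduces uncertainty the most.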

Learning a Formula of Interpretability to Learn Interpretable Formulas

3 code implementations • 23 Apr 2020 • Marco Virgolin, Andrea De Lorenzo, Eric Medvet, Francesca Randone

We show that it is instead possible to take a meta-learning approach: an ML model of non-trivial Proxies of Human Interpretability (PHIs) can be learned from human feedback, and this model can then be incorporated within an ML training process to directly optimize for interpretability.

Meta-Learning • regression +1
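The abstract describes folding a learned PHI model into the training objective so interpretability is optimized directly alongside accuracy. A hedged sketch of that idea is below; `phi_model` is a toy stand-in (a hand-written linear proxy, not the paper's learned estimator), and the weighting scheme is an assumption for illustration.

```python
def phi_model(formula_size, n_operations):
    """Toy stand-in for a learned interpretability proxy (PHI):
    smaller, simpler formulas score as more interpretable (in [0, 1])."""
    return max(0.0, 1.0 - 0.05 * formula_size - 0.03 * n_operations)

def fitness(mse, formula_size, n_operations, weight=0.5):
    """Lower is better: prediction error plus a penalty for the
    predicted lack of interpretability of the candidate formula."""
    return mse + weight * (1.0 - phi_model(formula_size, n_operations))

# A compact formula with slightly higher error can outrank a bloated one.
small = fitness(0.10, formula_size=5, n_operations=2)
large = fitness(0.05, formula_size=40, n_operations=25)
print(small < large)
```

Used as the fitness function of an evolutionary or other ML search, this steers the process toward models the PHI estimator predicts humans will find interpretable, rather than filtering for interpretability after the fact.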
