Interpretability

General • 16 methods

Interpretability methods seek to explain the predictions made by neural networks by introducing mechanisms that induce or enforce interpretability. For example, LIME approximates the neural network with a locally interpretable model. Below you can find a continuously updated list of interpretability methods.
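
To make the LIME idea concrete, here is a minimal sketch for tabular data, assuming a black-box scoring function and Gaussian perturbations around the instance being explained. The function name `lime_explain`, the perturbation scheme, and the exponential proximity kernel are illustrative simplifications; the reference `lime` package handles sampling, kernels, and feature discretization far more carefully.

```python
# A minimal sketch of the LIME idea: perturb the input, query the
# black-box model, and fit a distance-weighted linear surrogate whose
# coefficients act as local feature attributions.
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, x, num_samples=1000, kernel_width=0.75, seed=None):
    """Return per-feature weights of a local linear surrogate around x.

    predict_fn: maps an (n, d) array to an (n,) array of scores/probabilities.
    x: 1-D array of length d, the instance to explain.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Sample perturbed points in a neighborhood of x.
    samples = x + rng.normal(scale=1.0, size=(num_samples, d))
    preds = predict_fn(samples)
    # Weight each sample by its proximity to x (exponential kernel).
    dists = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # Fit an interpretable (linear) model locally; its coefficients
    # approximate the black-box model's behavior near x.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, preds, sample_weight=weights)
    return surrogate.coef_
```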

Method    Year    Papers
          2017    326
          2016    276
          2015    206
          2015    147
          2023    39
          2020    12
          2017    11
          2019    2
          2022    2
          2018    1
          2019    1
          2020    1
                  1
          2021    1
          2022    1
          2023    1