no code implementations • 23 Jun 2023 • Daniel Lundstrom, Meisam Razaviyayn
Deep neural networks have driven significant progress in machine learning in terms of accuracy and functionality, but their inner workings remain largely opaque.
no code implementations • 4 May 2023 • Daniel Lundstrom, Meisam Razaviyayn
We show that, under modest assumptions, a unique and full account of interactions between features, called synergies, is possible in the continuous-input setting.
1 code implementation • 24 Feb 2022 • Daniel Lundstrom, Tianjian Huang, Meisam Razaviyayn
Attribution methods address the issue of explainability by quantifying the importance of an input feature for a model prediction.
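As a concrete illustration of how an attribution method quantifies per-feature importance, below is a minimal sketch of integrated gradients, a common attribution method. The model `f`, its analytic gradient, and the baseline are all hypothetical stand-ins chosen for simplicity; in practice the gradient would come from autodiff on a real network.

```python
import numpy as np

# Hypothetical model output: f(x) = x[0] * x[1] (stand-in for a network).
def f(x):
    return x[0] * x[1]

def grad_f(x):
    # Analytic gradient of f; a real setup would use autodiff instead.
    return np.array([x[1], x[0]])

def integrated_gradients(x, baseline, steps=200):
    # Riemann-sum (midpoint) approximation of
    # IG_i = (x_i - x'_i) * integral over alpha of df/dx_i(x' + alpha*(x - x')).
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline[None, :] + alphas[:, None] * (x - baseline)[None, :]
    grads = np.array([grad_f(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([3.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(x, baseline)
# Completeness: the attributions sum to f(x) - f(baseline).
```

The check at the end reflects the completeness axiom, a standard desideratum for attribution methods: feature attributions should account exactly for the change in the model's output relative to the baseline.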
no code implementations • 15 Jan 2022 • Daniel Lundstrom, Alexander Huyen, Arya Mevada, Kyongsik Yun, Thomas Lu
It provides a set of explainability tools (ET) that open the black box of a DNN so that each neuron's individual contribution to category classification can be ranked and visualized.
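The ET toolkit itself is not shown here; the following is only a generic sketch of the underlying idea of ranking neurons by their contribution to a class score. The toy one-hidden-layer network, its weights, and the contribution measure (activation times output weight) are all hypothetical assumptions for illustration.

```python
import numpy as np

# Hypothetical tiny network: h = relu(W1 @ x), class score = w2 @ h.
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, -1.0]])
w2 = np.array([0.5, 2.0, -1.0])

def neuron_contributions(x):
    h = np.maximum(W1 @ x, 0.0)   # hidden-neuron activations
    return w2 * h                 # each neuron's additive share of the score

x = np.array([1.0, 2.0])
contrib = neuron_contributions(x)
ranking = np.argsort(-contrib)    # neuron indices, highest contribution first
```

Because the score is a linear function of the hidden activations, the per-neuron contributions sum exactly to the class score, which makes the ranking directly interpretable.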