Interpretability and explainability of deep neural networks are challenging due to their scale, complexity, and the contested notions on which the explanation process rests. Previous work, in particular, has focused on representing internal components of neural networks through human-friendly visuals and concepts...
| METHOD | TYPE |
|---|---|
| | Working Memory Models |