Ontology-Driven Semantic Alignment of Artificial Neurons and Visual Concepts

29 Sep 2021 · Riccardo Massidda, Davide Bacciu

Semantic alignment methods attempt to establish a link between human-level concepts and the units of an artificial neural network. Current approaches evaluate the emergence of such meaningful neurons by analyzing the effect of semantically annotated inputs on their activations. In doing so, they often understate two aspects that characterize neural representations and semantic concepts, namely the distributed nature of the former and the existence of semantic relationships binding the latter. In this work, we explicitly tackle this interrelatedness, both at the neural and the conceptual level, through a novel semantic alignment framework that aligns a structured ontology with distributed neural representations. The ontology introduces semantic relations between concepts, enabling the clustering of topologically related units into semantically rich and meaningful neural circuits. Our empirical analysis of notable convolutional models for image classification examines the emergence of such neural circuits. It also validates their meaningfulness by showing that the selected units are pivotal for the accuracy of classes that are semantically related to the aligned concepts. We also contribute by releasing the code implementing our alignment framework.
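The abstract's core idea — score unit-concept alignment from annotated inputs, then use ontology relations to group units into a circuit — can be sketched roughly as follows. This is an illustrative approximation, not the paper's actual implementation: the IoU-based alignment score, the `thresh` and `min_iou` parameters, and the parent-to-children `ontology` dictionary are all assumptions chosen for clarity.

```python
import numpy as np

def alignment_scores(acts, concepts, thresh=0.5):
    """Score each (unit, concept) pair by the IoU between the unit's
    thresholded activations and the binary concept annotations.

    acts:     (n_units, n_samples) float activations
    concepts: (n_concepts, n_samples) 0/1 concept annotations
    Returns a (n_units, n_concepts) score matrix.

    Note: the IoU score and fixed threshold are illustrative choices,
    not necessarily the metric used in the paper.
    """
    binarized = acts > thresh
    # Intersection counts: samples where the unit fires AND the concept holds.
    inter = binarized.astype(int) @ concepts.T.astype(int)
    # Union counts, guarded against division by zero.
    union = binarized.sum(1, keepdims=True) + concepts.sum(1) - inter
    return inter / np.maximum(union, 1)

def neural_circuit(scores, concept, ontology, min_iou=0.04):
    """Collect the units aligned with `concept` or any of its
    descendants in the ontology, forming a candidate neural circuit.

    ontology: dict mapping a concept index to its child concept indices
              (a hypothetical representation of the ontology's relations).
    """
    related, frontier = {concept}, [concept]
    while frontier:
        c = frontier.pop()
        for child in ontology.get(c, []):
            if child not in related:
                related.add(child)
                frontier.append(child)
    cols = sorted(related)
    # A unit joins the circuit if it aligns with any related concept.
    return np.where(scores[:, cols].max(axis=1) >= min_iou)[0]
```

A circuit selected this way could then be validated, as the abstract describes, by ablating its units and measuring the accuracy drop on classes semantically related to the aligned concept.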

