Deep Multimodal Subspace Clustering Networks

17 Apr 2018 · Mahdi Abavisani, Vishal M. Patel

We present convolutional neural network (CNN) based approaches for unsupervised multimodal subspace clustering. The proposed framework consists of three main stages: a multimodal encoder, a self-expressive layer, and a multimodal decoder. The encoder takes multimodal data as input and fuses it into a latent-space representation. The self-expressive layer is responsible for enforcing the self-expressiveness property and for producing an affinity matrix over the data points. The decoder reconstructs the original input data, and the network is trained using the distance between the decoder's reconstruction and the original input. We investigate early, late, and intermediate fusion techniques and propose a corresponding encoder for each of these spatial fusion strategies; the self-expressive layers and multimodal decoders are essentially the same across the spatial fusion-based approaches. In addition to the spatial fusion-based methods, an affinity fusion-based network is proposed in which the self-expressive layers corresponding to different modalities are enforced to be the same. Extensive experiments on three datasets show that the proposed methods significantly outperform state-of-the-art multimodal subspace clustering methods.
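To make the pipeline concrete, below is a minimal PyTorch sketch of an early-fusion variant: modalities are stacked along the channel axis, encoded into a latent representation, passed through a self-expressive layer whose learned coefficient matrix C yields the affinity |C| + |C|^T, and decoded back to the input space. This is an illustrative reconstruction, not the authors' released implementation; the layer sizes, the names EarlyFusionDMSC and dmsc_loss, and the loss weights lam1/lam2 are assumptions.

```python
import torch
import torch.nn as nn

class SelfExpressiveLayer(nn.Module):
    """Learns coefficients C such that Z ~ C Z (self-expressiveness).
    Spectral clustering is later run on the affinity |C| + |C|^T."""
    def __init__(self, num_samples):
        super().__init__()
        # C is a trainable N x N matrix, initialized near zero
        self.C = nn.Parameter(1e-4 * torch.randn(num_samples, num_samples))

    def forward(self, z):
        # zero the diagonal so no point represents itself
        c = self.C - torch.diag(torch.diag(self.C))
        return c @ z, c

class EarlyFusionDMSC(nn.Module):
    """Early-fusion sketch: modalities stacked on the channel axis,
    shared conv encoder, self-expressive layer, conv decoder."""
    def __init__(self, num_modalities, num_samples):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_modalities, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.self_expr = SelfExpressiveLayer(num_samples)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, num_modalities, 3, stride=2, padding=1,
                               output_padding=1))

    def forward(self, x):                     # x: (N, M, H, W)
        z = self.encoder(x)
        n, c, h, w = z.shape
        z_flat = z.reshape(n, -1)
        z_hat, coeffs = self.self_expr(z_flat)
        recon = self.decoder(z_hat.reshape(n, c, h, w))
        return recon, z_flat, z_hat, coeffs

def dmsc_loss(x, recon, z, z_hat, coeffs, lam1=1.0, lam2=1.0):
    rec = ((recon - x) ** 2).sum()            # reconstruction term
    expr = ((z_hat - z) ** 2).sum()           # self-expressiveness term
    reg = (coeffs ** 2).sum()                 # regularizer on C
    return rec + lam1 * expr + lam2 * reg
```

As in deep subspace clustering networks generally, the coefficient matrix C couples all N samples, so training feeds the whole dataset as one batch; after training, spectral clustering on |C| + |C|^T produces the cluster assignments.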


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Multi-view Subspace Clustering | ARL Polarimetric Thermal Face Dataset | DMSC | Accuracy | 0.988 | # 1 |
| Image Clustering | ARL Polarimetric Thermal Face Dataset | DMSC | Accuracy | 0.983 | # 1 |
| Image Clustering | Extended Yale-B | DMSC | Accuracy | 0.992 | # 1 |
| Image Clustering | Extended Yale-B | DMSC | NMI | 0.988 | # 1 |
| Multi-view Subspace Clustering | ORL | DMSC | Accuracy | 0.833 | # 2 |
| Image Clustering | USPS | DMSC | NMI | 0.929 | # 7 |
| Image Clustering | USPS | DMSC | Accuracy | 0.951 | # 10 |
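For reference, the Accuracy figures in such leaderboards are conventionally best-map clustering accuracy: predicted cluster labels are matched one-to-one to ground-truth classes with the Hungarian algorithm before scoring, while NMI is normalized mutual information. A small sketch of this standard evaluation follows; clustering_accuracy is an illustrative helper, not code from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """Best-map accuracy: match predicted clusters to true classes
    with the Hungarian algorithm, then score as plain accuracy."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    # count[i, j] = how often predicted cluster i coincides with class j
    count = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        count[p, t] += 1
    row, col = linear_sum_assignment(-count)  # negate to maximize matches
    return count[row, col].sum() / y_true.size

# NMI comes directly from scikit-learn:
# nmi = normalized_mutual_info_score(y_true, y_pred)
```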

Methods

No methods listed for this paper.