Adaptive Similarity Embedding for Unsupervised Multi-View Feature Selection

IEEE Xplore 2020  ·  Yuan Wan, Shengzi Sun, Cheng Zeng

Multi-view learning has become a significant research topic in image processing, data mining, and machine learning due to the proliferation of multi-view data. Because labeled data are difficult to obtain in many real applications, we focus on the unsupervised multi-view feature selection problem. Most existing multi-view feature selection methods impose a single, shared similarity matrix across all views, which fails to preserve the correlations specific to each individual view. Moreover, many of these methods consider only global or only local structure, but not both. In this paper, we propose an embedding method, Adaptive Similarity Embedding for Unsupervised Multi-View Feature Selection (ASE-UMFS). The method projects the high-dimensional data into a low-dimensional space and unifies the different views through a combined weight matrix. We also constrain the similarity matrix to capture local structure, with a regularization term that imposes a uniform-distribution prior; by further accounting for the independence of the projection matrices across views, the optimization of the similarity matrix is improved. To confirm the effectiveness of ASE-UMFS, we compare it with benchmark algorithms on real-world data sets. The experimental results demonstrate that the proposed algorithm outperforms several state-of-the-art multi-view learning methods.
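The paper provides no reference implementation, but the pipeline the abstract describes — a per-view similarity matrix regularized toward a uniform prior, a weighted combination of views, and feature scoring on the fused graph — can be sketched as follows. This is an illustrative simplification, not the authors' algorithm: the view weights are fixed rather than learned, the uniform-prior regularization is approximated by a Gaussian-kernel softmax with row normalization, and features are ranked by a standard Laplacian-score criterion on the fused graph; all function names are our own.

```python
import numpy as np

def view_similarity(X, gamma=1.0):
    """Adaptive similarity for one view: softmax of negative squared
    distances. Larger gamma pushes rows toward a uniform distribution,
    mimicking the uniform-prior regularization described in the paper."""
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    S = np.exp(-d2 / gamma)
    np.fill_diagonal(S, 0.0)            # no self-similarity
    return S / S.sum(axis=1, keepdims=True)

def laplacian_feature_scores(X, S):
    """Score each feature by smoothness over the similarity graph;
    lower score = feature respects the local structure better."""
    W = (S + S.T) / 2.0                 # symmetrize the fused graph
    D = np.diag(W.sum(axis=1))
    L = D - W                           # unnormalized graph Laplacian
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        f = X[:, j] - X[:, j].mean()
        scores[j] = (f @ L @ f) / (f @ D @ f + 1e-12)
    return scores

def multiview_feature_selection(views, weights=None, k=5):
    """Fuse per-view similarities with (here: fixed) view weights,
    then rank the concatenated features on the fused graph."""
    if weights is None:
        weights = np.full(len(views), 1.0 / len(views))
    S = sum(w * view_similarity(X) for w, X in zip(weights, views))
    X_all = np.hstack(views)
    scores = laplacian_feature_scores(X_all, S)
    return np.argsort(scores)[:k]       # indices of the k best features

# Tiny usage example: two views of 40 samples with an obvious 2-cluster
# structure in the first feature of each view.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 20)
v1 = rng.normal(size=(40, 3)); v1[:, 0] += 5.0 * labels
v2 = rng.normal(size=(40, 2)); v2[:, 0] -= 5.0 * labels
selected = multiview_feature_selection([v1, v2], k=2)
print(selected)
```

In the full ASE-UMFS formulation the view weights and similarity matrices are optimized jointly with the projection matrices; here they are decoupled purely for readability.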



