A Lightweight Randomized Nonlinear Dictionary Learning Method using Random Vector Functional Link

6 Feb 2024 · G. Madhuri, Atul Negi

Kernel-based nonlinear dictionary learning methods operate in a feature space obtained through an implicit feature map, and they typically depend on computationally expensive operations such as Singular Value Decomposition (SVD). This paper presents an SVD-free, lightweight approach to learning a nonlinear dictionary using a randomized functional link network, the Random Vector Functional Link (RVFL). The proposed RVFL-based nonlinear Dictionary Learning (RVFLDL) method learns the dictionary as a sparse-to-dense feature map from nonlinear sparse coefficients to the dense input features. Sparse coefficients with respect to an initial random dictionary, derived under a Horseshoe prior, are used as inputs, which keeps the network lightweight. Training the RVFL-based dictionary requires no SVD computation, because RVFL generates the weights from the input to the output layer analytically. Higher-order dependencies between the input sparse coefficients and the dictionary atoms are incorporated into training by nonlinearly transforming the sparse coefficients and appending them as enhanced features. The method thus projects the sparse coefficients into a higher-dimensional space while inducing nonlinearities in the dictionary. For classification with the RVFL network, a classifier matrix is learned as a transform that maps the nonlinear sparse coefficients to the labels. Empirical evidence from image classification and reconstruction applications shows that RVFLDL is scalable and yields better solutions than other nonlinear dictionary learning methods.
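The central computational idea described in the abstract (analytically solved output weights over enhanced nonlinear features, with no SVD and no backpropagation) can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: the function names rvfl_fit and rvfl_predict, the tanh enhancement nonlinearity, the ridge parameter lam, and the number of enhanced nodes n_enhanced are all illustrative assumptions. Given sparse codes Z and the original signals Y, the learned map from [Z, enhanced(Z)] back to Y plays the role of the dictionary; a classifier matrix would be obtained the same way by substituting one-hot label vectors for Y.

```python
import numpy as np

def rvfl_fit(Z, Y, n_enhanced=256, lam=1e-3, rng=None):
    """Sketch of an RVFL-style sparse-to-dense map from codes Z to signals Y.

    Z : (n_samples, k) sparse coefficients w.r.t. an initial dictionary.
    Y : (n_samples, d) dense input signals (or one-hot labels for a classifier).
    The output weights are computed in closed form (ridge regression),
    so no SVD-based dictionary update or iterative training is needed.
    """
    rng = np.random.default_rng(rng)
    k = Z.shape[1]
    # Random hidden weights and biases are drawn once and kept fixed.
    W_h = rng.standard_normal((k, n_enhanced))
    b_h = rng.standard_normal(n_enhanced)
    # Enhanced features: nonlinear transformation of the sparse codes.
    H = np.tanh(Z @ W_h + b_h)
    # Direct links: concatenate the original codes with the enhanced features.
    D = np.hstack([Z, H])
    # Closed-form regularized least-squares output weights ("dictionary").
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ Y)
    return W_h, b_h, beta

def rvfl_predict(Z, W_h, b_h, beta):
    """Reconstruct signals (or predict label scores) from sparse codes."""
    H = np.tanh(Z @ W_h + b_h)
    return np.hstack([Z, H]) @ beta

# Example usage with synthetic data: 32-dim sparse codes, 64-dim signals.
Z = np.random.default_rng(0).standard_normal((100, 32))
Y = np.random.default_rng(1).standard_normal((100, 64))
params = rvfl_fit(Z, Y)
Y_hat = rvfl_predict(Z, *params)
```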
