no code implementations • 25 Jan 2022 • Jian-wei Liu, Zheng-ping Ren, Run-kun Lu, Xiong-lin Luo
Real world datasets often contain noisy labels, and learning from such datasets using standard classification approaches may not produce the desired performance.
no code implementations • 19 Jan 2022 • Si-si Zhang, Jian-wei Liu, Xin Zuo, Run-kun Lu, Si-ming Lian
With this in mind, the online deep learning model we design should have a variable underlying structure; (3) moreover, it is of utmost importance to fuse these abstract hierarchical latent representations to achieve better classification performance, giving different weights to different levels of implicit representation when handling streaming data whose distribution changes.
no code implementations • 15 Jan 2022 • Run-kun Lu, Jian-wei Liu, Si-ming Lian, Xin Zuo
In this way, the original multi-task multi-view data degenerates into multi-task data, and exploring the correlations among multiple tasks improves the performance of the learning algorithm.
no code implementations • 9 Jan 2022 • Run-kun Lu, Jian-wei Liu, Yuan-Fang Wang, Hao-jie Xie, Xin Zuo
As is well known, the auto-encoder is a deep learning method that learns latent features of raw data by reconstructing the input. Building on this, we propose a novel algorithm called Auto-encoder based Co-training Multi-View Learning (ACMVL), which exploits both complementarity and consistency and finds a joint latent feature representation of multiple views.
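The reconstruction idea behind the auto-encoder can be illustrated with a minimal sketch: a single hidden layer is trained by gradient descent to reproduce its input, and the hidden activations serve as the learned latent features. This is not the ACMVL algorithm itself; the network size, data, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))          # 64 samples, 8 raw features (toy data)

d_in, d_lat = X.shape[1], 3
W1 = rng.normal(scale=0.1, size=(d_in, d_lat))   # encoder weights
W2 = rng.normal(scale=0.1, size=(d_lat, d_in))   # decoder weights
lr = 0.01

def forward(X):
    Z = np.tanh(X @ W1)               # latent representation
    X_hat = Z @ W2                    # reconstruction of the input
    return Z, X_hat

_, X_hat = forward(X)
loss_before = np.mean((X - X_hat) ** 2)

for _ in range(500):
    Z, X_hat = forward(X)
    err = X_hat - X                   # gradient of squared error w.r.t. X_hat
    gW2 = Z.T @ err
    gZ = err @ W2.T * (1 - Z ** 2)    # back-propagate through tanh
    gW1 = X.T @ gZ
    W2 -= lr * gW2 / len(X)
    W1 -= lr * gW1 / len(X)

_, X_hat = forward(X)
loss_after = np.mean((X - X_hat) ** 2)
print(loss_before, loss_after)        # reconstruction error should shrink
```

In the multi-view co-training setting described above, one such encoder per view would map each view into a shared latent space before joint training.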
no code implementations • 8 Jan 2022 • Jian-wei Liu, Yuan-Fang Wang, Run-kun Lu, Xionglin Luo
But not all of this information is useful for classification tasks.
no code implementations • 4 Jan 2022 • Run-kun Lu, Jian-wei Liu, Ze-Yu Liu, Jin-zhong Chen
To this end, a two-stage fusion strategy is proposed to embed representation learning into the process of multi-view subspace clustering.
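A common building block of subspace clustering, which the fusion strategy above builds on, is the self-expression principle: each sample is reconstructed as a combination of the other samples, and the resulting coefficient matrix yields an affinity for clustering. The sketch below uses a ridge-regularized self-expression with a closed-form solution on toy data from two orthogonal 1-D subspaces; it is a generic illustration, not the paper's two-stage method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 10 samples from each of two orthogonal 1-D subspaces in R^4
# (orthogonal directions chosen for clarity).
a = rng.normal(size=(10, 1))
b = rng.normal(size=(10, 1))
u_A = np.array([[1.0, 0.0, 0.0, 0.0]])
u_B = np.array([[0.0, 1.0, 0.0, 0.0]])
X = np.vstack([a @ u_A, b @ u_B])      # 20 samples total

# Ridge-regularized self-expression: C = argmin ||X - C X||^2 + lam ||C||^2,
# solved in closed form as C = G (G + lam I)^{-1} with Gram matrix G = X X^T.
lam = 0.1
G = X @ X.T
C = G @ np.linalg.inv(G + lam * np.eye(len(X)))

# Symmetric affinity matrix, as typically fed to spectral clustering.
affinity = np.abs(C) + np.abs(C).T
within = affinity[:10, :10].mean()     # same-subspace affinities
between = affinity[:10, 10:].mean()    # cross-subspace affinities
print(within, between)                 # within-cluster affinity dominates
```

Samples from the same subspace express each other strongly, so the affinity matrix is (near-)block-diagonal and spectral clustering recovers the groups.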
no code implementations • 1 Jan 2022 • Jian-wei Liu, Hao-jie Xie, Run-kun Lu, Xiong-lin Luo
Multi-view learning can cover the features of data samples more comprehensively, and has therefore attracted widespread attention.
no code implementations • 1 Jan 2022 • Jian-wei Liu, Xi-hao Ding, Run-kun Lu, Xionglin Luo
Multi-view learning attempts to generate a model with a better performance by exploiting the consensus and/or complementarity among multi-view data.
no code implementations • 23 Dec 2021 • Run-kun Lu, Jian-wei Liu, Xin Zuo
In this paper, we propose a novel Attentive Multi-View Deep Subspace Nets (AMVDSN), which deeply explores underlying consistent and view-specific information from multiple views and fuses them by considering each view's dynamic contribution, obtained via an attention mechanism.
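The attention-based fusion of view contributions can be sketched as follows: each view's latent embedding is scored, the scores are normalized with a softmax into per-sample view weights, and the fused representation is the weighted sum. This is a generic attention-fusion sketch under assumed shapes, not the exact AMVDSN architecture; the scoring vector would be learned jointly in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Three views of the same 5 samples, each already embedded into a shared
# 6-dimensional latent space (random placeholders for the embeddings).
views = [rng.normal(size=(5, 6)) for _ in range(3)]
H = np.stack(views, axis=1)            # (samples, views, dim)

w = rng.normal(size=(6,))              # attention scoring vector (learned in practice)
scores = H @ w                         # (samples, views): one score per view, per sample
alpha = softmax(scores)                # dynamic per-sample view weights, summing to 1

fused = (alpha[..., None] * H).sum(axis=1)   # (samples, dim) weighted combination
print(fused.shape, alpha.sum(axis=1))
```

Because the weights are computed per sample, each view's contribution to the fused representation adapts dynamically to the input rather than being fixed globally.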