Search Results for author: Run-kun Lu

Found 9 papers, 0 papers with code

GMM Discriminant Analysis with Noisy Label for Each Class

no code implementations 25 Jan 2022 Jian-wei Liu, Zheng-ping Ren, Run-kun Lu, Xiong-lin Luo

Real-world datasets often contain noisy labels, and learning from such datasets with standard classification approaches may not achieve the desired performance.

Online Deep Learning based on Auto-Encoder

no code implementations 19 Jan 2022 Si-si Zhang, Jian-wei Liu, Xin Zuo, Run-kun Lu, Si-ming Lian

With this in mind, the online deep learning model we design should have a variable underlying structure; moreover, it is essential to fuse these abstract hierarchical latent representations to achieve better classification performance, giving different weights to each level of latent representation when the distribution of the streaming data changes (a sketch of this weighted fusion follows this entry).

Denoising
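
The weighted fusion of per-level representations described in the excerpt can be made concrete with a hedge-style update, in the spirit of hedge backpropagation for online deep learning. This is a minimal sketch, not the paper's actual implementation; the class name, the per-level classifier heads, and the discount factor `beta` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerwiseOnlineNet(nn.Module):
    """Stacked encoder in which every hidden level feeds its own classifier;
    the per-level predictions are fused with adaptive weights."""

    def __init__(self, in_dim, hidden_dims, n_classes, beta=0.99):
        super().__init__()
        self.blocks, self.heads = nn.ModuleList(), nn.ModuleList()
        d = in_dim
        for h in hidden_dims:
            self.blocks.append(nn.Sequential(nn.Linear(d, h), nn.ReLU()))
            self.heads.append(nn.Linear(h, n_classes))
            d = h
        # one fusion weight per level, kept normalised to sum to 1
        self.register_buffer(
            "alpha", torch.full((len(hidden_dims),), 1.0 / len(hidden_dims)))
        self.beta = beta  # hedge discount: smaller -> faster reweighting

    def forward(self, x):
        logits = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            logits.append(head(x))
        fused = sum(a * l for a, l in zip(self.alpha, logits))
        return fused, logits

    @torch.no_grad()
    def reweight(self, logits, y):
        # levels that predict the current batch poorly lose fusion weight
        losses = torch.stack([F.cross_entropy(l, y) for l in logits])
        self.alpha *= self.beta ** losses
        self.alpha /= self.alpha.sum()

# one streaming step: gradient update, then fusion-weight update
model = LayerwiseOnlineNet(in_dim=20, hidden_dims=[64, 32, 16], n_classes=3)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 20), torch.randint(0, 3, (8,))
fused, logits = model(x)
loss = F.cross_entropy(fused, y)
opt.zero_grad(); loss.backward(); opt.step()
model.reweight([l.detach() for l in logits], y)
```

Levels whose classifiers incur high loss on the current batch lose fusion weight, so the effective depth of the network adapts as the stream drifts.
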

Multi-View representation learning in Multi-Task Scene

no code implementations 15 Jan 2022 Run-kun Lu, Jian-wei Liu, Si-ming Lian, Xin Zuo

In this way, the original multi-task multi-view data degenerates into multi-task data, and exploiting the correlations among the multiple tasks improves the performance of the learning algorithm (a schematic sketch follows this entry).

Multi-Task Learning Multi-View Learning +1
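
A schematic reading of this "degeneration" step, with plain averaging standing in for whatever learned fusion the paper actually uses; all names here are hypothetical.

```python
import torch
import torch.nn as nn

class MultiTaskFromViews(nn.Module):
    """Collapse each sample's views into one latent code, then share a
    common trunk across tasks so that task correlations can be exploited."""

    def __init__(self, view_dims, latent_dim, n_tasks):
        super().__init__()
        self.view_encoders = nn.ModuleList(
            nn.Linear(d, latent_dim) for d in view_dims)
        self.shared_trunk = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.ReLU())
        self.task_heads = nn.ModuleList(
            nn.Linear(latent_dim, 1) for _ in range(n_tasks))

    def forward(self, views, task_id):
        # multi-view -> single representation (the "degeneration" step)
        z = torch.stack(
            [enc(v) for enc, v in zip(self.view_encoders, views)]).mean(dim=0)
        # the shared trunk ties the tasks together; heads stay task-specific
        return self.task_heads[task_id](self.shared_trunk(z))

model = MultiTaskFromViews(view_dims=[10, 30], latent_dim=16, n_tasks=2)
out = model([torch.randn(4, 10), torch.randn(4, 30)], task_id=0)  # (4, 1)
```
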

Auto-Encoder based Co-Training Multi-View Representation Learning

no code implementations 9 Jan 2022 Run-kun Lu, Jian-wei Liu, Yuan-Fang Wang, Hao-jie Xie, Xin Zuo

As is well known, an auto-encoder is a deep learning method that learns latent features of raw data by reconstructing the input. Building on this, we propose a novel algorithm called Auto-encoder based Co-training Multi-View Learning (ACMVL), which exploits both complementarity and consistency to find a joint latent feature representation of multiple views (sketched after this entry).

Multi-View Learning Representation Learning
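
A minimal sketch of the two ingredients the abstract names: per-view auto-encoders supply reconstruction (complementarity), and pulling every view's code toward their mean supplies agreement (consistency). The loss weighting and the mean-as-joint-code choice are assumptions, not ACMVL's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAE(nn.Module):
    """One small auto-encoder per view."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.dec = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def co_training_loss(views, autoencoders, align_weight=1.0):
    """Reconstruction preserves view-specific detail (complementarity);
    pulling every code toward their mean enforces agreement (consistency)."""
    codes, recon = [], 0.0
    for x, ae in zip(views, autoencoders):
        z, x_hat = ae(x)
        codes.append(z)
        recon = recon + F.mse_loss(x_hat, x)
    joint = torch.stack(codes).mean(dim=0)  # joint latent representation
    consist = sum(F.mse_loss(z, joint) for z in codes)
    return recon + align_weight * consist, joint

aes = nn.ModuleList([ViewAE(10, 4), ViewAE(20, 4)])
views = [torch.randn(8, 10), torch.randn(8, 20)]
loss, joint = co_training_loss(views, aes)
loss.backward()  # one optimiser step would follow in a training loop
```
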

Partially latent factors based multi-view subspace learning

no code implementations 4 Jan 2022 Run-kun Lu, Jian-wei Liu, Ze-Yu Liu, Jin-zhong Chen

To this end, a two-stage fusion strategy is proposed to embed representation learning into the process of multi-view subspace clustering (a minimal sketch follows this entry).

Clustering Multi-view Subspace Clustering +1
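
One way to picture a two-stage fusion for multi-view subspace clustering, assuming ridge-regularised self-expression in the second stage; truncated SVD stands in for the learned latent factors of the first stage, and none of this is the paper's actual method.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def two_stage_subspace_clustering(views, latent_dim=5, lam=0.1, n_clusters=3):
    """Stage 1: per-view latent factors (truncated SVD here), concatenated.
    Stage 2: ridge self-expression min ||Z - CZ||^2 + lam ||C||^2, whose
    closed form is C = G (G + lam I)^{-1} with G = Z Z^T, followed by
    spectral clustering on the induced affinity."""
    factors = []
    for X in views:                                   # X: (n_samples, d_v)
        Xc = X - X.mean(axis=0)
        U, s, _ = np.linalg.svd(Xc, full_matrices=False)
        factors.append(U[:, :latent_dim] * s[:latent_dim])
    Z = np.hstack(factors)                            # stage-1 fusion
    G = Z @ Z.T
    C = np.linalg.solve(G + lam * np.eye(len(Z)), G)  # stage-2 self-expression
    A = np.abs(C) + np.abs(C.T)                       # symmetric affinity
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)

rng = np.random.default_rng(0)
views = [rng.normal(size=(30, 12)), rng.normal(size=(30, 18))]
labels = two_stage_subspace_clustering(views)         # 30 cluster labels
```
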

Multi-view Subspace Adaptive Learning via Autoencoder and Attention

no code implementations 1 Jan 2022 Jian-wei Liu, Hao-jie Xie, Run-kun Lu, Xiong-lin Luo

Multi-view learning covers the features of data samples more comprehensively than single-view learning, and has therefore attracted widespread attention.

Clustering Multi-View Learning

Self-attention Multi-view Representation Learning with Diversity-promoting Complementarity

no code implementations 1 Jan 2022 Jian-wei Liu, Xi-hao Ding, Run-kun Lu, Xiong-lin Luo

Multi-view learning attempts to build a better-performing model by exploiting the consensus and/or complementarity among multi-view data.

Multi-View Learning Representation Learning

Attentive Multi-View Deep Subspace Clustering Net

no code implementations 23 Dec 2021 Run-kun Lu, Jian-wei Liu, Xin Zuo

In this paper, we propose a novel Attentive Multi-View Deep Subspace Nets (AMVDSN), which deeply explores the underlying consistent and view-specific information of multiple views and fuses them by weighting each view's dynamic contribution, obtained via an attention mechanism (a fusion sketch follows this entry).

Clustering Multi-view Subspace Clustering +1
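
A minimal sketch of attention-weighted view fusion in the spirit of the description above; the shared scorer and the tanh nonlinearity are assumptions rather than AMVDSN's actual design.

```python
import torch
import torch.nn as nn

class AttentiveViewFusion(nn.Module):
    """Encode each view, score it, softmax the scores into per-view
    dynamic contributions, and fuse the views as their weighted sum."""

    def __init__(self, view_dims, latent_dim):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Linear(d, latent_dim) for d in view_dims)
        self.scorer = nn.Linear(latent_dim, 1)  # shared across views

    def forward(self, views):
        # z: (batch, n_views, latent_dim)
        z = torch.stack(
            [enc(v) for enc, v in zip(self.encoders, views)], dim=1)
        w = torch.softmax(self.scorer(torch.tanh(z)), dim=1)  # (batch, n_views, 1)
        return (w * z).sum(dim=1), w.squeeze(-1)

fusion = AttentiveViewFusion(view_dims=[15, 25], latent_dim=8)
fused, weights = fusion([torch.randn(4, 15), torch.randn(4, 25)])
print(weights.sum(dim=1))  # each sample's view weights sum to 1
```
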
