Search Results for author: Jian-wei Liu

Found 24 papers, 0 papers with code

Online Saddle Point Problem and Online Convex-Concave Optimization

no code implementations12 Dec 2023 Qing-xin Meng, Jian-wei Liu

Centered around solving the Online Saddle Point problem, this paper introduces the Online Convex-Concave Optimization (OCCO) framework, which involves a sequence of two-player time-varying convex-concave games.
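
One common way to formalize the online saddle point problem (a sketch of a standard notion; the paper's exact regret measure may differ) is via the accumulated duality gap of the played pairs $(x_t, y_t)$ against the time-varying payoffs $f_t$:

$$ \mathrm{D\text{-}Gap}_T \;=\; \sum_{t=1}^{T}\Bigl(\max_{y\in\mathcal{Y}} f_t(x_t,y)\;-\;\min_{x\in\mathcal{X}} f_t(x,y_t)\Bigr), $$

where each $f_t(x,y)$ is convex in $x$ and concave in $y$.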

Online Continual Learning via the Knowledge Invariant and Spread-out Properties

no code implementations2 Feb 2023 Ya-nan Han, Jian-wei Liu

However, a key challenge in this continual learning paradigm is catastrophic forgetting: adapting a model to new tasks often leads to severe performance degradation on prior tasks.

Continual Learning

Online Continual Learning via the Meta-learning Update with Multi-scale Knowledge Distillation and Data Augmentation

no code implementations12 Sep 2022 Ya-nan Han, Jian-wei Liu

In this paper, we overcome these challenges by proposing a novel framework called Meta-learning update via Multi-scale Knowledge Distillation and Data Augmentation (MMKDDA).

Continual Learning Data Augmentation +2
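
As a point of reference for the distillation component (the classic single-scale formulation; MMKDDA's multi-scale variant applies the idea across several feature resolutions), knowledge distillation matches temperature-softened teacher and student logits:

$$ \mathcal{L}_{\mathrm{KD}} \;=\; \tau^{2}\,\mathrm{KL}\!\left(\operatorname{softmax}(z^{\mathrm{teacher}}/\tau)\,\middle\|\,\operatorname{softmax}(z^{\mathrm{student}}/\tau)\right), $$

with temperature $\tau$ controlling how much of the teacher's soft class-similarity structure is transferred.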

$\beta$-CapsNet: Learning Disentangled Representation for CapsNet by Information Bottleneck

no code implementations12 Sep 2022 Ming-fei Hu, Jian-wei Liu

We present a framework for learning a disentangled representation of CapsNet via an information bottleneck constraint that distills information into a compact form and encourages learning an interpretable, factorized capsule.

Disentanglement Variational Inference
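
For orientation, the information bottleneck objective referenced in the title (the standard IB Lagrangian; the paper's capsule-specific variational bound may differ) trades compression of the representation $Z$ against its predictiveness for the target $Y$:

$$ \max_{p(z\mid x)} \; I(Z;Y) \;-\; \beta\, I(Z;X), $$

where a larger $\beta$ enforces a more compact, and empirically more disentangled, representation.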

Self-supervised Learning for Heterogeneous Graph via Structure Information based on Metapath

no code implementations9 Sep 2022 Shuai Ma, Jian-wei Liu, Xin Zuo

Empirical results validate the performance of the SESIM method and demonstrate that it can improve the representation ability of traditional neural networks on the link prediction and node classification tasks.

Link Prediction Meta-Learning +3

Selecting Related Knowledge via Efficient Channel Attention for Online Continual Learning

no code implementations9 Sep 2022 Ya-nan Han, Jian-wei Liu

Based on this fact, in this paper we propose a new framework, named Selecting Related Knowledge for Online Continual Learning (SRKOCL), which incorporates an additional efficient channel attention mechanism to select the particular knowledge related to each task.

Continual Learning Knowledge Distillation
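
The "efficient channel attention" ingredient is presumably in the spirit of ECA-Net; below is a minimal PyTorch sketch of such a block (a generic ECA-style layer, not SRKOCL's exact module; the kernel size k_size is an illustrative choice):

```python
import torch
import torch.nn as nn

class ECALayer(nn.Module):
    """Efficient channel attention (ECA-style): a 1-D convolution over
    channel descriptors produced by global average pooling."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                      # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                 # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1))          # 1-D conv across channels -> (B, 1, C)
        y = self.sigmoid(y).squeeze(1)         # channel weights in (0, 1) -> (B, C)
        return x * y.unsqueeze(-1).unsqueeze(-1)  # rescale feature maps
```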

Optimistic Online Convex Optimization in Dynamic Environments

no code implementations28 Mar 2022 Qing-xin Meng, Jian-wei Liu

Existing works have shown that Ader enjoys an $O\left(\sqrt{\left(1+P_T\right)T}\right)$ dynamic regret upper bound, where $T$ is the number of rounds, and $P_T$ is the path length of the reference strategy sequence.
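
To fix the terms used here (standard definitions from the dynamic-regret literature), the dynamic regret against a reference sequence $u_1,\dots,u_T$ and its path length are

$$ \mathrm{D\text{-}Reg}_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t), \qquad P_T \;=\; \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert, $$

so the bound above degrades gracefully as the comparator sequence is allowed to move more.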

Bilevel Online Deep Learning in Non-stationary Environment

no code implementations25 Jan 2022 Ya-nan Han, Jian-wei Liu, Bing-biao Xiao, Xin-Tan Wang, Xiong-lin Luo

In this paper, we propose a new Bilevel Online Deep Learning (BODL) framework, which combines a bilevel optimization strategy with an online ensemble classifier.

Bilevel Optimization
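
For readers unfamiliar with the term, a generic bilevel program (the standard template; how BODL instantiates the two levels is specific to the paper) nests an inner optimization inside an outer one:

$$ \min_{\theta}\; F\bigl(\theta, \omega^{*}(\theta)\bigr) \quad \text{s.t.} \quad \omega^{*}(\theta) \in \arg\min_{\omega}\; G(\theta, \omega). $$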

GMM Discriminant Analysis with Noisy Label for Each Class

no code implementations25 Jan 2022 Jian-wei Liu, Zheng-ping Ren, Run-kun Lu, Xiong-lin Luo

Real-world datasets often contain noisy labels, and learning from such datasets using standard classification approaches may not produce the desired performance.
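
A plain GMM discriminant (one mixture per class, classification by maximum class-conditional log-likelihood plus log prior) can be sketched with scikit-learn as below; this omits the paper's per-class noisy-label handling, and the n_components choice is illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_discriminant(X, y, n_components=2):
    """Fit one Gaussian mixture per class and record class priors."""
    models, priors = {}, {}
    for c in np.unique(y):
        Xc = X[y == c]
        models[c] = GaussianMixture(n_components=n_components,
                                    random_state=0).fit(Xc)
        priors[c] = len(Xc) / len(X)
    return models, priors

def gmm_predict(models, priors, X):
    """Assign each sample to the class with the highest
    class-conditional log-likelihood plus log prior."""
    classes = sorted(models)
    scores = np.stack([models[c].score_samples(X) + np.log(priors[c])
                       for c in classes], axis=1)
    return np.array(classes)[scores.argmax(axis=1)]
```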

Multi-Scale Iterative Refinement Network for RGB-D Salient Object Detection

no code implementations24 Jan 2022 Ze-Yu Liu, Jian-wei Liu, Xin Zuo, Ming-fei Hu

RGB-D information has been extensively exploited in salient object detection research.

Object object-detection +2

Online Deep Learning based on Auto-Encoder

no code implementations19 Jan 2022 Si-si Zhang, Jian-wei Liu, Xin Zuo, Run-kun Lu, Si-ming Lian

And so, with this in mind, the online deep learning model we need to design should have a variable underlying structure; (3) moreover, it is of utmost importance to fuse these abstract hierarchical latent representations to achieve better classification performance, and we should assign different weights to different levels of implicit representation when dealing with data streams whose distribution changes.

Denoising
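
The weighting idea in point (3) resembles hedge-style ensembling over per-depth classifiers; below is a minimal NumPy sketch of that generic mechanism (an illustration of multiplicative reweighting, not the paper's auto-encoder architecture; beta is a hypothetical discount rate):

```python
import numpy as np

def hedge_update(weights, losses, beta=0.99):
    """Multiplicative (Hedge-style) reweighting of per-layer classifiers:
    layers that predict poorly on the current example are discounted,
    so the effective depth adapts to the drifting stream."""
    weights = weights * beta ** losses   # discount each layer by its loss
    return weights / weights.sum()       # renormalize to a distribution

def fused_prediction(probs, weights):
    """Combine per-layer class-probability outputs.
    probs: (n_layers, n_classes); weights: (n_layers,)."""
    return weights @ probs               # weighted ensemble over depths
```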

Multi-View representation learning in Multi-Task Scene

no code implementations15 Jan 2022 Run-kun Lu, Jian-wei Liu, Si-ming Lian, Xin Zuo

In this way, the original multi-task multi-view data degenerates into multi-task data, and exploring the correlations among multiple tasks makes it possible to improve the performance of the learning algorithm.

Multi-Task Learning MULTI-VIEW LEARNING +1

Multi-granularity Relabeled Under-sampling Algorithm for Imbalanced Data

no code implementations11 Jan 2022 Qi Dai, Jian-wei Liu, Yang Liu

The Tomek-Link sampling algorithm can effectively reduce class overlap in the data, remove majority-class instances that are difficult to distinguish, and improve classification accuracy.

Classification imbalanced classification
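
For reference, classic Tomek-Link cleaning is available in the imbalanced-learn library; the sketch below shows the baseline algorithm on synthetic data (the paper's multi-granularity relabeled variant goes beyond this):

```python
from collections import Counter
from imblearn.under_sampling import TomekLinks
from sklearn.datasets import make_classification

# Synthetic imbalanced data (roughly 9:1)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           random_state=0)
print(Counter(y))

# Remove majority-class members of Tomek links (nearest-neighbor pairs
# from opposite classes), cleaning the class boundary
X_res, y_res = TomekLinks().fit_resample(X, y)
print(Counter(y_res))
```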

Auto-Encoder based Co-Training Multi-View Representation Learning

no code implementations9 Jan 2022 Run-kun Lu, Jian-wei Liu, Yuan-Fang Wang, Hao-jie Xie, Xin Zuo

As is well known, the auto-encoder is a deep learning method that can learn latent features of raw data by reconstructing its input. Based on this, we propose a novel algorithm called Auto-encoder based Co-training Multi-View Learning (ACMVL), which utilizes both complementarity and consistency to find a joint latent feature representation of multiple views.

MULTI-VIEW LEARNING Representation Learning

Partially latent factors based multi-view subspace learning

no code implementations4 Jan 2022 Run-kun Lu, Jian-wei Liu, Ze-Yu Liu, Jin-zhong Chen

To this end, a two-stage fusion strategy is proposed to embed representation learning into the process of multi-view subspace clustering.

Clustering Multi-view Subspace Clustering +1
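
The subspace-clustering stage such methods build on (the standard self-expressiveness formulation; the paper's two-stage fusion adds representation learning on top) recovers a coefficient matrix $C$ in which each sample is reconstructed from others in its own subspace:

$$ \min_{C}\; \lVert X - XC \rVert_F^{2} \;+\; \lambda\, \lVert C \rVert \quad \text{s.t.} \quad \operatorname{diag}(C)=0, $$

after which spectral clustering is applied to the affinity $|C| + |C^{\top}|$.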

Self-attention Multi-view Representation Learning with Diversity-promoting Complementarity

no code implementations1 Jan 2022 Jian-wei Liu, Xi-hao Ding, Run-kun Lu, Xionglin Luo

Multi-view learning attempts to generate a model with better performance by exploiting the consensus and/or complementarity among multi-view data.

MULTI-VIEW LEARNING Representation Learning

Multi-view Subspace Adaptive Learning via Autoencoder and Attention

no code implementations1 Jan 2022 Jian-wei Liu, Hao-jie Xie, Run-kun Lu, Xiong-lin Luo

Multi-view learning can cover the features of data samples more comprehensively, and so it has attracted widespread attention.

Clustering MULTI-VIEW LEARNING

Universal Transformer Hawkes Process with Adaptive Recursive Iteration

no code implementations29 Dec 2021 Lu-ning Zhang, Jian-wei Liu, Zhi-yan Song, Xin Zuo

Inspired by the Transformer, which can learn from sequence data efficiently without recurrent or convolutional structures, the Transformer Hawkes process has emerged and achieves state-of-the-art performance.
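
As background (standard definitions, not specific to this paper's adaptive recursive iteration), a classical Hawkes process has conditional intensity

$$ \lambda(t) \;=\; \mu \;+\; \sum_{t_i < t} \phi(t - t_i), $$

with base rate $\mu$ and excitation kernel $\phi$; Transformer Hawkes models instead parameterize $\lambda(t)$ through an attention-based embedding of the event history, e.g. $\lambda(t) = f\bigl(\mathbf{w}^{\top}\mathbf{h}(t)\bigr)$ for a learned history representation $\mathbf{h}(t)$ and a positivity-enforcing $f$ such as softplus.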

Temporal Attention Augmented Transformer Hawkes Process

no code implementations29 Dec 2021 Lu-ning Zhang, Jian-wei Liu, Zhi-yan Song, Xin Zuo

With this in mind, we come up with a new kind of Transformer-based Hawkes process model, the Temporal Attention Augmented Transformer Hawkes Process (TAA-THP): we modify the traditional dot-product attention structure and introduce temporal encoding into the attention structure.

speech-recognition Speech Recognition
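
Schematically (a sketch of the general idea; TAA-THP's exact formulation of the temporal term should be taken from the paper), augmenting dot-product attention with temporal encoding adds a time-dependent contribution to the attention scores:

$$ \operatorname{Attention}(Q,K,V) \;=\; \operatorname{softmax}\!\left(\frac{QK^{\top} + B_{\mathrm{temp}}}{\sqrt{d_k}}\right) V, $$

where $B_{\mathrm{temp}}$ is a bias built from encodings of the inter-event times.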

Attentive Multi-View Deep Subspace Clustering Net

no code implementations23 Dec 2021 Run-kun Lu, Jian-wei Liu, Xin Zuo

In this paper, we propose a novel Attentive Multi-View Deep Subspace Nets (AMVDSN), which deeply explores underlying consistent and view-specific information from multiple views and fuses them by considering each view's dynamic contribution, obtained via an attention mechanism.

Clustering Multi-view Subspace Clustering +1

A Unified Analysis Method for Online Optimization in Normed Vector Space

no code implementations22 Dec 2021 Qing-xin Meng, Jian-wei Liu

This paper studies online optimization from a high-level unified theoretical perspective.

ENHANCE THE DYNAMIC REGRET VIA OPTIMISM

no code implementations29 Sep 2021 Qing-xin Meng, Jian-wei Liu

In particular, if $\widehat{x}_t^*\in\partial\widehat{\varphi}_t$, where $\widehat{\varphi}_t$ represents the estimated convex loss function and $\partial\widehat{\varphi}_t$ is Lipschitz continuous, then the dynamic regret upper bound of OMD is of subgradient-variation type.
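
To fix notation (the standard optimistic online mirror descent update, stated here for context), OMD with optimism interleaves a prediction step using a hint $m_{t+1}$ with an update on the observed subgradient $g_t$:

$$ x_{t+1} = \arg\min_{x\in\mathcal{X}}\; \eta\,\langle m_{t+1}, x\rangle + B_{\psi}(x, \widehat{x}_{t+1}), \qquad \widehat{x}_{t+1} = \arg\min_{x\in\mathcal{X}}\; \eta\,\langle g_t, x\rangle + B_{\psi}(x, \widehat{x}_t), $$

where $B_{\psi}$ is the Bregman divergence of the mirror map $\psi$.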
