no code implementations • 12 Dec 2023 • Qing-xin Meng, Jian-wei Liu
Centered around solving the Online Saddle Point problem, this paper introduces the Online Convex-Concave Optimization (OCCO) framework, which involves a sequence of two-player time-varying convex-concave games.
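For intuition, here is a minimal sketch of online gradient descent-ascent on a sequence of time-varying convex-concave payoffs; the payoff family below is an illustrative assumption, not the paper's construction:

```python
import numpy as np

def ogda_round(x, y, grad_x, grad_y, eta=0.1):
    """One round of online gradient descent-ascent:
    the x-player descends, the y-player ascends."""
    return x - eta * grad_x(x, y), y + eta * grad_y(x, y)

# Hypothetical time-varying convex-concave payoffs f_t(x, y) = x^2 - y^2 + a_t*x*y
rng = np.random.default_rng(0)
x, y = 1.0, 1.0
for t in range(100):
    a_t = rng.normal()                   # time-varying coupling term
    gx = lambda x, y: 2 * x + a_t * y    # df/dx (convex in x)
    gy = lambda x, y: -2 * y + a_t * x   # df/dy (concave in y)
    x, y = ogda_round(x, y, gx, gy)
print(x, y)  # iterates drift toward the saddle point (0, 0)
```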
no code implementations • 2 Feb 2023 • Ya-nan Han, Jian-wei Liu
However, a key challenge in this continual learning paradigm is catastrophic forgetting, namely that adapting a model to new tasks often leads to severe performance degradation on prior tasks.
no code implementations • 12 Sep 2022 • Ya-nan Han, Jian-wei Liu
In this paper, we overcome these challenges by proposing a novel framework called Meta-learning update via Multi-scale Knowledge Distillation and Data Augmentation (MMKDDA).
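One standard ingredient here is knowledge distillation; below is a minimal sketch of a temperature-scaled distillation loss (the multi-scale and meta-learning parts of MMKDDA are not reproduced):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Standard KL-based distillation: match the student's softened
    predictive distribution to the teacher's, scaled by T^2."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

s = torch.randn(8, 10)  # student logits: batch of 8, 10 classes
t = torch.randn(8, 10)  # teacher (e.g., previous-task model) logits
loss = distillation_loss(s, t)
```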
no code implementations • 12 Sep 2022 • Ming-fei Hu, Jian-wei Liu
We present a framework for learning disentangled representations in CapsNet via an information bottleneck constraint, which distills information into a compact form and encourages learning interpretable, factorized capsules.
no code implementations • 9 Sep 2022 • Shuai Ma, Jian-wei Liu, Xin Zuo
Empirical results validate the performance of the SESIM method and demonstrate that it can improve the representation ability of traditional neural networks on link prediction and node classification tasks.
no code implementations • 9 Sep 2022 • Ya-nan Han, Jian-wei Liu
Based on this fact, in this paper we propose a new framework, named Selecting Related Knowledge for Online Continual Learning (SRKOCL), which incorporates an additional efficient channel attention mechanism to select the knowledge relevant to each task.
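For reference, here is a minimal sketch of an ECA-style (efficient channel attention) block; the kernel size and tensor layout are illustrative assumptions:

```python
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    """ECA-style attention: global average pooling followed by a
    1-D convolution across channels, producing per-channel gates."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                         # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                    # squeeze: (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # cross-channel conv
        w = self.sigmoid(y)                       # per-channel gate
        return x * w.unsqueeze(-1).unsqueeze(-1)

att = EfficientChannelAttention()
out = att(torch.randn(2, 64, 8, 8))  # same shape, channel-reweighted
```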
no code implementations • 28 Mar 2022 • Qing-xin Meng, Jian-wei Liu
Existing works have shown that Ader enjoys an $O\left(\sqrt{\left(1+P_T\right)T}\right)$ dynamic regret upper bound, where $T$ is the number of rounds, and $P_T$ is the path length of the reference strategy sequence.
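For clarity, the quantities in this bound are the standard ones (a sketch of the usual definitions, not the paper's exact statement):

```latex
% Dynamic regret against a comparator sequence u_1, ..., u_T,
% and the path length P_T that measures how much it moves:
\mathrm{D\text{-}Regret}(u_1,\dots,u_T)
  = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t),
\qquad
P_T = \sum_{t=2}^{T} \left\lVert u_t - u_{t-1} \right\rVert .
```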
no code implementations • 25 Jan 2022 • Ya-nan Han, Jian-wei Liu, Bing-biao Xiao, Xin-Tan Wang, Xiong-lin Luo
In this paper, we propose a new Bilevel Online Deep Learning (BODL) framework, which combines a bilevel optimization strategy with an online ensemble classifier.
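As a loose sketch of the online-ensemble ingredient only (not the BODL update itself), base classifiers can be combined by exponentially weighted voting:

```python
import numpy as np

def hedge_update(weights, losses, eta=0.5):
    """Multiplicative-weights step: down-weight base classifiers
    that incurred high loss on the latest example."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

weights = np.ones(3) / 3               # three hypothetical base classifiers
for losses in np.random.rand(20, 3):   # per-round losses of each base model
    weights = hedge_update(weights, losses)
print(weights)  # mass concentrates on the lower-loss classifiers
```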
no code implementations • 25 Jan 2022 • Jian-wei Liu, Zheng-ping Ren, Run-kun Lu, Xiong-lin Luo
Real world datasets often contain noisy labels, and learning from such datasets using standard classification approaches may not produce the desired performance.
no code implementations • 24 Jan 2022 • Ze-Yu Liu, Jian-wei Liu, Xin Zuo, Ming-fei Hu
RGB-D information has been extensively exploited in salient object detection.
no code implementations • 19 Jan 2022 • Si-si Zhang, Jian-wei Liu, Xin Zuo, Run-kun Lu, Si-ming Lian
With this in mind, the online deep learning model we design should have a variable underlying structure; (3) moreover, it is of utmost importance to fuse these abstract hierarchical latent representations to achieve better classification performance, giving different weights to different levels of implicit representation when dealing with streaming data whose distribution changes.
no code implementations • 15 Jan 2022 • Run-kun Lu, Jian-wei Liu, Si-ming Lian, Xin Zuo
In this way, the original multi-task multi-view data degenerates into multi-task data, and exploring the correlations among multiple tasks improves the performance of the learning algorithm.
no code implementations • 11 Jan 2022 • Qi Dai, Jian-wei Liu, Yang Liu
The Tomek-Link sampling algorithm can effectively reduce class overlap in the data, remove majority-class instances that are difficult to distinguish, and improve classification accuracy.
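Tomek-link cleaning is available off the shelf in the imbalanced-learn library; a minimal usage sketch:

```python
from collections import Counter

from imblearn.under_sampling import TomekLinks
from sklearn.datasets import make_classification

# Imbalanced toy data: ~90% majority class, with some class overlap.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           flip_y=0.05, random_state=0)
tl = TomekLinks()  # removes majority-class members of Tomek-link pairs
X_res, y_res = tl.fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))
```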
no code implementations • 9 Jan 2022 • Run-kun Lu, Jian-wei Liu, Yuan-Fang Wang, Hao-jie Xie, Xin Zuo
As is well known, the auto-encoder is a deep learning method that learns latent features of raw data by reconstructing the input. Building on this, we propose a novel algorithm called Auto-encoder based Co-training Multi-View Learning (ACMVL), which utilizes both complementarity and consistency and finds a joint latent feature representation of multiple views.
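A minimal sketch of the auto-encoder ingredient (reconstruction-driven latent features; ACMVL's co-training and weight sharing across views are not shown):

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Learn a latent feature of the raw input by reconstructing it."""
    def __init__(self, d_in=100, d_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_latent), nn.ReLU())
        self.decoder = nn.Linear(d_latent, d_in)

    def forward(self, x):
        z = self.encoder(x)      # latent feature of one view
        return self.decoder(z), z

model = AutoEncoder()
x = torch.randn(32, 100)                 # one view of a mini-batch
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)  # reconstruction objective
```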
no code implementations • 8 Jan 2022 • Jian-wei Liu, Yuan-Fang Wang, Run-kun Lu, Xionglin Luo
But not all of this information is useful for classification tasks.
no code implementations • 5 Jan 2022 • Si-si Zhang, Jian-wei Liu, Xin Zuo
The last issue, which is often ignored, is the learning of the latent representation.
no code implementations • 4 Jan 2022 • Run-kun Lu, Jian-wei Liu, Ze-Yu Liu, Jin-zhong Chen
To this end, a two-stage fusion strategy is proposed to embed representation learning into the process of multi-view subspace clustering.
no code implementations • 1 Jan 2022 • Jian-wei Liu, Xi-hao Ding, Run-kun Lu, Xionglin Luo
Multi-view learning attempts to generate a model with a better performance by exploiting the consensus and/or complementarity among multi-view data.
no code implementations • 1 Jan 2022 • Jian-wei Liu, Hao-jie Xie, Run-kun Lu, Xiong-lin Luo
Multi-view learning covers the features of data samples more comprehensively, and has therefore attracted widespread attention.
no code implementations • 29 Dec 2021 • Lu-ning Zhang, Jian-wei Liu, Zhi-yan Song, Xin Zuo
Inspired by the Transformer, which can learn from sequence data efficiently without recurrent or convolutional structures, the Transformer Hawkes process was proposed and achieves state-of-the-art performance.
no code implementations • 29 Dec 2021 • Lu-ning Zhang, Jian-wei Liu, Zhi-yan Song, Xin Zuo
With this in mind, we propose a new Transformer-based Hawkes process model, the Temporal Attention Augmented Transformer Hawkes Process (TAA-THP), which modifies the traditional dot-product attention structure and introduces temporal encoding into the attention structure.
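As a rough illustration only (not TAA-THP's exact formulation), a temporal term can enter the attention scores additively alongside the usual dot product:

```python
import math
import torch

def temporal_augmented_attention(q, k, v, temporal_bias):
    """Scaled dot-product attention with an additive temporal term
    mixed into the attention scores (illustrative only)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # content term
    scores = scores + temporal_bias                  # temporal term
    return torch.softmax(scores, dim=-1) @ v

L, d = 5, 16                  # 5 events, 16-dim features
q = k = v = torch.randn(L, d)
bias = torch.randn(L, L)      # e.g., derived from event timestamps
out = temporal_augmented_attention(q, k, v, bias)
```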
no code implementations • 23 Dec 2021 • Run-kun Lu, Jian-wei Liu, Xin Zuo
In this paper, we propose a novel Attentive Multi-View Deep Subspace Nets (AMVDSN), which deeply explores underlying consistent and view-specific information from multiple views and fuses them by considering each view's dynamic contribution, obtained via an attention mechanism.
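A minimal sketch of attention-weighted fusion over view-specific representations (the scoring network and dimensions are illustrative assumptions):

```python
import torch
import torch.nn as nn

class AttentiveViewFusion(nn.Module):
    """Score each view's representation, softmax the scores, and
    return the attention-weighted sum as the fused representation."""
    def __init__(self, d=32):
        super().__init__()
        self.score = nn.Linear(d, 1)

    def forward(self, views):                            # views: (B, V, d)
        alpha = torch.softmax(self.score(views), dim=1)  # (B, V, 1)
        return (alpha * views).sum(dim=1), alpha.squeeze(-1)

fusion = AttentiveViewFusion()
views = torch.randn(4, 3, 32)   # 4 samples, 3 views, 32-dim each
fused, weights = fusion(views)  # fused: (4, 32); weights: per-view contribution
```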
no code implementations • 22 Dec 2021 • Qing-xin Meng, Jian-wei Liu
This paper studies online optimization from a high-level unified theoretical perspective.
no code implementations • 29 Sep 2021 • Qing-xin Meng, Jian-wei Liu
In particular, if $\widehat{x}_t^*\in\partial\widehat{\varphi}_t$, where $\widehat{\varphi}_t$ represents the estimated convex loss function and $\partial\widehat{\varphi}_t$ is Lipschitz continuous, then the dynamic regret upper bound of OMD has a subgradient variation type.
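For context, the OMD update referenced here has the standard textbook form, with $\psi$ the mirror map, $D_\psi$ its Bregman divergence, and $\eta$ a step size:

```latex
% Online mirror descent step from x_t given a (sub)gradient estimate g_t:
x_{t+1} = \operatorname*{arg\,min}_{x \in \mathcal{X}}
          \; \eta \,\langle g_t, x \rangle + D_{\psi}(x, x_t),
\qquad
D_{\psi}(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y),\, x - y \rangle .
```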