1 code implementation • 14 May 2024 • Wanqi Zhou, Shuanghao Bai, Shujian Yu, Qibin Zhao, Badong Chen
With the advancement of neural networks, diverse methods for neural Granger causality have emerged, demonstrating proficiency in handling complex data and nonlinear relationships.
no code implementations • 30 Apr 2024 • Wanqi Zhou, Shuanghao Bai, Qibin Zhao, Badong Chen
Pretrained vision-language models (VLMs) like CLIP have shown impressive generalization performance across various downstream tasks, yet they remain vulnerable to adversarial attacks.
1 code implementation • 13 Apr 2024 • Binghua Li, Jie Mao, Zhe Sun, Chao Li, Qibin Zhao, Toshihisa Tanaka
Specifically, we introduce a concise multi-scale module to merge attentive features from quadruplet attention layers and produce attribution maps.
no code implementations • 24 Mar 2024 • Guang Lin, Zerui Tao, Jianhai Zhang, Toshihisa Tanaka, Qibin Zhao
We propose a novel robust reverse process with adversarial guidance, which is independent of given pre-trained DMs and avoids retraining or fine-tuning the DMs.
no code implementations • 15 Mar 2024 • Wuyang Zhou, Yu-Bang Zheng, Qibin Zhao, Danilo Mandic
A novel tensor decomposition framework, termed Tensor Star (TS) decomposition, is proposed, which represents a new type of tensor network decomposition based on tensor contractions.
no code implementations • 4 Feb 2024 • Junhua Zeng, Guoxu Zhou, Chao Li, Zhun Sun, Qibin Zhao
Tensor network structure search (TN-SS), aiming at searching for suitable tensor network (TN) structures in representing high-dimensional problems, largely promotes the efficacy of TN in various machine learning applications.
no code implementations • 29 Jan 2024 • Guang Lin, Chao Li, Jianhai Zhang, Toshihisa Tanaka, Qibin Zhao
Deep neural networks are known to be vulnerable to well-designed adversarial attacks.
1 code implementation • 15 Jan 2024 • Zerui Tao, Toshihisa Tanaka, Qibin Zhao
Finally, to address the computational issue of GPs, we enhance the model by incorporating sparse orthogonal variational inference of inducing points, which offers a more effective covariance approximation within GPs and stochastic natural gradient updates for nonparametric models.
no code implementations • 11 Jan 2024 • Xuyang Zhao, Qibin Zhao, Toshihisa Tanaka
Based on those powerful LLMs, a model fine-tuned with domain-specific datasets possesses more specialized knowledge and is thus more practical, as in the case of medical LLMs.
no code implementations • 6 Sep 2023 • Zhiqi Shao, Dai Shi, Andi Han, Yi Guo, Qibin Zhao, Junbin Gao
To explore more flexible filtering conditions, we further generalize MHKG into a model termed G-MHKG and thoroughly show the roles of each element in controlling over-smoothing, over-squashing and expressive power.
no code implementations • 3 Jul 2023 • Qi Jiang, Guoxu Zhou, Qibin Zhao
Concept Factorization (CF), as a novel paradigm of representation learning, has demonstrated superior performance in multi-view clustering tasks.
no code implementations • 25 May 2023 • Dai Shi, Zhiqi Shao, Yi Guo, Qibin Zhao, Junbin Gao
We conduct a convergence analysis on pL-UFG, addressing the gap in the understanding of its asymptotic behaviors.
no code implementations • 24 May 2023 • Yu-Bang Zheng, Xi-Le Zhao, Junhua Zeng, Chao Li, Qibin Zhao, Heng-Chao Li, Ting-Zhu Huang
To address this issue, we propose a novel TN paradigm, named SVD-inspired TN decomposition (SVDinsTN), which allows us to efficiently solve the TN-SS problem from a regularized modeling perspective, eliminating the repeated structure evaluations.
1 code implementation • 25 Apr 2023 • Chao Li, Junhua Zeng, Chunmei Li, Cesar Caiafa, Qibin Zhao
Tensor network (TN) is a powerful framework in machine learning, but selecting a good TN model, known as TN structure search (TN-SS), is a challenging and computationally intensive task.
1 code implementation • NeurIPS 2023 • Andong Wang, Chao Li, Mingyuan Bai, Zhong Jin, Guoxu Zhou, Qibin Zhao
Our analysis indicates that the transformed low-rank parameterization can promisingly enhance robust generalization for t-NNs.
no code implementations • 27 Nov 2022 • Yichun Qiu, Weijun Sun, Guoxu Zhou, Qibin Zhao
Efficient and accurate low-rank approximation (LRA) methods are of great significance for large-scale data analysis.
no code implementations • 7 Oct 2022 • Peilin Yang, Weijun Sun, Qibin Zhao, Guoxu Zhou
The prevalent fully-connected tensor network (FCTN) has achieved excellent success in compressing data.
1 code implementation • 14 Jun 2022 • Chao Li, Junhua Zeng, Zerui Tao, Qibin Zhao
Recent works put much effort into tensor network structure search (TN-SS), aiming to select suitable tensor network (TN) structures, involving the TN-ranks, formats, and so on, for the decomposition or learning tasks.
1 code implementation • 2 Jun 2022 • Reinmar J Kobler, Jun-Ichiro Hirayama, Qibin Zhao, Motoaki Kawanabe
To achieve this, we propose a new building block for geometric deep learning, which we denote SPD domain-specific momentum batch normalization (SPDDSMBN).
no code implementations • 14 Mar 2022 • Yuning Qiu, Guoxu Zhou, Qibin Zhao, Shengli Xie
Experimental results on both synthetic and real-world data demonstrate the effectiveness and efficiency of the proposed model in recovering noisy incomplete tensor data compared with state-of-the-art tensor completion models.
no code implementations • 3 Jan 2022 • Yuyuan Yu, Guoxu Zhou, Haonan Huang, Shengli Xie, Qibin Zhao
However, existing strategies cannot take advantage of semi-supervised information and only distinguish the importance of views from a data-feature perspective, which is often influenced by low-quality views and thus leads to poor performance.
1 code implementation • 19 Oct 2021 • Tenghui Li, Guoxu Zhou, Yuning Qiu, Qibin Zhao
We attempt to understand convolutional neural networks by exploring the relationship between (deep) convolutional neural networks and Volterra convolutions.
no code implementations • 29 Sep 2021 • Jianfu Zhang, Yan Hong, Dawei Cheng, Liqing Zhang, Qibin Zhao
In this paper, we propose a tensor-based framework for GNNs that learns robust graphs from adversarial graphs by aggregating predefined robust graphs, enhancing the robustness of GNNs via tensor approximation.
no code implementations • 29 Sep 2021 • Jianfu Zhang, Yan Hong, Liqing Zhang, Qibin Zhao
Graph Neural Networks (GNNs) are fragile to adversarial attacks.
no code implementations • 6 Sep 2021 • Xinhai Zhao, Yuyuan Yu, Guoxu Zhou, Qibin Zhao, Weijun Sun
For high-dimensional data representation, nonnegative tensor ring (NTR) decomposition equipped with manifold learning has become a promising model for exploiting the multi-dimensional structure and extracting features from tensor data.
1 code implementation • ACL 2021 • Jiajia Tang, Kang Li, Xuanyu Jin, Andrzej Cichocki, Qibin Zhao, Wanzeng Kong
In this work, the coupled-translation fusion network (CTFN) is first proposed to model bi-directional interplay via coupled learning, ensuring robustness with respect to missing modalities.
no code implementations • NeurIPS 2021 • Chao Li, Junhua Zeng, Zerui Tao, Qibin Zhao
Recent works have devoted much effort to the structure search issue for tensor network (TN) representation, the aim of which is to select the optimal network for TN contraction to fit a tensor.
1 code implementation • 2 Mar 2021 • Hejia Qiu, Chao Li, Ying Weng, Zhun Sun, Xingyu He, Qibin Zhao
The tensor-power (TP) recurrent model is a family of non-linear dynamical systems whose recurrence relation consists of a p-fold (a.k.a. degree-p) tensor product.
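A degree-p tensor-product recurrence step can be sketched in NumPy as follows. This is a hedged illustration of the general idea only; the weight shape and the single-step form are assumptions, not the paper's exact parameterization:

```python
import numpy as np

def tp_step(W, h, p):
    """One step of a degree-p tensor-power recurrence.

    W has p + 1 axes of size d (one output axis, p state axes);
    the new state contracts W with p copies of the current state h.
    """
    out = W
    for _ in range(p):
        # contract the trailing state axis of `out` with h
        out = np.tensordot(out, h, axes=([-1], [0]))
    return out

# Illustrative degree-2 (quadratic) step with state dimension 3.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3, 3))
h = rng.standard_normal(3)
h_next = tp_step(W, h, p=2)
```

With p = 1 this reduces to an ordinary linear recurrence, which is why the family is viewed as a non-linear generalization.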
1 code implementation • 28 Nov 2020 • Cesar F. Caiafa, Ziyao Wang, Jordi Solé-Casals, Qibin Zhao
A new supervised learning method is developed to train a general classifier, such as a logistic regression or a deep neural network, using only a subset of features per sample, while assuming sparse representations of data vectors on an unknown dictionary.
1 code implementation • 24 Oct 2020 • Wei He, Quanming Yao, Chao Li, Naoto Yokoya, Qibin Zhao, Hongyan Zhang, Liangpei Zhang
Non-local low-rank tensor approximation has been developed as a state-of-the-art method for hyperspectral image (HSI) restoration, which includes the tasks of denoising, compressed HSI reconstruction and inpainting.
no code implementations • 12 Oct 2020 • Yuyuan Yu, Guoxu Zhou, Ning Zheng, Shengli Xie, Qibin Zhao
Tensor ring (TR) decomposition is a powerful tool for exploiting the low-rank nature of multiway data and has demonstrated great potential in a variety of important applications.
no code implementations • 29 Jan 2020 • Zihao Huang, Chao Li, Feng Duan, Qibin Zhao
It is a challenging task to restore images from their variants with combined distortions.
no code implementations • 6 Jan 2020 • Wei He, Yong Chen, Naoto Yokoya, Chao Li, Qibin Zhao
In this paper, we propose a new model, named coupled tensor ring factorization (CTRF), for HSR.
no code implementations • NeurIPS 2019 • Ming Hou, Jiajia Tang, Jianhai Zhang, Wanzeng Kong, Qibin Zhao
Tensor-based multimodal fusion techniques have exhibited great predictive performance.
no code implementations • 25 Sep 2019 • Cesar F. Caiafa, Ziyao Wang, Jordi Solé-Casals, Qibin Zhao
This paper addresses the problem of training a classifier on incomplete data and its application to a complete or incomplete test dataset.
no code implementations • 25 Sep 2019 • Tatsuya Yokota, Hidekata Hontani, Qibin Zhao, Andrzej Cichocki
The proposed approach divides the convolution into "delay-embedding" and "transformation (i.e., encoder-decoder)", and proposes a simple but essential image/tensor modeling method that is closely related to dynamical systems and self-similarity.
1 code implementation • 8 Aug 2019 • Tatsuya Yokota, Hidekata Hontani, Qibin Zhao, Andrzej Cichocki
The proposed approach divides the convolution into "delay-embedding" and "transformation (i.e., encoder-decoder)", and proposes a simple but essential image/tensor modeling method that is closely related to dynamical systems and self-similarity.
no code implementations • ACL 2019 • Paul Pu Liang, Zhun Liu, Yao-Hung Hubert Tsai, Qibin Zhao, Ruslan Salakhutdinov, Louis-Philippe Morency
Our method is based on the observation that high-dimensional multimodal time series data often exhibit correlations across time and modalities which leads to low-rank tensor representations.
no code implementations • ICLR 2019 • Xinqi Chen, Ming Hou, Guoxu Zhou, Qibin Zhao
Recent deep multi-task learning (MTL) has witnessed success in alleviating the data scarcity of some tasks by utilizing domain-specific knowledge from related tasks.
no code implementations • 21 Mar 2019 • Jinshi Yu, Chao Li, Qibin Zhao, Guoxu Zhou
Tensor ring (TR) decomposition has been successfully used to obtain the state-of-the-art performance in the visual data completion problem.
no code implementations • 14 Mar 2019 • Giuseppe G. Calvi, Ahmad Moniri, Mahmoud Mahfouz, Qibin Zhao, Danilo P. Mandic
This is achieved through a tensor valued approach, based on the proposed Tucker Tensor Layer (TTL), as an alternative to the dense weight-matrices of DNNs.
no code implementations • 7 Jan 2019 • Longhao Yuan, Chao Li, Jianting Cao, Qibin Zhao
Dimensionality reduction is an essential technique for multi-way large-scale data, i.e., tensors.
2 code implementations • CVPR 2019 • Wei He, Quanming Yao, Chao Li, Naoto Yokoya, Qibin Zhao
This is done by first learning a low-dimensional projection and the related reduced image from the noisy HSI.
Ranked #10 on Hyperspectral Image Denoising on ICVL-HSI-Gaussian50
no code implementations • 30 Nov 2018 • Tomasz M. Rutkowski, Qibin Zhao, Masao S. Abe, Mihoko Otake
Dementia and especially Alzheimer's disease (AD) are the most common causes of cognitive decline in elderly people.
no code implementations • 31 Oct 2018 • Chao Li, Zhun Sun, Jinshi Yu, Ming Hou, Qibin Zhao
We demonstrate this by compressing the convolutional layers via randomly-shuffled tensor decomposition (RsTD) for a standard classification task using CIFAR-10.
no code implementations • 7 Sep 2018 • Longhao Yuan, Chao Li, Danilo Mandic, Jianting Cao, Qibin Zhao
In this paper, by exploiting the low-rank structure of the TR latent space, we propose a novel tensor completion method which is robust to model selection.
no code implementations • 13 Jun 2018 • Jordi Solé-Casals, Cesar F. Caiafa, Qibin Zhao, Andrzej Cichocki
For the random missing channels case, we show that tensor completion algorithms help to reconstruct missing channels, significantly improving the accuracy of motor imagery classification, although not to the same level as with clean data.
no code implementations • 22 May 2018 • Longhao Yuan, Chao Li, Danilo Mandic, Jianting Cao, Qibin Zhao
In low-rank tensor completion tasks, traditional methods suffer from high computational cost and high sensitivity to model complexity, owing to the multiple underlying large-scale singular value decomposition (SVD) operations and the rank selection problem.
no code implementations • 22 May 2018 • Chao Li, Mohammad Emtiyaz Khan, Zhun Sun, Gang Niu, Bo Han, Shengli Xie, Qibin Zhao
Exact recovery of tensor decomposition (TD) methods is a desirable property in both unsupervised learning and scientific data analysis.
1 code implementation • 5 Apr 2018 • Longhao Yuan, Qibin Zhao, Lihua Gui, Jianting Cao
We propose two TT-based algorithms: Tensor Train Weighted Optimization (TT-WOPT) and Tensor Train Stochastic Gradient Descent (TT-SGD) to optimize TT decomposition factors.
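The objective these TT-based completion algorithms optimize can be sketched in NumPy: reconstruct a full tensor from tensor-train cores, then measure the squared error on observed entries only. The core shapes and the toy example are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def tt_reconstruct(cores):
    """Full tensor from tensor-train cores of shape (r_k, n_k, r_{k+1}),
    with boundary ranks r_1 = r_{d+1} = 1."""
    G = cores[0]
    for core in cores[1:]:
        # contract the trailing bond of G with the leading bond of the next core
        G = np.tensordot(G, core, axes=([-1], [0]))
    return G.reshape(tuple(c.shape[1] for c in cores))

def weighted_loss(cores, Y, W):
    """Weighted-optimization objective: squared error on observed entries,
    where W is a 0/1 mask marking which entries of Y were observed."""
    return 0.5 * np.sum((W * (Y - tt_reconstruct(cores))) ** 2)

# Toy 3x4x5 tensor with TT-ranks (1, 2, 3, 1).
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(1, 3, 2), (2, 4, 3), (3, 5, 1)]]
T = tt_reconstruct(cores)
```

TT-WOPT would minimize this loss over the cores with a deterministic gradient method, while TT-SGD samples observed entries stochastically.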
no code implementations • 21 Nov 2017 • Ming Hou, Brahim Chaib-Draa, Chao Li, Qibin Zhao
However, given limited P data, the conventional PU models tend to suffer from overfitting when adapted to very flexible deep neural networks.
no code implementations • 7 Nov 2017 • Longhao Yuan, Qibin Zhao, Jianting Cao
In this paper, we aim at the problem of tensor data completion.
1 code implementation • 30 Oct 2017 • Xingwei Cao, Xuyang Zhao, Qibin Zhao
Generative Adversarial Network (GAN) and its variants exhibit state-of-the-art performance in the class of generative models.
1 code implementation • 8 Sep 2017 • Longhao Yuan, Qibin Zhao, Jianting Cao
In this paper, we aim at the completion problem of high order tensor data with missing entries.
1 code implementation • 17 Jun 2016 • Qibin Zhao, Guoxu Zhou, Shengli Xie, Liqing Zhang, Andrzej Cichocki
In this paper, we introduce a fundamental tensor decomposition model that represents a large-dimensional tensor by circular multilinear products over a sequence of low-dimensional cores. This can be graphically interpreted as a cyclic interconnection of 3rd-order tensors, and is thus termed tensor ring (TR) decomposition.
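The circular product over 3rd-order cores can be made concrete with a minimal NumPy sketch: each tensor element is the trace of a product of core slices, which is what closes the ring. The core shapes and the tiny example are illustrative, not taken from the paper's code:

```python
import numpy as np

def tr_reconstruct(cores):
    """Reconstruct a full tensor from tensor-ring (TR) cores.

    Each core G_k has shape (r_k, n_k, r_{k+1}), with the last bond
    dimension wrapping around to equal the first. Element-wise,
    T[i_1, ..., i_d] = trace(G_1[:, i_1, :] @ ... @ G_d[:, i_d, :]).
    """
    shape = tuple(c.shape[1] for c in cores)
    T = np.empty(shape)
    for idx in np.ndindex(*shape):
        M = np.eye(cores[0].shape[0])
        for k, i in enumerate(idx):
            M = M @ cores[k][:, i, :]
        T[idx] = np.trace(M)
    return T

# Toy ring with all bond ranks equal to 2 over a 3x4x5 tensor.
rng = np.random.default_rng(0)
cores = [rng.standard_normal((2, n, 2)) for n in (3, 4, 5)]
T = tr_reconstruct(cores)
```

The element-wise loop is only for clarity; practical TR code contracts whole cores at once rather than iterating over indices.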
no code implementations • 29 Aug 2015 • Guoxu Zhou, Qibin Zhao, Yu Zhang, Tülay Adalı, Shengli Xie, Andrzej Cichocki
With the increasing availability of various sensor technologies, we now have access to large amounts of multi-block (also called multi-set, multi-relational, or multi-view) data that need to be jointly analyzed to explore their latent connections.
no code implementations • 25 May 2015 • Tatsuya Yokota, Qibin Zhao, Andrzej Cichocki
The proposed method admits significant advantages, owing to the integration of smooth PARAFAC decomposition for incomplete tensors and the efficient selection of models in order to minimize the tensor rank.
1 code implementation • 10 May 2015 • Qibin Zhao, Liqing Zhang, Andrzej Cichocki
Tucker decomposition is the cornerstone of modern machine learning on tensorial data analysis, and has attracted considerable attention for multiway feature extraction, compressive sensing, and tensor completion.
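As background for the Tucker model, a compact higher-order SVD (HOSVD) sketch in NumPy shows how factor matrices and a core are obtained from mode unfoldings. This illustrates the classical HOSVD baseline, not this paper's Bayesian treatment:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: bring axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: left singular vectors of each unfolding give the
    factor matrices; projecting T onto them gives the core."""
    factors = []
    for k, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, k), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for k, U in enumerate(factors):
        # mode-k multilinear product with U.T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, k, 0), axes=1), 0, k)
    return core, factors

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 6))
core, factors = hosvd(T, (4, 5, 6))  # full ranks, so T is recovered exactly
```

Reapplying the factor matrices to the core (mode-k products with each U) reconstructs the tensor; with truncated ranks the same pipeline gives a low-multilinear-rank approximation.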
no code implementations • 9 Oct 2014 • Qibin Zhao, Guoxu Zhou, Liqing Zhang, Andrzej Cichocki, Shun-ichi Amari
We propose a generative model for robust tensor factorization in the presence of both missing data and outliers.
no code implementations • 17 Apr 2014 • Guoxu Zhou, Andrzej Cichocki, Qibin Zhao, Shengli Xie
Nonnegative Tucker decomposition (NTD) is a powerful tool for the extraction of nonnegative parts-based and physically meaningful latent components from high-dimensional tensor data while preserving the natural multilinear structure of data.
1 code implementation • 25 Jan 2014 • Qibin Zhao, Liqing Zhang, Andrzej Cichocki
CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors.
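The CP completion setup can be sketched in NumPy: a rank-R tensor is a sum of outer products of factor columns, and the fit is measured only on observed entries. This shows the formulation such methods build on, with illustrative shapes, not the paper's Bayesian inference scheme:

```python
import numpy as np

def cp_reconstruct(factors):
    """Full tensor from CP factor matrices A_k of shape (n_k, R):
    the sum over r of the outer product of the r-th columns."""
    R = factors[0].shape[1]
    T = np.zeros(tuple(A.shape[0] for A in factors))
    for r in range(R):
        comp = factors[0][:, r]
        for A in factors[1:]:
            comp = np.multiply.outer(comp, A[:, r])
        T += comp
    return T

def completion_loss(factors, Y, mask):
    """Squared error restricted to observed entries (mask == 1), the
    quantity a CP completion method drives down."""
    return np.sum(mask * (Y - cp_reconstruct(factors)) ** 2)

# Rank-2 factors for a toy 3x4x5 tensor.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((n, 2)) for n in (3, 4, 5))
T = cp_reconstruct([A, B, C])
```

The masked loss is what distinguishes completion from ordinary factorization: unobserved entries contribute nothing, and the multilinear structure fills them in.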
1 code implementation • 5 Jul 2012 • Qibin Zhao, Cesar F. Caiafa, Danilo P. Mandic, Zenas C. Chao, Yasuo Nagasaka, Naotaka Fujii, Liqing Zhang, Andrzej Cichocki
A new generalized multilinear regression model, termed Higher-Order Partial Least Squares (HOPLS), is introduced with the aim of predicting a tensor (multiway array) $\mathcal{Y}$ from a tensor $\mathcal{X}$ by projecting the data onto the latent space and performing regression on the corresponding latent variables.