1 code implementation • 15 Apr 2024 • Dengyu Wu, Yi Qi, Kaiwen Cai, Gaojie Jin, Xinping Yi, Xiaowei Huang
Notably, with STR and cutoff, the SNN achieves 2.14x to 2.89x faster inference compared to the pre-configured timestep, with a near-zero accuracy drop of 0.50% to 0.64% on the event-based datasets.
no code implementations • 7 Feb 2024 • Yafei Wang, Xinping Yi, Hongwei Hou, Wenjin Wang, Shi Jin
With the signal model in the presence of channel aging, we formulate the signal-to-noise-plus-interference ratio (SINR) balancing and minimum mean square error (MMSE) problems for robust SLP design.
no code implementations • 17 Jan 2024 • Jingwei Guo, Kaizhu Huang, Xinping Yi, Zixian Su, Rui Zhang
Whilst spectral Graph Neural Networks (GNNs) are theoretically well-founded in the spectral domain, their practical reliance on polynomial approximation implies a profound linkage to the spatial domain.
1 code implementation • 14 Dec 2023 • Jingwei Guo, Kaizhu Huang, Xinping Yi, Rui Zhang
Spectral Graph Neural Networks (GNNs) have achieved tremendous success in graph machine learning, with polynomial filters applied for graph convolutions, where all nodes share the identical filter weights to mine their local contexts.
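The shared polynomial filtering described above can be sketched in a few lines. This is a minimal illustration only: the function name, the plain monomial basis in powers of the normalised Laplacian, and the toy graph are assumptions for exposition, not the paper's construction. The key point it shows is that a single coefficient vector `theta` is applied identically at every node.

```python
import numpy as np

def poly_graph_filter(A, x, theta):
    """Apply a shared polynomial filter y = sum_k theta[k] * L_hat^k x,
    where L_hat is the symmetrically normalised graph Laplacian.
    Every node uses the same coefficients theta (shared filter weights)."""
    n = A.shape[0]
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_hat = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    y = np.zeros_like(x)
    p = x.copy()          # L_hat^0 @ x
    for t in theta:
        y += t * p
        p = L_hat @ p     # next power of L_hat applied to x
    return y

# Tiny 3-node path graph, one scalar signal per node
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
x = np.array([1.0, 0.0, 0.0])
y = poly_graph_filter(A, x, theta=[0.5, 0.3, 0.2])
```

Because the polynomial acts on powers of the Laplacian, each extra coefficient widens the receptive field by one hop, which is why polynomial spectral filters have a direct spatial-domain reading.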
no code implementations • 10 Dec 2023 • Hongwei Hou, Xuan He, Tianhao Fang, Xinping Yi, Wenjin Wang, Shi Jin
This paper investigates the uplink channel estimation of the millimeter-wave (mmWave) extremely large-scale multiple-input-multiple-output (XL-MIMO) communication system in the beam-delay domain, taking into account the near-field and beam-squint effects due to the transmission bandwidth and array aperture growth.
no code implementations • 16 Oct 2023 • Yafei Wang, Hongwei Hou, Wenjin Wang, Xinping Yi, Shi Jin
It is observed that the received SLP signals do not always follow Gaussian distribution, rendering the conventional soft demodulation with the Gaussian assumption unsuitable for the coded SLP systems.
no code implementations • 11 Oct 2023 • Yafei Wang, Hongwei Hou, Wenjin Wang, Xinping Yi
This paper investigates symbol-level precoding (SLP) for high-order quadrature amplitude modulation (QAM) aimed at minimizing the average symbol error rate (SER), leveraging both constructive interference (CI) and noise power to gain superiority in full signal-to-noise ratio (SNR) ranges.
no code implementations • 3 Oct 2023 • Jiaxu Liu, Xinping Yi, Xiaowei Huang
Hyperbolic graph convolutional networks (HGCN) have demonstrated significant potential in extracting information from hierarchical graphs.
no code implementations • 23 Sep 2023 • Peiwen Jiang, Chao-Kai Wen, Xinping Yi, Xiao Li, Shi Jin, Jun Zhang
Foundation models (FMs), including large language models, have become increasingly popular due to their wide-ranging applicability and ability to understand human-like semantics.
no code implementations • 9 Sep 2023 • Jiaxu Liu, Xinping Yi, Tianle Zhang, Xiaowei Huang
In traditional Graph Neural Networks (GNNs), the assumption of a fixed embedding manifold often limits their adaptability to diverse graph geometries.
1 code implementation • CVPR 2023 • Gaojie Jin, Xinping Yi, Dengyu Wu, Ronghui Mu, Xiaowei Huang
The randomized weights enable our design of a novel adversarial training method via Taylor expansion of a small Gaussian noise, and we show that the new adversarial training method can flatten loss landscape and find flat minima.
1 code implementation • 23 Jan 2023 • Dengyu Wu, Gaojie Jin, Han Yu, Xinping Yi, Xiaowei Huang
The Top-K cutoff technique optimises the inference of SNNs, and a regularisation term is proposed to shape the training so as to construct SNNs whose performance is optimised for cutoff.
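A confidence-gap cutoff of this kind can be sketched as follows. The specific stopping rule here (stop once the top-1 accumulated spike count leads the K-th ranked count by a fixed margin), the function name, and the margin value are illustrative assumptions; the paper's exact criterion may differ.

```python
import numpy as np

def topk_cutoff_inference(spike_counts_per_step, k=2, gap=3):
    """Accumulate output-layer spike counts timestep by timestep and stop
    early once the top-1 class leads the k-th ranked class by `gap` spikes.
    Returns (predicted_class, timesteps_used)."""
    acc = np.zeros(spike_counts_per_step.shape[1])
    for t, spikes in enumerate(spike_counts_per_step, start=1):
        acc += spikes
        top = np.sort(acc)[::-1]
        if top[0] - top[k - 1] >= gap:   # confident enough: cut off
            break
    return int(np.argmax(acc)), t

# Toy run: 10 timesteps, 4 output neurons; class 2 is biased to dominate,
# so the gap criterion should trigger well before all timesteps are used
rng = np.random.default_rng(0)
spikes = rng.integers(0, 2, size=(10, 4)).astype(float)
spikes[:, 2] += 1.0
pred, used = topk_cutoff_inference(spikes, k=2, gap=3)
```

The saving comes from `used` typically being much smaller than the pre-configured number of timesteps on easy inputs, while hard inputs simply run to the end.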
1 code implementation • 27 May 2022 • Jingwei Guo, Kaizhu Huang, Rui Zhang, Xinping Yi
While Graph Neural Networks (GNNs) have achieved enormous success in multiple graph analytical tasks, modern variants mostly rely on the strong inductive bias of homophily.
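The homophily bias mentioned above is commonly quantified by the edge homophily ratio: the fraction of edges whose endpoints share a label. A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose two endpoints share a label -- a common way
    to quantify the homophily assumption many GNNs rely on."""
    edges = np.asarray(edges)
    same = labels[edges[:, 0]] == labels[edges[:, 1]]
    return float(np.mean(same))

labels = np.array([0, 0, 1, 1])
edges = [(0, 1), (1, 2), (2, 3)]   # two of three edges link same-label nodes
h = edge_homophily(edges, labels)  # 2/3
```

Graphs with a low ratio (heterophilous graphs) are exactly where neighbourhood-averaging GNNs tend to degrade, which motivates designs that do not assume homophily.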
1 code implementation • CVPR 2022 • Gaojie Jin, Xinping Yi, Wei Huang, Sven Schewe, Xiaowei Huang
In this paper, we show that treating model weights as random variables allows for enhancing adversarial training through \textbf{S}econd-Order \textbf{S}tatistics \textbf{O}ptimization (S$^2$O) with respect to the weights.
no code implementations • 23 Jan 2022 • Gaojie Jin, Xinping Yi, Pengfei Yang, Lijun Zhang, Sven Schewe, Xiaowei Huang
While dropout is known to be a successful regularization technique, insights into the mechanisms that lead to this success are still lacking.
no code implementations • 22 Jan 2022 • Gaojie Jin, Xinping Yi, Xiaowei Huang
This paper proposes to study neural networks through neuronal correlation, a statistical measure of correlated neuronal activity on the penultimate layer.
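One simple way to measure correlated activity on the penultimate layer is the mean absolute pairwise Pearson correlation of its activations over a batch. The averaging choice below is an assumption about the statistic for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

def neuronal_correlation(activations):
    """activations: (batch, neurons) penultimate-layer outputs.
    Returns the mean absolute pairwise Pearson correlation between neurons."""
    C = np.corrcoef(activations, rowvar=False)   # (neurons, neurons)
    n = C.shape[0]
    off_diag = C[~np.eye(n, dtype=bool)]
    return float(np.mean(np.abs(off_diag)))

# Toy batch: 100 samples, 5 neurons; neuron 1 copies neuron 0 exactly,
# which pushes the average correlation above that of independent neurons
rng = np.random.default_rng(1)
acts = rng.normal(size=(100, 5))
acts[:, 1] = acts[:, 0]
score = neuronal_correlation(acts)
```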
no code implementations • 29 Sep 2021 • Zhuang Qian, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Bin Gu, Huan Xiong, Xinping Yi
This is possibly because conventional adversarial training methods generate adversarial perturbations in a supervised way, so that the adversarial samples are highly biased towards the decision boundary, resulting in an inhomogeneous data distribution.
no code implementations • 24 Aug 2021 • Wenjie Ruan, Xinping Yi, Xiaowei Huang
This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning, presenting a well-structured review of up-to-date techniques to assess the vulnerability of various types of deep learning models to adversarial examples.
1 code implementation • 8 Jul 2021 • Zhuang Qian, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Rui Zhang, Xinping Yi
The proposed adversarial training with latent distribution (ATLD) method defends against adversarial attacks by crafting LMAEs with the latent manifold in an unsupervised manner.
1 code implementation • 24 Apr 2021 • Jingwei Guo, Kaizhu Huang, Xinping Yi, Rui Zhang
We introduce a novel Local and Global Disentangled Graph Convolutional Network (LGD-GCN) to capture both local and global information for graph disentanglement.
1 code implementation • 1 Mar 2021 • Dengyu Wu, Xinping Yi, Xiaowei Huang
In this paper, we argue that this trend of "energy for accuracy" is not necessary -- a little energy can go a long way towards achieving near-zero accuracy loss.
no code implementations • 29 Jan 2021 • Ya-Chun Liang, Chung-Shou Liao, Xinping Yi
This is a sharp reduction compared with general graph re-coloring, whose optimal number of updates scales with the size of the network, thanks to a delicate exploitation of the structural properties of chordal graphs.
Information Theory
no code implementations • NeurIPS 2020 • Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang
This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.
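One concrete way to compute a weight-correlation statistic for a fully connected layer is the average absolute cosine similarity between the weight vectors of distinct neurons. The function below is a sketch under that assumption; the paper's precise definition may differ in normalisation or aggregation.

```python
import numpy as np

def weight_correlation(W):
    """W: (out_neurons, in_features) weight matrix of one layer.
    Returns the average absolute cosine similarity between the weight
    vectors of distinct neurons -- one measure of weight correlation."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    U = W / np.maximum(norms, 1e-12)        # unit-normalised rows
    S = U @ U.T                             # pairwise cosine similarities
    n = S.shape[0]
    return float(np.mean(np.abs(S[~np.eye(n, dtype=bool)])))

# Three neurons in a 2-D input space: two orthogonal, one diagonal
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
corr = weight_correlation(W)
```

Intuitively, highly aligned weight vectors mean neurons compute redundant features, which is the kind of statistic one can then relate to generalisation bounds.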
1 code implementation • 12 Oct 2020 • Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang
This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.
no code implementations • 12 Jun 2020 • Xinping Yi
Although a "wrapping around" operation can transform a linear convolution into a circular one, whereby the singular values can be approximated with reduced computational complexity by those of a block matrix with doubly circulant blocks, the accuracy of such an approximation is not guaranteed.
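The circulant shortcut can be illustrated in one dimension: for *circular* convolution, the operator matrix is circulant, so its singular values are exactly the magnitudes of the kernel's DFT and cost O(n log n) to obtain instead of a full SVD. This is a 1-D sketch under that standard fact (the paper's concern is how well this approximates the *linear*, zero-padded case):

```python
import numpy as np

def circulant_singular_values(kernel, n):
    """Singular values of the n x n circulant matrix implementing circular
    convolution with `kernel`: the magnitudes of its length-n DFT."""
    h = np.zeros(n)
    h[:len(kernel)] = kernel
    return np.sort(np.abs(np.fft.fft(h)))[::-1]

def circulant_matrix(kernel, n):
    """Explicit circulant matrix (column i is the kernel rolled by i),
    used only to cross-check the FFT route against a direct SVD."""
    h = np.zeros(n)
    h[:len(kernel)] = kernel
    return np.stack([np.roll(h, i) for i in range(n)], axis=1)

k = np.array([1.0, 2.0, 3.0])
sv_fft = circulant_singular_values(k, 8)
sv_svd = np.linalg.svd(circulant_matrix(k, 8), compute_uv=False)
# For the circular case the two agree exactly (up to floating point)
```

For linear convolution the matrix is Toeplitz rather than circulant, and the DFT magnitudes are only an approximation to its singular values, which is precisely the gap whose accuracy is at issue.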
no code implementations • 27 Oct 2019 • Chi Wu, Xinping Yi, Wenjin Wang, Li You, Qing Huang, Xiqi Gao
In this paper, we consider the user positioning problem in the massive multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) system with a uniform planar array (UPA).
no code implementations • 18 Dec 2018 • Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
In the past few years, significant progress has been made on deep neural networks (DNNs) in achieving human-level performance on several long-standing tasks.