no code implementations • 21 Apr 2024 • Mingshan Xie, Yuchen Wang, Haiping Huang
Distinct from human cognitive processing, deep neural networks trained by backpropagation can be easily fooled by adversarial examples.
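The fooling phenomenon can be illustrated in a few lines. The sketch below is a generic FGSM-style perturbation of a toy linear classifier (the classifier, the perturbation budget `eps`, and all parameters are illustrative assumptions, not this paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 100
w = rng.normal(size=d) / np.sqrt(d)   # stand-in for a "trained" linear classifier
x = rng.normal(size=d)
y = np.sign(w @ x)                    # the model's clean prediction

# FGSM-style step: a small sign-of-gradient perturbation pushed
# against the predicted class flips the output with high probability
eps = 0.5
x_adv = x - eps * y * np.sign(w)
y_adv = np.sign(w @ x_adv)

flipped = bool(y_adv != y)            # the perturbation is small per pixel, yet decisive
```

The per-coordinate change is only `eps`, but its effect accumulates across all `d` input dimensions, which is why high-dimensional models are so easy to fool.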
1 code implementation • 26 Mar 2024 • Shuyu Chang, Rui Wang, Peng Ren, Haiping Huang
Crafting effective topic models for brief texts, like tweets and news headlines, is essential for capturing the swift shifts in social dynamics.
no code implementations • 18 Jan 2024 • Junbin Qiu, Haiping Huang
Here, we treat the search for the steady state as an optimization problem: we construct an approximate potential closely related to the speed of the dynamics, and find that searching for the ground state of this potential is equivalent to running a stochastic gradient dynamics.
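A rough illustration of this idea, with toy choices throughout (the dynamics `f`, the couplings `J`, and every parameter below are assumptions for the sketch, not the paper's construction): define a potential from the squared speed of the dynamics, then run noisy gradient descent on it to land on a steady state.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
J = rng.normal(0.0, 0.4 / np.sqrt(N), size=(N, N))  # weak random couplings
b = rng.normal(size=N)                               # constant external input

def f(x):
    # toy recurrent dynamics: dx/dt = -x + tanh(J x + b)
    return -x + np.tanh(J @ x + b)

def potential(x):
    # speed-based approximate potential: vanishes exactly at steady states f(x) = 0
    v = f(x)
    return 0.5 * v @ v

def grad_potential(x, eps=1e-5):
    # central finite differences keep the sketch short (a closed form exists)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (potential(x + e) - potential(x - e)) / (2 * eps)
    return g

x = rng.normal(size=N)
lr = 0.1
for t in range(2000):
    sigma = 1e-3 if t < 1800 else 0.0  # stochastic gradient steps; anneal noise at the end
    x = x - lr * grad_potential(x) + sigma * rng.normal(size=N)

final_E = potential(x)  # near zero when x is (close to) a steady state
```

Minimizing the speed rather than integrating the dynamics turns fixed-point search into an ordinary optimization, which is the equivalence the snippet above alludes to.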
1 code implementation • 23 Oct 2023 • Zhanghan Lin, Haiping Huang
Our work thus derives a mode-based learning rule for spiking neural networks.
1 code implementation • 8 Sep 2023 • Chan Li, Junbin Qiu, Haiping Huang
Therefore, our model provides a starting point to investigate the connection among brain computation, next-token prediction and general intelligence.
no code implementations • 4 Sep 2023 • Rui Wang, Xing Liu, Yanan Wang, Haiping Huang
The recently released artificial intelligence conversational agent, ChatGPT, has gained significant attention in academia and real life.
no code implementations • 20 Jun 2023 • Haiping Huang
A good theory of mathematical beauty is more practical than any current observation, as new predictions of physical reality can be verified self-consistently.
no code implementations • 15 May 2023 • Wenxuan Zou, Haiping Huang
Dynamical mean-field theory is a powerful physics tool used to analyze the typical behavior of neural networks, where neurons can be recurrently connected, or multiple layers of neurons can be stacked.
1 code implementation • 6 Dec 2022 • Chan Li, Zhenye Huang, Wenxuan Zou, Haiping Huang
A variational Bayesian learning setting is thus proposed, in which the neural networks are trained in a field space rather than the discrete-weight space, where gradients are ill-defined; moreover, weight uncertainty is naturally incorporated and modulates synaptic resources among tasks.
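A minimal sketch of the field-space idea, under assumed toy choices (a teacher-student binary perceptron, a tanh parameterization of the mean weight, and a logistic surrogate loss; none of these are claimed to be the paper's construction). Gradients are taken with respect to continuous fields `theta`, which is well-defined even though the decoded weights are discrete:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 31, 62
X = rng.choice([-1.0, 1.0], size=(P, N))   # random binary input patterns
w_true = rng.choice([-1.0, 1.0], size=N)   # teacher's binary weights
y = np.sign(X @ w_true)                    # labels realizable by a binary perceptron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = np.zeros(N)  # one continuous field per synapse; w = +/-1 with mean tanh(theta)
lr = 0.05
for epoch in range(500):
    m = np.tanh(theta)                       # mean weight under the field distribution
    margin = y * (X @ m) / np.sqrt(N)
    s = sigmoid(-margin)                     # logistic-loss weighting per pattern
    grad_m = -((s * y) @ X) / np.sqrt(N)     # gradient w.r.t. the mean weights
    theta -= lr * grad_m * (1.0 - m ** 2)    # chain rule through tanh

w_hat = np.sign(np.tanh(theta))              # decode discrete weights from the fields
acc = float(np.mean(np.sign(X @ w_hat) == y))  # training accuracy of decoded weights
```

The field `theta` also carries uncertainty information: `|tanh(theta)|` near 1 marks a confident synapse, near 0 an uncertain one, which is the kind of resource signal the snippet refers to.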
no code implementations • 24 Aug 2022 • Zijian Jiang, Ziming Chen, Tianqi Hou, Haiping Huang
Neural networks with recurrent asymmetric couplings are important to understand how episodic memories are encoded in the brain.
no code implementations • 21 Aug 2022 • Chan Li, Haiping Huang
Large-scale deep neural networks incur expensive training costs, yet the training yields weight matrices that are difficult to interpret.
no code implementations • 26 Nov 2021 • Yang Zhao, Junbin Qiu, Mingshan Xie, Haiping Huang
The binary perceptron is a fundamental model of supervised learning with non-convex optimization, which lies at the root of popular deep learning.
no code implementations • 21 May 2021 • Rui Wang, Deyu Zhou, Yuxuan Xiong, Haiping Huang
Based on the variational auto-encoder, the proposed VaGTM models each topic with a multivariate Gaussian in the decoder to incorporate word relatedness.
no code implementations • 7 Feb 2021 • Wenxuan Zou, Chan Li, Haiping Huang
Recurrent neural networks are widely used for modeling spatio-temporal sequences in both natural language processing and neural population dynamics.
1 code implementation • 16 Jul 2020 • Wenxuan Zou, Haiping Huang
Here, we propose a statistical mechanics framework by directly building a least structured model of the high-dimensional weight space, considering realistic structured data, stochastic gradient descent training, and the computational depth of neural networks.
no code implementations • 4 Jul 2020 • Zijian Jiang, Jianwen Zhou, Haiping Huang
Here, we establish a fundamental relationship between geometry of hidden representations (manifold perspective) and the generalization capability of the deep networks.
no code implementations • 20 Jun 2020 • Jianwen Zhou, Haiping Huang
Here we propose a simplified model of dimension reduction, taking into account pairwise correlations among synapses, to reveal the mechanism by which synaptic correlations affect dimension reduction.
no code implementations • 15 Jun 2020 • Yongshuang Liu, Haiping Huang, Fu Xiao, Reza Malekian, Wenming Wang
With the rapid development of machine learning techniques applied to electroencephalography (EEG) signals, the Brain-Computer Interface (BCI) has emerged as a novel and convenient human-computer interaction technology for smart homes, intelligent medical care, and other Internet of Things (IoT) scenarios.
no code implementations • IEEE Transactions on Network and Service Management 2020 • Haiping Huang, Dongjun Zhang, Kai Wang, Jiateng Wu, Ruchuan Wang
This proposal comprises five algorithms, including clustering-based random disturbance, graph reconstruction after disturbing the degree sequence, and noise-node generation.
1 code implementation • 10 Jan 2020 • Chan Li, Haiping Huang
Therefore, our model learns the credit assignment leading to the decision, and predicts an ensemble of sub-networks that can accomplish the same task, thereby providing insights toward understanding the macroscopic behavior of deep learning through the lens of distinct roles of synaptic weights.
no code implementations • 11 Nov 2019 • Haiping Huang
Here, we propose a variational mean-field theory in which the distribution of synaptic weights is considered.
no code implementations • 6 Nov 2019 • Tianqi Hou, Haiping Huang
Here, we propose a statistical physics model of unsupervised learning with prior knowledge, revealing that the sensory inputs drive a series of continuous phase transitions related to spontaneous intrinsic-symmetry breaking.
no code implementations • 30 Apr 2019 • Tianqi Hou, K. Y. Michael Wong, Haiping Huang
Remarkably, we find that the embedded correlation between two receptive fields of hidden units reduces the critical data size.
no code implementations • 4 Oct 2017 • Haiping Huang
Deep neural networks are widely used in various domains.
no code implementations • 2 May 2017 • Haiping Huang, Alireza Goudarzi
Deep learning has become a powerful and popular tool for a variety of machine learning tasks.
no code implementations • 23 Mar 2017 • Haiping Huang
Synapses in real neural circuits can take discrete values, including zero (silent or potential) synapses.
no code implementations • 27 Jan 2017 • Haiping Huang, Taro Toyoizumi
Therefore, it is highly desirable to design an efficient algorithm to escape from these saddle points and reach a parameter region of better generalization capabilities.
no code implementations • 6 Dec 2016 • Haiping Huang
Our analysis confirms an entropy crisis preceding the non-convergence of the message passing equation, suggesting a discontinuous phase transition as a key characteristic of the restricted Boltzmann machine.
no code implementations • 12 Aug 2016 • Haiping Huang, Taro Toyoizumi
This study deepens our understanding of unsupervised learning from a finite number of data samples, and may provide insights into its role in training deep networks.
no code implementations • 1 Feb 2015 • Haiping Huang, Taro Toyoizumi
Learning in a restricted Boltzmann machine is typically hard due to the computation of gradients of the log-likelihood function.
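For context on why the gradient is hard: the exact log-likelihood gradient involves an intractable average over the model distribution. The standard sampling-based workaround, contrastive divergence (CD-1), replaces that average with a one-step reconstruction. The sketch below is a generic CD-1 toy (tiny machine, made-up data, bias terms omitted), not the mean-field method developed in this paper:

```python
import numpy as np

rng = np.random.default_rng(2)
nv, nh, lr = 6, 3, 0.1
W = 0.1 * rng.normal(size=(nv, nh))  # visible-hidden couplings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

# toy binary data: two complementary prototypes, repeated
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

for epoch in range(300):
    v0 = data
    ph0 = sigmoid(v0 @ W)             # P(h=1 | v) on the data ("positive phase")
    h0 = sample(ph0)
    v1 = sample(sigmoid(h0 @ W.T))    # one-step reconstruction ("negative phase")
    ph1 = sigmoid(v1 @ W)
    # CD-1 update: data correlations minus one-step model correlations,
    # sidestepping the intractable average over the model distribution
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)

recon = sigmoid(sample(sigmoid(data @ W)) @ W.T)
err = float(np.mean((recon - data) ** 2))  # small once the prototypes are captured
```

CD-1 is a biased estimator of the true gradient, which is precisely the kind of limitation that motivates alternative, analytic treatments of the learning problem.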
no code implementations • 8 Aug 2014 • Haiping Huang, Yoshiyuki Kabashima
Supervised learning in a binary perceptron is able to classify an extensive number of random patterns by a proper assignment of binary synaptic weights.
no code implementations • 10 Apr 2013 • Haiping Huang, K. Y. Michael Wong, Yoshiyuki Kabashima
The geometrical organization is elucidated by the entropy landscape computed from a reference configuration, and by that of solution pairs separated by a given Hamming distance in the solution space.