Search Results for author: Zhaofei Yu

Found 40 papers, 16 papers with code

SpikingResformer: Bridging ResNet and Vision Transformer in Spiking Neural Networks

2 code implementations • 21 Mar 2024 • Xinyu Shi, Zecheng Hao, Zhaofei Yu

Based on DSSA, we propose a novel spiking Vision Transformer architecture called SpikingResformer, which combines the ResNet-based multi-stage architecture with our proposed DSSA to improve both performance and energy efficiency while reducing parameters.

SpikeReveal: Unlocking Temporal Sequences from Real Blurry Inputs with Spike Streams

1 code implementation • 14 Mar 2024 • Kang Chen, Shiyan Chen, Jiyuan Zhang, Baoyue Zhang, Yajing Zheng, Tiejun Huang, Zhaofei Yu

Our approach begins with the formulation of a spike-guided deblurring model that explores the theoretical relationships among spike streams, blurry images, and their corresponding sharp sequences.

Deblurring • Knowledge Distillation • +1

LM-HT SNN: Enhancing the Performance of SNN to ANN Counterpart through Learnable Multi-hierarchical Threshold Model

no code implementations • 1 Feb 2024 • Zecheng Hao, Xinyu Shi, Zhiyu Pan, Yujia Liu, Zhaofei Yu, Tiejun Huang

Compared to traditional Artificial Neural Networks (ANNs), Spiking Neural Networks (SNNs) have garnered widespread academic interest for their intrinsic ability to transmit information in a more biologically inspired and energy-efficient manner.

Deep Learning for Visual Neuroprosthesis

no code implementations • 8 Jan 2024 • Peter Beech, Shanshan Jia, Zhaofei Yu, Jian K. Liu

The visual pathway involves complex networks of cells and regions which contribute to the encoding and processing of visual information.

Deep Pulse-Coupled Neural Networks

no code implementations • 24 Dec 2023 • Zexiang Yi, Jing Lian, Yunliang Qi, Zhaofei Yu, Huajin Tang, Yide Ma, Jizhao Liu

In this work, we leverage a more biologically plausible neural model with complex dynamics, i.e., a pulse-coupled neural network (PCNN), to improve the expressiveness and recognition performance of SNNs for vision tasks.

INeAT: Iterative Neural Adaptive Tomography

no code implementations • 3 Nov 2023 • Bo Xiong, Changqing Su, Zihan Lin, You Zhou, Zhaofei Yu

Here, we propose a neural rendering method for CT reconstruction, named Iterative Neural Adaptive Tomography (INeAT), which incorporates iterative posture optimization to effectively counteract the influence of posture perturbations in data, particularly in cases involving significant posture variations.

Computed Tomography (CT) • Neural Rendering

SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence

1 code implementation • 25 Oct 2023 • Wei Fang, Yanqi Chen, Jianhao Ding, Zhaofei Yu, Timothée Masquelier, Ding Chen, Liwei Huang, Huihui Zhou, Guoqi Li, Yonghong Tian

Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties.

Code Generation

Unveiling the Potential of Spike Streams for Foreground Occlusion Removal from Densely Continuous Views

no code implementations • 3 Jul 2023 • Jiyuan Zhang, Shiyan Chen, Yajing Zheng, Zhaofei Yu, Tiejun Huang

To process the spikes, we build a novel model, SpkOccNet, in which we integrate information of spikes from continuous viewpoints within multi-windows, and propose a novel cross-view mutual attention mechanism for effective fusion and refinement.

Spike timing reshapes robustness against attacks in spiking neural networks

no code implementations • 9 Jun 2023 • Jianhao Ding, Zhaofei Yu, Tiejun Huang, Jian K. Liu

The success of deep learning in the past decade is partially shrouded in the shadow of adversarial attacks.

One Forward is Enough for Neural Network Training via Likelihood Ratio Method

no code implementations • 15 May 2023 • Jinyang Jiang, Zeliang Zhang, Chenliang Xu, Zhaofei Yu, Yijie Peng

While backpropagation (BP) is the mainstream approach for gradient computation in neural network training, its heavy reliance on the chain rule of differentiation constrains the designing flexibility of network architecture and training pipelines.
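The likelihood ratio idea behind such forward-only training can be illustrated with a generic score-function gradient estimator under Gaussian noise injection. This is a textbook form of the estimator, not the paper's exact formulation; all parameter values below are illustrative assumptions.

```python
import random

def lr_gradient_estimate(loss_fn, theta, sigma=0.1, n_samples=20000, seed=0):
    """Forward-only gradient estimate via the likelihood ratio
    (score-function) trick with Gaussian noise injection:
        grad ~= E[ loss(theta + sigma * eps) * eps / sigma ],  eps ~ N(0, 1).
    Only forward evaluations of loss_fn are needed -- no chain rule,
    so the architecture need not be differentiable end to end."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        eps = rng.gauss(0.0, 1.0)
        total += loss_fn(theta + sigma * eps) * eps / sigma
    return total / n_samples

# Toy check: for loss(x) = x^2 at theta = 1, the true gradient is 2.
estimate = lr_gradient_estimate(lambda x: x * x, 1.0)
```

The estimator is unbiased but high-variance, which is why practical forward-only methods combine it with variance-reduction techniques.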

Parallel Spiking Neurons with High Efficiency and Ability to Learn Long-term Dependencies

1 code implementation • NeurIPS 2023 • Wei Fang, Zhaofei Yu, Zhaokun Zhou, Ding Chen, Yanqi Chen, Zhengyu Ma, Timothée Masquelier, Yonghong Tian

Vanilla spiking neurons in Spiking Neural Networks (SNNs) use charge-fire-reset neuronal dynamics, which can only be simulated serially and can hardly learn long-term dependencies.
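The serial charge-fire-reset dynamics described above can be sketched as a minimal leaky integrate-and-fire (LIF) loop; the time constant, threshold, and reset values here are illustrative assumptions, not the paper's settings.

```python
def lif_forward(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Serially simulate charge-fire-reset dynamics of a LIF neuron.

    inputs: per-time-step input currents (illustrative values).
    Returns the binary spike train. Each step depends on the previous
    membrane potential, which is why vanilla spiking neurons cannot be
    parallelized over the time dimension.
    """
    v = v_reset
    spikes = []
    for x in inputs:
        v = v + (x - (v - v_reset)) / tau   # charge: leaky integration
        if v >= v_threshold:                # fire: emit a binary spike
            spikes.append(1)
            v = v_reset                     # reset: clear the potential
        else:
            spikes.append(0)
    return spikes

# Strong constant input fires every step; weak input never crosses threshold.
# lif_forward([2.0, 2.0, 2.0]) -> [1, 1, 1]
# lif_forward([1.0, 1.0, 1.0]) -> [0, 0, 0]
```

The paper's parallel spiking neurons remove exactly this step-to-step dependency so the whole sequence can be computed at once.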

Spike Stream Denoising via Spike Camera Simulation

no code implementations • 6 Apr 2023 • Liwen Hu, Lei Ma, Zhaofei Yu, Boxin Shi, Tiejun Huang

Based on our noise model, the first benchmark for spike stream denoising is proposed, which includes clear and noisy spike streams.

Denoising

Exploring Efficient Asymmetric Blind-Spots for Self-Supervised Denoising in Real-World Scenarios

no code implementations • 29 Mar 2023 • Shiyan Chen, Jiyuan Zhang, Zhaofei Yu, Tiejun Huang

Based on this, we propose Asymmetric Tunable Blind-Spot Network (AT-BSN), where the blind-spot size can be freely adjusted, thus better balancing noise correlation suppression and image local spatial destruction during training and inference.

Denoising

SpikeCV: Open a Continuous Computer Vision Era

1 code implementation • 21 Mar 2023 • Yajing Zheng, Jiyuan Zhang, Rui Zhao, Jianhao Ding, Shiyan Chen, Ruiqin Xiong, Zhaofei Yu, Tiejun Huang

SpikeCV focuses on encapsulation for spike data, standardization for dataset interfaces, modularization for vision tasks, and real-time applications for challenging scenes.

A Unified Framework for Soft Threshold Pruning

1 code implementation • 25 Feb 2023 • Yanqi Chen, Zhengyu Ma, Wei Fang, Xiawu Zheng, Zhaofei Yu, Yonghong Tian

In this work, we reformulate soft threshold pruning as an implicit optimization problem solved using the Iterative Shrinkage-Thresholding Algorithm (ISTA), a classic method from the fields of sparse recovery and compressed sensing.

Scheduling
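The ISTA iteration referenced above centers on the soft-thresholding (shrinkage) operator applied after each gradient step. A minimal sketch, with an illustrative threshold rather than the paper's schedule:

```python
def soft_threshold(w, threshold):
    """Soft-thresholding (shrinkage) operator at the core of ISTA:
    shrinks each weight toward zero by `threshold` and zeroes out any
    weight whose magnitude falls below it, which is what induces the
    sparsity exploited by soft threshold pruning."""
    if w > threshold:
        return w - threshold
    if w < -threshold:
        return w + threshold
    return 0.0

def ista_step(weights, grads, lr, threshold):
    """One ISTA iteration: a gradient descent step followed by shrinkage."""
    return [soft_threshold(w - lr * g, threshold)
            for w, g in zip(weights, grads)]

# Small weights are pruned to exactly zero; large ones survive, shrunk.
# ista_step([1.0, -0.125], [0.0, 0.0], lr=0.1, threshold=0.25) -> [0.75, 0.0]
```

Viewing pruning as this implicit optimization is what lets the paper derive principled threshold schedules rather than hand-tuned ones.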

Bridging the Gap between ANNs and SNNs by Calibrating Offset Spikes

2 code implementations • 21 Feb 2023 • Zecheng Hao, Jianhao Ding, Tong Bu, Tiejun Huang, Zhaofei Yu

The experimental results show that our proposed method achieves state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet datasets.

A Novel Noise Injection-based Training Scheme for Better Model Robustness

no code implementations • 17 Feb 2023 • Zeliang Zhang, Jinyang Jiang, Minjie Chen, Zhiyuan Wang, Yijie Peng, Zhaofei Yu

Noise injection-based methods have been shown in previous work to improve the robustness of artificial neural networks.

Adversarial Robustness • Computational Efficiency

Reducing ANN-SNN Conversion Error through Residual Membrane Potential

2 code implementations • 4 Feb 2023 • Zecheng Hao, Tong Bu, Jianhao Ding, Tiejun Huang, Zhaofei Yu

Spiking Neural Networks (SNNs) have received extensive academic attention due to the unique properties of low power consumption and high-speed computing on neuromorphic chips.

Temporal Sequences

Rate Gradient Approximation Attack Threats Deep Spiking Neural Networks

1 code implementation • CVPR 2023 • Tong Bu, Jianhao Ding, Zecheng Hao, Zhaofei Yu

Spiking Neural Networks (SNNs) have attracted significant attention due to their energy-efficient properties and potential application on neuromorphic hardware.

Image Classification

Deep Spike Learning with Local Classifiers

1 code implementation • IEEE Transactions on Cybernetics 2022 • Chenxiang Ma, Rui Yan, Zhaofei Yu, Qiang Yu

We then propose two variants that additionally incorporate temporal dependencies through a backward and forward process, respectively.

Optimized Potential Initialization for Low-latency Spiking Neural Networks

no code implementations • 3 Feb 2022 • Tong Bu, Jianhao Ding, Zhaofei Yu, Tiejun Huang

We evaluate our algorithm on the CIFAR-10, CIFAR-100 and ImageNet datasets and achieve state-of-the-art accuracy, using fewer time-steps.

Adversarial Robustness

1000x Faster Camera and Machine Vision with Ordinary Devices

no code implementations • 23 Jan 2022 • Tiejun Huang, Yajing Zheng, Zhaofei Yu, Rui Chen, Yuan Li, Ruiqin Xiong, Lei Ma, Junwei Zhao, Siwei Dong, Lin Zhu, Jianing Li, Shanshan Jia, Yihua Fu, Boxin Shi, Si Wu, Yonghong Tian

By treating vidar as spike trains in biological vision, we have further developed a spiking neural network-based machine vision system that combines the speed of the machine and the mechanism of biological vision, achieving high-speed object detection and tracking 1,000x faster than human vision.

Object Detection

Accelerating Training of Deep Spiking Neural Networks with Parameter Initialization

no code implementations • 29 Sep 2021 • Jianhao Ding, Jiyuan Zhang, Zhaofei Yu, Tiejun Huang

Although spiking neural networks (SNNs) show strong advantages in information encoding, power consumption, and computational capability, the underdevelopment of supervised learning algorithms remains a hindrance to training SNNs.

Spatio-Temporal Recurrent Networks for Event-Based Optical Flow Estimation

1 code implementation • 10 Sep 2021 • Ziluo Ding, Rui Zhao, Jiyuan Zhang, Tianxiao Gao, Ruiqin Xiong, Zhaofei Yu, Tiejun Huang

Recently, many deep learning methods have shown great success in providing promising solutions to many event-based problems, such as optical flow estimation.

Event-based Optical Flow • Optical Flow Estimation • +1

High-Speed Image Reconstruction Through Short-Term Plasticity for Spiking Cameras

no code implementations • CVPR 2021 • Yajing Zheng, Lingxiao Zheng, Zhaofei Yu, Boxin Shi, Yonghong Tian, Tiejun Huang

Mimicking the sampling mechanism of the fovea, a retina-inspired camera, named the spiking camera, has been developed to record external information at a sampling rate of 40,000 Hz and output asynchronous binary spike streams.

Image Reconstruction • Vocal Bursts Intensity Prediction
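The sampling mechanism described above can be sketched as a pixel accumulator that fires a binary spike whenever the integrated light intensity crosses a threshold; the threshold and intensity values below are illustrative, not the spiking camera's actual parameters.

```python
def spike_sampling(intensities, threshold=4.0):
    """Sketch of integrate-and-fire sampling in one spiking-camera pixel:
    accumulate incoming light intensity at each tick; when the accumulator
    reaches the threshold, emit a binary spike and subtract the threshold.
    Brighter scenes therefore produce denser spike streams, and intensity
    can later be reconstructed from inter-spike intervals."""
    acc = 0.0
    stream = []
    for intensity in intensities:
        acc += intensity
        if acc >= threshold:
            stream.append(1)
            acc -= threshold   # keep the residual charge for the next tick
        else:
            stream.append(0)
    return stream

# Doubling the intensity halves the inter-spike interval:
# spike_sampling([1, 1, 1, 1, 1, 1, 1, 1]) -> [0, 0, 0, 1, 0, 0, 0, 1]
# spike_sampling([2, 2, 2, 2])             -> [0, 1, 0, 1]
```

Reconstruction methods such as the short-term-plasticity approach above then invert this mapping from spike timing back to intensity.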

Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks

1 code implementation • 25 May 2021 • Jianhao Ding, Zhaofei Yu, Yonghong Tian, Tiejun Huang

We show that the inference time can be reduced by optimizing the upper bound of the fit curve in the revised ANN to achieve fast inference.

Pruning of Deep Spiking Neural Networks through Gradient Rewiring

1 code implementation • 11 May 2021 • Yanqi Chen, Zhaofei Yu, Wei Fang, Tiejun Huang, Yonghong Tian

Our key innovation is to redefine the gradient to a new synaptic parameter, allowing better exploration of network structures by taking full advantage of the competition between pruning and regrowth of connections.

Deep Residual Learning in Spiking Neural Networks

1 code implementation • NeurIPS 2021 • Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, Yonghong Tian

Previous Spiking ResNets mimic the standard residual block in ANNs and simply replace ReLU activation layers with spiking neurons, which suffers from the degradation problem and can hardly implement residual learning.

Super Resolve Dynamic Scene From Continuous Spike Streams

no code implementations • ICCV 2021 • Jing Zhao, Jiyu Xie, Ruiqin Xiong, Jian Zhang, Zhaofei Yu, Tiejun Huang

In this paper, we properly exploit the relative motion and derive the relationship between light intensity and each spike, so as to recover the external scene with both high temporal and high spatial resolution.

Super-Resolution

Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks

1 code implementation • ICCV 2021 • Wei Fang, Zhaofei Yu, Yanqi Chen, Timothée Masquelier, Tiejun Huang, Yonghong Tian

In this paper, we take inspiration from the observation that membrane-related parameters are different across brain regions, and propose a training algorithm that is capable of learning not only the synaptic weights but also the membrane time constants of SNNs.

Image Classification

Reconstruction of Natural Visual Scenes from Neural Spikes with Deep Neural Networks

no code implementations • 30 Apr 2019 • Yichen Zhang, Shanshan Jia, Yajing Zheng, Zhaofei Yu, Yonghong Tian, Siwei Ma, Tiejun Huang, Jian K. Liu

The SID is an end-to-end decoder with one end as neural spikes and the other end as images, which can be trained directly such that visual scenes are reconstructed from spikes in a highly accurate fashion.

Probabilistic Inference of Binary Markov Random Fields in Spiking Neural Networks through Mean-field Approximation

no code implementations • 22 Feb 2019 • Yajing Zheng, Shanshan Jia, Zhaofei Yu, Tiejun Huang, Jian K. Liu, Yonghong Tian

Recent studies have suggested that the cognitive process of the human brain is realized as probabilistic inference and can be further modeled by probabilistic graphical models like Markov random fields.

Image Denoising

Revealing Fine Structures of the Retinal Receptive Field by Deep Learning Networks

no code implementations • 6 Nov 2018 • Qi Yan, Yajing Zheng, Shanshan Jia, Yichen Zhang, Zhaofei Yu, Feng Chen, Yonghong Tian, Tiejun Huang, Jian K. Liu

When a deep CNN with many layers is used for the visual system, it is not easy to compare the structural components of CNNs with possible neuroscience underpinnings due to the highly complex circuits from the retina to higher visual cortex.

Transfer Learning

Neural System Identification with Spike-triggered Non-negative Matrix Factorization

no code implementations • 12 Aug 2018 • Shanshan Jia, Zhaofei Yu, Arno Onken, Yonghong Tian, Tiejun Huang, Jian K. Liu

Furthermore, we show that STNMF can separate spikes of a ganglion cell into a few subsets of spikes where each subset is contributed by one presynaptic bipolar cell.

Winner-Take-All as Basic Probabilistic Inference Unit of Neuronal Circuits

no code implementations • 2 Aug 2018 • Zhaofei Yu, Yonghong Tian, Tiejun Huang, Jian K. Liu

Taken together, our results suggest that the WTA circuit could be seen as the minimal inference unit of neuronal circuits.

Bayesian Inference

Revealing structure components of the retina by deep learning networks

no code implementations • 8 Nov 2017 • Qi Yan, Zhaofei Yu, Feng Chen, Jian K. Liu

By training CNNs with white noise images to predict neural responses, we found that the convolutional filters learned in the end resemble biological components of the retinal circuit.

CaMKII activation supports reward-based neural network optimization through Hamiltonian sampling

no code implementations • 1 Jun 2016 • Zhaofei Yu, David Kappel, Robert Legenstein, Sen Song, Feng Chen, Wolfgang Maass

Our theoretical analysis shows that stochastic search could in principle even attain optimal network configurations by emulating one of the most well-known nonlinear optimization methods, simulated annealing.
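Simulated annealing, the nonlinear optimization method the analysis refers to, can be sketched in a few lines; this is a generic textbook version on a toy one-dimensional energy function, not the paper's network model.

```python
import math
import random

def simulated_annealing(energy, state, neighbor,
                        t0=1.0, cooling=0.99, steps=2000, seed=0):
    """Minimal simulated annealing: accept a worse neighboring state with
    probability exp(-dE / T) and lower the temperature T geometrically,
    so the search can escape local minima early and settles as T -> 0.
    `neighbor(state, rng)` proposes a random nearby configuration."""
    rng = random.Random(seed)
    t = t0
    best = cur = state
    for _ in range(steps):
        cand = neighbor(cur, rng)
        d_e = energy(cand) - energy(cur)
        if d_e <= 0 or rng.random() < math.exp(-d_e / t):
            cur = cand
            if energy(cur) < energy(best):
                best = cur
        t *= cooling
    return best

# Toy usage: minimize (x - 3)^2 starting from 0 with uniform proposals.
best = simulated_annealing(lambda x: (x - 3.0) ** 2, 0.0,
                           lambda x, rng: x + rng.uniform(-0.5, 0.5))
```

The paper's point is that reward-gated stochastic synaptic dynamics can emulate this kind of temperature-controlled stochastic search.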

Sampling-based Causal Inference in Cue Combination and its Neural Implementation

no code implementations • 3 Sep 2015 • Zhaofei Yu, Feng Chen, Jianwu Dong, Qionghai Dai

Although the Bayesian causal inference model explains the problem of causal inference in cue combination successfully, how causal inference in cue combination could be implemented by neural circuits, is unclear.

Causal Inference
