Search Results for author: Zixuan Jiang

Found 12 papers, 6 papers with code

An Efficient Training Framework for Reversible Neural Architectures

no code implementations • ECCV 2020 • Zixuan Jiang, Keren Zhu, Mingjie Liu, Jiaqi Gu, David Z. Pan

In this work, we formulate the decision problem for reversible operators with training time as the objective function and memory usage as the constraint.
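
Read as an optimization problem, that formulation plausibly takes the shape of the following 0-1 program (our notation, not necessarily the paper's exact variables): each operator i has base compute time t_i, recomputation overhead r_i, and activation memory m_i, and x_i = 1 marks it as reversible.

```latex
% x_i = 1: run operator i reversibly, i.e. recompute its activations in the
% backward pass (extra time r_i) instead of storing them (memory m_i).
\[
\min_{x \in \{0,1\}^n} \sum_{i=1}^{n} \bigl( t_i + x_i\, r_i \bigr)
\quad \text{s.t.} \quad \sum_{i=1}^{n} (1 - x_i)\, m_i \le M
\]
```

Training time is the objective and the memory budget M is the constraint, matching the snippet above.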

M3ICRO: Machine Learning-Enabled Compact Photonic Tensor Core based on PRogrammable Multi-Operand Multimode Interference

1 code implementation • 31 May 2023 • Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David Z. Pan

The programmable MOMMI leverages the intrinsic light propagation principle, providing a single-device programmable matrix unit beyond the conventional computing paradigm of one multiply-accumulate (MAC) operation per device.

Pre-RMSNorm and Pre-CRMSNorm Transformers: Equivalent and Efficient Pre-LN Transformers

1 code implementation • NeurIPS 2023 • Zixuan Jiang, Jiaqi Gu, Hanqing Zhu, David Z. Pan

Experiments demonstrate that we can reduce the training and inference time of Pre-LN Transformers by 1%-10%.
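
The speedup comes from replacing LayerNorm with the cheaper RMSNorm, which the paper shows is possible without changing the network's function because the residual stream of a Pre-LN Transformer carries redundant mean information. A minimal sketch of the two normalizations (illustrative PyTorch, not the authors' released code):

```python
import torch

def layer_norm(x, eps=1e-6):
    # LayerNorm: subtract the mean, then divide by the standard deviation.
    mu = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, keepdim=True, unbiased=False)
    return (x - mu) / torch.sqrt(var + eps)

def rms_norm(x, eps=1e-6):
    # RMSNorm: skip mean subtraction entirely and divide by the root mean
    # square, saving one reduction pass over the hidden dimension.
    return x / torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)

x = torch.randn(2, 16)
# On zero-mean inputs the two coincide, which is what makes the swap exact.
assert torch.allclose(layer_norm(x),
                      rms_norm(x - x.mean(-1, keepdim=True)), atol=1e-5)
```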

TripLe: Revisiting Pretrained Model Reuse and Progressive Learning for Efficient Vision Transformer Scaling and Searching

no code implementations • ICCV 2023 • Cheng Fu, Hanxian Huang, Zixuan Jiang, Yun Ni, Lifeng Nai, Gang Wu, Liqun Cheng, Yanqi Zhou, Sheng Li, Andrew Li, Jishen Zhao

One promising way to accelerate transformer training is to reuse small pretrained models to initialize the transformer, as their existing representation power facilitates faster model convergence.

Knowledge Distillation • Neural Architecture Search
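
One common way to realize such reuse is Net2Net-style width expansion, where a larger layer is seeded by duplicating units of the small pretrained layer; a toy sketch of the idea (a generic technique, not necessarily TripLe's exact initialization):

```python
import numpy as np

def expand_width(w_small, fan_out_new, rng):
    # Duplicate randomly chosen output units of the small pretrained layer
    # until the new output width is reached.
    fan_out, _ = w_small.shape
    extra = rng.integers(0, fan_out, size=fan_out_new - fan_out)
    return np.concatenate([w_small, w_small[extra]], axis=0)

rng = np.random.default_rng(0)
w_small = rng.normal(size=(4, 8))      # pretrained layer: 4 outputs, 8 inputs
w_big = expand_width(w_small, 6, rng)  # seeded larger layer: 6 outputs, 8 inputs
```

To be exactly function-preserving, Net2Net also rescales the corresponding fan-in columns of the next layer; the sketch above shows only the duplication step.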

PC-SNN: Supervised Learning with Local Hebbian Synaptic Plasticity based on Predictive Coding in Spiking Neural Networks

no code implementations • 24 Nov 2022 • Mengting Lan, Xiaogang Xiong, Zixuan Jiang, Yunjiang Lou

Deemed the third generation of neural networks, event-driven Spiking Neural Networks (SNNs) combined with bio-plausible local learning rules are promising for building low-power neuromorphic hardware.
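
For context, a local Hebbian rule updates each synapse from signals available at that synapse alone, with no globally backpropagated error; a minimal generic sketch (not PC-SNN's predictive-coding formulation):

```python
import numpy as np

eta = 0.01                          # learning rate
pre = np.array([1.0, 0.0, 1.0])     # presynaptic spike indicators (assumed)
post = np.array([1.0, 0.0])         # postsynaptic spike indicators (assumed)
W = np.zeros((2, 3))                # synaptic weights, shape (post, pre)
W += eta * np.outer(post, pre)      # Hebbian update: co-active pairs strengthen
```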

Delving into Effective Gradient Matching for Dataset Condensation

1 code implementation • 30 Jul 2022 • Zixuan Jiang, Jiaqi Gu, Mingjie Liu, David Z. Pan

In this work, we delve into the gradient matching method from a comprehensive perspective and answer the critical questions of what, how, and where to match.

Dataset Condensation
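
The core of gradient matching in dataset condensation is to optimize a small synthetic set so that it induces network gradients close to those from real data; a generic sketch of the objective (illustrative only; the paper's contribution is precisely refining what, how, and where such matching happens):

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, x_real, y_real, x_syn, y_syn):
    # Gradients of the task loss on the real and synthetic batches.
    g_real = torch.autograd.grad(
        F.cross_entropy(model(x_real), y_real), model.parameters())
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(x_syn), y_syn), model.parameters(),
        create_graph=True)  # keep the graph so the loss backprops into x_syn
    # Layer-wise cosine distance between the two gradient sets.
    return sum(1.0 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0)
               for gr, gs in zip(g_real, g_syn))
```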

ELight: Enabling Efficient Photonic In-Memory Neurocomputing with Life Enhancement

no code implementations • 15 Dec 2021 • Hanqing Zhu, Jiaqi Gu, Chenghao Feng, Mingjie Liu, Zixuan Jiang, Ray T. Chen, David Z. Pan

With the recent advances in optical phase change material (PCM), photonic in-memory neurocomputing has demonstrated its superiority in optical neural network (ONN) designs with near-zero static power consumption, time-of-light latency, and compact footprint.

L2ight: Enabling On-Chip Learning for Optical Neural Networks via Efficient in-situ Subspace Optimization

1 code implementation • NeurIPS 2021 • Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David Z. Pan

In this work, we propose a closed-loop ONN on-chip learning framework L2ight to enable scalable ONN mapping and efficient in-situ learning.
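
In its generic form, subspace optimization means training only a low-dimensional set of coefficients z with the weights constrained to theta0 + U z; a toy sketch of that restriction (our simplification, not L2ight's photonic-specific procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1024, 16                       # full vs. subspace dimension (assumed)
theta0 = rng.normal(size=n)           # fixed, already-mapped weights
U = np.linalg.qr(rng.normal(size=(n, k)))[0]  # orthonormal subspace basis

def full_grad(theta):
    # Placeholder gradient of a toy quadratic loss L(theta) = ||theta||^2 / 2.
    return theta

z = np.zeros(k)                       # only these k coefficients are trained
for _ in range(200):
    g = U.T @ full_grad(theta0 + U @ z)  # project the gradient into the subspace
    z -= 0.1 * g                         # update k numbers instead of n
```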

Optimizer Fusion: Efficient Training with Better Locality and Parallelism

no code implementations • 1 Apr 2021 • Zixuan Jiang, Jiaqi Gu, Mingjie Liu, Keren Zhu, David Z. Pan

Machine learning frameworks adopt iterative optimizers to train neural networks.
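
Such optimizers apply several element-wise operations per parameter (the momentum update, then the weight update), and each is typically a separate kernel making a separate pass over memory; fusing them into one pass is the locality idea in the title. A schematic contrast (illustrative, not the paper's implementation):

```python
import numpy as np

def sgd_momentum_unfused(p, g, v, lr=0.01, mu=0.9):
    # Two separate vectorized ops: each streams the full arrays through
    # memory, like two back-to-back kernels in a framework.
    v[:] = mu * v + g
    p[:] = p - lr * v

def sgd_momentum_fused(p, g, v, lr=0.01, mu=0.9):
    # Fused form: each element of p, g, v is touched once per step,
    # improving cache locality (in practice done in a single fused
    # kernel, not a Python loop).
    for i in range(p.size):
        v[i] = mu * v[i] + g[i]
        p[i] -= lr * v[i]
```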
