Search Results for author: Mahdi Nikdast

Found 19 papers, 0 papers with code

Silicon Photonic 2.5D Interposer Networks for Overcoming Communication Bottlenecks in Scale-out Machine Learning Hardware Accelerators

no code implementations • 7 Mar 2024 • Febin Sunny, Ebadollah Taheri, Mahdi Nikdast, Sudeep Pasricha

Modern machine learning (ML) applications are becoming increasingly complex, and monolithic (single-chip) accelerator architectures cannot keep up with their energy-efficiency and throughput demands.

Accelerating Neural Networks for Large Language Models and Graph Processing with Silicon Photonics

no code implementations • 12 Jan 2024 • Salma Afifi, Febin Sunny, Mahdi Nikdast, Sudeep Pasricha

In the rapidly evolving landscape of artificial intelligence, large language models (LLMs) and graph processing have emerged as transformative technologies for natural language processing (NLP), computer vision, and graph-structured data applications.

Analysis of Optical Loss and Crosstalk Noise in MZI-based Coherent Photonic Neural Networks

no code implementations • 7 Aug 2023 • Amin Shafiee, Sanmitra Banerjee, Krishnendu Chakrabarty, Sudeep Pasricha, Mahdi Nikdast

The proposed models can be applied to any SP-NN architecture with different configurations to analyze the effect of loss and crosstalk.
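For intuition about the loss side of such an analysis, insertion losses in dB add linearly along a light path through cascaded MZIs, which compounds multiplicatively in optical power. A toy sketch (illustration only; the 0.3 dB per-MZI figure is an assumed value, and crosstalk noise, which the paper also models, is omitted here):

```python
def network_loss_db(n_stages, loss_per_mzi_db):
    """Worst-case insertion loss (dB) of a light path crossing n_stages MZIs in series.

    Losses expressed in dB add linearly along the path, which is equivalent
    to multiplying the per-device power transmission factors.
    """
    return n_stages * loss_per_mzi_db

def surviving_power_fraction(total_loss_db):
    """Fraction of optical power remaining after the accumulated loss."""
    return 10 ** (-total_loss_db / 10)

# e.g. a 10-stage mesh with an assumed 0.3 dB insertion loss per MZI
loss = network_loss_db(10, 0.3)            # 3 dB total
fraction = surviving_power_fraction(loss)  # roughly half the power survives
```

This is why loss analysis matters at scale: deeper MZI meshes lose optical power exponentially in the number of stages, squeezing the signal range available at the photodetectors.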

GHOST: A Graph Neural Network Accelerator using Silicon Photonics

no code implementations • 4 Jul 2023 • Salma Afifi, Febin Sunny, Amin Shafiee, Mahdi Nikdast, Sudeep Pasricha

Graph neural networks (GNNs) have emerged as a powerful approach for modelling and learning from graph-structured data.

Drug Discovery, Graph Attention, +1

TRON: Transformer Neural Network Acceleration with Non-Coherent Silicon Photonics

no code implementations • 22 Mar 2023 • Salma Afifi, Febin Sunny, Mahdi Nikdast, Sudeep Pasricha

Transformer neural networks are rapidly being integrated into state-of-the-art solutions for natural language processing (NLP) and computer vision.

Cross-Layer Design for AI Acceleration with Non-Coherent Optical Computing

no code implementations • 22 Mar 2023 • Febin Sunny, Mahdi Nikdast, Sudeep Pasricha

Emerging AI applications such as ChatGPT, graph convolutional networks, and other deep neural networks require massive computational resources for training and inference.

Machine Learning Accelerators in 2.5D Chiplet Platforms with Silicon Photonics

no code implementations • 28 Jan 2023 • Febin Sunny, Ebadollah Taheri, Mahdi Nikdast, Sudeep Pasricha

Domain-specific machine learning (ML) accelerators such as Google's TPU and Apple's Neural Engine now dominate CPUs and GPUs for energy-efficient ML processing.

RecLight: A Recurrent Neural Network Accelerator with Integrated Silicon Photonics

no code implementations • 31 Aug 2022 • Febin Sunny, Mahdi Nikdast, Sudeep Pasricha

Recurrent Neural Networks (RNNs) are used in applications that learn dependencies in data sequences, such as speech recognition, human activity recognition, and anomaly detection.

Anomaly Detection, Human Activity Recognition, +2

Characterizing Coherent Integrated Photonic Neural Networks under Imperfections

no code implementations • 22 Jul 2022 • Sanmitra Banerjee, Mahdi Nikdast, Krishnendu Chakrabarty

Integrated photonic neural networks (IPNNs) are emerging as promising successors to conventional electronic AI accelerators as they offer substantial improvements in computing speed and energy efficiency.

Quantization

A Silicon Photonic Accelerator for Convolutional Neural Networks with Heterogeneous Quantization

no code implementations • 17 May 2022 • Febin Sunny, Mahdi Nikdast, Sudeep Pasricha

Parameter quantization in convolutional neural networks (CNNs) can help generate efficient models with lower memory footprint and computational complexity.
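As background, the basic idea can be illustrated with a minimal symmetric uniform-quantization sketch (a generic textbook scheme, not the heterogeneous quantization proposed in this paper; the bit widths below are illustrative):

```python
def quantize_uniform(weights, n_bits):
    """Symmetric uniform quantization: snap each weight onto a signed
    n_bits grid scaled so the largest magnitude maps to the top level."""
    qmax = 2 ** (n_bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax  # grid step size
    return [round(w / scale) * scale for w in weights]

w = [0.50, -0.25, 0.12, -0.05]
w8 = quantize_uniform(w, 8)  # fine grid: small rounding error
w3 = quantize_uniform(w, 3)  # coarse grid: larger rounding error
```

Heterogeneous schemes exploit the fact that different layers tolerate different grid coarseness, assigning fewer bits where the accuracy impact is small.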

Quantization

Characterization and Optimization of Integrated Silicon-Photonic Neural Networks under Fabrication-Process Variations

no code implementations • 19 Apr 2022 • Asif Mirza, Amin Shafiee, Sanmitra Banerjee, Krishnendu Chakrabarty, Sudeep Pasricha, Mahdi Nikdast

Simulation results for two example SPNNs of different scales under realistic and correlated FPVs indicate that the optimized MZIs can improve the inferencing accuracy by up to 93.95% for the MNIST handwritten digit dataset -- considered as an example in this paper -- which corresponds to a <0.5% accuracy loss compared to the variation-free case.

LoCI: An Analysis of the Impact of Optical Loss and Crosstalk Noise in Integrated Silicon-Photonic Neural Networks

no code implementations • 8 Apr 2022 • Amin Shafiee, Sanmitra Banerjee, Krishnendu Chakrabarty, Sudeep Pasricha, Mahdi Nikdast

Compared to electronic accelerators, integrated silicon-photonic neural networks (SP-NNs) promise higher speed and energy efficiency for emerging artificial-intelligence applications.

Pruning Coherent Integrated Photonic Neural Networks Using the Lottery Ticket Hypothesis

no code implementations • 14 Dec 2021 • Sanmitra Banerjee, Mahdi Nikdast, Sudeep Pasricha, Krishnendu Chakrabarty

Singular-value-decomposition-based coherent integrated photonic neural networks (SC-IPNNs) have a large footprint, suffer from high static power consumption for training and inference, and cannot be pruned using conventional DNN pruning techniques.

CHAMP: Coherent Hardware-Aware Magnitude Pruning of Integrated Photonic Neural Networks

no code implementations • 11 Dec 2021 • Sanmitra Banerjee, Mahdi Nikdast, Sudeep Pasricha, Krishnendu Chakrabarty

We propose a novel hardware-aware magnitude pruning technique for coherent photonic neural networks.
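For context, conventional (hardware-agnostic) magnitude pruning, which a hardware-aware variant builds upon, simply zeroes the fraction of weights with the smallest magnitudes. A minimal sketch with an illustrative sparsity level (not the method of this paper):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes."""
    k = int(sparsity * len(weights))  # number of weights to drop
    # indices sorted by magnitude; the k smallest are pruned, the rest kept
    keep = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[k:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

w = [0.9, -0.02, 0.005, -0.7]
pruned = magnitude_prune(w, 0.5)  # keeps only the two largest-magnitude weights
```

A hardware-aware variant would additionally weigh each parameter's cost in the photonic implementation (e.g. which phase shifters it maps to) rather than magnitude alone.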

SONIC: A Sparse Neural Network Inference Accelerator with Silicon Photonics for Energy-Efficient Deep Learning

no code implementations • 9 Sep 2021 • Febin Sunny, Mahdi Nikdast, Sudeep Pasricha

Sparse neural networks can greatly facilitate the deployment of neural networks on resource-constrained platforms as they offer compact model sizes while retaining inference accuracy.

ROBIN: A Robust Optical Binary Neural Network Accelerator

no code implementations • 12 Jul 2021 • Febin P. Sunny, Asif Mirza, Mahdi Nikdast, Sudeep Pasricha

However, mapping sophisticated neural network models on these accelerators still entails significant energy and memory consumption, along with high inference time overhead.

AdEle: An Adaptive Congestion-and-Energy-Aware Elevator Selection for Partially Connected 3D NoCs

no code implementations • 16 Feb 2021 • Ebadollah Taheri, Ryan G. Kim, Mahdi Nikdast

By lowering the number of vertical connections in fully connected 3D networks-on-chip (NoCs), partially connected 3D NoCs (PC-3DNoCs) help alleviate reliability and fabrication issues.

Distributed, Parallel, and Cluster Computing; Hardware Architecture; Performance

CrossLight: A Cross-Layer Optimized Silicon Photonic Neural Network Accelerator

no code implementations • 13 Feb 2021 • Febin Sunny, Asif Mirza, Mahdi Nikdast, Sudeep Pasricha

Domain-specific neural network accelerators have seen growing interest in recent years due to their improved energy efficiency and inference performance compared to CPUs and GPUs.

Modeling Silicon-Photonic Neural Networks under Uncertainties

no code implementations • 19 Dec 2020 • Sanmitra Banerjee, Mahdi Nikdast, Krishnendu Chakrabarty

Silicon-photonic neural networks (SPNNs) offer substantial improvements in computing speed and energy efficiency compared to their digital electronic counterparts.
