Search Results for author: Masoud Daneshtalab

Found 19 papers, 4 papers with code

TrajectoryNAS: A Neural Architecture Search for Trajectory Prediction

no code implementations18 Mar 2024 Ali Asghar Sharifi, Ali Zoljodi, Masoud Daneshtalab

Through empirical studies, TrajectoryNAS demonstrates its effectiveness in enhancing the performance of autonomous driving systems, marking a significant advancement in the field. Experimental results reveal that TrajectoryNAS yields at least 4.8% higher accuracy and 1.1× lower latency than competing methods on the nuScenes dataset.

Autonomous Driving Neural Architecture Search +4

AdAM: Adaptive Fault-Tolerant Approximate Multiplier for Edge DNN Accelerators

no code implementations5 Mar 2024 Mahdi Taheri, Natalia Cherezova, Samira Nazari, Ahsan Rafiq, Ali Azarpeyvand, Tara Ghasempouri, Masoud Daneshtalab, Jaan Raik, Maksim Jenihhin

In this paper, we propose an architecture of a novel adaptive fault-tolerant approximate multiplier tailored for ASIC-based DNN accelerators.
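
No implementation is listed for this entry; purely as a minimal sketch of the general idea behind approximate multiplication with a tunable accuracy knob (not AdAM's actual adaptive fault-tolerant circuit), a truncation-based integer multiplier can be modeled in a few lines:

```python
def approx_multiply(a: int, b: int, trunc_bits: int = 4) -> int:
    """Approximate unsigned multiply: zero the lowest `trunc_bits` bits of
    each operand before multiplying. In hardware this shrinks the
    partial-product array; here it only models the accuracy/cost trade-off.
    Illustrative only; AdAM's adaptive fault-tolerant design differs."""
    mask = ~((1 << trunc_bits) - 1)
    return (a & mask) * (b & mask)

print(approx_multiply(157, 203, trunc_bits=0))  # 31871 (exact)
print(approx_multiply(157, 203, trunc_bits=4))  # 27648 (approximate)
```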

SAFFIRA: a Framework for Assessing the Reliability of Systolic-Array-Based DNN Accelerators

no code implementations5 Mar 2024 Mahdi Taheri, Masoud Daneshtalab, Jaan Raik, Maksim Jenihhin, Salvatore Pappalardo, Paul Jimenez, Bastien Deveautour, Alberto Bosio

Systolic array has emerged as a prominent architecture for Deep Neural Network (DNN) hardware accelerators, providing high-throughput and low-latency performance essential for deploying DNNs across diverse applications.
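
No code is listed for the framework itself; purely for intuition about the dataflow such reliability studies target, here is a toy cycle-by-cycle model of an output-stationary systolic array computing a matrix product (a simplification for illustration, not SAFFIRA's simulator):

```python
import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Toy cycle-by-cycle model of an output-stationary systolic array
    computing C = A @ B: PE (i, j) accumulates C[i, j]; rows of A enter
    from the left and columns of B from the top, with the diagonal skew
    real arrays use to align operands."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m), dtype=A.dtype)
    for t in range(n + m + k - 2):       # cycles until the last operands meet
        for i in range(n):
            for j in range(m):
                s = t - i - j            # reduction step reaching PE (i, j) at cycle t
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
assert (systolic_matmul(A, B) == A @ B).all()
```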

Exploration of Activation Fault Reliability in Quantized Systolic Array-Based DNN Accelerators

no code implementations17 Jan 2024 Mahdi Taheri, Natalia Cherezova, Mohammad Saeed Ansari, Maksim Jenihhin, Ali Mahani, Masoud Daneshtalab, Jaan Raik

The stringent reliability requirements for Deep Neural Network (DNN) accelerators stand alongside the need to reduce the computational burden on hardware platforms, i.e., to lower energy consumption and execution time while increasing the efficiency of DNN accelerators.

Quantization
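
As a hedged illustration of the basic experiment behind such reliability explorations (not this paper's actual flow), a single bit-flip can be injected into an int8-quantized activation tensor like this:

```python
import numpy as np

def flip_bit_int8(acts: np.ndarray, idx: int, bit: int) -> np.ndarray:
    """Return a copy of an int8 activation tensor with one flipped bit
    (bit 0 = LSB, bit 7 = sign). Sweeping `idx` and `bit` and re-running
    inference is the basic loop behind activation fault-injection studies."""
    faulty = acts.copy()
    view = faulty.reshape(-1).view(np.uint8)  # reinterpret the bits, no cast
    view[idx] ^= np.uint8(1 << bit)
    return faulty

acts = np.array([12, -3, 77, 0], dtype=np.int8)
print(flip_bit_int8(acts, idx=2, bit=6))  # [ 12  -3  13   0]
print(flip_bit_int8(acts, idx=2, bit=7))  # sign bit: 77 -> -51
```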

Contrastive Learning for Lane Detection via Cross-Similarity

1 code implementation16 Aug 2023 Ali Zoljodi, Sadegh Abadijou, Mina Alibeigi, Masoud Daneshtalab

CLLD is a novel multi-task contrastive learning approach that trains lane detection models to detect lane markings even under low visibility by integrating local feature contrastive learning (CL) with our newly proposed cross-similarity operation.

Contrastive Learning Lane Detection +1
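
The authors link a code release; independently of it, here is a loose sketch of an InfoNCE-style loss over local features of two augmented views, where each spatial location is pulled toward the same location in the other view (shapes and names here are assumptions for illustration, not CLLD's API):

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(feat_a: torch.Tensor, feat_b: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical local contrastive loss between two augmented views.
    feat_a, feat_b: (B, C, H, W) feature maps. Each location in view A is
    attracted to the same location in view B and repelled from the rest."""
    B, C, H, W = feat_a.shape
    a = F.normalize(feat_a.flatten(2).transpose(1, 2), dim=-1)   # (B, HW, C)
    b = F.normalize(feat_b.flatten(2).transpose(1, 2), dim=-1)   # (B, HW, C)
    logits = a @ b.transpose(1, 2) / temperature                 # (B, HW, HW)
    target = torch.arange(H * W, device=a.device).expand(B, -1)  # positives on the diagonal
    return F.cross_entropy(logits.flatten(0, 1), target.flatten())

loss = local_contrastive_loss(torch.randn(2, 64, 8, 8), torch.randn(2, 64, 8, 8))
```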

FARMUR: Fair Adversarial Retraining to Mitigate Unfairness in Robustness

no code implementations ADBIS 2023 Seyed Ali Mousavi, Hamid Mousavi, Masoud Daneshtalab

Finally, we propose a fair adversarial retraining method (FARMUR) to mitigate unfairness in robustness, which retrains DNN models based on vulnerable and robust sub-partitions.

Decision Making Fairness
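
No code is listed; the following is a hypothetical sketch of the partition-then-retrain idea, splitting samples into robust and vulnerable sub-partitions by whether an adversarial perturbation flips their prediction and then upweighting the vulnerable ones. Function names, the one-step attack, and the weighting are assumptions, not FARMUR's formulation:

```python
import torch
import torch.nn.functional as F

def split_by_robustness(model, x, y, attack):
    """Partition a batch into vulnerable / robust samples. A sample is
    vulnerable if it is classified correctly but an adversarial version
    (from any attack callable (model, x, y) -> x_adv) flips the label."""
    adv_pred = model(attack(model, x, y)).argmax(dim=1)
    clean_pred = model(x).argmax(dim=1)
    vul = (clean_pred == y) & (adv_pred != y)
    return (x[vul], y[vul]), (x[~vul], y[~vul])

def retraining_loss(model, vul, rob, attack, lam=2.0):
    """Adversarial retraining objective emphasizing the vulnerable
    sub-partition (lam > 1); a stand-in, not the paper's exact loss."""
    (xv, yv), (xr, yr) = vul, rob
    loss = F.cross_entropy(model(xr), yr) if len(yr) else 0.0
    if len(yv):
        loss = loss + lam * F.cross_entropy(model(attack(model, xv, yv)), yv)
    return loss

# Tiny usage example with a linear model and a one-step FGSM-style attack.
model = torch.nn.Linear(4, 3)

def fgsm(model, x, y, eps=0.1):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

x, y = torch.randn(8, 4), torch.randint(0, 3, (8,))
vul, rob = split_by_robustness(model, x, y, fgsm)
loss = retraining_loss(model, vul, rob, fgsm)
```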

Enhancing Fault Resilience of QNNs by Selective Neuron Splitting

no code implementations16 Jun 2023 Mohammad Hasan Ahmadilivani, Mahdi Taheri, Jaan Raik, Masoud Daneshtalab, Maksim Jenihhin

Thereafter, a novel method for splitting the critical neurons is proposed that enables the design of a Lightweight Correction Unit (LCU) in the accelerator without redesigning its computational part.
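
As an illustrative sketch only (the paper's criticality analysis and Lightweight Correction Unit are not reproduced): splitting a neuron means duplicating it and halving the fan-out weights of both copies, so the layer computes the same function while a fault in either copy contributes only half of the original error:

```python
import numpy as np

def split_neurons(W_in, b, W_out, critical):
    """Split selected hidden neurons: duplicate their incoming weights and
    bias, then halve the outgoing weights of both the original and the
    copy. Function-preserving by construction."""
    W_in2 = np.vstack([W_in, W_in[critical]])          # duplicate neuron rows
    b2 = np.concatenate([b, b[critical]])
    W_out2 = np.hstack([W_out, W_out[:, critical] / 2])  # copies at half weight
    W_out2[:, critical] /= 2                             # halve the originals too
    return W_in2, b2, W_out2

# Quick check that the split layer computes the same function.
rng = np.random.default_rng(0)
W_in, b, W_out = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=(2, 5))
x = rng.normal(size=3)
h = np.maximum(W_in @ x + b, 0)
W_in2, b2, W_out2 = split_neurons(W_in, b, W_out, critical=np.array([1, 4]))
h2 = np.maximum(W_in2 @ x + b2, 0)
assert np.allclose(W_out @ h, W_out2 @ h2)
```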

APPRAISER: DNN Fault Resilience Analysis Employing Approximation Errors

no code implementations31 May 2023 Mahdi Taheri, Mohammad Hasan Ahmadilivani, Maksim Jenihhin, Masoud Daneshtalab, Jaan Raik

Nowadays, the extensive exploitation of Deep Neural Networks (DNNs) in safety-critical applications raises new reliability concerns.

A Systematic Literature Review on Hardware Reliability Assessment Methods for Deep Neural Networks

no code implementations9 May 2023 Mohammad Hasan Ahmadilivani, Mahdi Taheri, Jaan Raik, Masoud Daneshtalab, Maksim Jenihhin

Through this SLR, three kinds of methods for the reliability assessment of DNNs are identified: Fault Injection (FI), Analytical, and Hybrid methods.

DeepAxe: A Framework for Exploration of Approximation and Reliability Trade-offs in DNN Accelerators

no code implementations14 Mar 2023 Mahdi Taheri, Mohammad Riazati, Mohammad Hasan Ahmadilivani, Maksim Jenihhin, Masoud Daneshtalab, Jaan Raik, Mikael Sjodin, Bjorn Lisper

The framework enables selective approximation of reliability-critical DNNs, providing a set of Pareto-optimal DNN implementation design space points for the target resource utilization requirements.
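
The exploration's output is a Pareto front over accuracy, reliability, and resource use; as a generic sketch of that final filtering step (standard Pareto dominance, not DeepAxe's code):

```python
def pareto_front(points):
    """Keep design points not dominated by any other point. Each point:
    (accuracy, reliability, resource_use); higher accuracy and reliability
    are better, lower resource use is better."""
    def dominates(p, q):
        return p[0] >= q[0] and p[1] >= q[1] and p[2] <= q[2] and p != q
    return [p for p in points if not any(dominates(q, p) for q in points)]

designs = [(0.91, 0.88, 120), (0.93, 0.85, 150), (0.90, 0.90, 100), (0.89, 0.84, 140)]
print(pareto_front(designs))  # the last design is dominated by the first
```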

DeepVigor: Vulnerability Value Ranges and Factors for DNNs' Reliability Assessment

no code implementations13 Mar 2023 Mohammad Hasan Ahmadilivani, Mahdi Taheri, Jaan Raik, Masoud Daneshtalab, Maksim Jenihhin

In this work, we propose a novel accurate, fine-grain, metric-oriented, and accelerator-agnostic method called DeepVigor that provides vulnerability value ranges for DNN neurons' outputs.
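
DeepVigor derives these ranges analytically; purely for intuition about what a vulnerability value range means, a brute-force version can sweep additive perturbations on one neuron's output and keep those that leave the prediction unchanged (the toy model below is a made-up stand-in, not the paper's method):

```python
import numpy as np

def safe_range(forward, x, neuron_idx, deltas):
    """Empirically find perturbations of one neuron's output that keep the
    prediction unchanged. `forward(x, idx, delta)` runs the model with
    `delta` added to neuron `idx`."""
    baseline = forward(x, neuron_idx, 0.0).argmax()
    safe = [d for d in deltas if forward(x, neuron_idx, d).argmax() == baseline]
    return (min(safe), max(safe)) if safe else None

# Toy 2-class "model": logits are linear in the perturbed neuron.
W = np.array([[1.0, -0.5], [0.2, 0.8]])
def forward(x, idx, delta):
    h = x.copy(); h[idx] += delta
    return W @ h

x = np.array([0.6, 0.3])
print(safe_range(forward, x, neuron_idx=0, deltas=np.linspace(-2, 2, 81)))  # ~(-0.1, 2.0)
```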

Accurate Detection of Paroxysmal Atrial Fibrillation with Certified-GAN and Neural Architecture Search

no code implementations17 Jan 2023 Mehdi Asadi, Fatemeh Poursalim, Mohammad Loni, Masoud Daneshtalab, Mikael Sjödin, Arash Gharehbaghi

This paper presents a novel machine learning framework for detecting Paroxysmal Atrial Fibrillation (PxAF), a pathological characteristic of Electrocardiogram (ECG) that can lead to fatal conditions such as heart attack.

Generative Adversarial Network Neural Architecture Search

GTFLAT: Game Theory Based Add-On For Empowering Federated Learning Aggregation Techniques

1 code implementation8 Dec 2022 Hamidreza Mahini, Hamid Mousavi, Masoud Daneshtalab

GTFLAT, as a game theory-based add-on, addresses an important research question: How can a federated learning algorithm achieve better performance and training efficiency by setting more effective adaptive weights for averaging in the model aggregation phase?

Federated Learning
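
A code release is linked for GTFLAT; independent of it, the step it plugs into is weighted model aggregation, sketched below with the adaptive weights supplied externally (plain FedAvg would fix them to data-size proportions; the game-theoretic weight selection itself is abstracted away):

```python
import numpy as np

def weighted_fedavg(client_models, weights):
    """Aggregate per-client parameter vectors with adaptive weights.
    GTFLAT's contribution is choosing these weights via a game-theoretic
    procedure; this sketch takes them as given."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()              # normalize to a convex combination
    return sum(w * m for w, m in zip(weights, client_models))

clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
print(weighted_fedavg(clients, weights=[0.2, 0.5, 0.3]))  # [2.3, 1.0]
```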

DASS: Differentiable Architecture Search for Sparse neural networks

1 code implementation14 Jul 2022 Hamid Mousavi, Mohammad Loni, Mina Alibeigi, Masoud Daneshtalab

In this paper, we propose a new method to search for sparsity-friendly neural architectures.

Network Pruning

TAS: Ternarized Neural Architecture Search for Resource-Constrained Edge Devices

1 code implementation Design, Automation and Test in Europe Conference (DATE) 2022 Mohammad Loni, Hamid Mousavi, Mohammad Riazati, Masoud Daneshtalab, and Mikael Sjodin

This paper proposes TAS, a framework that drastically reduces the accuracy gap between TNNs and their full-precision counterparts by integrating quantization into the network design.

Neural Architecture Search Quantization
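
The DATE implementation is linked from the entry; as a generic sketch of the ternarization at the heart of TNNs, threshold-based quantization of weights to {-1, 0, +1} with a closed-form scaling factor in the style of Ternary Weight Networks (not necessarily TAS's exact scheme):

```python
import numpy as np

def ternarize(w, delta_scale=0.7):
    """Quantize weights to {-1, 0, +1} * alpha. Threshold and scale follow
    the common Ternary Weight Networks heuristic; TAS couples such
    quantization with architecture search, which this sketch omits."""
    delta = delta_scale * np.abs(w).mean()                 # sparsity threshold
    t = np.where(w > delta, 1, np.where(w < -delta, -1, 0))
    mask = t != 0
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # scale fit on nonzeros
    return alpha * t

w = np.array([0.8, -0.05, -0.6, 0.1, 0.9])
print(ternarize(w))  # approx [0.77, 0, -0.77, 0, 0.77]
```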

Computing with hardware neurons: spiking or classical? Perspectives of applied Spiking Neural Networks from the hardware side

no code implementations5 Feb 2016 Sergei Dytckov, Masoud Daneshtalab

If spike-driven applications that minimize the number of spikes are developed, spiking neural systems may reach the energy-efficiency level of classical neural systems.
