Search Results for author: Walid Ahmed

Found 8 papers, 0 papers with code

SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection

no code implementations · 27 Jan 2024 · Foozhan Ataiefard, Walid Ahmed, Habib Hajimolahoseini, Saina Asani, Farnoosh Javadi, Mohammad Hassanpour, Omar Mohamed Awad, Austin Wen, Kangling Liu, Yang Liu

Our method does not add any parameters to the ViT model and aims to find the best trade-off between training throughput and a 0% loss in the Top-1 accuracy of the final model.
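The paper lists no code, so as a rough illustration only: below is a minimal PyTorch sketch of a token-level skip, where low-importance tokens bypass a transformer block and are merged back afterwards, adding no parameters. The importance score (token L2 norm) and the keep ratio are assumptions made here for illustration, not the paper's actual mechanism.

```python
import torch

def skip_block(block, x, keep_ratio=0.7):
    """Route only the highest-scoring tokens through `block`.

    Hypothetical sketch: importance is approximated by the L2 norm
    of each token embedding; SkipViT's actual scoring may differ.
    Skipped tokens bypass the block unchanged, so no parameters
    are added to the model.
    """
    B, N, D = x.shape
    n_keep = max(1, int(N * keep_ratio))
    scores = x.norm(dim=-1)                        # (B, N) importance proxy
    keep_idx = scores.topk(n_keep, dim=1).indices  # tokens worth processing
    idx = keep_idx.unsqueeze(-1).expand(-1, -1, D)
    processed = block(torch.gather(x, 1, idx))     # heavy compute on the subset
    out = x.clone()                                # remaining tokens skip the block
    return out.scatter_(1, idx, processed)
```

With keep_ratio=0.7, roughly 30% of tokens skip each wrapped block, trading attention FLOPs for training throughput.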

SwiftLearn: A Data-Efficient Training Method of Deep Learning Models using Importance Sampling

no code implementations · 25 Nov 2023 · Habib Hajimolahoseini, Omar Mohamed Awad, Walid Ahmed, Austin Wen, Saina Asani, Mohammad Hassanpour, Farnoosh Javadi, Mehdi Ahmadi, Foozhan Ataiefard, Kangling Liu, Yang Liu

In this paper, we present SwiftLearn, a data-efficient approach to accelerate training of deep learning models using a subset of data samples selected during the warm-up stages of training.
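No official implementation is listed; the following is a speculative Python sketch of the recipe the abstract outlines: score every sample during a warm-up pass and keep only the most important subset for the remaining epochs. Using the per-sample loss as the importance measure is an assumption here, not necessarily the paper's metric.

```python
import torch
from torch.utils.data import DataLoader, Subset

def select_important_subset(model, dataset, loss_fn, keep_frac=0.5, device="cpu"):
    """Warm-up scoring pass: rank samples by loss, keep the top fraction.

    Hypothetical sketch of importance-based data selection; `loss_fn`
    must be constructed with reduction="none" so it returns one loss
    per sample (e.g. nn.CrossEntropyLoss(reduction="none")).
    """
    model.eval()
    scores = []
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=256, shuffle=False):
            x, y = x.to(device), y.to(device)
            scores.append(loss_fn(model(x), y).cpu())  # per-sample losses
    scores = torch.cat(scores)
    n_keep = int(len(dataset) * keep_frac)
    keep_idx = scores.topk(n_keep).indices.tolist()    # most informative samples
    return Subset(dataset, keep_idx)                   # train on this subset
```

The returned Subset can be wrapped in a fresh DataLoader for the remaining epochs, shrinking each epoch's cost by roughly the discarded fraction.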

GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values

no code implementations · 6 Nov 2023 · Farnoosh Javadi, Walid Ahmed, Habib Hajimolahoseini, Foozhan Ataiefard, Mohammad Hassanpour, Saina Asani, Austin Wen, Omar Mohamed Awad, Kangling Liu, Yang Liu

We tested our method on ViT, which achieved an approximate 0.3% increase in accuracy while reducing the model size by about 4% in the task of image classification.

Image Classification
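With no code released, here is a hedged PyTorch sketch of the general idea in the title: several query heads share each key/value head, shrinking the K/V projections relative to standard multi-head attention. The head counts and the exact grouping scheme are assumptions, not GQKVA's precise formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedQKVAttention(nn.Module):
    """Sketch: n_q_heads query heads share n_kv_heads key/value heads."""

    def __init__(self, dim, n_q_heads=8, n_kv_heads=2):
        super().__init__()
        assert dim % n_q_heads == 0 and n_q_heads % n_kv_heads == 0
        self.hd, self.nq, self.nkv = dim // n_q_heads, n_q_heads, n_kv_heads
        self.q = nn.Linear(dim, n_q_heads * self.hd)
        self.kv = nn.Linear(dim, 2 * n_kv_heads * self.hd)  # smaller than full QKV
        self.o = nn.Linear(n_q_heads * self.hd, dim)

    def forward(self, x):
        B, N, _ = x.shape
        q = self.q(x).view(B, N, self.nq, self.hd).transpose(1, 2)
        k, v = self.kv(x).view(B, N, 2, self.nkv, self.hd).permute(2, 0, 3, 1, 4)
        rep = self.nq // self.nkv                  # query heads per K/V group
        k, v = k.repeat_interleave(rep, dim=1), v.repeat_interleave(rep, dim=1)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.hd ** 0.5, dim=-1)
        return self.o((attn @ v).transpose(1, 2).reshape(B, N, -1))
```

Fewer K/V heads means fewer projection parameters, which is at least consistent with the roughly 4% size reduction the abstract reports.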

Speeding up Resnet Architecture with Layers Targeted Low Rank Decomposition

no code implementations · 21 Sep 2023 · Walid Ahmed, Habib Hajimolahoseini, Austin Wen, Yang Liu

Compressing a neural network can help speed up both its training and its inference.

Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization

no code implementations · 7 Sep 2023 · Habib Hajimolahoseini, Walid Ahmed, Yang Liu

Low Rank Decomposition (LRD) is a model compression technique applied to the weight tensors of deep learning models in order to reduce the number of trainable parameters and computational complexity.

Model Compression · Quantization
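For readers unfamiliar with LRD, the core operation can be sketched in a few lines of PyTorch: factor a layer's weight matrix with a truncated SVD into two thinner layers. This is a generic illustration only; the paper's contributions, sequential freezing and rank quantization, are omitted here.

```python
import torch
import torch.nn as nn

def low_rank_decompose(linear: nn.Linear, rank: int) -> nn.Sequential:
    """Replace an (out x in) linear layer with two factors of rank r.

    Parameters drop from out*in to r*(in+out) when r is small, since
    W ~= U @ diag(S) @ Vh with the SVD truncated to the top r values.
    """
    W = linear.weight.data                        # (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U, S, Vh = U[:, :rank], S[:rank], Vh[:rank]
    first = nn.Linear(W.shape[1], rank, bias=False)
    second = nn.Linear(rank, W.shape[0], bias=linear.bias is not None)
    first.weight.data = torch.diag(S) @ Vh        # (rank, in)
    second.weight.data = U                        # (out, rank)
    if linear.bias is not None:
        second.bias.data = linear.bias.data
    return nn.Sequential(first, second)
```

The paper's sequential freezing and rank quantization would operate on top of factors like these; the abstract does not spell out the schedule, so none is sketched here.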

Improving Resnet-9 Generalization Trained on Small Datasets

no code implementations · 7 Sep 2023 · Omar Mohamed Awad, Habib Hajimolahoseini, Michael Lim, Gurpreet Gosal, Walid Ahmed, Yang Liu, Gordon Deng

This paper presents our approach, which won first prize at the ICLR competition on Hardware Aware Efficient Training.

Image Classification

Ensemble-based Adaptive Single-shot Multi-box Detector

no code implementations · 17 Aug 2018 · Viral Thakar, Walid Ahmed, Mohammad M Soltani, Jia Yuan Yu

The proposed approach uses data to reduce the uncertainty in selecting the best aspect ratios for the default boxes, and it improves the performance of SSD on datasets containing small and complex objects (e.g., equipment at construction sites).
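No implementation accompanies the paper, but one plausible reading of "uses data to ... select the best aspect ratios" is clustering the ground-truth box shapes, sketched below with scikit-learn's k-means. The estimator choice and cluster count are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

def default_box_aspect_ratios(boxes: np.ndarray, n_ratios: int = 5) -> list:
    """Cluster training-box aspect ratios to pick SSD default boxes.

    boxes: (N, 2) array of ground-truth (width, height) pairs.
    Sketch only; the paper's adaptive, ensemble-based procedure may
    weight or select clusters differently.
    """
    ratios = (boxes[:, 0] / boxes[:, 1]).reshape(-1, 1)    # width / height
    km = KMeans(n_clusters=n_ratios, n_init=10, random_state=0).fit(ratios)
    return sorted(km.cluster_centers_.ravel().tolist())    # default-box ratios
```

The returned ratios would replace SSD's hand-picked defaults (e.g., {1, 2, 3, 1/2, 1/3}) when configuring the detector's anchor boxes.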

Efficient Single-Shot Multibox Detector for Construction Site Monitoring

no code implementations · 17 Aug 2018 · Viral Thakar, Himani Saini, Walid Ahmed, Mohammad M Soltani, Ahmed Aly, Jia Yuan Yu

Asset monitoring in construction sites is an intricate, manually intensive task that can benefit greatly from automated solutions engineered using deep neural networks.

Clustering
