no code implementations • 22 Dec 2023 • Nikolaos Louloudakis, Perry Gibson, José Cano, Ajitha Rajan
Converting deep learning models between frameworks is a common step to maximize model compatibility across devices and to leverage optimization features that may be offered exclusively by one deep learning framework.
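As a concrete illustration of such a conversion (not taken from the paper), the sketch below exports a pre-trained PyTorch image recognition model to ONNX, a common interchange format; the model choice, input shape, and output path are assumptions for the example.

```python
import torch
import torchvision.models as models

# Load a pre-trained image recognition model (the choice is illustrative only).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# A dummy input fixes the graph's input shape during the tracing-based export.
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX so the model can be loaded by other frameworks and runtimes.
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",          # hypothetical output path
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```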
no code implementations • 10 Jun 2023 • Nikolaos Louloudakis, Perry Gibson, José Cano, Ajitha Rajan
To mitigate such errors, we present a novel approach to fault localization and repair of buggy deep learning framework conversions, focusing on pre-trained image recognition models.
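The excerpt gives no implementation details; below is a minimal sketch of the kind of differential check that fault localization for conversions typically starts from, comparing a source model's outputs with its converted counterpart's on the same input. The file name, input tensor name, and the notion of "large" discrepancy are assumptions for the example.

```python
import numpy as np
import torch
import torchvision.models as models
import onnxruntime as ort

# Source model (PyTorch) and its converted counterpart (ONNX).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
session = ort.InferenceSession("resnet50.onnx")  # hypothetical converted model

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    ref = model(x).numpy()
out = session.run(None, {"input": x.numpy()})[0]

# A large output discrepancy, or a label flip, signals a conversion fault
# worth localizing further (e.g., layer by layer).
max_diff = np.abs(ref - out).max()
print(f"max absolute output difference: {max_diff:.6f}")
if ref.argmax() != out.argmax():
    print("top-1 label disagreement: likely conversion fault")
```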
1 code implementation • 5 Jun 2023 • Nikolaos Louloudakis, Perry Gibson, José Cano, Ajitha Rajan
Owing to the increased use of image recognition tasks in safety-critical applications such as autonomous driving and medical imaging, it is imperative to assess their robustness to changes in the computational environment. The impact of parameters such as deep learning frameworks, compiler optimizations, and hardware devices on model performance and correctness is not yet well understood.
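As an illustrative (not paper-specified) way to probe one such parameter, the sketch below runs the same model on CPU and, when available, GPU, and checks whether top-1 predictions agree; disagreement would indicate sensitivity to the hardware device. The batch of random inputs is a stand-in, as a real assessment would use a labeled dataset.

```python
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
x = torch.randn(8, 3, 224, 224)  # stand-in batch for illustration

with torch.no_grad():
    cpu_pred = model(x).argmax(dim=1)

if torch.cuda.is_available():
    model_gpu = model.cuda()
    with torch.no_grad():
        gpu_pred = model_gpu(x.cuda()).argmax(dim=1).cpu()
    # Full agreement is expected; any flip hints at device-induced divergence.
    agreement = (cpu_pred == gpu_pred).float().mean().item()
    print(f"CPU/GPU top-1 agreement: {agreement:.2%}")
```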
1 code implementation • 2 Jun 2023 • Nikolaos Louloudakis, Perry Gibson, José Cano, Ajitha Rajan
Moreover, AI methods such as Deep Neural Networks (DNNs) are used to perform demanding, resource-intensive, and even safety-critical tasks. To increase the performance of deployed DNN models, a variety of Machine Learning (ML) compilers have been developed, enabling DNNs to run on a variety of hardware acceleration devices, such as GPUs and TPUs.
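The excerpt names no specific compiler; as one widely used example, the sketch below compiles an ONNX model with Apache TVM's Relay API for a CPU target. The model path, input name, and shape are assumptions, and swapping the target string (e.g., to "cuda") retargets the compilation to a GPU.

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Import the model into TVM's Relay intermediate representation.
onnx_model = onnx.load("resnet50.onnx")  # hypothetical model path
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile with optimizations enabled; "llvm" targets the host CPU.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module on the matching device.
dev = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()
print(out.shape)
```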
no code implementations • 1 Nov 2022 • Nikolaos Louloudakis, Perry Gibson, José Cano, Ajitha Rajan
On the other hand, model inference time was affected by all environment parameters, with changes in the hardware device having the largest effect.
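A simple way to observe such inference-time effects (an illustrative harness, not the paper's methodology) is to time repeated forward passes after a warm-up, then vary one environment parameter at a time and compare the medians.

```python
import time
import torch
import torchvision.models as models

def measure_latency(model, x, warmup=10, runs=100):
    """Median per-inference latency in milliseconds."""
    with torch.no_grad():
        for _ in range(warmup):   # warm-up absorbs one-time JIT/cache costs
            model(x)
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            model(x)
            times.append((time.perf_counter() - start) * 1e3)
    times.sort()
    return times[len(times) // 2]

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
x = torch.randn(1, 3, 224, 224)
print(f"CPU median latency: {measure_latency(model, x):.2f} ms")
# Re-run with the model and input moved to another device, or under another
# framework or compiler, to compare environment parameters. (GPU timing would
# additionally need torch.cuda.synchronize() around each measured run.)
```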