1 code implementation • 17 Apr 2023 • Junyao Wang, Arnav Vaibhav Malawade, JunHong Zhou, Shih-Yuan Yu, Mohammad Abdullah Al Faruque
Effectively capturing intricate interactions among road users is of critical importance to achieving safe navigation for autonomous vehicles.
2 code implementations • 11 Nov 2021 • Arnav V. Malawade, Shih-Yuan Yu, Brandon Hsu, Deepan Muthirayan, Pramod P. Khargonekar, Mohammad A. Al Faruque
Finally, we demonstrate that sg2vec performs inference 9.3x faster with an 88.0% smaller model, 32.4% less power, and 92.8% less energy than the state-of-the-art method on the industry-standard Nvidia DRIVE PX 2 platform, making it more suitable for implementation on the edge.
no code implementations • 17 Sep 2021 • Trier Mortlock, Deepan Muthirayan, Shih-Yuan Yu, Pramod P. Khargonekar, Mohammad A. Al Faruque
In this paper, we detail the cognitive digital twin as the next stage of advancement of a digital twin that will help realize the vision of Industry 4.0.
1 code implementation • 2 Sep 2021 • Arnav Vaibhav Malawade, Shih-Yuan Yu, Brandon Hsu, Harsimrat Kaeley, Anurag Karra, Mohammad Abdullah Al Faruque
The goal of roadscene2vec is to enable research into the applications and capabilities of road scene-graphs by providing tools for generating scene-graphs, graph learning models to generate spatio-temporal scene-graph embeddings, and tools for visualizing and analyzing scene-graph-based methodologies.
1 code implementation • 26 Jul 2021 • Shih-Yuan Yu, Rozhin Yasaei, Qingrong Zhou, Tommy Nguyen, Mohammad Abdullah Al Faruque
To encourage broader research in this area, we propose HW2VEC, an open-source graph learning tool that lowers the barrier to entry for newcomers studying hardware security applications with graphs.
no code implementations • 19 Jul 2021 • Rozhin Yasaei, Shih-Yuan Yu, Emad Kasaeyan Naeini, Mohammad Abdullah Al Faruque
In this work, we propose a novel methodology, GNN4IP, to assess similarities between circuits and detect IP piracy.
3 code implementations • 31 Aug 2020 • Shih-Yuan Yu, Arnav V. Malawade, Deepan Muthirayan, Pramod P. Khargonekar, Mohammad A. Al Faruque
Finally, we demonstrate that the use of spatial and temporal attention layers improves our model's performance by 2.7% and 0.7% respectively, and increases its explainability.