no code implementations • 1 Mar 2024 • Salah Ghamizi, Jun Cao, Aoxiang Ma, Pedro Rodriguez
PowerFlowMultiNet outperforms traditional methods and other deep learning approaches in terms of accuracy and computational speed.
no code implementations • 8 Nov 2023 • Thibault Simonetto, Salah Ghamizi, Antoine Desjardins, Maxime Cordy, Yves Le Traon
State-of-the-art deep learning models for tabular data have recently achieved performance acceptable for deployment in industrial settings.
no code implementations • 11 Sep 2023 • Salah Ghamizi, Maxime Cordy, Yuejun Guo, Mike Papadakis, Yves Le Traon
To this end, we survey the related literature and identify 10 commonly adopted empirical evaluation hazards that may significantly impact experimental results.
no code implementations • 24 May 2023 • Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy
Natural Language Processing (NLP) models based on Machine Learning (ML) are susceptible to adversarial attacks -- malicious algorithms that imperceptibly modify input text to force models into making incorrect predictions.
1 code implementation • 6 Feb 2023 • Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon
While leveraging additional training data is a well-established way to improve adversarial robustness, it incurs the unavoidable cost of data collection and the heavy computation required to train models.
no code implementations • 15 Dec 2022 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon
Vulnerability to adversarial attacks is a well-known weakness of Deep Neural Networks.
1 code implementation • 7 Feb 2022 • Salijona Dyrmishi, Salah Ghamizi, Thibault Simonetto, Yves Le Traon, Maxime Cordy
While the literature on security attacks and defense of Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concern about the under-explored field of realistic adversarial attacks and their implications on the robustness of real-world systems.
no code implementations • 2 Dec 2021 • Thibault Simonetto, Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy, Yves Le Traon
We propose a unified framework to generate feasible adversarial examples that satisfy given domain constraints.
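The core idea of constrained adversarial generation can be illustrated with a toy sketch (this is an illustrative example, not the paper's actual framework): perturb a tabular input, then repair it so it still satisfies the domain constraints. Here the hypothetical constraints are that features lie in [0, 1] and that the first two features sum to 1.

```python
import numpy as np

def project(x):
    """Repair a candidate so it satisfies the (hypothetical) domain constraints:
    all features in [0, 1], and x[0] + x[1] == 1."""
    x = np.clip(x, 0.0, 1.0)          # box constraint
    s = x[0] + x[1]
    if s > 0:
        x[0], x[1] = x[0] / s, x[1] / s  # restore the equality constraint
    return x

x = np.array([0.3, 0.7, 0.5])            # original feasible input
delta = np.array([0.2, -0.1, 0.9])       # adversarial perturbation (illustrative)
x_adv = project(x + delta)               # perturb, then project back to feasibility
```

A real framework would alternate such projection steps with gradient-based attack steps, and support richer constraint languages (categorical features, non-linear relations).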
1 code implementation • 26 Oct 2021 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon
Vulnerability to adversarial attacks is a well-known weakness of Deep Neural Networks.
no code implementations • 14 Nov 2019 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon
The key idea of our method is to use deep neural networks for image classification and adversarial attacks to embed secret information within images.
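The encoding idea can be sketched with a toy example (a minimal illustration, not the paper's method): a targeted adversarial perturbation pushes a classifier's prediction to the class that equals the secret bit, and the receiver decodes by simply classifying the image. Here a tiny linear two-class model over 16-pixel "images" stands in for a real DNN.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))  # toy 2-class linear classifier (stand-in for a DNN)

def predict(x):
    return int(np.argmax(W @ x))

def embed_bit(x, bit, eps=1.0, steps=300, lr=0.1):
    """Targeted gradient attack: nudge x (within an eps-ball) until the
    classifier outputs `bit`."""
    x_adv = x.copy()
    for _ in range(steps):
        logits = W @ x_adv
        p = np.exp(logits - logits.max()); p /= p.sum()   # softmax
        grad = W.T @ (p - np.eye(2)[bit])                 # d(CE toward bit)/dx
        x_adv = np.clip(x_adv - lr * grad, x - eps, x + eps)
        if predict(x_adv) == bit:
            break
    return x_adv

cover = rng.normal(size=16)                 # cover "image"
secret = [1, 0, 1]                          # bits to hide
stego = [embed_bit(cover, b) for b in secret]
decoded = [predict(s) for s in stego]       # receiver just runs the classifier
```

In practice one image per bit is wasteful; the actual method uses richer label spaces and stronger attacks, but the embed-via-attack / decode-via-classify loop is the same shape.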
1 code implementation • 9 Apr 2019 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon
First, we model the variability of DNN architectures with a Feature Model (FM) that generalizes over existing architectures.
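A feature model over DNN architectures can be sketched as a set of choice points plus cross-tree constraints; valid architectures are the configurations that satisfy every constraint. The features and constraint below are hypothetical, chosen only to show the enumeration pattern.

```python
from itertools import product

# Hypothetical feature model: each entry is a choice point for a small CNN.
features = {
    "conv_blocks": [1, 2, 3],
    "pooling": ["max", "avg", None],
    "dropout": [True, False],
}

# Illustrative cross-tree constraint: dropout requires at least 2 conv blocks.
def is_valid(cfg):
    return not (cfg["dropout"] and cfg["conv_blocks"] < 2)

configs = [dict(zip(features, vals)) for vals in product(*features.values())]
valid = [c for c in configs if is_valid(c)]   # the architectures the FM permits
```

Each valid configuration can then be compiled into a concrete network, which is how a feature model generalizes over a family of existing architectures.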