Search Results for author: Salah Ghamizi

Found 11 papers, 4 papers with code

PowerFlowMultiNet: Multigraph Neural Networks for Unbalanced Three-Phase Distribution Systems

no code implementations • 1 Mar 2024 • Salah Ghamizi, Jun Cao, Aoxiang Ma, Pedro Rodriguez

PowerFlowMultiNet outperforms traditional methods and other deep learning approaches in both accuracy and computational speed.

Graph Embedding

Hazards in Deep Learning Testing: Prevalence, Impact and Recommendations

no code implementations • 11 Sep 2023 • Salah Ghamizi, Maxime Cordy, Yuejun Guo, Mike Papadakis, and Yves Le Traon

To this end, we survey the related literature and identify 10 commonly adopted empirical evaluation hazards that may significantly impact experimental results.

How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks

no code implementations • 24 May 2023 • Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy

Natural Language Processing (NLP) models based on Machine Learning (ML) are susceptible to adversarial attacks -- malicious algorithms that imperceptibly modify input text to force models into making incorrect predictions.

Adversarial Text

GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks

1 code implementation • 6 Feb 2023 • Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon

While leveraging additional training data is a well-established way to improve adversarial robustness, it incurs the unavoidable cost of data collection and the heavy computational cost of training models.

Adversarial Robustness • Data Augmentation +1

On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks

1 code implementation • 7 Feb 2022 • Salijona Dyrmishi, Salah Ghamizi, Thibault Simonetto, Yves Le Traon, Maxime Cordy

While the literature on security attacks and defenses for Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concerns about the under-explored field of realistic adversarial attacks and their implications for the robustness of real-world systems.

Adversarial Robustness • Malware Detection +2

Adversarial Embedding: A robust and elusive Steganography and Watermarking technique

no code implementations • 14 Nov 2019 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon

The key idea of our method is to use deep neural networks for image classification and adversarial attacks to embed secret information within images.

Adversarial Attack • Image Classification +2

Automated Search for Configurations of Deep Neural Network Architectures

1 code implementation • 9 Apr 2019 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon

First, we model the variability of DNN architectures with a Feature Model (FM) that generalizes over existing architectures.

Image Classification • valid
