Search Results for author: Luca Demetrio

Found 17 papers, 9 papers with code

Living-off-The-Land Reverse-Shell Detection by Informed Data Augmentation

no code implementations • 28 Feb 2024 • Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Fabio Roli

Living-off-the-land (LOTL) offensive methodologies carry out malicious actions through chains of commands executed by legitimate applications, which are identifiable only through analysis of system logs.

Data Augmentation
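To illustrate the kind of data augmentation this line of work applies to command-line telemetry, here is a minimal, hypothetical sketch that generates semantically equivalent variants of a shell command via benign syntax-level rewrites. The specific transformations are illustrative assumptions, not the authors' method:

```python
import random

def augment_command(cmd: str, rng: random.Random) -> str:
    """Produce a semantically equivalent variant of a shell command
    by applying benign, syntax-level rewrites (illustrative only)."""
    out = []
    for tok in cmd.split():
        # Randomly quote flags: `-i` -> `"-i"` (same meaning to the shell).
        if tok.startswith("-") and rng.random() < 0.5:
            tok = f'"{tok}"'
        out.append(tok)
    # Occasionally prepend a no-op variable assignment.
    if rng.random() < 0.5:
        out.insert(0, "X=1")
    return " ".join(out)

rng = random.Random(0)
variants = {augment_command("bash -i -c script.sh", rng) for _ in range(20)}
```

Training on such variants would expose a detector to many surface forms of the same underlying behavior.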

Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates

no code implementations • 27 Feb 2024 • Daniele Angioni, Luca Demetrio, Maura Pintor, Luca Oneto, Davide Anguita, Battista Biggio, Fabio Roli

In this work, we show that this problem also affects robustness to adversarial examples, thereby hindering the development of secure model update practices.

Adversarial Robustness • regression

Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors

1 code implementation • 4 Oct 2023 • Biagio Montaruli, Luca Demetrio, Maura Pintor, Luca Compagna, Davide Balzarotti, Battista Biggio

Machine-learning phishing webpage detectors (ML-PWD) have been shown to suffer from adversarial manipulations of the HTML code of the input webpage.
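A minimal sketch of the kind of rendering-preserving HTML manipulation such attacks exploit: encoding visible text as HTML entities changes the raw markup a detector sees while the browser renders the page identically. This particular mutation is an illustrative assumption, not necessarily one used in the paper:

```python
def obfuscate_text(html_src: str, word: str) -> str:
    """Replace a visible word with its HTML-entity encoding: the page
    renders identically, but the raw HTML fed to a detector changes."""
    encoded = "".join(f"&#{ord(c)};" for c in word)
    return html_src.replace(word, encoded)

page = "<html><body><h1>Login to your account</h1></body></html>"
adv = obfuscate_text(page, "Login")
# The literal string "Login" no longer appears in the adversarial HTML.
```

A query-efficient attack would search over many such mutations, keeping those that most reduce the detector's phishing score.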

Nebula: Self-Attention for Dynamic Malware Analysis

1 code implementation • 19 Sep 2023 • Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Fabio Roli

Dynamic analysis enables detecting Windows malware by executing programs in a controlled environment, and storing their actions in log reports.

Malware Analysis • Malware Detection

Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks

no code implementations • 13 Sep 2023 • Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli

We empirically show that this defense improves the performance of RGB-D systems against adversarial examples, even when these are computed ad hoc to circumvent the detection mechanism, and that it is also more effective than adversarial training.

Object Recognition

Adversarial ModSecurity: Countering Adversarial SQL Injections with Robust Machine Learning

no code implementations • 9 Aug 2023 • Biagio Montaruli, Luca Demetrio, Andrea Valenza, Luca Compagna, Davide Ariu, Luca Piras, Davide Balzarotti, Battista Biggio

To overcome these issues, we design a robust machine learning model, named AdvModSec, which uses the CRS rules as input features and is trained to detect adversarial SQLi attacks.

Adversarial Robustness

Explaining Machine Learning DGA Detectors from DNS Traffic Data

no code implementations • 10 Aug 2022 • Giorgio Piras, Maura Pintor, Luca Demetrio, Battista Biggio

One of the most common causes of downtime in online systems is a widely popular cyber attack known as Distributed Denial of Service (DDoS), in which a network of infected devices (a botnet) is exploited to flood the computational capacity of services at the command of an attacker.

Decision Making
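As background on the kind of features DGA detectors commonly rely on, one classic (and here merely illustrative) signal is the character entropy of a domain label, which tends to be higher for algorithmically generated names than for dictionary words:

```python
import math
from collections import Counter

def char_entropy(domain: str) -> float:
    """Shannon entropy (bits per character) of a domain's first label."""
    label = domain.split(".")[0].lower()
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Algorithmically generated names tend toward higher entropy
# than human-chosen, dictionary-like names.
low = char_entropy("google.com")
high = char_entropy("xj4kq9vz2pwm.com")
```

Explainability work such as the paper above asks which of these learned signals actually drive a trained detector's decisions.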

Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware

no code implementations • 12 Jul 2022 • Luca Demetrio, Battista Biggio, Fabio Roli

While machine learning is vulnerable to adversarial examples, it still lacks systematic procedures and tools for evaluating its security in different application contexts.

BIG-bench Machine Learning • Malware Detection

Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors

1 code implementation • 26 May 2022 • Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai

Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years.

Autonomous Driving • Object +2
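For context on why non-maximum suppression (NMS) is an attractive target: a naive greedy NMS performs pairwise IoU comparisons, so its cost grows quadratically with the number of candidate boxes an attacker can force the detector to emit. A minimal sketch of greedy NMS (not the paper's attack):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thr=0.5):
    """Greedy NMS: keep highest-scoring boxes, drop overlapping ones."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        # O(n) IoU checks per kept box -> O(n^2) worst case overall,
        # which is the cost a "sponge" input of many boxes inflates.
        order = [j for j in order if iou(boxes[i], boxes[j]) < thr]
    return keep
```

A sponge-style input that yields thousands of low-overlap candidates keeps nearly all of them past suppression, inflating post-processing latency.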

Adversarial EXEmples: Functionality-preserving Optimization of Adversarial Windows Malware

no code implementations • ICML Workshop AML 2021 • Luca Demetrio, Battista Biggio, Giovanni Lagorio, Alessandro Armando, Fabio Roli

Windows malware classifiers that rely on static analysis have been proven vulnerable to adversarial EXEmples, i.e., malware samples carefully manipulated to evade detection.

Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection

2 code implementations • 17 Aug 2020 • Luca Demetrio, Scott E. Coull, Battista Biggio, Giovanni Lagorio, Alessandro Armando, Fabio Roli

Recent work has shown that adversarial Windows malware samples - referred to as adversarial EXEmples in this paper - can bypass machine learning-based detection relying on static code analysis by perturbing relatively few input bytes.

BIG-bench Machine Learning • Malware Detection
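One of the simplest functionality-preserving manipulations discussed in this line of work is appending bytes past the end of a PE file (the "overlay"), which the loader never maps at run time. A minimal sketch, using an arbitrary payload rather than an optimized adversarial one:

```python
def append_overlay(pe_bytes: bytes, payload: bytes) -> bytes:
    """Append bytes past the end of a PE file (the 'overlay').
    The loader never maps these bytes, so execution is unchanged,
    but byte-based static features of the file do change."""
    return pe_bytes + payload

# Illustrative only: a fake PE-like blob, not a real executable.
original = b"MZ" + bytes(62) + b"PE\x00\x00" + bytes(100)
adv = append_overlay(original, b"\x41" * 256)
assert adv[:len(original)] == original   # original content untouched
```

Attacks in this family then optimize the injected byte values to push a byte-based detector's score below its decision threshold.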

Functionality-preserving Black-box Optimization of Adversarial Windows Malware

2 code implementations • 30 Mar 2020 • Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, Alessandro Armando

Windows malware detectors based on machine learning are vulnerable to adversarial examples, even if the attacker is only given black-box query access to the model.

Cryptography and Security

Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries

2 code implementations • 11 Jan 2019 • Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, Alessandro Armando

Based on this finding, we propose a novel attack algorithm that generates adversarial malware binaries by changing only a few tens of bytes in the file header.

Cryptography and Security
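The header-only manipulation described above can be sketched as follows: the DOS-header bytes between the `MZ` magic (offset 0x02) and the `e_lfanew` field (offset 0x3C) are unused by the modern Windows loader, so rewriting them preserves execution while changing the raw bytes a convolutional detector sees. This is a simplified random version, not the authors' optimizer, which searches for evasive byte values:

```python
import random

# Editable DOS-header region: after the 'MZ' magic, before e_lfanew at 0x3C.
EDITABLE = range(0x02, 0x3C)

def perturb_dos_header(pe_bytes: bytes, rng: random.Random) -> bytes:
    """Randomly rewrite the unused DOS-header bytes of a PE-like blob."""
    buf = bytearray(pe_bytes)
    for off in EDITABLE:
        buf[off] = rng.randrange(256)
    return bytes(buf)

rng = random.Random(0)
# Illustrative only: a fake PE-like blob, not a real executable.
original = b"MZ" + bytes(62) + b"PE\x00\x00" + bytes(100)
adv = perturb_dos_header(original, rng)
assert adv[:2] == b"MZ"                  # magic preserved
assert adv[0x3C:] == original[0x3C:]     # e_lfanew and the rest untouched
```

Because the perturbation touches only these few tens of bytes, the file still loads and runs exactly as before.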
