Search Results for author: Edoardo Mosca

Found 9 papers, 2 papers with code

Detecting Word-Level Adversarial Text Attacks via SHapley Additive exPlanations

no code implementations RepL4NLP (ACL) 2022 Edoardo Mosca, Lukas Huber, Marc Alexander Kühn, Georg Groh

State-of-the-art machine learning models are prone to adversarial attacks: maliciously crafted inputs that fool the model into making a wrong prediction, often with high confidence.

Adversarial Text

SHAP-Based Explanation Methods: A Review for NLP Interpretability

no code implementations COLING 2022 Edoardo Mosca, Ferenc Szigeti, Stella Tragianni, Daniel Gallagher, Georg Groh

Model explanations are crucial for the transparent, safe, and trustworthy deployment of machine learning models.

Simpler becomes Harder: Do LLMs Exhibit a Coherent Behavior on Simplified Corpora?

2 code implementations 10 Apr 2024 Miriam Anschütz, Edoardo Mosca, Georg Groh

Text simplification seeks to improve readability while retaining the original content and meaning.

Text Simplification

IFAN: An Explainability-Focused Interaction Framework for Humans and NLP Models

no code implementations6 Mar 2023 Edoardo Mosca, Daryna Dementieva, Tohid Ebrahim Ajdari, Maximilian Kummeth, Kirill Gringauz, Yutong Zhou, Georg Groh

Interpretability and human oversight are fundamental pillars of deploying complex NLP models into real-world applications.
