no code implementations • 16 May 2024 • Milan Bhan, Jean-Noel Vittaut, Nina Achache, Victor Legrand, Nicolas Chesneau, Annabelle Blangero, Juliette Murris, Marie-Jeanne Lesot
In this work, we propose to apply counterfactual generation methods from the eXplainable AI (XAI) field to target and mitigate textual toxicity.
no code implementations • 18 Mar 2024 • Natalia De La Calzada, Théo Alves Da Costa, Annabelle Blangero, Nicolas Chesneau
This paper investigates public perceptions of climate change and biodiversity loss by analyzing questions submitted to the ClimateQ&A platform.
no code implementations • 27 Mar 2023 • Milan Bhan, Nina Achache, Victor Legrand, Annabelle Blangero, Nicolas Chesneau
A human-grounded experiment is conducted to evaluate CLS-A and compare it to other interpretability methods.