1 May 2024 • Zhenning Yang, Ryan Krawec, Liang-Yuan Wu
As the deployment of NLP systems in critical applications grows, ensuring the robustness of large language models (LLMs) against adversarial attacks becomes increasingly important.