2 code implementations • 22 Apr 2024 • Javier Rando, Francesco Croce, Kryštof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, Florian Tramèr
Large language models are aligned to be safe, which prevents users from generating harmful content such as misinformation or instructions for illegal activities.