1 code implementation • 19 Jan 2024 • Adib Hasan, Ileana Rugina, Alex Wang
Large Language Models (LLMs) are susceptible to "jailbreaking" prompts, which can induce the generation of harmful content.
no code implementations • 22 Dec 2021 • Ileana Rugina, Rumen Dangovski, Mark Veillette, Pooya Khorrami, Brian Cheung, Olga Simek, Marin Soljačić
In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-life applications of machine learning by extending deep learning to the semi-supervised and few-shot domains.
1 code implementation • 20 Nov 2020 • Ileana Rugina, Rumen Dangovski, Li Jing, Preslav Nakov, Marin Soljačić
The attention mechanism is a key component of the neural revolution in Natural Language Processing (NLP).