no code implementations • 22 Feb 2024 • Michael J. Ryan, William Held, Diyi Yang
Before deploying Large Language Models (LLMs) for user-facing applications, developers align them to user preferences through a variety of procedures, such as Reinforcement Learning From Human Feedback (RLHF) and Direct Preference Optimization (DPO).
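To make the DPO procedure mentioned above concrete, here is a minimal sketch of the standard per-example DPO objective (Rafailov et al.'s formulation, not this paper's contribution): the loss is the negative log-sigmoid of a scaled margin between how much the policy prefers the chosen response over the rejected one, relative to a frozen reference model. The function name and arguments are illustrative.

```python
import math

def dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * margin), where the margin
    is the policy's log-ratio preference for the chosen response minus the
    reference model's log-ratio preference for the same pair."""
    margin = ((policy_logp_chosen - ref_logp_chosen)
              - (policy_logp_rejected - ref_logp_rejected))
    # Numerically plain sigmoid; fine for illustration.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy and reference agree exactly (zero margin), the loss is log 2; as the policy's preference for the chosen response grows beyond the reference's, the loss falls toward zero. The `beta` temperature controls how strongly deviations from the reference are penalized.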
1 code implementation • 25 May 2023 • Michael J. Ryan, Tarek Naous, Wei Xu
However, less work has been done on multilingual text simplification due to the lack of a diverse evaluation benchmark that covers complex-simple sentence pairs in many languages.
Ranked #1 on Text Simplification on WikiLargeFR
1 code implementation • 23 May 2023 • Tarek Naous, Michael J. Ryan, Anton Lavrouk, Mohit Chandra, Wei Xu
We present a systematic study and comprehensive evaluation of large language models for automatic multilingual readability assessment.
no code implementations • 23 May 2023 • Tarek Naous, Michael J. Ryan, Alan Ritter, Wei Xu
In this paper, we show that multilingual and Arabic monolingual LMs exhibit bias towards entities associated with Western culture.
no code implementations • 2 Apr 2022 • Yeahia Sarker, Abdullah-Al-Zubaer Imran, Md Hafiz Ahamed, Ripon K. Chakrabortty, Michael J. Ryan, Sajal K. Das
To capture fine detail across varied receptive regions and produce high-quality synthetic images, NLVAE is introduced as a self-supervised strategy that reconstructs high-resolution images using disentangled information from the non-local neighbourhood.