no code implementations • 3 May 2024 • Yaoyiran Li, Xiang Zhai, Moustafa Alzantot, Keyi Yu, Ivan Vulić, Anna Korhonen, Mohamed Hammad
Building upon the success of Large Language Models (LLMs) in a variety of tasks, researchers have recently explored using LLMs that are pretrained on vast corpora of text for sequential recommendation.
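As a rough illustration of the prompt-based setup this line of work describes, the sketch below formats a user's interaction history into a text prompt that a pretrained LLM could complete or rank. This is a minimal assumption-laden sketch, not the paper's actual method; the function name, prompt wording, and item lists are all hypothetical.

```python
# Hypothetical sketch: turning a user's interaction history into an LLM
# prompt for sequential recommendation. All names and wording here are
# illustrative assumptions, not the method from the paper above.

def build_recommendation_prompt(history, candidates):
    """Format an ordered interaction history and a candidate set into a
    plain-text prompt a pretrained LLM could be asked to complete."""
    lines = ["A user interacted with the following items, in order:"]
    lines += [f"{i + 1}. {item}" for i, item in enumerate(history)]
    lines.append("Which of these candidate items is the best next recommendation?")
    lines += [f"- {c}" for c in candidates]
    return "\n".join(lines)

prompt = build_recommendation_prompt(
    history=["The Matrix", "Inception"],
    candidates=["Interstellar", "Titanic"],
)
print(prompt)
```

In practice the resulting prompt would be sent to an LLM, whose output is parsed into a ranked item list; the sketch stops at prompt construction since the decoding and ranking details vary by system.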
no code implementations • 9 Jun 2020 • Rohan Thavarajah, Xiang Zhai, Zheren Ma, David Castineira
The network can be trained within minutes on limited training data, and its accuracy scales favorably with the amount of training data supplied.