no code implementations • 4 May 2024 • Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar, Marc-Alexandre Côté
While Large Language Models (LLMs) have demonstrated significant promise as agents in interactive tasks, their substantial computational requirements and the restricted number of calls they permit constrain their practical utility, especially in long-horizon interactive tasks such as decision-making, or in scenarios involving continuously ongoing tasks.
no code implementations • 29 Sep 2021 • Maryam Hashemzadeh, Wesley Chung, Martha White
To enable better performance, we investigate the offline-online setting: the agent first trains on a batch of previously collected data and is then allowed to continue learning online during the evaluation phase.
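A minimal sketch of what this offline-online setting looks like in code, using tabular Q-learning on a toy chain MDP (the MDP, hyperparameters, and all names here are illustrative choices, not the paper's method): the agent first replays a fixed batch of logged transitions, then keeps applying updates online while it is being evaluated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha = 5, 2, 0.95, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    # Toy chain MDP: action 1 moves right, action 0 resets to the start.
    s2 = min(s + 1, n_states - 1) if a == 1 else 0
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

# --- Offline phase: learn only from a pre-collected batch. ---
batch, s = [], 0
for _ in range(500):
    a = rng.integers(n_actions)          # logged behavior policy (random)
    s2, r = step(s, a)
    batch.append((s, a, r, s2))
    s = s2
for s, a, r, s2 in batch * 10:           # replay the batch several times
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

# --- Online phase: keep learning during evaluation. ---
s, ret = 0, 0.0
for _ in range(200):
    a = Q[s].argmax() if rng.random() > 0.1 else rng.integers(n_actions)
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # online update
    ret += r
    s = s2
print(f"return collected while evaluating online: {ret}")
```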
no code implementations • Findings of the Association for Computational Linguistics 2020 • Maryam Hashemzadeh, Greta Kaufeld, Martha White, Andrea E. Martin, Alona Fyshe
The representations generated by many models of language (word embeddings, recurrent neural networks, and transformers) correlate with brain activity recorded while people read.
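One common way such correlations are measured is with an encoding model: regress from model representations to recorded brain responses and score predictions on held-out stimuli. The sketch below shows this with ridge regression and per-voxel Pearson correlation on synthetic data; the paper's actual analysis may differ, and every array here is fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, dim, n_voxels = 200, 50, 30
X = rng.normal(size=(n_words, dim))               # stand-in word embeddings
W_true = rng.normal(size=(dim, n_voxels))
Y = X @ W_true + rng.normal(scale=5.0, size=(n_words, n_voxels))  # fake recordings

train, test = slice(0, 150), slice(150, None)
lam = 1.0
# Ridge regression fit on training words: W = (X'X + lam*I)^-1 X'Y.
W = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(dim),
                    X[train].T @ Y[train])
pred = X[test] @ W

def pearson(a, b):
    # Per-voxel Pearson correlation between predicted and observed activity.
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

print("mean held-out correlation:", pearson(pred, Y[test]).mean())
```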
no code implementations • 22 Oct 2017 • Maryam Hashemzadeh, Reshad Hosseini, Majid Nili Ahmadabadi
Generalization and faster learning in a subspace result from the many-to-one mapping of experiences from the full state space onto each state of the subspace.
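To make the many-to-one mapping concrete, here is a tiny sketch (the grid task and the projection phi are my own toy choices, not the paper's construction): projecting a 2-D state onto one coordinate means every experience anywhere in a column updates the same subspace state, so each abstract state accumulates experience far faster than any individual full-space state.

```python
import numpy as np

rng = np.random.default_rng(0)
W = H = 8
phi = lambda s: s[0]        # many-to-one abstraction: (x, y) -> x

visits_full = np.zeros((W, H))
visits_sub = np.zeros(W)
for _ in range(300):
    s = (rng.integers(W), rng.integers(H))   # random experience in the full space
    visits_full[s] += 1
    visits_sub[phi(s)] += 1                  # pooled by the abstraction

print("mean visits per full-space state:", visits_full.mean())
print("mean visits per subspace state: ", visits_sub.mean())  # H times larger
```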