no code implementations • 29 Sep 2021 • Maryam Hashemzadeh, Wesley Chung, Martha White
To enable better performance, we investigate the offline-online setting: the agent trains on a batch of previously collected data but is also allowed to continue learning online during the evaluation phase.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Maryam Hashemzadeh, Greta Kaufeld, Martha White, Andrea E. Martin, Alona Fyshe
The representations generated by many models of language (word embeddings, recurrent neural networks, and transformers) correlate with brain activity recorded while people read.
no code implementations • 22 Oct 2017 • Maryam Hashemzadeh, Reshad Hosseini, Majid Nili Ahmadabadi
Generalization and faster learning in a subspace arise from the many-to-one mapping of experiences from the full space onto each state in the subspace.
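The many-to-one mapping can be sketched as follows. This is a minimal illustration, not the authors' implementation: the projection function, dimension choice, and running-mean update are all assumptions made purely to show how several full-space states pool their experience into one subspace state.

```python
from collections import defaultdict

def project(full_state, kept_dims):
    """Hypothetical projection: keep only a subset of state dimensions."""
    return tuple(full_state[i] for i in kept_dims)

# Full-space states that differ only in the dropped dimension,
# each paired with an observed reward.
experiences = [((0, 0), 1.0), ((0, 1), 1.0), ((0, 2), 1.0)]

values = defaultdict(float)
counts = defaultdict(int)
for full_state, reward in experiences:
    s = project(full_state, kept_dims=[0])  # all three map to subspace state (0,)
    counts[s] += 1
    values[s] += (reward - values[s]) / counts[s]  # incremental running mean

print(values)  # three full-space experiences update a single subspace estimate
```

Because all three experiences land on the same subspace state, its value estimate is updated three times as often as any single full-space state would be, which is the source of the faster learning described above.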