Search Results for author: Angeliki Giannou

Found 7 papers, 1 paper with code

How Well Can Transformers Emulate In-context Newton's Method?

no code implementations5 Mar 2024 Angeliki Giannou, Liu Yang, Tianhao Wang, Dimitris Papailiopoulos, Jason D. Lee

Recent studies have suggested that Transformers can implement first-order optimization algorithms for in-context learning, and even second-order ones in the case of linear regression.

In-Context Learning, Regression
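As a rough illustration of the second-order method named in the title, here is a minimal NumPy sketch of Newton's method on a least-squares linear-regression objective; the data, function name, and step count are illustrative assumptions, not the paper's transformer construction.

```python
import numpy as np

# Sketch: Newton's method on the least-squares loss 1/(2n) * ||Xw - y||^2.
# Because the loss is quadratic, a single Newton step already reaches the
# exact minimizer; further steps simply stay there.
def newton_linear_regression(X, y, steps=3):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n        # first-order information
        hess = X.T @ X / n                  # second-order information
        w -= np.linalg.solve(hess, grad)    # Newton update
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=200)
print(np.linalg.norm(newton_linear_regression(X, y) - w_true))  # close to zero
```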

Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements

no code implementations28 Jun 2023 Emmanouil-Vasileios Vlatakis-Gkaragkounis, Angeliki Giannou, Yudong Chen, Qiaomin Xie

Our work endeavors to elucidate and quantify the probabilistic structures intrinsic to these algorithms.
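The snippet above is only a one-line summary; as a hedged, assumption-laden illustration of the kind of stochastic method and ergodic behavior the title refers to, the sketch below runs constant-step stochastic gradient descent-ascent on a strongly monotone saddle problem and compares the last iterate with its ergodic (time-averaged) iterate. The problem instance, step size, and noise level are made up for illustration.

```python
import numpy as np

# Sketch: constant-step stochastic gradient descent-ascent (SGDA) on
#     min_x max_y  0.5*||x||^2 - 0.5*||y||^2 + x^T A y,
# whose unique solution is (x*, y*) = (0, 0).  With a constant step size the
# iterates hover around a stationary distribution; the ergodic average
# concentrates much closer to the solution.
rng = np.random.default_rng(0)
d = 5
A = rng.normal(size=(d, d)) / np.sqrt(d)
x, y = rng.normal(size=d), rng.normal(size=d)
eta, noise = 0.05, 0.5
x_bar, y_bar = np.zeros(d), np.zeros(d)

for t in range(1, 20001):
    gx = x + A @ y + noise * rng.normal(size=d)     # stochastic gradient in x
    gy = -y + A.T @ x + noise * rng.normal(size=d)  # stochastic gradient in y
    x, y = x - eta * gx, y + eta * gy
    x_bar += (x - x_bar) / t                        # running ergodic average
    y_bar += (y - y_bar) / t

print("last iterate :", np.linalg.norm(np.r_[x, y]))
print("ergodic avg  :", np.linalg.norm(np.r_[x_bar, y_bar]))
```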

The Expressive Power of Tuning Only the Normalization Layers

no code implementations15 Feb 2023 Angeliki Giannou, Shashank Rajput, Dimitris Papailiopoulos

Feature normalization transforms such as Batch and Layer-Normalization have become indispensable ingredients of state-of-the-art deep neural networks.
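As a minimal sketch of the setting named in the title, the PyTorch snippet below freezes every weight in a small network except the affine parameters of its normalization layers, so only those are trained; the toy architecture and optimizer are assumptions for illustration, not the paper's construction.

```python
import torch
import torch.nn as nn

# Sketch: fine-tune only the normalization layers of an otherwise frozen model.
model = nn.Sequential(
    nn.Linear(32, 64), nn.LayerNorm(64), nn.ReLU(),
    nn.Linear(64, 64), nn.LayerNorm(64), nn.ReLU(),
    nn.Linear(64, 10),
)

for p in model.parameters():
    p.requires_grad = False                      # freeze everything
for m in model.modules():
    if isinstance(m, (nn.LayerNorm, nn.BatchNorm1d, nn.BatchNorm2d)):
        for p in m.parameters():
            p.requires_grad = True               # unfreeze gamma / beta only

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-2)
print(sum(p.numel() for p in trainable), "trainable parameters")
```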

Looped Transformers as Programmable Computers

1 code implementation30 Jan 2023 Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D. Lee, Dimitris Papailiopoulos

We present a framework for using transformer networks as universal computers by programming them with specific weights and placing them in a loop.

In-Context Learning
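A hedged sketch of the basic looping idea described above: one fixed transformer block applied repeatedly to the same token sequence, which acts as a scratchpad that the block reads and updates on every pass. The dimensions, loop count, and random weights are illustrative assumptions; the paper programs specific weights, which this sketch does not attempt.

```python
import torch
import torch.nn as nn

# Sketch: run a single, fixed transformer block in a loop, feeding its output
# back in as its next input.
d_model, n_heads, n_loops = 64, 4, 8
block = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                   dim_feedforward=128, batch_first=True)

x = torch.randn(1, 16, d_model)     # (batch, sequence, features) "scratchpad"
with torch.no_grad():
    for _ in range(n_loops):        # the loop makes the computation iterative
        x = block(x)
print(x.shape)                      # torch.Size([1, 16, 64])
```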

On the convergence of policy gradient methods to Nash equilibria in general stochastic games

no code implementations17 Oct 2022 Angeliki Giannou, Kyriakos Lotidis, Panayotis Mertikopoulos, Emmanouil-Vasileios Vlatakis-Gkaragkounis

Learning in stochastic games is a notoriously difficult problem because, in addition to each other's strategic decisions, the players must also contend with the fact that the game itself evolves over time, possibly in a very complicated manner.

Policy Gradient Methods
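The paper concerns stochastic games; as a deliberately simplified, assumption-laden stand-in, the sketch below runs independent softmax policy-gradient ascent for two players in a single-state 2x2 coordination game, where the joint strategy drifts toward one of the strict pure Nash equilibria. The payoff matrix, step size, and exact-gradient shortcut are illustrative choices, not the paper's setting.

```python
import numpy as np

# Sketch: independent softmax policy gradient in a 2x2 coordination game
# (both players earn 1 when they choose the same action, 0 otherwise).
A = np.array([[1.0, 0.0], [0.0, 1.0]])   # shared payoff matrix

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)
theta1, theta2 = rng.normal(size=2), rng.normal(size=2)
eta = 0.5
for _ in range(500):
    p1, p2 = softmax(theta1), softmax(theta2)
    # exact policy gradients of u = p1^T A p2 with respect to each player's logits
    g1 = (np.diag(p1) - np.outer(p1, p1)) @ (A @ p2)
    g2 = (np.diag(p2) - np.outer(p2, p2)) @ (A.T @ p1)
    theta1 += eta * g1
    theta2 += eta * g2

print("player 1:", softmax(theta1).round(3), "player 2:", softmax(theta2).round(3))
```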

Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information

no code implementations12 Jan 2021 Angeliki Giannou, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Panayotis Mertikopoulos

This equivalence extends existing continuous-time versions of the folk theorem of evolutionary game theory to a bona fide algorithmic learning setting, and it provides a clear refinement criterion for the prediction of the day-to-day behavior of no-regret learning in games.

Multi-Armed Bandits
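As a hedged illustration of regularized (exponential-weights) learning under the partial, bandit-style feedback referenced by the tag above, here is a minimal EXP3-type sketch on a fixed stochastic bandit; the arm means, horizon, and exploration rate are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np

# Sketch: EXP3-style exponential-weights learning with bandit feedback --
# the learner only sees the reward of the arm it pulled and compensates
# with importance weighting.
rng = np.random.default_rng(0)
K, T, gamma = 4, 10000, 0.05
mean_reward = np.array([0.2, 0.5, 0.7, 0.4])          # assumed arm means in [0, 1]
weights = np.ones(K)

for _ in range(T):
    probs = (1 - gamma) * weights / weights.sum() + gamma / K
    arm = rng.choice(K, p=probs)
    reward = float(rng.random() < mean_reward[arm])   # Bernoulli feedback
    estimate = reward / probs[arm]                    # importance-weighted estimate
    weights[arm] *= np.exp(gamma * estimate / K)
    weights /= weights.max()                          # rescale to avoid overflow

print("play frequencies:", (weights / weights.sum()).round(3))
```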
