Search Results for author: Tomasz Odrzygóźdź

Found 6 papers, 5 papers with code

Scaling Laws for Fine-Grained Mixture of Experts

1 code implementation • 12 Feb 2024 • Jakub Krajewski, Jan Ludziejewski, Kamil Adamczewski, Maciej Pióro, Michał Krutul, Szymon Antoniak, Kamil Ciebiera, Krystian Król, Tomasz Odrzygóźdź, Piotr Sankowski, Marek Cygan, Sebastian Jaszczur

Our findings not only show that MoE models consistently outperform dense Transformers but also highlight that the efficiency gap between dense and MoE models widens as we scale up the model size and training budget.
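The paper's central knob is the "granularity" of the expert layer. As a rough, illustrative sketch only (not the paper's fitted scaling law), the snippet below assumes the common fine-grained setup in which each expert is split into G thinner experts and G times more experts are activated per token, so active compute stays roughly constant; all sizes are made-up example values.

```python
# Illustrative sketch (not the paper's fitted law): how a granularity factor G
# might reshape a Mixture-of-Experts layer while keeping active parameters
# roughly constant. All numbers are made-up example values.

def fine_grained_moe_config(d_model=1024, d_ff=4096, n_experts=8, top_k=2, granularity=4):
    """Split each expert into `granularity` thinner experts and activate
    proportionally more of them per token."""
    expert_hidden = d_ff // granularity          # each expert gets a thinner FFN
    n_experts_fg = n_experts * granularity       # ...but there are more of them
    top_k_fg = top_k * granularity               # and more are activated per token
    params_per_expert = 2 * d_model * expert_hidden
    return {
        "experts": n_experts_fg,
        "expert_hidden": expert_hidden,
        "activated_per_token": top_k_fg,
        "total_expert_params": n_experts_fg * params_per_expert,
        "active_expert_params": top_k_fg * params_per_expert,
    }

if __name__ == "__main__":
    for g in (1, 2, 4, 8):
        print(g, fine_grained_moe_config(granularity=g))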

Mixture of Tokens: Efficient LLMs through Cross-Example Aggregation

1 code implementation • 24 Oct 2023 • Szymon Antoniak, Sebastian Jaszczur, Michał Krutul, Maciej Pióro, Jakub Krajewski, Jan Ludziejewski, Tomasz Odrzygóźdź, Marek Cygan

The operation of matching experts and tokens is discrete, which makes MoE models prone to issues like training instability and uneven expert utilization.

Language Modelling • Large Language Model
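To make the quoted point about discreteness concrete, here is a generic sketch (not the paper's Mixture of Tokens architecture) contrasting hard argmax routing, which is non-differentiable and can load experts unevenly, with a soft, fully continuous mixture.

```python
# Generic illustration of why discrete expert-token matching is awkward:
# the argmax routing decision is non-differentiable and can leave experts
# unevenly loaded. Simplified sketch, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model, n_experts = 64, 16, 4

tokens = rng.normal(size=(n_tokens, d_model))
router = rng.normal(size=(d_model, n_experts))

logits = tokens @ router                       # (n_tokens, n_experts)

# Discrete routing: each token is assigned to exactly one expert (argmax).
hard_assignment = logits.argmax(axis=-1)
load = np.bincount(hard_assignment, minlength=n_experts)
print("hard per-expert load:", load)           # often far from uniform

# Continuous alternative: every expert sees a softmax-weighted mixture of
# tokens, so gradients flow through the mixing weights instead of an argmax.
weights = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print("soft per-expert load:", np.round(weights.sum(axis=0), 2))
```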

Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers

no code implementations • 22 May 2022 • Albert Q. Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygóźdź, Piotr Miłoś, Yuhuai Wu, Mateja Jamnik

Thor increases a language model's success rate on the PISA dataset from $39\%$ to $57\%$, while solving $8.2\%$ of problems neither language models nor automated theorem provers are able to solve on their own.

Automated Theorem Proving
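As a hedged sketch of the kind of integration the title describes, the function below interleaves a hammer-style automated prover with language-model-proposed proof steps in a depth-limited search. The callables it takes (propose_steps, apply_step, run_hammer) are hypothetical stand-ins, not the paper's actual interface to the proof assistant or its hammer.

```python
# Hedged sketch of combining a language model with a hammer-style automated
# prover during proof search. The injected callables are hypothetical
# stand-ins, not the paper's actual API.
from typing import Callable, List, Optional

def prove(goal: str,
          propose_steps: Callable[[str], List[str]],
          apply_step: Callable[[str, str], Optional[str]],
          run_hammer: Callable[[str], bool],
          max_depth: int = 16) -> bool:
    """Depth-limited search: first let the hammer try to close the goal,
    otherwise expand with language-model-proposed proof steps."""
    if run_hammer(goal):                    # automated prover closes the goal
        return True
    if max_depth == 0:
        return False
    for step in propose_steps(goal):        # candidate tactics from the LM
        new_goal = apply_step(goal, step)   # None means the step failed
        if new_goal == "":                  # empty goal: proof finished
            return True
        if new_goal is not None and prove(new_goal, propose_steps,
                                          apply_step, run_hammer,
                                          max_depth - 1):
            return True
    return False
```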

Subgoal Search For Complex Reasoning Tasks

1 code implementation • NeurIPS 2021 • Konrad Czechowski, Tomasz Odrzygóźdź, Marek Zbysiński, Michał Zawalski, Krzysztof Olejnik, Yuhuai Wu, Łukasz Kuciński, Piotr Miłoś

In this paper, we implement kSubS using a transformer-based subgoal module coupled with the classical best-first search framework.

Rubik's Cube
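The snippet above pairs a learned subgoal generator with classical best-first search. Below is a minimal best-first search skeleton under that reading; generate_subgoals and value are illustrative stand-ins for the transformer subgoal module and its scoring, not the paper's implementation.

```python
# Minimal best-first search skeleton in which node expansion asks a
# (stand-in) subgoal generator for candidate subgoals, roughly in the
# spirit of the kSubS description above.
import heapq
from typing import Callable, Hashable, Iterable, List, Optional

def subgoal_best_first_search(
        start: Hashable,
        is_solved: Callable[[Hashable], bool],
        generate_subgoals: Callable[[Hashable], Iterable[Hashable]],
        value: Callable[[Hashable], float],
        max_nodes: int = 10_000) -> Optional[List[Hashable]]:
    """Expand the most promising state first; each expansion proposes
    high-level subgoals instead of low-level actions."""
    frontier = [(-value(start), 0, start, [start])]   # max-heap via negation
    visited = {start}
    counter = 0                                        # unique tie-breaker
    while frontier and counter < max_nodes:
        _, _, state, path = heapq.heappop(frontier)
        if is_solved(state):
            return path
        for subgoal in generate_subgoals(state):
            if subgoal in visited:
                continue
            visited.add(subgoal)
            counter += 1
            heapq.heappush(frontier,
                           (-value(subgoal), counter, subgoal, path + [subgoal]))
    return None
```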
