no code implementations • 9 Apr 2024 • Radosław Nowak, Adam Małkowski, Daniel Cieślak, Piotr Sokół, Paweł Wawrzyński
Graph embeddings have emerged as a powerful tool for representing complex network structures in a low-dimensional space, enabling the use of efficient methods that employ the metric structure in the embedding space as a proxy for the topological structure of the data.
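The snippet describes using distances in the embedding space as a stand-in for the graph's topological structure. A minimal generic illustration (not the paper's method; the node names and coordinates below are made up):

```python
import numpy as np

# Hypothetical 2-D embeddings for four graph nodes (illustrative values only).
emb = {
    "a": np.array([0.0, 0.0]),
    "b": np.array([1.0, 0.0]),
    "c": np.array([2.0, 0.1]),
    "d": np.array([5.0, 4.0]),
}

def proxy_distance(u, v):
    # Euclidean distance in the embedding space stands in for the
    # (potentially expensive) topological distance in the original graph.
    return float(np.linalg.norm(emb[u] - emb[v]))

# Nearest neighbour of "a" under the embedding metric.
nearest = min((n for n in emb if n != "a"), key=lambda n: proxy_distance("a", n))
print(nearest)  # prints "b"
```

Once nodes are embedded, neighbour queries like this run in the low-dimensional metric space instead of on the graph itself.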
no code implementations • 8 Aug 2023 • Jakub Łyskawa, Paweł Wawrzyński
Reinforcement learning (RL) methods work in discrete time.
no code implementations • 28 Mar 2023 • Łukasz Lepak, Paweł Wawrzyński
In many countries, this balancing is done on the day-ahead (DA) energy markets.
no code implementations • 11 Nov 2022 • Michał Bortkiewicz, Jakub Łyskawa, Paweł Wawrzyński, Mateusz Ostaszewski, Artur Grudkowski, Tomasz Trzciński
In this paper, we address this gap in the state-of-the-art approaches and propose a method in which the validity of higher-level actions (thus lower-level goals) is constantly verified at the higher level.
Tasks: Hierarchical Reinforcement Learning, Reinforcement Learning (+1)

no code implementations • 30 Jul 2022 • Paweł Wawrzyński, Wojciech Masarczyk, Mateusz Ostaszewski
To that end, the dispersion should be tuned to ensure sufficiently high probability densities of the actions in the replay buffer and of the modes of the distributions that generated them, yet it should not be any higher than that.
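The trade-off hinted at above can be seen by evaluating the log-density of a replayed action under Gaussian policies of different dispersion. This is a generic sketch, not the paper's tuning rule; the action value and mode below are invented:

```python
import numpy as np

def gaussian_logpdf(a, mean, sigma):
    # Log-density of scalar action a under N(mean, sigma^2).
    return -0.5 * np.log(2 * np.pi * sigma**2) - (a - mean) ** 2 / (2 * sigma**2)

# Hypothetical replayed action and the mode of the Gaussian that generated it.
action, mode = 0.3, 0.0

for sigma in (0.05, 0.3, 1.0):
    d_action = gaussian_logpdf(action, mode, sigma)  # density of the stored action
    d_mode = gaussian_logpdf(mode, mode, sigma)      # density at the mode itself
    print(f"sigma={sigma}: log p(action)={d_action:.2f}, log p(mode)={d_mode:.2f}")
```

With very small sigma the replayed action becomes nearly impossible under the policy (its log-density collapses), while very large sigma flattens the density everywhere, including at the mode; the snippet's point is that the dispersion should be just large enough to keep both densities adequate.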
no code implementations • 28 Jan 2022 • Adam Małkowski, Jakub Grzechociński, Paweł Wawrzyński
In this paper, we address the above challenge with recursive neural networks: an encoder and a decoder.
no code implementations • 17 Jan 2022 • Wojciech Masarczyk, Paweł Wawrzyński, Daniel Marczak, Kamil Deja, Tomasz Trzciński
Our approach leverages allocation of past data in a set of generative models such that most of them do not require retraining after a task.
1 code implementation • 23 Jun 2021 • Kamil Deja, Paweł Wawrzyński, Wojciech Masarczyk, Daniel Marczak, Tomasz Trzciński
We propose a new method for unsupervised generative continual learning through realignment of Variational Autoencoder's latent space.
no code implementations • 28 May 2021 • Grzegorz Rypeść, Łukasz Lepak, Paweł Wawrzyński
A number of problems in the processing of sound and natural language, as well as in other areas, can be reduced to simultaneously reading an input sequence and writing an output sequence of generally different length.
1 code implementation • 28 May 2021 • Łukasz Neumann, Łukasz Lepak, Paweł Wawrzyński
It is based on updating the previous memory state with a deep transformation of the lagged state and the network input.
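One way to read the update described above is a residual-style recurrent cell: the new memory state is the lagged state plus a deep (here, two-layer) transformation of the lagged state and the input. This is an illustrative sketch under that assumption, not the paper's exact cell; all sizes and weights are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
H, X = 4, 3  # hidden and input sizes (arbitrary for the sketch)

# A small two-layer MLP applied to the concatenated lagged state and input.
W1 = rng.normal(scale=0.1, size=(H + X, 8))
W2 = rng.normal(scale=0.1, size=(8, H))

def cell(h_prev, x):
    # New memory state = previous state updated with a deep
    # transformation of [h_prev, x].
    z = np.tanh(np.concatenate([h_prev, x]) @ W1)
    return h_prev + z @ W2

# Unroll the cell over a short random input sequence.
h = np.zeros(H)
for t in range(5):
    h = cell(h, rng.normal(size=X))
print(h.shape)  # (4,)
```

The additive update keeps a direct path from the lagged state to the new state, which is what lets gradients flow across many time steps.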
no code implementations • 8 Apr 2021 • Jakub Łyskawa, Paweł Wawrzyński
It is not feasible because it causes the controlled system to jerk, and does not ensure sufficient exploration since a single action is not long enough to create a significant experience that could be translated into policy improvement.
1 code implementation • 25 Nov 2020 • Kamil Deja, Paweł Wawrzyński, Daniel Marczak, Wojciech Masarczyk, Tomasz Trzciński
We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.
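A generic sketch of the idea named in the snippet: threshold an encoder's output into a binary code, store the cheap codes, and decode them later to rehearse approximate samples. The linear encoder/decoder and all sizes are placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 6, 8  # input dimension and number of latent bits (arbitrary)

W_enc = rng.normal(size=(D, K))
W_dec = rng.normal(size=(K, D))

def encode(x):
    # Threshold the encoder output to get a binary latent code;
    # binary codes are compact to store for later rehearsal.
    return (x @ W_enc > 0).astype(np.uint8)

def decode(z):
    # Reconstruct an approximate sample from the stored binary code.
    return z @ W_dec

x = rng.normal(size=D)
z = encode(x)            # vector of 0s and 1s
x_rehearsed = decode(z)  # replayed during later training instead of raw data
print(z.shape, x_rehearsed.shape)  # (8,) (6,)
```

Storing K bits per sample instead of D floats is what makes rehearsal over long task sequences affordable.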
1 code implementation • 10 Sep 2020 • Marcin Szulc, Jakub Łyskawa, Paweł Wawrzyński
Consequently, an agent learns from experiments that are distributed over time and potentially give better clues to policy improvement.
no code implementations • 23 Jan 2020 • Karol Chęciński, Paweł Wawrzyński
We follow the line of research in which filters of convolutional neural layers are determined on the basis of a smaller number of trained parameters.
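The line of research described above determines a full filter bank from a smaller trained parameter vector. As one minimal, generic instance (not the paper's construction), a fixed linear map can expand a few trained parameters into all filter weights:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: a conv layer needs 16 filters of shape 3x3,
# but only a small parameter vector is trained.
n_filters, k = 16, 3
n_trained = 10                      # far fewer than 16 * 3 * 3 = 144
theta = rng.normal(size=n_trained)  # the trained parameters

# Fixed (untrained) linear map from the small vector to all filter weights.
M = rng.normal(size=(n_trained, n_filters * k * k))

filters = (theta @ M).reshape(n_filters, k, k)
print(filters.shape)  # (16, 3, 3)
```

Gradients with respect to the filters pull back through the fixed map onto `theta`, so the layer trains with 10 parameters instead of 144.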