1 code implementation • 27 Jan 2024 • Fabio Merizzi, Andrea Asperti, Stefano Colamonaco
By leveraging the lower resolution ERA5 dataset, which provides boundary conditions for CERRA, we approach this as a super-resolution task.
1 code implementation • 13 Aug 2023 • Andrea Asperti, Fabio Merizzi, Alberto Paparella, Giorgio Pedrazzi, Matteo Angelinelli, Stefano Colamonaco
In terms of overall performance, this approach substantially outperforms recent deep learning models.
1 code implementation • 11 Aug 2023 • Andrea Asperti, Gabriele Colasuonno, Antonio Guerra
Denoising Diffusion Models (DDM) are emerging as the cutting-edge technology in the realm of deep generative modeling, challenging the dominance of Generative Adversarial Networks.
1 code implementation • 30 Dec 2022 • Andrea Asperti, Davide Evangelista, Samuele Marro, Fabio Merizzi
Denoising Diffusion models are gaining increasing popularity in the field of generative modeling for several reasons, including their simple and stable training, excellent generative quality, and solid probabilistic foundation.
1 code implementation • 14 Jul 2022 • Andrea Asperti, Valerio Tonelli
Different encodings of datapoints in the latent space of latent-vector generative models may result in more or less effective and disentangled characterizations of the different explanatory factors of variation behind the data.
1 code implementation • 20 Mar 2022 • Andrea Asperti, Marco Del Brutto
MicroRacer is a simple, open-source environment inspired by car racing, especially intended for teaching Deep Reinforcement Learning.
1 code implementation • 6 Feb 2022 • Andrea Asperti, Laura Bugo, Daniele Filippini
In this article we introduce the notion of Split Variational Autoencoder (SVAE), whose output $\hat{x}$ is obtained as a weighted sum $\sigma \odot \hat{x_1} + (1-\sigma) \odot \hat{x_2}$ of two generated images $\hat{x_1},\hat{x_2}$, where $\sigma$ is a {\em learned} compositional map.
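The compositional output described in the abstract can be illustrated with a minimal NumPy sketch; the function name and toy inputs below are illustrative stand-ins, with a plain array playing the role of the network-learned map $\sigma$:

```python
import numpy as np

def svae_compose(x1_hat, x2_hat, sigma):
    """Blend two generated images via a per-pixel compositional map.

    In the SVAE, sigma is learned by the network; here it is just an
    array of mixing weights, clipped to [0, 1] for safety.
    """
    sigma = np.clip(sigma, 0.0, 1.0)
    return sigma * x1_hat + (1.0 - sigma) * x2_hat

# Toy example: 2x2 single-channel "images"
x1 = np.ones((2, 2))          # all-ones image
x2 = np.zeros((2, 2))         # all-zeros image
sigma = np.full((2, 2), 0.25)
out = svae_compose(x1, x2, sigma)  # 0.25 * 1 + 0.75 * 0 = 0.25 everywhere
```

Because the weights are applied element-wise ($\odot$), each pixel of the output can draw from a different mixture of the two candidate images.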
1 code implementation • 26 Jul 2021 • Andrea Asperti, Davide Evangelista, Moreno Marzolla
The term GreenAI refers to a novel approach to Deep Learning that is more aware of the ecological impact and computational efficiency of its methods.
1 code implementation • 1 Mar 2021 • Andrea Asperti, D. Evangelista, E. Loli Piccolomini
Variational AutoEncoders (VAEs) are powerful generative models that merge elements from statistics and information theory with the flexibility offered by deep neural networks to efficiently solve the generation problem for high dimensional data.
1 code implementation • 26 Oct 2020 • Andrea Asperti, Stefano Dal Bianco
We jointly provide an online vocabulary containing, for each word, information about its syllabification, the location of the tonic accent, and the aforementioned synalephe propensity, on the left and right sides.
1 code implementation • 23 Feb 2020 • Andrea Asperti
The reduced variance creates a mismatch between the actual distribution of latent variables and those generated by the second VAE, which hinders the beneficial effects of the second stage.
2 code implementations • 18 Feb 2020 • Andrea Asperti, Matteo Trentin
In the loss function of Variational Autoencoders there is a well known tension between two components: the reconstruction loss, improving the quality of the resulting images, and the Kullback-Leibler divergence, acting as a regularizer of the latent space.
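The tension described above can be made concrete with a small sketch of a generic VAE objective; the `beta` weight below is a common knob for trading off the two terms and is an assumption for illustration, not a detail taken from the paper:

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var, beta=1.0):
    """Illustrative VAE objective: reconstruction term plus KL regularizer.

    Increasing beta regularizes the latent space more strongly,
    typically at the expense of reconstruction quality -- the tension
    between the two components described in the abstract.
    """
    recon = np.sum((x - x_hat) ** 2)  # pixel-wise reconstruction loss
    # KL divergence between N(mu, exp(log_var)) and the standard normal prior
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + beta * kl

# Perfect reconstruction with the posterior equal to the prior
# (mu = 0, log_var = 0) makes both terms vanish.
x = np.zeros(4)
x_hat = np.zeros(4)
loss = vae_loss(x, x_hat, mu=np.zeros(2), log_var=np.zeros(2))  # → 0.0
```

The KL term pulls every posterior toward the standard normal prior, while the reconstruction term pulls the latent codes apart so that inputs remain distinguishable; the balance between them shapes the latent space.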
no code implementations • 18 Dec 2018 • Andrea Asperti
When working in high-dimensional latent spaces, the internal encoding of data in Variational Autoencoders naturally becomes sparse.
1 code implementation • 23 Apr 2018 • Andrea Asperti, Daniele Cortesi, Francesco Sovrano
Rogue is a famous dungeon-crawling video game of the 1980s, the ancestor of its genre.
no code implementations • 11 Dec 2017 • Andrea Asperti, Claudio Mastronardo
The lack, due to privacy concerns, of large public databases of medical pathologies is a well-known and major problem, substantially hindering the application of deep learning techniques in this field.