Search Results for author: Andrea Asperti

Found 15 papers, 13 papers with code

Wind speed super-resolution and validation: from ERA5 to CERRA via diffusion models

1 code implementation · 27 Jan 2024 · Fabio Merizzi, Andrea Asperti, Stefano Colamonaco

By leveraging the lower resolution ERA5 dataset, which provides boundary conditions for CERRA, we approach this as a super-resolution task.

Management · Super-Resolution

Precipitation nowcasting with generative diffusion models

1 code implementation · 13 Aug 2023 · Andrea Asperti, Fabio Merizzi, Alberto Paparella, Giorgio Pedrazzi, Matteo Angelinelli, Stefano Colamonaco

This approach substantially outperforms recent deep learning models in overall performance.

Denoising · Weather Forecasting

Head Rotation in Denoising Diffusion Models

1 code implementation · 11 Aug 2023 · Andrea Asperti, Gabriele Colasuonno, Antonio Guerra

Denoising Diffusion Models (DDM) are emerging as the cutting-edge technology in the realm of deep generative modeling, challenging the dominance of Generative Adversarial Networks.

Denoising · Face Generation

Image Embedding for Denoising Generative Models

1 code implementation · 30 Dec 2022 · Andrea Asperti, Davide Evangelista, Samuele Marro, Fabio Merizzi

Denoising Diffusion models are gaining increasing popularity in the field of generative modeling for several reasons, including the simple and stable training, the excellent generative quality, and the solid probabilistic foundation.

Denoising · Image Generation

Comparing the latent space of generative models

1 code implementation · 14 Jul 2022 · Andrea Asperti, Valerio Tonelli

Different encodings of datapoints in the latent space of latent-vector generative models may result in more or less effective and disentangled characterizations of the different explanatory factors of variation behind the data.

MicroRacer: a didactic environment for Deep Reinforcement Learning

1 code implementation · 20 Mar 2022 · Andrea Asperti, Marco Del Brutto

MicroRacer is a simple, open source environment inspired by car racing especially meant for the didactics of Deep Reinforcement Learning.

Car Racing · reinforcement-learning +1

Enhancing variational generation through self-decomposition

1 code implementation · 6 Feb 2022 · Andrea Asperti, Laura Bugo, Daniele Filippini

In this article we introduce the notion of Split Variational Autoencoder (SVAE), whose output $\hat{x}$ is obtained as a weighted sum $\sigma \odot \hat{x_1} + (1-\sigma) \odot \hat{x_2}$ of two generated images $\hat{x_1},\hat{x_2}$, and $\sigma$ is a {\em learned} compositional map.
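The compositional output of the SVAE can be sketched numerically. A minimal NumPy illustration follows, with random placeholder arrays standing in for the two decoder outputs and the learned map (in the actual model, $\sigma$, $\hat{x_1}$, and $\hat{x_2}$ are all produced by trained networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical generated images for the same input (4x4 grayscale here)
x1_hat = rng.random((4, 4))
x2_hat = rng.random((4, 4))

# The learned compositional map sigma takes values in [0, 1]
sigma = rng.random((4, 4))

# SVAE output: elementwise (Hadamard, "odot") weighted sum of the two images
x_hat = sigma * x1_hat + (1 - sigma) * x2_hat

# Each pixel of x_hat is a convex combination of the corresponding pixels,
# so it lies between the two generated values
assert np.all(x_hat >= np.minimum(x1_hat, x2_hat))
assert np.all(x_hat <= np.maximum(x1_hat, x2_hat))
```

Because $\sigma$ is applied pixelwise, each of the two sub-generators can specialize on a different portion of the image.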

Dissecting FLOPs along input dimensions for GreenAI cost estimations

1 code implementation · 26 Jul 2021 · Andrea Asperti, Davide Evangelista, Moreno Marzolla

The term GreenAI refers to a novel approach to Deep Learning, that is more aware of the ecological impact and the computational efficiency of its methods.

Computational Efficiency
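Cost estimates of this kind are typically derived from layer shapes. As a minimal illustration (generic textbook formulas, not the paper's specific methodology), the FLOPs of common layers scale directly with their input dimensions:

```python
def dense_flops(n_in, n_out):
    """Approximate FLOPs of a dense layer: one multiply and one add
    per weight, i.e. roughly 2 * n_in * n_out (bias ignored)."""
    return 2 * n_in * n_out

def conv2d_flops(h, w, c_in, c_out, k):
    """Approximate FLOPs of a stride-1 'same' 2D convolution: the
    per-position cost 2 * k*k * c_in * c_out is paid at every one of
    the h * w spatial positions, so cost scales with input resolution."""
    return 2 * h * w * k * k * c_in * c_out

# Halving the spatial resolution cuts convolutional FLOPs by a factor of 4
assert conv2d_flops(32, 32, 16, 32, 3) == 4 * conv2d_flops(16, 16, 16, 32, 3)
```

This dependence on spatial input dimensions is why resolution, and not just parameter count, matters for the energy footprint of a model.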

A survey on Variational Autoencoders from a GreenAI perspective

1 code implementation · 1 Mar 2021 · Andrea Asperti, D. Evangelista, E. Loli Piccolomini

Variational AutoEncoders (VAEs) are powerful generative models that merge elements from statistics and information theory with the flexibility offered by deep neural networks to efficiently solve the generation problem for high dimensional data.

Representation Learning

Syllabification of the Divine Comedy

1 code implementation · 26 Oct 2020 · Andrea Asperti, Stefano Dal Bianco

We jointly provide an online vocabulary containing, for each word, information about its syllabification, the location of the tonic accent, and the aforementioned synalephe propensity, on the left and right sides.

Clustering

Variance Loss in Variational Autoencoders

1 code implementation · 23 Feb 2020 · Andrea Asperti

This reduced variance creates a mismatch between the actual distribution of latent variables and the one generated by the second VAE, which hinders the beneficial effects of the second stage.

Balancing reconstruction error and Kullback-Leibler divergence in Variational Autoencoders

2 code implementations · 18 Feb 2020 · Andrea Asperti, Matteo Trentin

In the loss function of Variational Autoencoders there is a well known tension between two components: the reconstruction loss, improving the quality of the resulting images, and the Kullback-Leibler divergence, acting as a regularizer of the latent space.
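The two competing components can be made explicit in a short sketch. Below is the standard (beta-weighted) VAE objective with a Gaussian posterior, written in NumPy as a generic illustration rather than the paper's specific balancing scheme:

```python
import numpy as np

def vae_loss(x, x_hat, mu, logvar, beta=1.0):
    """Standard VAE objective: reconstruction error plus beta-weighted
    KL divergence between q(z|x) = N(mu, exp(logvar)) and N(0, I).
    Increasing beta strengthens latent-space regularization at the
    expense of reconstruction quality, and vice versa."""
    recon = np.sum((x - x_hat) ** 2)  # squared-error reconstruction loss
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl

# With mu = 0 and logvar = 0, q(z|x) equals the prior and the KL term vanishes;
# with a perfect reconstruction the total loss is zero
x = np.ones(8); x_hat = np.ones(8)
mu = np.zeros(4); logvar = np.zeros(4)
assert vae_loss(x, x_hat, mu, logvar) == 0.0
```

Choosing the relative weight of the two terms (the `beta` parameter above) is exactly the balancing problem the paper addresses.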

Sparsity in Variational Autoencoders

no code implementations · 18 Dec 2018 · Andrea Asperti

Working in high-dimensional latent spaces, the internal encoding of data in Variational Autoencoders becomes naturally sparse.
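Such sparsity can be quantified by measuring how many latent dimensions actually carry information. A common proxy (a generic diagnostic, not the paper's own metric) is the variance of the posterior means across a batch: inactive dimensions stay pinned to the prior for every input, so their means barely move.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-dimension posterior means mu(x) over a batch of 256 inputs
# in a 32-dimensional latent space; here only 8 dimensions respond to the input,
# the rest collapse to the prior mean 0
mu = np.zeros((256, 32))
mu[:, :8] = rng.normal(size=(256, 8))

# Activity proxy: variance of the posterior mean across the batch
activity = mu.var(axis=0)
active = activity > 1e-2
assert active.sum() == 8  # 24 of 32 dimensions are effectively unused
```

In a sparse encoding, pruning the inactive dimensions leaves generation quality essentially unchanged.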

The Effectiveness of Data Augmentation for Detection of Gastrointestinal Diseases from Endoscopical Images

no code implementations · 11 Dec 2017 · Andrea Asperti, Claudio Mastronardo

The lack, due to privacy concerns, of large public databases of medical pathologies is a well-known and major problem, substantially hindering the application of deep learning techniques in this field.

Data Augmentation · General Classification