Dynamic Bayesian Neural Networks

15 Apr 2020 · Lorenzo Rimella, Nick Whiteley

We define a Bayesian neural network that evolves in time, called a Hidden Markov neural network. The weights of a feed-forward neural network are modelled as the hidden states of a Hidden Markov model, whose observed process is given by the available data. A filtering algorithm is used to learn a variational approximation to the time-evolving posterior over the weights. Training proceeds through a sequential version of Bayes by Backprop (Blundell et al., 2015), enriched with a stronger regularization technique called variational DropConnect. Experiments evaluate variational DropConnect on MNIST and demonstrate the performance of Hidden Markov neural networks on time-series data.
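To make the idea concrete, below is a minimal sketch of one "sequential Bayes by Backprop" update, assuming a Gaussian variational filter over the weights of a toy linear model. The function and parameter names (`neg_elbo`, `transition_std`, `n_mc`) and the Gaussian random-walk transition are illustrative assumptions, not the paper's implementation: at each time step, the previous variational posterior, pushed through the transition kernel, plays the role of the prior.

```python
# Illustrative sketch only (assumed form, not the authors' code): one step of a
# sequential Bayes-by-Backprop update with a Gaussian variational filter.
import torch
import torch.nn.functional as F

def neg_elbo(mu, rho, mu_prev, sigma_prev, x, y, transition_std=0.1, n_mc=5):
    """Monte Carlo estimate of the negative ELBO at one time step.

    q_t(w) = N(mu, softplus(rho)^2) approximates the filtering posterior; the
    effective prior at time t is the previous variational posterior convolved
    with a Gaussian random-walk transition kernel (assumed form).
    """
    sigma = F.softplus(rho)
    prior = torch.distributions.Normal(
        mu_prev, torch.sqrt(sigma_prev ** 2 + transition_std ** 2))
    q = torch.distributions.Normal(mu, sigma)
    loss = 0.0
    for _ in range(n_mc):
        w = q.rsample()                                   # reparameterised sample
        log_lik = -F.mse_loss(x @ w, y, reduction="sum")  # toy Gaussian likelihood
        loss = loss + q.log_prob(w).sum() - prior.log_prob(w).sum() - log_lik
    return loss / n_mc

# Usage: minimise the negative ELBO in (mu, rho) on the data observed at time
# t, then freeze the result to serve as the next step's prior.
d = 3
mu = torch.zeros(d, requires_grad=True)
rho = torch.zeros(d, requires_grad=True)
mu_prev, sigma_prev = torch.zeros(d), torch.ones(d)
x_t, y_t = torch.randn(8, d), torch.randn(8)
opt = torch.optim.Adam([mu, rho], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    neg_elbo(mu, rho, mu_prev, sigma_prev, x_t, y_t).backward()
    opt.step()
mu_prev, sigma_prev = mu.detach().clone(), F.softplus(rho).detach().clone()
```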
