Search Results for author: Michalis K. Titsias

Found 33 papers, 11 papers with code

Kalman Filter for Online Classification of Non-Stationary Data

no code implementations14 Jun 2023 Michalis K. Titsias, Alexandre Galashov, Amal Rannen-Triki, Razvan Pascanu, Yee Whye Teh, Jorg Bornschein

Non-stationarity over the linear predictor weights is modelled using a parameter drift transition density, parametrized by a coefficient that quantifies forgetting.

Classification, Continual Learning +1
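
A minimal sketch of the kind of drift model described above, assuming a linear-Gaussian transition; the drift coefficient and noise scale below are illustrative, not the paper's values:

    import numpy as np

    # Kalman predict step under a parameter drift transition
    # w_t ~ N(gamma * w_{t-1}, drift_var * I), where gamma in (0, 1]
    # quantifies forgetting (gamma = 1 means the weights never drift).
    def predict_step(mean, cov, gamma=0.99, drift_var=1e-3):
        mean_pred = gamma * mean
        cov_pred = gamma**2 * cov + drift_var * np.eye(len(mean))
        return mean_pred, cov_pred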

Personalized Federated Learning with Exact Stochastic Gradient Descent

no code implementations20 Feb 2022 Sotirios Nikoloutsopoulos, Iordanis Koutsopoulos, Michalis K. Titsias

At the final update, each client computes the joint gradient over both client-specific and common weights and returns the gradient of common parameters to the server.

Multi-class Classification, Personalized Federated Learning
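
A hypothetical sketch of the client-side step described above; the parameter split, loss function, and learning rate are illustrative, not the paper's implementation:

    import torch

    # Each client holds personal (client-specific) and common weights.
    # Only the gradient of the common weights is returned to the server.
    def client_final_update(common, personal, loss_fn, batch, lr=0.1):
        loss = loss_fn(common, personal, batch)
        g_common, g_personal = torch.autograd.grad(loss, [common, personal])
        with torch.no_grad():
            personal -= lr * g_personal   # personal update stays on the client
        return g_common                    # sent to the server for aggregation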

Gradient Estimation with Discrete Stein Operators

1 code implementation19 Feb 2022 Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis K. Titsias, Lester Mackey

Gradient estimation -- approximating the gradient of an expectation with respect to the parameters of a distribution -- is central to the solution of many machine learning problems.
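
For context, a Monte Carlo sketch of the classical score-function identity that such estimators build on, shown here for a Bernoulli(theta); the choice of distribution and test function is illustrative:

    import numpy as np

    # d/dtheta E[f(x)] = E[f(x) * d/dtheta log p(x; theta)]
    def score_function_grad(theta, f, n=100_000):
        x = (np.random.rand(n) < theta).astype(float)   # x ~ Bernoulli(theta)
        score = x / theta - (1 - x) / (1 - theta)       # d/dtheta log p(x; theta)
        return np.mean(f(x) * score)

    print(score_function_grad(0.3, lambda x: x))        # E[x] = theta, so ~1.0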

Double Control Variates for Gradient Estimation in Discrete Latent Variable Models

1 code implementation AABI Symposium 2022 Michalis K. Titsias, Jiaxin Shi

We introduce a variance reduction technique for score function estimators that makes use of double control variates.
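
For illustration, the standard single control variate that such techniques build on (the paper layers two corrections); this generic version assumes a zero-mean statistic c correlated with the raw estimates g:

    import numpy as np

    # If E[c] = 0 and c correlates with g, then g - beta * c is unbiased
    # and, for the coefficient below, has minimal variance.
    def control_variate_mean(g, c):
        C = np.cov(g, c)
        beta = C[0, 1] / C[1, 1]
        return np.mean(g - beta * c)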

Entropy-based adaptive Hamiltonian Monte Carlo

1 code implementation NeurIPS 2021 Marcel Hirt, Michalis K. Titsias, Petros Dellaportas

Hamiltonian Monte Carlo (HMC) is a popular Markov Chain Monte Carlo (MCMC) algorithm to sample from an unnormalized probability distribution.
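
A bare-bones HMC step for reference; the step size eps and path length L below are exactly the tuning parameters an adaptive scheme like the paper's would set automatically:

    import numpy as np

    def hmc_step(q, log_p, grad_log_p, eps=0.1, L=20):
        p = np.random.randn(*q.shape)                     # resample momentum
        q_new, p_new = q.copy(), p + 0.5 * eps * grad_log_p(q)
        for i in range(L):                                # leapfrog integration
            q_new = q_new + eps * p_new
            if i < L - 1:
                p_new = p_new + eps * grad_log_p(q_new)
        p_new = p_new + 0.5 * eps * grad_log_p(q_new)
        # Metropolis correction keeps the target distribution exactly invariant
        log_a = log_p(q_new) - 0.5 * p_new @ p_new - log_p(q) + 0.5 * p @ p
        return q_new if np.log(np.random.rand()) < log_a else q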

Sequential Changepoint Detection in Neural Networks with Checkpoints

no code implementations6 Oct 2020 Michalis K. Titsias, Jakub Sygnowski, Yutian Chen

We introduce a framework for online changepoint detection and simultaneous model learning which is applicable to highly parametrized models, such as deep neural networks.

Continual Learning

Unbiased Gradient Estimation for Variational Auto-Encoders using Coupled Markov Chains

no code implementations5 Oct 2020 Francisco J. R. Ruiz, Michalis K. Titsias, Taylan Cemgil, Arnaud Doucet

The variational auto-encoder (VAE) is a deep latent variable model that has two neural networks in an autoencoder-like architecture; one of them parameterizes the model's likelihood.
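
A minimal skeleton of that architecture, with illustrative layer sizes: the encoder outputs the parameters of q(z|x), and the decoder parameterizes the likelihood p(x|z):

    import torch, torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=16, h=256):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(),
                                     nn.Linear(h, 2 * z_dim))   # -> (mu, log_var)
            self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(),
                                     nn.Linear(h, x_dim))       # likelihood params

        def forward(self, x):
            mu, log_var = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterize
            return self.dec(z), mu, log_var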

Information Theoretic Meta Learning with Gaussian Processes

no code implementations7 Sep 2020 Michalis K. Titsias, Francisco J. R. Ruiz, Sotirios Nikoloutsopoulos, Alexandre Galashov

We formulate meta learning using information theoretic concepts; namely, mutual information and the information bottleneck.

Gaussian Processes, Meta-Learning

Gradient-based Adaptive Markov Chain Monte Carlo

1 code implementation NeurIPS 2019 Michalis K. Titsias, Petros Dellaportas

We introduce a gradient-based learning method to automatically adapt Markov chain Monte Carlo (MCMC) proposal distributions to intractable targets.

Sparse Orthogonal Variational Inference for Gaussian Processes

1 code implementation AABI Symposium 2019 Jiaxin Shi, Michalis K. Titsias, Andriy Mnih

We introduce a new interpretation of sparse variational approximations for Gaussian processes using inducing points, which can lead to more scalable algorithms than previous methods.

Gaussian Processes, Multi-class Classification +2
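
As background (not the paper's orthogonal decomposition), the core computational trick behind inducing-point methods is a Nyström-style low-rank approximation: with M << N inducing inputs Z, the only matrix ever inverted is M x M:

    import numpy as np

    def nystrom_approx(kernel, X, Z):
        K_xz = kernel(X, Z)                            # N x M cross-covariance
        K_zz = kernel(Z, Z)                            # M x M
        return K_xz @ np.linalg.solve(K_zz, K_xz.T)    # low-rank surrogate for K_xx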

A Contrastive Divergence for Combining Variational Inference and MCMC

2 code implementations10 May 2019 Francisco J. R. Ruiz, Michalis K. Titsias

We develop a method to combine Markov chain Monte Carlo (MCMC) and variational inference (VI), leveraging the advantages of both inference approaches.

Stochastic Optimization, Variational Inference

Functional Regularisation for Continual Learning with Gaussian Processes

1 code implementation ICLR 2020 Michalis K. Titsias, Jonathan Schwarz, Alexander G. de G. Matthews, Razvan Pascanu, Yee Whye Teh

We introduce a framework for Continual Learning (CL) based on Bayesian inference over the function space rather than the parameters of a deep neural network.

Bayesian Inference, Continual Learning +2

Bayesian Transfer Reinforcement Learning with Prior Knowledge Rules

no code implementations30 Sep 2018 Michalis K. Titsias, Sotirios Nikoloutsopoulos

The resulting method is flexible and can be easily incorporated into any standard off-policy or on-policy algorithm, such as those based on temporal differences or policy gradients.

reinforcement-learning, Reinforcement Learning (RL) +1

Unbiased Implicit Variational Inference

1 code implementation6 Aug 2018 Michalis K. Titsias, Francisco J. R. Ruiz

We develop unbiased implicit variational inference (UIVI), a method that expands the applicability of variational inference by defining an expressive variational family.

Regression, Variational Inference

Fully Scalable Gaussian Processes using Subspace Inducing Inputs

no code implementations6 Jul 2018 Aristeidis Panos, Petros Dellaportas, Michalis K. Titsias

We introduce fully scalable Gaussian processes, an implementation scheme that handles a large number of training instances together with high-dimensional input data.

Extreme Multi-Label Classification, Gaussian Processes +1

Learning Model Reparametrizations: Implicit Variational Inference by Fitting MCMC distributions

no code implementations4 Aug 2017 Michalis K. Titsias

We introduce a new algorithm for approximate inference that combines reparametrization, Markov chain Monte Carlo and variational methods.

Variational Inference

Augmented Ensemble MCMC sampling in Factorial Hidden Markov Models

no code implementations24 Mar 2017 Kaspar Märtens, Michalis K. Titsias, Christopher Yau

Bayesian inference for factorial hidden Markov models is challenging due to the exponentially sized latent variable space.

Bayesian Inference

Bayesian Boolean Matrix Factorisation

no code implementations ICML 2017 Tammo Rukat, Chris C. Holmes, Michalis K. Titsias, Christopher Yau

Boolean matrix factorisation aims to decompose a binary data matrix into an approximate Boolean product of two low-rank binary matrices: one containing meaningful patterns, the other quantifying how the observations can be expressed as a combination of these patterns.

Collaborative Filtering
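
A tiny worked example of the Boolean product in question; for binary matrices, OR-of-ANDs coincides with thresholded ordinary matrix multiplication:

    import numpy as np

    U = np.array([[1, 0],                       # 3 observations x 2 patterns
                  [1, 1],
                  [0, 1]])
    V = np.array([[1, 1, 0, 0],                 # 2 patterns x 4 features
                  [0, 0, 1, 1]])
    X = (U @ V) > 0                             # X_ij = OR_k (U_ik AND V_kj)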

Auxiliary gradient-based sampling algorithms

1 code implementation30 Oct 2016 Michalis K. Titsias, Omiros Papaspiliopoulos

We introduce a new family of MCMC samplers that combine auxiliary variables, Gibbs sampling and Taylor expansions of the target density.

Binary Classification

The Generalized Reparameterization Gradient

no code implementations NeurIPS 2016 Francisco J. R. Ruiz, Michalis K. Titsias, David M. Blei

The reparameterization gradient has become a widely used method to obtain Monte Carlo gradients to optimize the variational objective.

Variational Inference
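
The standard Gaussian reparameterization that the paper generalizes to distributions lacking such a clean transform, sketched with an illustrative objective:

    import torch

    mu = torch.tensor(0.5, requires_grad=True)
    log_sigma = torch.tensor(0.0, requires_grad=True)
    eps = torch.randn(10_000)
    z = mu + log_sigma.exp() * eps    # z ~ N(mu, sigma^2) via a transform of eps
    loss = (z ** 2).mean()            # Monte Carlo estimate of E[z^2]
    loss.backward()                   # low-variance gradients w.r.t. mu, log_sigma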

One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities

no code implementations NeurIPS 2016 Michalis K. Titsias

The softmax representation of probabilities for categorical variables plays a prominent role in modern machine learning with numerous applications in areas such as large scale classification, neural language modeling and recommendation systems.

General Classification, Language Modelling +2
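
For orientation, the softmax alongside a one-vs-each style lower bound built from pairwise sigmoids, which decomposes over classes; the logits below are illustrative:

    import numpy as np

    def softmax(f):
        e = np.exp(f - f.max())                          # shift for stability
        return e / e.sum()

    def one_vs_each(f, k):
        diffs = f[k] - np.delete(f, k)                   # pairwise margins
        return np.prod(1.0 / (1.0 + np.exp(-diffs)))     # product of sigmoids

    f = np.array([2.0, 0.5, -1.0])
    print(softmax(f)[0], one_vs_each(f, 0))              # bound <= softmax prob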

Overdispersed Black-Box Variational Inference

no code implementations3 Mar 2016 Francisco J. R. Ruiz, Michalis K. Titsias, David M. Blei

Instead of taking samples from the variational distribution, we use importance sampling to take samples from an overdispersed distribution in the same exponential family as the variational approximation.

Variational Inference
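
A generic sketch of the reweighting step described above, for a Gaussian inflated by an illustrative dispersion factor tau:

    import numpy as np

    def log_normal(z, mu, sigma):
        return -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

    def overdispersed_is(f, mu, sigma, tau=2.0, n=10_000):
        z = np.random.normal(mu, tau * sigma, n)         # overdispersed proposal
        w = np.exp(log_normal(z, mu, sigma) - log_normal(z, mu, tau * sigma))
        return np.mean(w * f(z))                         # unbiased under N(mu, sigma^2)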

Inference for determinantal point processes without spectral knowledge

no code implementations NeurIPS 2015 Rémi Bardenet, Michalis K. Titsias

DPPs possess desirable properties, such as exact sampling or analyticity of the moments, but learning the parameters of kernel $K$ through likelihood-based inference is not straightforward.

Point Processes, Variational Inference

Local Expectation Gradients for Doubly Stochastic Variational Inference

no code implementations4 Mar 2015 Michalis K. Titsias

We introduce local expectation gradients, a general-purpose stochastic variational inference algorithm for constructing stochastic gradients through sampling from the variational distribution.

Variational Inference

Variational Inference for Uncertainty on the Inputs of Gaussian Process Models

no code implementations8 Sep 2014 Andreas C. Damianou, Michalis K. Titsias, Neil D. Lawrence

The Gaussian process latent variable model (GP-LVM) provides a flexible approach for non-linear dimensionality reduction that has been widely applied.

Dimensionality Reduction, Gaussian Processes +1

Statistical Inference in Hidden Markov Models using $k$-segment Constraints

no code implementations5 Nov 2013 Michalis K. Titsias, Christopher Yau, Christopher C. Holmes

Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data.

Variational Gaussian Process Dynamical Systems

no code implementations NeurIPS 2011 Andreas Damianou, Michalis K. Titsias, Neil D. Lawrence

Our work builds on recent variational approximations for Gaussian process latent variable models to allow for nonlinear dimensionality reduction simultaneously with learning a dynamical prior in the latent space.

Dimensionality Reduction, Time Series +1

The Infinite Gamma-Poisson Feature Model

no code implementations NeurIPS 2007 Michalis K. Titsias

This model can play the role of the prior in a nonparametric Bayesian learning scenario where both the latent features and the number of their occurrences are unknown.
