Search Results for author: Akihiro Matsukawa

Found 4 papers, 4 papers with code

Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality

2 code implementations · 7 Jun 2019 · Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Balaji Lakshminarayanan

To determine whether or not inputs reside in the typical set, we propose a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods.
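A rough sketch of this kind of test, in Python/NumPy: compare a batch's mean negative log-likelihood to the model's entropy (estimated as the mean NLL on held-out in-distribution data) and flag the batch if the gap exceeds a bootstrapped threshold. The function name and inputs here are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def typicality_test(batch_nll, train_nll, alpha=0.99, n_boot=10000, rng=None):
    """Batch typicality test (sketch).

    batch_nll: per-example negative log-likelihoods of the test batch
               under the trained generative model.
    train_nll: per-example NLLs on held-out in-distribution data, used
               to estimate the model's entropy and to bootstrap a
               threshold for batches of this size.
    Returns True if the batch is flagged as out-of-distribution.
    """
    rng = rng or np.random.default_rng(0)
    m = len(batch_nll)
    # Entropy estimate: mean NLL over held-out in-distribution data.
    entropy_hat = np.mean(train_nll)
    # Bootstrap the deviation |mean NLL - entropy| for size-m batches.
    boots = np.abs(
        rng.choice(train_nll, size=(n_boot, m), replace=True).mean(axis=1)
        - entropy_hat
    )
    eps = np.quantile(boots, alpha)  # threshold at the alpha quantile
    # A batch far from the typical set (in mean NLL) is flagged as OOD.
    return abs(np.mean(batch_nll) - entropy_hat) > eps
```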

Improved Knowledge Distillation via Teacher Assistant

3 code implementations · 9 Feb 2019 · Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, Hassan Ghasemzadeh

To alleviate this shortcoming, we introduce multi-step knowledge distillation, which employs an intermediate-sized network (teacher assistant) to bridge the gap between the student and the teacher.

Knowledge Distillation
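A minimal sketch of the multi-step idea, assuming the standard softened-logits distillation loss of Hinton et al.: distill the teacher into the intermediate assistant first, then the assistant into the student. The `distill` helper in the comments is hypothetical; this is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, lam=0.5):
    """Softened-logits distillation loss: cross-entropy on true labels
    plus KL between temperature-scaled teacher and student outputs."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to account for the temperature
    return (1 - lam) * ce + lam * kl

# Multi-step distillation: teacher -> assistant, then assistant -> student.
# `distill(big, small, loader)` is a hypothetical helper that trains
# `small` to minimize kd_loss against `big`'s logits over `loader`:
#   assistant = distill(teacher, assistant, train_loader)
#   student   = distill(assistant, student, train_loader)
```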

Hybrid Models with Deep and Invertible Features

1 code implementation · 7 Feb 2019 · Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan

We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e., a normalizing flow).

Probabilistic Deep Learning
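A minimal sketch of such a hybrid in PyTorch, assuming a flow object that returns the transformed features together with the log-determinant of its Jacobian; the class name, interface, and weighting `lam` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NeuralHybrid(nn.Module):
    """Sketch: a normalizing flow gives exact log p(x) via the change of
    variables, and a linear classifier on the flow features z = f(x)
    defines p(y|x)."""

    def __init__(self, flow, n_features, n_classes):
        super().__init__()
        self.flow = flow  # assumed interface: flow(x) -> (z, log_det_jacobian)
        self.linear = nn.Linear(n_features, n_classes)
        self.base = torch.distributions.Normal(0.0, 1.0)  # base density

    def forward(self, x):
        z, log_det = self.flow(x)
        # Change of variables: log p(x) = log p_base(z) + log |det df/dx|.
        log_px = self.base.log_prob(z).sum(dim=1) + log_det
        logits = self.linear(z)  # linear model on the invertible features
        return log_px, logits

def hybrid_loss(log_px, logits, y, lam=1.0):
    # Joint objective: discriminative term plus lam * generative term.
    ce = nn.functional.cross_entropy(logits, y)
    return ce - lam * log_px.mean()
```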

Do Deep Generative Models Know What They Don't Know?

4 code implementations · ICLR 2019 · Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan

A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data.
