NeurIPS 2018

Adversarial Logit Pairing

NeurIPS 2018 tensorflow/models

In this paper, we develop improved techniques for defending against adversarial examples at scale.
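The central technique of adversarial logit pairing is a penalty that encourages the logits a model produces for a clean example and for its adversarial counterpart to match. A minimal numpy sketch of that pairing term (the function name and toy logits are illustrative, not from the paper's code):

```python
import numpy as np

def logit_pairing_loss(clean_logits, adv_logits):
    """L2 penalty encouraging clean and adversarial logits to agree,
    averaged over the batch."""
    diff = clean_logits - adv_logits
    return np.mean(np.sum(diff ** 2, axis=1))

# Toy batch of 2 examples with 3 classes each.
clean = np.array([[2.0, 0.5, -1.0], [0.0, 1.0, 0.0]])
adv   = np.array([[1.0, 0.5, -1.0], [0.0, 0.0, 0.0]])

# In training, this term would be added (with a weight) to the usual
# adversarial-training cross-entropy loss.
print(logit_pairing_loss(clean, adv))  # → 1.0
```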

Pelee: A Real-Time Object Detection System on Mobile Devices

NeurIPS 2018 Robert-JunWang/Pelee

In this study, we propose an efficient architecture named PeleeNet, which is built with conventional convolution instead of depthwise separable convolution.


Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?

NeurIPS 2018 nbansal90/Can-we-Gain-More-from-Orthogonality

This paper seeks to answer the question: as the (near-) orthogonality of weights is found to be a favorable property for training deep convolutional neural networks, how can we enforce it in more effective and easy-to-use ways?
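One of the soft orthogonality regularizers studied in this line of work penalizes the Frobenius-norm distance between a weight matrix's Gram matrix and the identity. A short numpy sketch of that penalty (a generic illustration, not the paper's implementation):

```python
import numpy as np

def soft_orthogonality_penalty(W):
    """||W^T W - I||_F^2: zero iff the columns of W are orthonormal,
    so adding it to the training loss nudges weights toward
    (near-)orthogonality."""
    k = W.shape[1]
    gram = W.T @ W
    return np.sum((gram - np.eye(k)) ** 2)

# An exactly orthonormal matrix incurs (numerically) zero penalty.
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(8, 4)))
print(soft_orthogonality_penalty(Q))  # ~0.0
```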

PointCNN: Convolution On $\mathcal{X}$-Transformed Points

NeurIPS 2018 chinakook/PointCNN.MX

The proposed method is a generalization of typical CNNs to feature learning from point clouds; thus we call it PointCNN.

Discretely Relaxing Continuous Variables for Tractable Variational Inference

NeurIPS 2018 treforevans/direct

We explore a new research direction in Bayesian variational inference with discrete latent variable priors where we exploit Kronecker matrix algebra for efficient and exact computations of the evidence lower bound (ELBO).

Learning long-range spatial dependencies with horizontal gated-recurrent units

NeurIPS 2018 serre-lab/hgru_share

As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching -- and sometimes even surpassing -- human accuracy on a variety of visual recognition tasks.


Doubly Robust Bayesian Inference for Non-Stationary Streaming Data with $\beta$-Divergences

NeurIPS 2018 alan-turing-institute/bocpdms

The resulting inference procedure is doubly robust for both the parameter and the changepoint (CP) posterior, with linear time and constant space complexity.


A Simple Cache Model for Image Recognition

NeurIPS 2018 eminorhan/simple-cache

We propose to extract this extra class-relevant information using a simple key-value cache memory to improve the classification performance of the model at test time.
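The key-value cache idea can be illustrated in a few lines: store feature vectors as keys and one-hot labels as values, then at test time form class probabilities as a similarity-weighted vote over the cache. This numpy sketch is a generic illustration under assumed cosine-similarity weighting, not the paper's code:

```python
import numpy as np

def cache_predict(query, keys, values, theta=10.0):
    """Similarity-weighted vote over cached (key, label) pairs.
    keys: (n, d) stored feature vectors; values: (n, c) one-hot labels;
    theta sharpens the softmax over similarities."""
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    sims = k @ q                      # cosine similarity to each key
    weights = np.exp(theta * sims)
    weights /= weights.sum()
    return weights @ values           # class probabilities from the cache

# Two cached items of different classes; the query is closest to the first.
keys = np.array([[1.0, 0.0], [0.0, 1.0]])
values = np.array([[1.0, 0.0], [0.0, 1.0]])  # one-hot labels
print(cache_predict(np.array([0.9, 0.1]), keys, values))
```

At test time the cache distribution would typically be interpolated with the network's own softmax output.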


Computing Kantorovich-Wasserstein Distances on $d$-dimensional histograms using $(d+1)$-partite graphs

NeurIPS 2018 stegua/dpartion-nips2018

This paper presents a novel method to compute the exact Kantorovich-Wasserstein distance between a pair of $d$-dimensional histograms having $n$ bins each.
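For intuition about what is being computed, the classical $d=1$ case has a closed form: the 1-Wasserstein distance between two histograms on the same unit-spaced bins is the sum of absolute CDF differences. The paper's $(d+1)$-partite construction addresses general $d$; this numpy sketch only covers that simple special case:

```python
import numpy as np

def wasserstein_1d(p, q):
    """Exact 1-Wasserstein (earth mover's) distance between two 1-D
    histograms on identical unit-spaced bins: sum |CDF_p - CDF_q|."""
    return np.sum(np.abs(np.cumsum(p - q)))

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
print(wasserstein_1d(p, q))  # → 1.0 (mass 0.5 moved two bins)
```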

Ridge Regression and Provable Deterministic Ridge Leverage Score Sampling

NeurIPS 2018 srmcc/deterministic-ridge-leverage-sampling

We also show that under the assumption of power-law decay of ridge leverage scores, this deterministic algorithm is provably as accurate as randomized algorithms.
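A ridge leverage score measures how important each column of a matrix is under $\lambda$-regularized regression, and a deterministic scheme simply keeps the columns with the largest scores. This numpy sketch shows one standard formulation of the scores (an illustration, not the repository's implementation):

```python
import numpy as np

def ridge_leverage_scores(A, lam):
    """Ridge leverage score of each column of A:
    tau_i = (A^T (A A^T + lam*I)^{-1} A)_ii, each in (0, 1)."""
    n = A.shape[0]
    M = A @ A.T + lam * np.eye(n)
    T = A.T @ np.linalg.solve(M, A)
    return np.diag(T)

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 8))
tau = ridge_leverage_scores(A, lam=1.0)

# Deterministic selection: keep the k columns with the largest scores.
keep = np.argsort(tau)[::-1][:4]
print(tau.round(3), keep)
```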