Search Results for author: Yichen Shen

Found 6 papers, 3 papers with code

Real-Time Uncertainty Estimation in Computer Vision via Uncertainty-Aware Distribution Distillation

no code implementations 31 Jul 2020 Yichen Shen, Zhilu Zhang, Mert R. Sabuncu, Lin Sun

We propose a simple, easy-to-optimize distillation method for learning the conditional predictive distribution of a pre-trained dropout model for fast, sample-free uncertainty estimation in computer vision tasks.

Depth Estimation Semantic Segmentation +1
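
A minimal sketch of the distillation idea described in the abstract above: a dropout teacher is sampled several times per image, and a deterministic student learns to output the mean and variance of those samples in a single forward pass. The architectures, sample count, and Gaussian-NLL-style loss below are illustrative assumptions, not the paper's exact training objective.

```python
# Hypothetical sketch: distilling an MC-dropout teacher into a deterministic
# student that predicts a per-pixel mean and variance in one forward pass.
import torch

def mc_dropout_teacher_stats(teacher, x, n_samples=16):
    """Run the dropout teacher n_samples times; return per-pixel mean and variance."""
    teacher.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        preds = torch.stack([teacher(x) for _ in range(n_samples)], dim=0)
    return preds.mean(dim=0), preds.var(dim=0)

def distillation_step(student, teacher, x, optimizer, eps=1e-6):
    """One step: fit the student's predictive Gaussian to the teacher's samples."""
    mu_t, var_t = mc_dropout_teacher_stats(teacher, x)
    mu_s, log_var_s = student(x)  # assumed: student outputs a mean and a log-variance map
    # Gaussian negative log-likelihood of the teacher mean under the student,
    # plus a term pulling the student variance toward the teacher variance.
    nll = 0.5 * (log_var_s + (mu_t - mu_s) ** 2 / (log_var_s.exp() + eps)).mean()
    var_match = (log_var_s.exp() - var_t).abs().mean()
    loss = nll + var_match
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```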

Migrating Knowledge between Physical Scenarios based on Artificial Neural Networks

no code implementations 27 Aug 2018 Yurui Qu, Li Jing, Yichen Shen, Min Qiu, Marin Soljačić

First, we demonstrate that in predicting the transmission from multilayer photonic film, the relative error rate is reduced by 46.8% (26.5%) when the source data comes from 10-layer (8-layer) films and the target data comes from 8-layer (10-layer) films.

Multi-Task Learning
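
A minimal sketch of the knowledge-migration recipe, assuming a simple MLP that maps layer thicknesses to a transmission spectrum: pretrain on 10-layer films, then reuse (and optionally freeze) the deeper layers when fitting the scarcer 8-layer data. Layer sizes, the stand-in datasets, and the freezing choice are assumptions, not the paper's setup.

```python
# Hypothetical sketch of migrating knowledge between photonic scenarios.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_mlp(n_in, n_out, hidden=256):
    return nn.Sequential(
        nn.Linear(n_in, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, n_out),
    )

def fit(model, loader, epochs=20, lr=1e-3):
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    for _ in range(epochs):
        for thicknesses, spectrum in loader:
            opt.zero_grad()
            nn.functional.mse_loss(model(thicknesses), spectrum).backward()
            opt.step()
    return model

# Stand-in data: layer thicknesses -> transmission at 200 wavelengths; real data
# would come from electromagnetic simulations of the multilayer films.
source_loader = DataLoader(TensorDataset(torch.rand(1000, 10), torch.rand(1000, 200)), batch_size=64)
target_loader = DataLoader(TensorDataset(torch.rand(100, 8), torch.rand(100, 200)), batch_size=32)

# Source task: abundant 10-layer data.
source_model = fit(make_mlp(n_in=10, n_out=200), source_loader)

# Target task: 8-layer films. Only the input layer changes shape, so the deeper
# layers can be migrated directly and (optionally) frozen before fine-tuning.
target_model = make_mlp(n_in=8, n_out=200)
target_model[2:].load_state_dict(source_model[2:].state_dict())
for p in target_model[2:].parameters():
    p.requires_grad = False
fit(target_model, target_loader)
```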

Nanophotonic Particle Simulation and Inverse Design Using Artificial Neural Networks

1 code implementation 18 Oct 2017 John Peurifoy, Yichen Shen, Li Jing, Yi Yang, Fidel Cano-Renteria, Brendan Delacy, Max Tegmark, John D. Joannopoulos, Marin Soljacic

We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles.

Computational Physics Applied Physics Optics
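
A minimal sketch of the inverse-design step implied by the abstract: once a surrogate network predicts the scattering spectrum from shell thicknesses, design reduces to gradient descent on the inputs with the network weights frozen. The surrogate `scatter_net`, the target spectrum, and the thickness normalization below are assumptions.

```python
# Hypothetical inverse design through a trained forward-model network.
import torch
import torch.nn as nn

def inverse_design(scatter_net, target_spectrum, n_shells=5, steps=2000, lr=0.01):
    """Gradient-descend on the input thicknesses so the predicted spectrum matches the target."""
    scatter_net.eval()
    for p in scatter_net.parameters():
        p.requires_grad = False                                # freeze the surrogate model
    thicknesses = torch.rand(1, n_shells, requires_grad=True)  # initial design guess
    opt = torch.optim.Adam([thicknesses], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (scatter_net(thicknesses) - target_spectrum).pow(2).mean()
        loss.backward()                                        # gradients flow back to the inputs
        opt.step()
        thicknesses.data.clamp_(0.0, 1.0)                      # keep designs in a normalized range
    return thicknesses.detach()

# Usage with a stand-in surrogate; in practice scatter_net would first be trained
# on simulated scattering spectra of multilayer nanoparticles.
scatter_net = nn.Sequential(nn.Linear(5, 128), nn.ReLU(), nn.Linear(128, 200))
design = inverse_design(scatter_net, target_spectrum=torch.rand(200))
print(design)
```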

Gated Orthogonal Recurrent Units: On Learning to Forget

1 code implementation 8 Jun 2017 Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljačić, Yoshua Bengio

We present a novel recurrent neural network (RNN) based model that combines the remembering ability of unitary RNNs with the ability of gated RNNs to effectively forget redundant/irrelevant information in its memory.

Ranked #7 on Question Answering on bAbi (Accuracy (trained on 1k) metric)

Denoising Question Answering
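
A minimal sketch of a GORU-style cell, assuming a GRU-like gating pattern wrapped around an orthogonal recurrent matrix (built here as the matrix exponential of a skew-symmetric parameter); `torch.relu` stands in for the paper's modReLU nonlinearity, so this is a simplification rather than the exact cell.

```python
# Hypothetical GORU-style cell: gated forgetting plus a norm-preserving recurrence.
import torch
import torch.nn as nn

class GORUCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.x2h = nn.Linear(input_size, 3 * hidden_size)       # input terms for r, z, candidate
        self.h2gates = nn.Linear(hidden_size, 2 * hidden_size)  # hidden terms for r, z
        self.skew = nn.Parameter(torch.empty(hidden_size, hidden_size).uniform_(-0.01, 0.01))

    def orthogonal_recurrent(self):
        a = self.skew - self.skew.t()     # skew-symmetric matrix
        return torch.matrix_exp(a)        # its matrix exponential is orthogonal

    def forward(self, x, h):
        wx_r, wx_z, wx_c = self.x2h(x).chunk(3, dim=-1)
        uh_r, uh_z = self.h2gates(h).chunk(2, dim=-1)
        r = torch.sigmoid(wx_r + uh_r)                    # reset gate
        z = torch.sigmoid(wx_z + uh_z)                    # update (forget) gate
        u = self.orthogonal_recurrent()
        candidate = torch.relu(wx_c + (r * h) @ u.t())    # norm-preserving recurrence
        return z * h + (1 - z) * candidate                # gated interpolation, as in a GRU

cell = GORUCell(input_size=10, hidden_size=32)
h = torch.zeros(4, 32)
for x in torch.randn(20, 4, 10):   # (time, batch, features)
    h = cell(x, h)
```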

Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs

4 code implementations ICML 2017 Li Jing, Yichen Shen, Tena Dubček, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, Marin Soljačić

Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data.

Permuted-MNIST
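
A minimal sketch of an EUNN-style unitary parametrization, assuming the matrix is composed from layers of 2x2 rotations on alternating index pairs plus diagonal complex phases; the layer count and pairing pattern are illustrative. The checks at the end confirm the product is unitary and therefore norm-preserving, which is the property the abstract appeals to for avoiding exploding/vanishing gradients.

```python
# Hypothetical EUNN-style construction: a unitary matrix from rotation layers and phases.
import numpy as np

def rotation_layer(n, thetas, offset):
    """Block-diagonal unitary applying 2x2 rotations to pairs (offset, offset+1), (offset+2, offset+3), ..."""
    u = np.eye(n, dtype=complex)
    for k, i in enumerate(range(offset, n - 1, 2)):
        c, s = np.cos(thetas[k]), np.sin(thetas[k])
        u[i, i], u[i, i + 1] = c, -s
        u[i + 1, i], u[i + 1, i + 1] = s, c
    return u

def eunn_matrix(n, n_layers, rng):
    u = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n)))   # diagonal phase factors
    for layer in range(n_layers):
        offset = layer % 2                                    # alternate the pairing pattern
        thetas = rng.uniform(0, 2 * np.pi, (n - offset) // 2)
        u = rotation_layer(n, thetas, offset) @ u             # product of unitaries stays unitary
    return u

rng = np.random.default_rng(0)
U = eunn_matrix(8, n_layers=8, rng=rng)
print(np.allclose(U.conj().T @ U, np.eye(8)))      # True: U is unitary
h = rng.normal(size=8) + 1j * rng.normal(size=8)
print(np.linalg.norm(U @ h) / np.linalg.norm(h))   # ~1.0: hidden-state norms are preserved
```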
