Search Results for author: Dae Hoon Park

Found 7 papers, 0 papers with code

Compiler-Level Matrix Multiplication Optimization for Deep Learning

no code implementations · 23 Sep 2019 · Huaqing Zhang, Xiaolin Cheng, Hui Zang, Dae Hoon Park

Compiler-level optimization of GEMM has significant performance impact on training and executing deep learning models.
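The excerpt above is about compiler-level GEMM optimization. As a hedged illustration of the kind of loop transformation such a compiler pass typically performs, here is a minimal loop-tiling (blocking) sketch in NumPy; the block size and matrix shapes are illustrative, and this is not the paper's implementation.

```python
import numpy as np

def blocked_gemm(A, B, block=64):
    """Blocked (tiled) matrix multiply: C = A @ B.

    Tiling the i/j/k loops for cache reuse is the classic GEMM
    transformation a compiler applies; here the tiling is written out
    explicitly, with NumPy handling each tile's product.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=A.dtype)
    for i0 in range(0, m, block):
        for j0 in range(0, n, block):
            for k0 in range(0, k, block):
                # Accumulate the contribution of one tile of A and B.
                C[i0:i0 + block, j0:j0 + block] += (
                    A[i0:i0 + block, k0:k0 + block]
                    @ B[k0:k0 + block, j0:j0 + block]
                )
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((256, 192)).astype(np.float32)
    B = rng.standard_normal((192, 320)).astype(np.float32)
    assert np.allclose(blocked_gemm(A, B), A @ B, atol=1e-3)
```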

Gradient-Coherent Strong Regularization for Deep Neural Networks

no code implementations · 20 Nov 2018 · Dae Hoon Park, Chiu Man Ho, Yi Chang, Huaqing Zhang

However, we observe that imposing strong L1 or L2 regularization with stochastic gradient descent on deep neural networks easily fails, which limits the generalization ability of the underlying neural networks.

L2 Regularization
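For context on the excerpt above, the following is a minimal PyTorch sketch of the standard setup it refers to: adding explicit L1/L2 penalties to the loss and training with stochastic gradient descent. The model, data, and penalty strengths are placeholders, and the paper's gradient-coherent regularization mechanism is not shown.

```python
import torch
import torch.nn as nn

# Toy model and data; both are placeholders, not from the paper.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

l1_strength = 1e-3  # "strong" regularization corresponds to large values here
l2_strength = 1e-2

x = torch.randn(128, 20)
y = torch.randint(0, 2, (128,))

for step in range(100):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    # Explicit L1 and L2 penalties added on top of the data loss.
    l1 = sum(p.abs().sum() for p in model.parameters())
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    (loss + l1_strength * l1 + l2_strength * l2).backward()
    optimizer.step()
```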

Adversarial Sampling and Training for Semi-Supervised Information Retrieval

no code implementations · 9 Nov 2018 · Dae Hoon Park, Yi Chang

To solve these problems simultaneously, we propose an adversarial sampling and training framework to learn ad-hoc retrieval models with implicit feedback.

Information Retrieval · Question Answering · +1
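The excerpt above names an adversarial sampling and training framework for retrieval with implicit feedback. Below is a generic, heavily simplified sketch of adversarial-style hard-negative sampling (negatives drawn in proportion to the current model's scores, trained with a pairwise hinge loss); the scorer, sizes, and sampling scheme are assumptions for illustration, not the paper's framework.

```python
import torch
import torch.nn.functional as F

# Toy setup: dense document embeddings and a bilinear query-document scorer.
num_docs, dim = 1000, 32
doc_emb = torch.randn(num_docs, dim)
W = torch.randn(dim, dim, requires_grad=True)
optimizer = torch.optim.SGD([W], lr=0.01)

def score(q, docs):
    # Bilinear relevance score s(q, d) = q^T W d for each candidate document.
    return (q @ W) @ docs.t()

query = torch.randn(dim)
clicked = 17  # index of a document with observed implicit feedback (a click)

for step in range(50):
    optimizer.zero_grad()
    with torch.no_grad():
        # Adversarial-style sampling: draw a negative in proportion to the
        # current model's scores, so harder negatives are chosen more often.
        # (A fuller version would exclude the clicked document.)
        probs = F.softmax(score(query, doc_emb), dim=-1)
        negative = torch.multinomial(probs, 1).item()
    s_pos = score(query, doc_emb[clicked:clicked + 1]).squeeze()
    s_neg = score(query, doc_emb[negative:negative + 1]).squeeze()
    # Pairwise hinge loss: rank the clicked document above the sampled negative.
    loss = F.relu(1.0 - s_pos + s_neg)
    loss.backward()
    optimizer.step()
```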

Sequenced-Replacement Sampling for Deep Learning

no code implementations · ICLR 2019 · Chiu Man Ho, Dae Hoon Park, Wei Yang, Yi Chang

We propose sequenced-replacement sampling (SRS) for training deep neural networks.

Interpreting Deep Classifier by Visual Distillation of Dark Knowledge

no code implementations · 11 Mar 2018 · Kai Xu, Dae Hoon Park, Chang Yi, Charles Sutton

Interpreting black box classifiers, such as deep networks, allows an analyst to validate a classifier before it is deployed in a high-stakes setting.

Dimensionality Reduction · Model Compression

Achieving Strong Regularization for Deep Neural Networks

no code implementations · ICLR 2018 · Dae Hoon Park, Chiu Man Ho, Yi Chang

L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions.

L2 Regularization
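As a reminder of the objects this excerpt refers to, the standard L1- and L2-regularized training objectives can be written as follows (notation is mine, not taken from the paper):

```latex
% L(w) is the data loss, lambda > 0 the regularization strength.
\[
  \min_{w} \; L(w) + \lambda \lVert w \rVert_1
  \qquad \text{(L1: drives many weights exactly to zero)}
\]
\[
  \min_{w} \; L(w) + \frac{\lambda}{2} \lVert w \rVert_2^2
  \qquad \text{(L2: shrinks all weights toward zero)}
\]
```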
