no code implementations • 23 Sep 2019 • Huaqing Zhang, Xiaolin Cheng, Hui Zang, Dae Hoon Park
Compiler-level optimization of GEMM has a significant performance impact on training and executing deep learning models.
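As a rough illustration of the kind of transformation such optimization performs, the sketch below contrasts a naive triple-loop GEMM with a cache-blocked (tiled) variant. The matrix sizes and tile width are arbitrary example values, not settings from the paper.

```python
import numpy as np

def gemm_naive(A, B):
    """Naive triple-loop GEMM: C = A @ B."""
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(M):
        for j in range(N):
            for k in range(K):
                C[i, j] += A[i, k] * B[k, j]
    return C

def gemm_tiled(A, B, tile=64):
    """Cache-blocked GEMM: iterate over tiles so each block of A and B
    is reused while it is still resident in cache."""
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N), dtype=A.dtype)
    for i0 in range(0, M, tile):
        for k0 in range(0, K, tile):
            for j0 in range(0, N, tile):
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile]
                )
    return C

A = np.random.rand(256, 256).astype(np.float32)
B = np.random.rand(256, 256).astype(np.float32)
assert np.allclose(gemm_tiled(A, B), A @ B, atol=1e-3)
```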
no code implementations • 20 Nov 2018 • Dae Hoon Park, Chiu Man Ho, Yi Chang, Huaqing Zhang
However, we observe that imposing strong L1 or L2 regularization on deep neural networks trained with stochastic gradient descent easily fails, which limits the generalization ability of the underlying networks.
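A minimal sketch of the setup this sentence refers to: plain SGD where an L1 or L2 penalty is added to the loss. The network, data, and penalty strength below are placeholders for illustration; with a deliberately large `reg_strength`, the penalty gradient can dominate the data gradient early in training, which is one reading of why strong regularization "easily fails" here.

```python
import torch
import torch.nn as nn

# Toy network and data; sizes and values are illustrative only.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

reg_strength = 1e-1   # deliberately "strong"; typical values are far smaller
reg_type = "l1"       # or "l2"

for step in range(100):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    # Add the regularization penalty directly to the training loss.
    penalty = sum(
        w.abs().sum() if reg_type == "l1" else (w ** 2).sum()
        for w in model.parameters()
    )
    # With a large reg_strength, this combined gradient is dominated by the
    # penalty term and can shrink the weights before the network fits the data.
    (loss + reg_strength * penalty).backward()
    optimizer.step()
```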
no code implementations • 9 Nov 2018 • Dae Hoon Park, Yi Chang
To address these problems simultaneously, we propose an adversarial sampling and training framework to learn ad-hoc retrieval models with implicit feedback.
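The sentence names the framework without detailing it; as background only, the sketch below shows a generic adversarial negative-sampling loop for pairwise ranking, where negatives are drawn in proportion to the current ranker's own scores over unlabeled candidates. The ranker, features, and sampling scheme are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

# Toy ranker that scores a (query, document) pair from concatenated features.
ranker = torch.nn.Linear(16, 1)
opt = torch.optim.Adam(ranker.parameters(), lr=1e-3)

def score(q, d):
    q, d = torch.broadcast_tensors(q, d)
    return ranker(torch.cat([q, d], dim=-1)).squeeze(-1)

q = torch.randn(8)                 # one query
pos_doc = torch.randn(8)           # a clicked (implicit-positive) document
candidates = torch.randn(100, 8)   # unlabeled candidate documents

for step in range(50):
    # Adversarial sampling: draw "hard" negatives in proportion to how highly
    # the current ranker scores the unlabeled candidates.
    with torch.no_grad():
        probs = F.softmax(score(q, candidates), dim=0)
    neg_docs = candidates[torch.multinomial(probs, num_samples=4)]

    # Pairwise hinge loss: the clicked document should outscore sampled negatives.
    loss = F.relu(1.0 - score(q, pos_doc) + score(q, neg_docs)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```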
no code implementations • ICLR 2019 • Chiu Man Ho, Dae Hoon Park, Wei Yang, Yi Chang
We propose sequenced-replacement sampling (SRS) for training deep neural networks.
no code implementations • 11 Mar 2018 • Kai Xu, Dae Hoon Park, Chang Yi, Charles Sutton
Interpreting black box classifiers, such as deep networks, allows an analyst to validate a classifier before it is deployed in a high-stakes setting.
no code implementations • ICLR 2018 • Dae Hoon Park, Chiu Man Ho, Yi Chang
L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions.
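For reference, the standard regularized objectives this sentence refers to, written out; the notation (loss $\mathcal{L}$, parameters $w$, strength $\lambda$) is assumed here rather than taken from the paper.

```latex
% L1- and L2-regularized training objectives (notation assumed for illustration).
\[
  \min_{w}\; \mathcal{L}(w) + \lambda \lVert w \rVert_1
  \qquad \text{(L1: drives weights to exactly zero, yielding sparse solutions)}
\]
\[
  \min_{w}\; \mathcal{L}(w) + \frac{\lambda}{2} \lVert w \rVert_2^2
  \qquad \text{(L2 / weight decay: shrinks weights toward zero)}
\]
```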