no code implementations • 19 Dec 2021 • Seo Jin Park, Joshua Fried, Sunghyun Kim, Mohammad Alizadeh, Adam Belay
As emerging deep neural network (DNN) models continue to grow in size, using large GPU clusters to train DNNs is becoming essential to achieve acceptable training times.
no code implementations • 25 Sep 2019 • Sunghyun Kim, Minje Jang, Changho Suh
Because existing state-of-the-art algorithms are tailored to particular statistical models, the best algorithm differs from one scenario to another.
no code implementations • NeurIPS 2017 • Minje Jang, Sunghyun Kim, Changho Suh, Sewoong Oh
As a result, we characterize the minimax-optimal sample size for top-$K$ ranking.
no code implementations • 14 Mar 2016 • Minje Jang, Sunghyun Kim, Changho Suh, Sewoong Oh
First, in a general comparison model where the item pairs to compare are given a priori, we derive upper and lower bounds on the sample size required for reliable recovery of the top-$K$ ranked items.
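As a rough illustration of the top-$K$ recovery setting, the sketch below simulates pairwise comparisons under a Bradley-Terry-style model and recovers the top-$K$ items by a simple win-counting (Borda-like) estimator. This is not the authors' algorithm; the scores, sample sizes, and function names are all hypothetical.

```python
# Hedged sketch: top-K recovery from pairwise comparisons via win counting.
# This is a Borda-style baseline, NOT the estimator analyzed in the paper;
# the score values and sample count below are illustrative assumptions.
import random
from collections import Counter

def top_k_by_wins(comparisons, k):
    """Rank items by number of pairwise wins; return the k most-winning items."""
    wins = Counter()
    for winner, _loser in comparisons:
        wins[winner] += 1
    return [item for item, _ in wins.most_common(k)]

# Simulate comparisons under a Bradley-Terry-like model:
# P(i beats j) = w[i] / (w[i] + w[j]), with made-up scores w.
random.seed(0)
w = {"a": 4.0, "b": 2.0, "c": 1.0, "d": 0.5}
items = list(w)
comparisons = []
for _ in range(2000):
    i, j = random.sample(items, 2)
    p_i = w[i] / (w[i] + w[j])
    comparisons.append((i, j) if random.random() < p_i else (j, i))

print(top_k_by_wins(comparisons, 2))
```

With enough comparisons per pair, the win-count ordering concentrates around the true score ordering, so the top-2 set here is recovered with high probability; the paper's results concern how small the sample size can be while still guaranteeing such reliable recovery.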