1 code implementation • 7 Apr 2023 • Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan Arik, Somesh Jha, Tomas Pfister
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain while increasing accuracy and coverage.
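Selective prediction is typically evaluated by the accuracy/coverage trade-off the snippet mentions: the model abstains on low-confidence samples, and accuracy is measured only on the covered (non-abstained) subset. Below is a minimal illustrative sketch of that evaluation, assuming a max-softmax confidence score and a fixed threshold; this is a generic formulation, not the paper's specific method.

```python
import numpy as np

def selective_prediction(probs, labels, threshold):
    """Accept a prediction only when the max softmax probability
    exceeds `threshold`; report accuracy on the accepted subset
    and the fraction of samples covered."""
    confidence = probs.max(axis=1)
    accepted = confidence >= threshold
    coverage = accepted.mean()
    preds = probs.argmax(axis=1)
    # Accuracy is computed only over the covered samples.
    if not accepted.any():
        return 0.0, 0.0
    accuracy = (preds[accepted] == labels[accepted]).mean()
    return accuracy, coverage
```

Raising the threshold lowers coverage but usually raises accuracy on the covered set; active selective prediction aims to improve both at once by querying labels for informative target-domain samples.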
no code implementations • 3 Mar 2022 • Chun-Hao Chang, Jinsung Yoon, Sercan Arik, Madeleine Udell, Tomas Pfister
In addition, the proposed framework, DIAD, can incorporate a small amount of labeled data to further boost anomaly detection performance in semi-supervised settings.
1 code implementation • 4 Feb 2022 • Sana Tonekaboni, Chun-Liang Li, Sercan Arik, Anna Goldenberg, Tomas Pfister
Learning representations that capture the factors contributing to this variability enables a better understanding of the data via its underlying generative process and improves performance on downstream machine learning tasks.
no code implementations • ICLR 2020 • Chen Xing, Sercan Arik, Zizhao Zhang, Tomas Pfister
To circumvent this, we propose to train a confidence model jointly with the classification model, inferring the distance for every test sample.
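One common way to train a confidence estimate jointly with a classifier is to let a predicted confidence c interpolate the classifier's output toward the true label, while penalizing low confidence (DeVries & Taylor-style confidence learning). The sketch below shows that joint loss in numpy; it is an illustrative formulation under that assumption, not necessarily the loss used in this paper.

```python
import numpy as np

def joint_confidence_loss(probs, confidence, onehot, lam=0.5):
    """Joint classification + confidence loss: the prediction is
    interpolated toward the label by (1 - c), so the model may
    "ask for hints" at the cost of the -log(c) penalty."""
    c = confidence[:, None]
    adjusted = c * probs + (1.0 - c) * onehot
    # Cross-entropy on the confidence-adjusted prediction.
    nll = -(onehot * np.log(adjusted + 1e-12)).sum(axis=1).mean()
    # Penalize low confidence so the model cannot always hedge.
    conf_penalty = -np.log(confidence + 1e-12).mean()
    return nll + lam * conf_penalty
```

At test time the learned confidence score can serve as the per-sample signal that the snippet describes inferring for every test sample.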
no code implementations • 25 Sep 2019 • Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Pradeep Ravikumar, Tomas Pfister
Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts.
no code implementations • 7 Jul 2019 • Yanqi Zhou, Peng Wang, Sercan Arik, Haonan Yu, Syed Zawad, Feng Yan, Greg Diamos
In this paper, we propose Efficient Progressive Neural Architecture Search (EPNAS), a neural architecture search (NAS) framework that efficiently handles large search spaces through a novel progressive search policy with performance prediction, based on REINFORCE (Williams, 1992).
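REINFORCE, the policy-gradient estimator the search policy builds on, updates policy parameters in the direction of the log-probability gradient weighted by the (baseline-subtracted) reward. The toy sketch below applies it to a two-action softmax policy with a reward of 1 for action 1; it only illustrates the estimator itself, not EPNAS's architecture-search policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_step(theta, lr=0.1, batch=64):
    """One REINFORCE update for a 2-action softmax policy with
    logits `theta`. Toy reward: 1.0 for action 1, 0.0 for action 0,
    so the policy should learn to prefer action 1."""
    probs = np.exp(theta) / np.exp(theta).sum()
    actions = rng.choice(2, size=batch, p=probs)
    rewards = actions.astype(float)      # toy reward signal
    baseline = rewards.mean()            # variance-reduction baseline
    # grad of log pi(a) for a softmax policy: one_hot(a) - probs
    one_hot = np.eye(2)[actions]
    grad = ((rewards - baseline)[:, None] * (one_hot - probs)).mean(axis=0)
    return theta + lr * grad
```

In NAS, the "action" would be a sequence of architecture choices and the "reward" the validated performance (here, predicted performance) of the sampled architecture.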
no code implementations • ICLR 2018 • Yanqi Zhou, Wei Ping, Sercan Arik, Kainan Peng, Greg Diamos
This paper introduces HybridNet, a hybrid neural network that speeds up autoregressive models for raw audio waveform generation.
1 code implementation • NeurIPS 2017 • Sercan Arik, Gregory Diamos, Andrew Gibiansky, John Miller, Kainan Peng, Wei Ping, Jonathan Raiman, Yanqi Zhou
We introduce Deep Voice 2, which is based on a pipeline similar to Deep Voice 1 but constructed with higher-performance building blocks, and demonstrates a significant audio quality improvement over Deep Voice 1.
no code implementations • 3 Jun 2014 • Sercan Arik, Sukru Burc Eryilmaz, Adam Goldberg
In this work, we apply machine learning techniques to automated stock picking, using a larger number of financial parameters for individual companies than previous studies.