no code implementations • 30 Jan 2023 • KrishnaTeja Killamsetty, Alexandre V. Evfimievski, Tejaswini Pedapati, Kiran Kate, Lucian Popa, Rishabh Iyer
Training deep networks and tuning hyperparameters on large datasets is computationally intensive.
1 code implementation • 15 Mar 2022 • KrishnaTeja Killamsetty, Guttu Sai Abhishek, Aakriti, Alexandre V. Evfimievski, Lucian Popa, Ganesh Ramakrishnan, Rishabh Iyer
Our central insight is that using an informative subset of the dataset for the model training runs involved in hyper-parameter optimization allows us to find the optimal hyper-parameter configuration significantly faster.
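The idea above can be sketched in a few lines. The following is a hypothetical toy illustration, not the paper's actual algorithm or subset-selection method: each candidate hyper-parameter is evaluated by training on a small sample of the data, and only the winning configuration is retrained on the full dataset. The dataset, model (1-D linear regression fit by SGD), and the use of a random sample in place of a gradient-based informative subset are all assumptions for illustration.

```python
# Hypothetical sketch: tune a hyper-parameter (learning rate) on a small
# subset, then train only the best configuration on the full data.
# A random sample stands in for the paper's informative subset selection.
import random

random.seed(0)

# Toy 1-D dataset: y = 3*x + Gaussian noise.
data = [(x, 3.0 * x + random.gauss(0, 0.1))
        for x in (random.uniform(-1, 1) for _ in range(1000))]

def train(dataset, lr, steps=200):
    """Fit y = w*x by SGD; return the final mean squared error."""
    w = 0.0
    for _ in range(steps):
        x, y = random.choice(dataset)
        w -= lr * 2.0 * (w * x - y) * x  # gradient step on squared error
    return sum((w * x - y) ** 2 for x, y in dataset) / len(dataset)

# Cheap proxy: evaluate every candidate on a 5% subset of the data.
subset = random.sample(data, 50)
candidates = [0.001, 0.01, 0.1, 0.5]
best_lr = min(candidates, key=lambda lr: train(subset, lr))

# Only the winner is trained on the full dataset.
final_mse = train(data, best_lr, steps=2000)
print(best_lr, final_mse)
```

Because each trial touches only 5% of the data, the search over candidates costs a fraction of a single full training run, which is the source of the speed-up the insight describes.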
no code implementations • 19 May 2016 • Matthias Boehm, Alexandre V. Evfimievski, Niketan Pansare, Berthold Reinwald
Specification alternatives range from ML algorithms expressed in domain-specific languages (DSLs), which are optimized for performance, to ML task (learning problem) specifications, which are optimized for both performance and accuracy.