Search Results for author: Muthian Sivathanu

Found 5 papers, 2 papers with code

Singularity: Planet-Scale, Preemptive and Elastic Scheduling of AI Workloads

no code implementations16 Feb 2022 Dharma Shukla, Muthian Sivathanu, Srinidhi Viswanatha, Bhargav Gulavani, Rimma Nehme, Amey Agrawal, Chen Chen, Nipun Kwatra, Ramachandran Ramjee, Pankaj Sharma, Atul Katiyar, Vipul Modi, Vaibhav Sharma, Abhishek Singh, Shreshth Singhal, Kaustubh Welankar, Lu Xun, Ravi Anupindi, Karthik Elangovan, Hasibur Rahman, Zhou Lin, Rahul Seetharaman, Cheng Xu, Eddie Ailijiang, Suresh Krishnappa, Mark Russinovich

At the heart of Singularity is a novel, workload-aware scheduler that can transparently preempt and elastically scale deep learning workloads to drive high utilization without impacting their correctness or performance, across a global fleet of AI accelerators (e.g., GPUs, FPGAs).

Scheduling
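
The Singularity abstract above describes the scheduler only at a high level. Below is a minimal, hypothetical sketch (not Singularity's actual code or interfaces) of how priority-driven, elastic scale-down could admit new work: jobs declare a minimum GPU count, and lower-priority jobs are shrunk toward that bound before anything would need to be preempted.

```python
# Hypothetical sketch only -- not Singularity's actual design or code.
# Jobs declare a minimum and current GPU count, so the scheduler can
# elastically shrink lower-priority jobs to admit new work.
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                              # lower value = lower priority
    name: str = field(compare=False)
    gpus: int = field(compare=False)           # current allocation
    min_gpus: int = field(compare=False)       # elastic lower bound

class ElasticScheduler:
    def __init__(self, total_gpus: int):
        self.free = total_gpus
        self.running: list[Job] = []

    def submit(self, job: Job) -> None:
        need = job.gpus - self.free
        # Reclaim GPUs from the lowest-priority running jobs first.
        for victim in sorted(self.running):
            if need <= 0:
                break
            give = min(victim.gpus - victim.min_gpus, need)
            victim.gpus -= give                # elastic scale-down
            self.free += give
            need -= give
        if need > 0:
            return                             # a real system would checkpoint and queue
        self.free -= job.gpus
        self.running.append(job)

sched = ElasticScheduler(total_gpus=16)
sched.submit(Job(priority=1, name="pretrain", gpus=16, min_gpus=8))
sched.submit(Job(priority=5, name="finetune", gpus=8, min_gpus=4))
print([(j.name, j.gpus) for j in sched.running])   # pretrain shrunk to 8 GPUs
```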

LRTuner: A Learning Rate Tuner for Deep Neural Networks

1 code implementation ICML Workshop AutoML 2021 Nikhil Iyer, V Thejas, Nipun Kwatra, Ramachandran Ramjee, Muthian Sivathanu

For example, on ImageNet with ResNet-50, LRTuner shows up to 0.2% absolute gains in test accuracy compared to the hand-tuned baseline schedule.
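
Since only results are quoted here, the following is a hedged sketch of where a learning-rate tuner sits in a standard PyTorch training loop; the schedule shown is a placeholder, not LRTuner's method (the paper's linked code implementation defines the actual algorithm).

```python
# Illustrative only: where a learning-rate tuner plugs into a PyTorch training loop.
# The placeholder schedule below is NOT LRTuner's algorithm.
import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def placeholder_schedule(step: int) -> float:
    # Linear warmup then linear decay; a tuner would adapt this multiplier
    # from observed training dynamics instead of fixing it up front.
    warmup, total = 500, 10_000
    if step < warmup:
        return step / warmup
    return max(0.0, (total - step) / (total - warmup))

scheduler = LambdaLR(optimizer, lr_lambda=placeholder_schedule)

for _ in range(3):                             # toy loop
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    optimizer.step()
    scheduler.step()                           # applies the schedule each step
```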

Unsupervised Clustering using Pseudo-semi-supervised Learning

no code implementations ICLR 2020 Divam Gupta, Ramachandran Ramjee, Nipun Kwatra, Muthian Sivathanu

In this paper, we propose a framework that leverages semi-supervised models to improve unsupervised clustering performance.

Clustering
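
As a rough, hedged illustration of the general pseudo-labeling idea this line of work builds on (not the paper's actual framework), the sketch below clusters unlabeled data, keeps only confident assignments as pseudo-labels, and trains a classifier on them; the confidence heuristic and the scikit-learn models are assumptions for illustration.

```python
# Rough illustration of the pseudo-label idea, not the paper's framework:
# cluster unlabeled data, keep only confident assignments as pseudo-labels,
# and train a classifier on them as a semi-supervised model would use labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.random.randn(1000, 16)                       # unlabeled data

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)
dist = kmeans.transform(X)                          # distance to every centroid
sorted_dist = np.sort(dist, axis=1)
margin = sorted_dist[:, 1] - sorted_dist[:, 0]      # larger margin = more confident
confident = margin > np.percentile(margin, 50)      # keep the most confident half

clf = LogisticRegression(max_iter=1000)
clf.fit(X[confident], kmeans.labels_[confident])    # pseudo-labels as supervision
refined_labels = clf.predict(X)                     # relabel all points
```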

AutoLR: A Method for Automatic Tuning of Learning Rate

no code implementations25 Sep 2019 Nipun Kwatra, V Thejas, Nikhil Iyer, Ramachandran Ramjee, Muthian Sivathanu

We compare favorably against state-of-the-art learning rate schedules for the given datasets and models, including ImageNet on ResNet-50, CIFAR-10 on ResNet-18, and SQuAD fine-tuning on BERT.
