no code implementations • 25 Feb 2024 • Dan Zhao, Siddharth Samsi, Joseph McDonald, Baolin Li, David Bestor, Michael Jones, Devesh Tiwari, Vijay Gadepally
In this paper, we study the aggregate effect of power-capping GPUs on GPU temperature and power draw at a research supercomputing center.
1 code implementation • 13 Oct 2023 • Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner
Finally, a brief description of each of the new accelerators that have been added in the survey this year is included.
no code implementations • 4 Oct 2023 • Siddharth Samsi, Dan Zhao, Joseph McDonald, Baolin Li, Adam Michaleas, Michael Jones, William Bergeron, Jeremy Kepner, Devesh Tiwari, Vijay Gadepally
Large language models (LLMs) have exploded in popularity due to their new generative capabilities that go far beyond prior state-of-the-art.
no code implementations • 28 Sep 2023 • Manish Sharma, Moitreya Chatterjee, Kuan-Chuan Peng, Suhas Lohit, Michael Jones
We first pretrain these factor matrices on the RGB modality, for which plenty of training data are assumed to exist, and then augment only a few trainable parameters for training on the IR modality to avoid over-fitting, while encouraging them to capture complementary cues from those trained only on the RGB modality.
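A minimal numpy sketch of the transfer idea described in this abstract — frozen factor matrices pretrained on the data-rich modality, plus a small trainable correction for the low-data modality. All shapes and names here are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretrained factorized layer W = U @ V, learned on the data-rich
# RGB modality (shapes are illustrative).
U = rng.standard_normal((64, 8))
V = rng.standard_normal((8, 64))

# For the low-data IR modality, freeze U and V and train only a small
# additive low-rank factor, keeping the trainable parameter count low
# to avoid over-fitting.
A = np.zeros((64, 2))
B = np.zeros((2, 64))

def ir_layer(x):
    """Frozen RGB factors plus a small trainable IR-specific correction."""
    return x @ (U @ V + A @ B)
```

With `A` and `B` initialized to zero, the IR layer starts out identical to the pretrained RGB layer, and only 256 parameters (versus 1024 frozen ones) are updated during IR training.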
no code implementations • 25 Sep 2023 • Zachariah Carmichael, Suhas Lohit, Anoop Cherian, Michael Jones, Walter Scheirer
Prototypical part neural networks (ProtoPartNNs), namely PROTOPNET and its derivatives, are an intrinsically interpretable approach to machine learning.
no code implementations • 27 Jan 2023 • Dan Zhao, Nathan C. Frey, Joseph McDonald, Matthew Hubbell, David Bestor, Michael Jones, Andrew Prout, Vijay Gadepally, Siddharth Samsi
applications, we are sure to face an ever-mounting energy footprint to sustain these computational budgets, data storage needs, and more.
no code implementations • 12 Sep 2022 • Matthew L. Weiss, Joseph McDonald, David Bestor, Charles Yee, Daniel Edelman, Michael Jones, Andrew Prout, Andrew Bowne, Lindsey McEvoy, Vijay Gadepally, Siddharth Samsi
Our best performing models achieve a classification accuracy greater than 95%, outperforming previous approaches to multi-channel time series classification with the MIT SuperCloud Dataset by 5%.
no code implementations • 12 Apr 2022 • Benny J. Tang, Qiqi Chen, Matthew L. Weiss, Nathan Frey, Joseph McDonald, David Bestor, Charles Yee, William Arcand, Chansup Byun, Daniel Edelman, Matthew Hubbell, Michael Jones, Jeremy Kepner, Anna Klein, Adam Michaleas, Peter Michaleas, Lauren Milechin, Julia Mullen, Andrew Prout, Albert Reuther, Antonio Rosa, Andrew Bowne, Lindsey McEvoy, Baolin Li, Devesh Tiwari, Vijay Gadepally, Siddharth Samsi
We introduce a labelled dataset that can be used to develop new approaches to workload classification and present initial results based on existing approaches.
1 code implementation • 23 Mar 2022 • Tian Xie, Xinyi Yang, Angela S. Lin, Feihong Wu, Kazuma Hashimoto, Jin Qu, Young Mo Kang, Wenpeng Yin, Huan Wang, Semih Yavuz, Gang Wu, Michael Jones, Richard Socher, Yingbo Zhou, Wenhao Liu, Caiming Xiong
At the core of the struggle is the need to script every single turn of interactions between the bot and the human user.
no code implementations • 28 Jan 2022 • Nathan C. Frey, Baolin Li, Joseph McDonald, Dan Zhao, Michael Jones, David Bestor, Devesh Tiwari, Vijay Gadepally, Siddharth Samsi
Deep learning (DL) workflows demand an ever-increasing budget of compute and energy in order to achieve outsized gains.
no code implementations • 20 Nov 2021 • Wenpeng Yin, Shelby Heinecke, Jia Li, Nitish Shirish Keskar, Michael Jones, Shouzhong Shi, Stanislav Georgiev, Kurt Milich, Joseph Esposito, Caiming Xiong
The distribution gap between training datasets and data encountered in production is well acknowledged.
1 code implementation • 18 Sep 2021 • Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner
Over the past several years, new machine learning accelerators have been announced and released every month for applications ranging from speech recognition and video object detection to assisted driving and data center workloads.
no code implementations • 25 Aug 2021 • Kaira Samuel, Vijay Gadepally, David Jacobs, Michael Jones, Kyle McAlpin, Kyle Palko, Ben Paulk, Sid Samsi, Ho Chit Siu, Charles Yee, Jeremy Kepner
The Maneuver Identification Challenge hosted at maneuver-id.mit.edu provides thousands of trajectories collected from pilots practicing in flight simulators, descriptions of maneuvers, and examples of these maneuvers performed by experienced pilots.
no code implementations • 4 Aug 2021 • Siddharth Samsi, Matthew L Weiss, David Bestor, Baolin Li, Michael Jones, Albert Reuther, Daniel Edelman, William Arcand, Chansup Byun, John Holodnack, Matthew Hubbell, Jeremy Kepner, Anna Klein, Joseph McDonald, Adam Michaleas, Peter Michaleas, Lauren Milechin, Julia Mullen, Charles Yee, Benjamin Price, Andrew Prout, Antonio Rosa, Allan Vanterpool, Lindsey McEvoy, Anson Cheng, Devesh Tiwari, Vijay Gadepally
In this paper we introduce the MIT Supercloud Dataset which aims to foster innovative AI/ML approaches to the analysis of large scale HPC and datacenter/cloud operations.
no code implementations • 28 Jul 2021 • Sai Saketh Rambhatla, Michael Jones, Rama Chellappa
Boosting is a method for finding a highly accurate hypothesis by linearly combining many "weak" hypotheses, each of which may be only moderately accurate.
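The definition above can be made concrete with a toy AdaBoost-style sketch: weak threshold classifiers are combined linearly, with each round upweighting the examples the chosen weak hypothesis got wrong. This is a generic illustration of boosting, not the specific method of the paper:

```python
import numpy as np

def adaboost(X, y, stumps, rounds=10):
    """Toy AdaBoost: linearly combine weak classifiers.
    X: (n, d) features; y: labels in {-1, +1};
    stumps: callables mapping X -> predictions in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)               # example weights
    ensemble = []                          # (alpha, stump) pairs
    for _ in range(rounds):
        # pick the weak hypothesis with the lowest weighted error
        errs = [np.sum(w * (h(X) != y)) for h in stumps]
        h = stumps[int(np.argmin(errs))]
        err = max(min(errs), 1e-10)
        if err >= 0.5:
            break                          # nothing better than chance
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * h(X))     # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, h))
    def predict(X):
        return np.sign(sum(a * h(X) for a, h in ensemble))
    return predict

# toy usage: 1-D threshold stumps (illustrative)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
stumps = [lambda X, t=t: np.where(X[:, 0] > t, 1, -1) for t in (0.5, 1.5, 2.5)]
predict = adaboost(X, y, stumps)
```

The final classifier is the sign of a weighted sum of weak hypotheses, which is exactly the linear combination the abstract refers to.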
no code implementations • 7 Dec 2020 • Suhas Lohit, Michael Jones
Model compression methods are important to allow for easier deployment of deep learning models in compute, memory and energy-constrained environments such as mobile phones.
no code implementations • 5 Dec 2020 • Huan Wang, Suhas Lohit, Michael Jones, Yun Fu
We add loss terms for training the student that measure the dissimilarity between student and teacher outputs of the auxiliary classifiers.
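One common choice for such a dissimilarity term is the temperature-softened KL divergence between teacher and student outputs, summed over the auxiliary classifiers; the paper's exact loss may differ, so treat this numpy sketch as an assumption-laden illustration:

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def aux_distill_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions, one term per auxiliary classifier, summed."""
    loss = 0.0
    for s, t in zip(student_logits, teacher_logits):
        p, q = softmax(t, T), softmax(s, T)
        loss += np.sum(p * (np.log(p) - np.log(q))) * T * T
    return loss
```

The loss is zero when the student's auxiliary outputs match the teacher's, and grows as they diverge, which is the behavior a distillation-style dissimilarity term needs.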
no code implementations • 1 Sep 2020 • Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner
New machine learning accelerators are being announced and released each month for applications ranging from speech recognition and video object detection to assisted driving and data center workloads.
no code implementations • 18 Aug 2020 • Siddharth Samsi, Michael Jones, Mark M. Veillette
In this paper we examine the compute, energy, and time costs of training a UNet-based deep neural network for predicting short-term weather (precipitation nowcasting).
no code implementations • 18 Aug 2020 • Siddharth Samsi, Andrew Prout, Michael Jones, Andrew Kirby, Bill Arcand, Bill Bergeron, David Bestor, Chansup Byun, Vijay Gadepally, Michael Houle, Matthew Hubbell, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Antonio Rosa, Charles Yee, Albert Reuther, Jeremy Kepner
The large computational requirements for training deep models have necessitated the development of new methods for faster training.
no code implementations • 14 Jul 2020 • Andrew C. Kirby, Siddharth Samsi, Michael Jones, Albert Reuther, Jeremy Kepner, Vijay Gadepally
A Multigrid Full Approximation Storage algorithm for solving deep residual networks is developed to enable parallelized layer-wise training of neural networks and concurrent execution of computational kernels on GPUs.
1 code implementation • CVPR 2020 • Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Ye Wang, Michael Jones, Anoop Cherian, Toshiaki Koike-Akino, Xiaoming Liu, Chen Feng
In this paper, we present a novel framework for jointly predicting landmark locations, associated uncertainties of these predicted locations, and landmark visibilities.
Ranked #1 on Face Alignment on Menpo
no code implementations • 25 Mar 2020 • Jeremy Kepner, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Albert Reuther, Ryan Robinett, Sid Samsi
The Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems.
no code implementations • 18 Mar 2020 • Siddharth Samsi, Jeremy Kepner, Vijay Gadepally, Michael Hurley, Michael Jones, Edward Kao, Sanjeev Mohindra, Albert Reuther, Steven Smith, William Song, Diane Staheli, Paul Monticciolo
In 2017, 2018, and 2019 many triangle counting submissions were received from a wide range of authors and organizations.
Distributed, Parallel, and Cluster Computing • Performance
no code implementations • 2 Sep 2019 • Jeremy Kepner, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Ryan Robinett, Sid Samsi
The Sparse DNN Challenge is based on a mathematically well-defined DNN inference computation and can be implemented in any programming environment.
no code implementations • 29 Aug 2019 • Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner
Advances in multicore processors and accelerators have opened the floodgates to greater exploration and application of machine learning techniques to a variety of applications.
Performance • B.8; C.4
no code implementations • 20 Aug 2019 • Andrew Prout, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Vijay Gadepally, Michael Houle, Matthew Hubbell, Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Antonio Rosa, Siddharth Samsi, Charles Yee, Albert Reuther, Jeremy Kepner
Federated authentication can drastically reduce the overhead of basic account maintenance while simultaneously improving overall system security.
Distributed, Parallel, and Cluster Computing • Cryptography and Security
no code implementations • 6 Jul 2019 • Jeremy Kepner, Vijay Gadepally, Lauren Milechin, Siddharth Samsi, William Arcand, David Bestor, William Bergeron, Chansup Byun, Matthew Hubbell, Michael Houle, Michael Jones, Anne Klein, Peter Michaleas, Julie Mullen, Andrew Prout, Antonio Rosa, Charles Yee, Albert Reuther
This work describes the design and performance optimization of an implementation of hierarchical associative arrays that reduces memory pressure and dramatically increases the update rate into an associative array.
no code implementations • 26 Mar 2019 • Esra Ataer-Cansizoglu, Michael Jones, Ziming Zhang, Alan Sullivan
Face super-resolution methods usually aim at producing visually appealing results rather than preserving distinctive features for further face identification.
no code implementations • 15 Feb 2019 • Bharathkumar Ramachandra, Michael Jones
Progress in video anomaly detection research is currently slowed by small datasets that lack a wide variety of activities as well as flawed evaluation criteria.
no code implementations • 14 Jul 2018 • Jeremy Kepner, Ron Brightwell, Alan Edelman, Vijay Gadepally, Hayden Jananthan, Michael Jones, Sam Madden, Peter Michaleas, Hamed Okhravi, Kevin Pedretti, Albert Reuther, Thomas Sterling, Mike Stonebraker
In this context, an operating system can be viewed as software that brokers and tracks the resources of the compute engines and is akin to a database management system.
Distributed, Parallel, and Cluster Computing • Databases • Operating Systems • Performance
no code implementations • NAACL 2018 • Fatemeh Torabi Asr, Robert Zinkov, Michael Jones
Word embeddings obtained from neural network models such as Word2Vec Skipgram have become popular representations of word meaning and have been evaluated on a variety of word similarity and relatedness norming data.
no code implementations • 23 Aug 2017 • Siddharth Samsi, Vijay Gadepally, Michael Hurley, Michael Jones, Edward Kao, Sanjeev Mohindra, Paul Monticciolo, Albert Reuther, Steven Smith, William Song, Diane Staheli, Jeremy Kepner
The proposed Subgraph Isomorphism Graph Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a graph challenge that is reflective of many real-world graph analytics processing systems.
Distributed, Parallel, and Cluster Computing • Data Structures and Algorithms
no code implementations • CONLL 2017 • Fatemeh Torabi Asr, Michael Jones
Recent studies of distributional semantic models have set up a competition between word embeddings obtained from predictive neural networks and word vectors obtained from abstractive count-based models.
no code implementations • 12 Jul 2017 • Chansup Byun, Jeremy Kepner, William Arcand, David Bestor, Bill Bergeron, Vijay Gadepally, Michael Houle, Matthew Hubbell, Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Andrew Prout, Antonio Rosa, Siddharth Samsi, Charles Yee, Albert Reuther
Thus, the performance of these applications on KNL systems is of high interest to LLSC users and the broader data analysis and machine learning communities.
Performance • Instrumentation and Methods for Astrophysics • Distributed, Parallel, and Cluster Computing • Computational Physics
no code implementations • CVPR 2016 • Bharat Singh, Tim K. Marks, Michael Jones, Oncel Tuzel, Ming Shao
We present a multi-stream bi-directional recurrent neural network for fine-grained action detection.
Action Recognition In Videos Fine-Grained Action Detection +2
no code implementations • CVPR 2015 • Ejaz Ahmed, Michael Jones, Tim K. Marks
Novel elements of our architecture include a layer that computes cross-input neighborhood differences, which capture local relationships among mid-level features that were computed separately from the two input images.
no code implementations • CVPR 2015 • Chavdar Papazov, Tim K. Marks, Michael Jones
The matched triangular surface patches in the training set are used to compute estimates of the 3D head pose and facial landmark positions in the input depth map.
no code implementations • CVPR 2003 • Paul Viola, Michael Jones
The first is the introduction of a new image representation called the "Integral Image" which allows the features used by our detector to be computed very quickly.
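The integral image mentioned in this abstract is a cumulative-sum table that lets any rectangular pixel sum be evaluated with four lookups. A minimal numpy sketch (function names are my own, not from the paper):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of all pixels above and to the
    left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] via four table lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # → 30.0, the sum of img[1:3, 1:3]
```

Because each rectangle sum is O(1) after the one-time table construction, Haar-like features built from a few rectangles can be evaluated very quickly, which is the speed the abstract claims.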