Distributed Computing
69 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in Distributed Computing; no benchmarks are currently listed for this task.
Libraries
Use these libraries to find Distributed Computing models and implementations
Most implemented papers
Distributed Deep Neural Networks over the Cloud, the Edge and End Devices
In our experiments, compared with the traditional method of offloading raw sensor data for processing in the cloud, DDNN processes most sensor data locally on end devices while maintaining high accuracy, reducing the communication cost by more than 20x.
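A minimal sketch of the local-exit pattern this describes, assuming stand-in device and cloud models; the paper's exit criterion is entropy-based, while this sketch uses max-probability for brevity:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ddnn_infer(x, device_model, cloud_model, threshold=0.8):
    """Return (predicted_class, offloaded_flag) for a single sample."""
    probs = softmax(device_model(x))            # cheap local forward pass
    if probs.max() >= threshold:                # confident: exit on-device
        return int(probs.argmax()), False
    return int(softmax(cloud_model(x)).argmax()), True   # offload to cloud

# Toy stand-ins for the trained device/cloud networks.
rng = np.random.default_rng(0)
W_dev, W_cloud = rng.normal(size=(10, 64)), rng.normal(size=(10, 64))
x = rng.normal(size=64)
print(ddnn_infer(x, lambda v: W_dev @ v, lambda v: W_cloud @ v))

Samples that exit locally never leave the device, which is where the communication saving comes from.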
Achieving the time of $1$-NN, but the accuracy of $k$-NN
The approach consists of aggregating denoised $1$-NN predictors over a small number of distributed subsamples.
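A hedged numpy sketch of that aggregation for binary labels; the paper's label-denoising step is omitted here, and all names are illustrative:

import numpy as np

def one_nn_predict(X_train, y_train, X_query):
    # Brute-force nearest neighbour: fine for a sketch, not for large n.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d2.argmin(axis=1)]

def subsampled_one_nn(X, y, X_query, n_subsamples=8, seed=0):
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(X)), n_subsamples)
    # One cheap 1-NN predictor per subsample, aggregated by majority vote.
    votes = np.stack([one_nn_predict(X[p], y[p], X_query) for p in parts])
    return (votes.mean(axis=0) >= 0.5).astype(int)

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = (X.sum(axis=1) > 0).astype(int)
print(subsampled_one_nn(X, y, X[:5]))

Each 1-NN predictor runs on a fraction of the data, so the ensemble keeps roughly 1-NN cost while the vote recovers k-NN-like accuracy.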
Flexible and Scalable Deep Learning with MMLSpark
In this work we detail a novel open source library, called MMLSpark, that combines the flexible deep learning library Cognitive Toolkit with the distributed computing framework Apache Spark.
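For context, MMLSpark estimators plug into Spark's standard Pipeline/Estimator API. The sketch below uses only stock PySpark MLlib classes, not MMLSpark's own estimators, which this listing does not detail:

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("spark-ml-demo").getOrCreate()
df = spark.createDataFrame(
    [(0.0, 1.2, 0), (1.5, 0.3, 1), (0.2, 1.1, 0), (1.3, 0.4, 1)],
    ["f1", "f2", "label"],
)
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(df)     # training is distributed across the cluster
model.transform(df).select("label", "prediction").show()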
Generalized Robust Bayesian Committee Machine for Large-scale Gaussian Process Regression
In order to scale standard Gaussian process (GP) regression to large-scale datasets, aggregation models employ a factorized training process and then combine predictions from distributed experts.
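A numpy sketch of the standard robust-BCM aggregation rule that this builds on; the paper's generalized variant modifies the expert weights, which here are the usual differential-entropy weights. mu/var are each expert's predictive mean and variance at one test input, and prior_var is the GP prior variance there:

import numpy as np

def rbcm_aggregate(mu, var, prior_var):
    mu, var = np.asarray(mu), np.asarray(var)
    beta = 0.5 * (np.log(prior_var) - np.log(var))   # entropy-based weights
    prec = (beta / var).sum() + (1.0 - beta.sum()) / prior_var
    agg_var = 1.0 / prec
    agg_mu = agg_var * (beta / var * mu).sum()
    return agg_mu, agg_var

print(rbcm_aggregate(mu=[0.9, 1.1, 1.0], var=[0.2, 0.3, 0.25], prior_var=1.0))

Each expert trains on its own shard, so only the per-point means and variances need to be communicated at prediction time.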
MMLSpark: Unifying Machine Learning Ecosystems at Massive Scales
We introduce Microsoft Machine Learning for Apache Spark (MMLSpark), an ecosystem of enhancements that expand the Apache Spark distributed computing library to tackle problems in Deep Learning, Micro-Service Orchestration, Gradient Boosting, Model Interpretability, and other areas of modern computation.
Real-time cortical simulations: energy and interconnect scaling on distributed systems
We demonstrate the importance of the design of low-latency interconnect for speed and energy consumption.
DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization
For optimization of a sum of functions in a distributed computing environment, we present a novel communication-efficient Newton-type algorithm that enjoys a variety of advantages over similar existing methods.
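A toy numpy sketch of the communication pattern such methods operate in, using quadratic local losses so each worker's Hessian is explicit. This is not DINGO's actual update rule, which works with Hessian-gradient products rather than shipping full Hessians:

import numpy as np

rng = np.random.default_rng(1)
d, n_workers = 5, 4
# Worker k holds the local least-squares loss 0.5 * ||A_k w - b_k||^2.
A = [rng.normal(size=(20, d)) for _ in range(n_workers)]
b = [rng.normal(size=20) for _ in range(n_workers)]

w = np.zeros(d)
# Workers compute local gradients and Hessians in parallel; the driver
# sums them and takes a global Newton step (exact for this quadratic).
grads = [Ak.T @ (Ak @ w - bk) for Ak, bk in zip(A, b)]
hessians = [Ak.T @ Ak for Ak in A]
w -= np.linalg.solve(sum(hessians), sum(grads))

# The gradient of the summed objective is now ~0.
print(np.linalg.norm(sum(Ak.T @ (Ak @ w - bk) for Ak, bk in zip(A, b))))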
Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample
We present two sampled quasi-Newton methods (sampled LBFGS and sampled LSR1) for solving empirical risk minimization problems that arise in machine learning.
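A hedged numpy sketch of the sampling idea: rather than storing historical curvature pairs, draw fresh directions around the current iterate each iteration and feed the resulting (s, y) pairs to the standard L-BFGS two-loop recursion. The step-size rule, sampling radius, and quadratic test problem are illustrative assumptions, not the paper's tuned choices:

import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 8))
Q = G @ G.T / 8 + np.eye(8)          # PD Hessian of f(w) = 0.5 w^T Q w
f = lambda w: 0.5 * w @ Q @ w
grad = lambda w: Q @ w

def two_loop(g, S, Y):
    """Standard L-BFGS two-loop recursion over curvature pairs (S, Y)."""
    q, alphas = g.copy(), []
    rhos = [1.0 / (y @ s) for s, y in zip(S, Y)]
    for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
        a = rho * (s @ q); alphas.append(a); q = q - a * y
    q = q * (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])        # initial scaling
    for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):
        q = q + (a - rho * (y @ q)) * s
    return q

w = rng.normal(size=8)
for _ in range(20):
    g = grad(w)
    # Sample fresh curvature pairs around w: "forget the past, just sample".
    S = [0.1 * rng.normal(size=8) for _ in range(5)]
    Y = [grad(w + s) - g for s in S]
    p = two_loop(g, S, Y)
    t = 1.0                                   # Armijo backtracking search
    while f(w - t * p) > f(w) - 1e-4 * t * (g @ p):
        t *= 0.5
    w = w - t * p
print(np.linalg.norm(grad(w)))                # shrinks toward zero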
Evolutionary Neural AutoML for Deep Learning
However, the success of DNNs depends on the proper configuration of their architectures and hyperparameters.
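A toy sketch of evolutionary hyperparameter search in this spirit: keep a population of configurations, retain the fittest, and mutate them to refill the population. The fitness function here is a stand-in; a real AutoML system would train and validate a network per candidate:

import random

random.seed(0)

def fitness(cfg):
    # Stand-in for validation accuracy of a trained model.
    return -(cfg["lr"] - 0.01) ** 2 - 0.001 * abs(cfg["layers"] - 4)

def mutate(cfg):
    return {
        "lr": max(1e-5, cfg["lr"] * random.uniform(0.5, 2.0)),
        "layers": max(1, cfg["layers"] + random.choice([-1, 0, 1])),
    }

pop = [{"lr": 10 ** random.uniform(-4, -1), "layers": random.randint(1, 8)}
       for _ in range(10)]
for gen in range(15):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:3]                       # elitist selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(7)]
print(max(pop, key=fitness))

Because each candidate evaluation is independent, the inner loop parallelizes naturally across a distributed cluster.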
Vector operations for accelerating expensive Bayesian computations -- a tutorial guide
We illustrate the potential of SIMD for accelerating Bayesian computations and provide the reader with techniques for exploiting modern massively parallel processing environments using standard tools.
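The tutorial's point can be illustrated with numpy, whose array operations dispatch to SIMD kernels: a Gaussian log-likelihood over the whole dataset in one vectorized pass versus a Python-level loop (constant terms dropped in both):

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=1_000_000)

def loglik_scalar(mu, sigma):
    # Python-level loop: one element at a time, no SIMD benefit.
    total = 0.0
    for x in data:
        total += -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
    return total

def loglik_vectorized(mu, sigma):
    # One pass over a contiguous array: numpy uses SIMD-friendly kernels.
    z = (data - mu) / sigma
    return -0.5 * (z @ z) - data.size * np.log(sigma)

print(loglik_vectorized(2.0, 1.5))   # same value as the loop, much faster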