Distributed Computing

69 papers with code • 0 benchmarks • 1 dataset



Most implemented papers

Distributed Deep Neural Networks over the Cloud, the Edge and End Devices

kunglab/ddnn 6 Sep 2017

In our experiment, compared with the traditional approach of offloading raw sensor data to be processed in the cloud, DDNN processes most sensor data locally on end devices while achieving high accuracy, and reduces the communication cost by a factor of more than 20x.
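
A minimal sketch of the local-exit idea (the model stand-ins, entropy measure, and threshold below are illustrative, not the kunglab/ddnn code): a small on-device model answers when it is confident and offloads the sample to a cloud model otherwise.

```python
import numpy as np

def normalized_entropy(probs):
    """Entropy of a probability vector, scaled to [0, 1]."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum() / np.log(len(probs)))

def ddnn_style_inference(x, local_model, cloud_model, threshold=0.3):
    """Early-exit inference: answer locally when the device model is
    confident; otherwise offload the sample to the cloud model."""
    probs = local_model(x)
    if normalized_entropy(probs) < threshold:
        return int(np.argmax(probs)), "local-exit"
    return int(np.argmax(cloud_model(x))), "cloud"

# Toy stand-ins for trained models (softmax outputs over 4 classes).
rng = np.random.default_rng(0)
local_model = lambda x: rng.dirichlet(np.ones(4) * 0.2)  # often confident
cloud_model = lambda x: rng.dirichlet(np.ones(4) * 5.0)

label, path = ddnn_style_inference(None, local_model, cloud_model)
print(label, path)
```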

Achieving the time of $1$-NN, but the accuracy of $k$-NN

lirongx/SubNN 6 Dec 2017

The approach consists of aggregating denoised $1$-NN predictors over a small number of distributed subsamples.
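
A hedged sketch of the aggregation scheme using scikit-learn (the paper's denoising step is simplified here to a plain majority vote over subsample-level 1-NN predictors, and integer class labels are assumed):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def subsampled_1nn_predict(X_train, y_train, X_test, n_subsamples=5, seed=0):
    """Fit a 1-NN predictor on each random subsample and aggregate
    their test predictions by majority vote."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_subsamples):
        idx = rng.choice(len(X_train), size=len(X_train) // n_subsamples,
                         replace=False)
        knn = KNeighborsClassifier(n_neighbors=1).fit(X_train[idx], y_train[idx])
        votes.append(knn.predict(X_test))
    votes = np.stack(votes).astype(int)  # (n_subsamples, n_test)
    # Majority vote across the subsample predictors, column by column.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```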

Flexible and Scalable Deep Learning with MMLSpark

Azure/mmlspark 11 Apr 2018

In this work we detail a novel open source library, called MMLSpark, that combines the flexible deep learning library Cognitive Toolkit with the distributed computing framework Apache Spark.
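
The core pattern MMLSpark builds on is running single-node model evaluation inside Spark. As a library-agnostic illustration of that pattern (plain PySpark only; this is not MMLSpark's API), a model's forward pass can be applied partition-wise via a pandas UDF:

```python
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf

spark = SparkSession.builder.appName("distributed-inference").getOrCreate()
df = spark.createDataFrame([(float(i),) for i in range(100)], ["x"])

@pandas_udf("double")
def score(x: pd.Series) -> pd.Series:
    # Stand-in for a deep model's forward pass on one partition's rows.
    return 2.0 * x + 1.0

df.withColumn("prediction", score(df["x"])).show(5)
```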

Generalized Robust Bayesian Committee Machine for Large-scale Gaussian Process Regression

LiuHaiTao01/GRBCM ICML 2018

In order to scale standard Gaussian process (GP) regression to large-scale datasets, aggregation models employ a factorized training process and then combine predictions from distributed experts.
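
The combination rule behind such aggregation can be sketched with a robust-BCM-style precision-weighted merge of GP experts, using differential-entropy weights beta_i (a simplified sketch with scikit-learn GPs, not the GRBCM code itself):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def rbcm_predict(X_shards, y_shards, X_test, prior_var=1.0):
    """Robust-BCM-style combination: precision-weighted aggregation of
    GP experts trained on separate data shards."""
    mus, variances = [], []
    for Xs, ys in zip(X_shards, y_shards):
        gp = GaussianProcessRegressor(kernel=RBF(1.0)).fit(Xs, ys)
        mu, std = gp.predict(X_test, return_std=True)
        mus.append(mu)
        variances.append(std ** 2)
    # Differential-entropy weights: how much each expert reduced the prior.
    betas = [0.5 * (np.log(prior_var) - np.log(v)) for v in variances]
    # Aggregated precision includes a prior-correction term.
    prec = sum(b / v for b, v in zip(betas, variances)) \
         + (1.0 - sum(betas)) / prior_var
    mean = (1.0 / prec) * sum(b * m / v for b, m, v in zip(betas, mus, variances))
    return mean, 1.0 / prec

# Toy usage: three experts on disjoint shards of a 1-D regression task.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(90, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(90)
shards = np.array_split(np.argsort(X[:, 0]), 3)
mean, var = rbcm_predict([X[i] for i in shards], [y[i] for i in shards],
                         np.linspace(-3, 3, 5).reshape(-1, 1))
```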

MMLSpark: Unifying Machine Learning Ecosystems at Massive Scales

Azure/mmlspark 20 Oct 2018

We introduce Microsoft Machine Learning for Apache Spark (MMLSpark), an ecosystem of enhancements that expand the Apache Spark distributed computing library to tackle problems in Deep Learning, Micro-Service Orchestration, Gradient Boosting, Model Interpretability, and other areas of modern computation.
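
A hedged usage sketch of the library's gradient-boosting integration (the import path and parameter names follow the library's docs from memory and may differ across versions; the project was later renamed SynapseML, and the dataset path is hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from mmlspark import LightGBMClassifier  # may be mmlspark.lightgbm in later releases

spark = SparkSession.builder.getOrCreate()
train = spark.read.parquet("train.parquet")  # hypothetical dataset path

# Assemble feature columns into the single vector column SparkML expects.
features = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
data = features.transform(train)

model = LightGBMClassifier(
    objective="binary",
    numIterations=100,
    labelCol="label",
    featuresCol="features",
).fit(data)
model.transform(data).select("label", "prediction").show(5)
```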

Real-time cortical simulations: energy and interconnect scaling on distributed systems

APE-group/201812RealTimeCortSim 12 Dec 2018

We demonstrate the importance of low-latency interconnect design for both speed and energy consumption.

DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization

RixonC/DINGO NeurIPS 2019

For optimization of a sum of functions in a distributed computing environment, we present a novel communication-efficient Newton-type algorithm that enjoys a variety of advantages over similar existing methods.
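
As a rough illustration of the setting (a plain preconditioned-Newton scheme with a gradient-norm line search, deliberately simpler than DINGO's actual update, which uses Hessian pseudo-inverses and case-wise directions):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_workers = 5, 4
# Each worker i holds f_i(w) = 0.5 w^T A_i w - b_i^T w (A_i positive definite).
workers = []
for _ in range(n_workers):
    M = rng.standard_normal((dim, dim))
    workers.append((M @ M.T + np.eye(dim), rng.standard_normal(dim)))

def global_grad(w):
    """Driver aggregates one gradient vector per worker."""
    return np.mean([A @ w - b for A, b in workers], axis=0)

def distributed_newton_direction(w, damping=1e-3):
    """Each worker solves its local Hessian system against the global
    gradient; the driver averages the resulting directions."""
    g = global_grad(w)
    dirs = [np.linalg.solve(A + damping * np.eye(dim), g) for A, _ in workers]
    return np.mean(dirs, axis=0)

w = np.zeros(dim)
for _ in range(50):
    d = distributed_newton_direction(w)
    # Backtracking line search on ||grad||^2 (DINGO also line-searches the
    # gradient norm, though its update direction is more sophisticated).
    t = 1.0
    while (np.sum(global_grad(w - t * d) ** 2) > np.sum(global_grad(w) ** 2)
           and t > 1e-8):
        t *= 0.5
    w -= t * d
```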

Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample

OptMLGroup/SQN 28 Jan 2019

We present two sampled quasi-Newton methods (sampled LBFGS and sampled LSR1) for solving empirical risk minimization problems that arise in machine learning.
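
A compact sketch of the sampled-curvature idea behind the L-BFGS variant (curvature pairs are built by sampling around the current iterate instead of from the iterate history; the sampling radius and pair count are illustrative):

```python
import numpy as np

def sampled_lbfgs_direction(grad_fn, w, grad, n_pairs=5, radius=1e-2, seed=0):
    """Build (s, y) curvature pairs by sampling around w, then apply the
    standard L-BFGS two-loop recursion to the current gradient."""
    rng = np.random.default_rng(seed)
    S, Y = [], []
    for _ in range(n_pairs):
        s = radius * rng.standard_normal(len(w))
        y = grad_fn(w + s) - grad       # curvature info at the current point
        if s @ y > 1e-10:               # keep only positive-curvature pairs
            S.append(s); Y.append(y)
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(S), reversed(Y)):   # newest pair first
        a = (s @ q) / (s @ y)
        alphas.append(a)
        q -= a * y
    if S:
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])   # initial Hessian scaling
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):  # oldest pair first
        b = (y @ q) / (s @ y)
        q += (a - b) * s
    return q  # approximate H^{-1} grad, usable as a descent direction
```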

Evolutionary Neural AutoML for Deep Learning

lucylow/Covid_Control 18 Feb 2019

However, the success of DNNs depends on the proper configuration of their architecture and hyperparameters.
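
The evolutionary search loop itself is simple to sketch (the fitness function, mutation operators, and hyperparameter ranges below are toy stand-ins, not the paper's method):

```python
import random

def mutate(cfg):
    """Randomly perturb one hyperparameter of a configuration."""
    cfg = dict(cfg)
    key = random.choice(list(cfg))
    if key == "lr":
        cfg["lr"] *= random.choice([0.5, 2.0])
    else:
        cfg[key] = max(1, cfg[key] + random.choice([-1, 1]))
    return cfg

def evolve(fitness, pop_size=8, generations=10):
    """(mu + lambda)-style loop: evaluate, keep the best half, refill by mutation."""
    pop = [mutate({"lr": 0.01, "layers": 2, "units": 32}) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

# Toy fitness: pretend validation accuracy peaks at lr=0.01, layers=4, units=64.
toy_fitness = lambda c: (-abs(c["layers"] - 4) - abs(c["units"] - 64) / 32
                         - abs(c["lr"] - 0.01) * 100)
best = evolve(toy_fitness)
```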

Vector operations for accelerating expensive Bayesian computations -- a tutorial guide

davidwarne/Bayesian_SIMD_examples 25 Feb 2019

We illustrate the potential of SIMD for accelerating Bayesian computations and provide the reader with techniques for exploiting modern massively parallel processing environments using standard tools.
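
The same principle shows up in Python as NumPy vectorization, where array-at-a-time operations replace interpreter loops and map onto SIMD hardware (a small importance-sampling toy; the paper itself targets C/OpenMP-style tooling):

```python
import numpy as np

def log_likelihood(theta, data):
    """Vectorized Gaussian log-likelihood for many parameter draws at once:
    theta has shape (n_samples,), data has shape (n_obs,)."""
    resid = data[None, :] - theta[:, None]   # (n_samples, n_obs) residuals
    return -0.5 * (resid ** 2).sum(axis=1)

rng = np.random.default_rng(0)
data = rng.normal(1.5, 1.0, size=100)
draws = rng.normal(0.0, 3.0, size=50_000)   # prior draws

# One vectorized pass evaluates all 50k likelihoods; no Python-level loop.
logw = log_likelihood(draws, data)
w = np.exp(logw - logw.max())               # stabilized importance weights
posterior_mean = (w * draws).sum() / w.sum()
```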