Distributed Computing

70 papers with code • 0 benchmarks • 1 dataset


Papers

Cooperative Coevolution for Non-Separable Large-Scale Black-Box Optimization: Convergence Analyses and Distributed Accelerations

evolutionary-intelligence/dcc 11 Apr 2023

Given the ubiquity of non-separable optimization problems in the real world, in this paper we analyze and extend the large-scale version of the well-known cooperative coevolution (CC), a divide-and-conquer black-box optimization framework, on non-separable functions.

4 GitHub stars
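
For readers new to CC, the sketch below shows the basic divide-and-conquer loop in Python: variables are randomly partitioned into subcomponents, and each subcomponent is optimized while the rest of the solution (the context vector) is held fixed. Everything here, including the function names, the random grouping, and the (1+1)-style mutation, is an illustrative baseline and not the dcc library's API.

    import numpy as np

    def cc_minimize(f, dim, n_groups=4, iters=200, sigma=0.3, seed=0):
        """Minimal cooperative-coevolution loop with random grouping."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(dim)              # full candidate solution
        fx = f(x)
        for _ in range(iters):
            # Divide: randomly partition the variables into subcomponents.
            groups = np.array_split(rng.permutation(dim), n_groups)
            for idx in groups:
                # Conquer: mutate only this subcomponent; the rest of x
                # acts as the fixed "context vector".
                trial = x.copy()
                trial[idx] += sigma * rng.standard_normal(idx.size)
                ft = f(trial)
                if ft < fx:                       # greedy acceptance
                    x, fx = trial, ft
        return x, fx

    # Example on a non-separable function (neighboring variables interact).
    rosenbrock = lambda z: np.sum(100*(z[1:]-z[:-1]**2)**2 + (1-z[:-1])**2)
    best_x, best_f = cc_minimize(rosenbrock, dim=20)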

Scalable Differentially Private Clustering via Hierarchically Separated Trees

google-research/google-research 17 Jun 2022

We study the private $k$-median and $k$-means clustering problem in $d$ dimensional Euclidean space.

32,880 GitHub stars
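
As context for the private clustering problem, here is a minimal differentially private Lloyd update in Python: Laplace noise is added to each cluster's count and coordinate sums before recomputing the centers. This is a generic DP baseline, not the paper's hierarchically-separated-tree construction, and the names and the naive privacy-budget split are illustrative assumptions.

    import numpy as np

    def dp_kmeans_step(X, centers, eps, radius=1.0, rng=None):
        """One Lloyd update with Laplace noise on counts and sums."""
        rng = rng or np.random.default_rng(0)
        k, d = centers.shape
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None])**2).sum(-1), axis=1)
        new_centers = np.empty_like(centers)
        for j in range(k):
            pts = X[labels == j]
            # Assumes data clipped to [-radius, radius]^d; eps is split
            # naively between the count query and the sum query. A rigorous
            # accounting would scale the sum noise with the L1 sensitivity.
            noisy_count = len(pts) + rng.laplace(scale=2.0 / eps)
            sums = pts.sum(axis=0) if len(pts) else np.zeros(d)
            noisy_sum = sums + rng.laplace(scale=2.0 * radius / eps, size=d)
            new_centers[j] = noisy_sum / max(noisy_count, 1.0)
        return new_centers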

SE-MoE: A Scalable and Efficient Mixture-of-Experts Distributed Training and Inference System

PaddlePaddle/FleetX 20 May 2022

With the increasing diversity of ML infrastructure, distributed training over heterogeneous computing systems is desirable to facilitate the production of big models.

422 GitHub stars
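
To make the setting concrete, here is a minimal top-1 mixture-of-experts dispatch in plain NumPy. In a distributed system like SE-MoE, the routed tokens are exchanged between devices via an all-to-all, which this single-process sketch elides; the shapes and names are mine, not SE-MoE's API.

    import numpy as np

    def moe_forward(x, gate_w, experts):
        # x: (tokens, d_model); gate_w: (d_model, n_experts)
        logits = x @ gate_w
        choice = logits.argmax(axis=1)               # top-1 expert per token
        gates = np.exp(logits - logits.max(1, keepdims=True))
        gates = gates / gates.sum(1, keepdims=True)  # softmax gate values
        y = np.zeros_like(x)
        for e, expert in enumerate(experts):
            sel = choice == e                        # tokens routed to expert e
            if sel.any():
                # In a distributed system this step is an all-to-all exchange.
                y[sel] = gates[sel, e:e+1] * expert(x[sel])
        return y

    rng = np.random.default_rng(0)
    # Four toy linear experts; each lambda captures its own weight matrix.
    experts = [lambda h, W=rng.standard_normal((16, 16)): h @ W for _ in range(4)]
    out = moe_forward(rng.standard_normal((8, 16)), rng.standard_normal((16, 4)), experts)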

Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters

PaddlePaddle/Paddle 19 May 2022

We take natural language processing (NLP) as an example to show how Nebula-I works across different training phases, including: a) pre-training a multilingual language model using two remote clusters; and b) fine-tuning a machine translation model using knowledge distilled from pre-trained models, which together cover the most popular paradigms of recent deep learning.

21,627 GitHub stars
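
The knowledge-distillation objective mentioned in phase (b) is standard and compact enough to write out. The sketch below (plain NumPy, my own naming, not Nebula-I's code) computes the temperature-softened KL divergence between teacher and student logits, following Hinton et al.'s formulation.

    import numpy as np

    def softmax(z, T=1.0):
        z = z / T
        z = z - z.max(axis=-1, keepdims=True)        # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def distill_loss(student_logits, teacher_logits, T=2.0):
        p = softmax(teacher_logits, T)               # soft targets from the teacher
        log_q = np.log(softmax(student_logits, T) + 1e-12)
        # KL(p || q), scaled by T^2 so gradient magnitudes stay comparable
        # across temperatures.
        return (T * T) * np.mean(np.sum(p * (np.log(p + 1e-12) - log_q), axis=-1))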

BigDL 2.0: Seamless Scaling of AI Pipelines from Laptops to Distributed Cluster

intel-analytics/BigDL CVPR 2022

To address this challenge, we have open-sourced BigDL 2.0 at https://github.com/intel-analytics/BigDL/ under the Apache 2.0 license (combining the original BigDL and Analytics Zoo projects); using BigDL 2.0, users can simply build conventional Python notebooks on their laptops (with possible AutoML support), which can then be transparently accelerated on a single node (with up to a 9.6x speedup in our experiments) and seamlessly scaled out to a large cluster (across several hundred servers in real-world use cases).

5,988 GitHub stars · 03 Apr 2022

Improving Response Time of Home IoT Services in Federated Learning

hwangdongjun/federated_learning_using_websockets 28 Feb 2022

Though federated learning is useful for protecting privacy, it suffers from poor end-to-end response times in home IoT services, because IoT devices are usually controlled by remote servers in the cloud.

1 GitHub star
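
For context, the sketch below implements one FedAvg round in NumPy with linear-regression clients: each device runs a few local SGD steps and the server averages the resulting weights by local data size. It illustrates the federated setting only; the paper's response-time optimization is not shown, and all names are illustrative.

    import numpy as np

    def local_sgd(w, X, y, lr=0.1, steps=5):
        # Linear-regression client with mean-squared-error loss.
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    def fedavg_round(w, clients):
        # Server step: average client weights, weighted by local data size.
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        updates = np.stack([local_sgd(w.copy(), X, y) for X, y in clients])
        return (updates * (sizes / sizes.sum())[:, None]).sum(axis=0)

    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0])
    clients = []
    for n in (30, 50, 20):                        # three devices, unequal data
        X = rng.standard_normal((n, 2))
        clients.append((X, X @ true_w + 0.01 * rng.standard_normal(n)))
    w = np.zeros(2)
    for _ in range(20):
        w = fedavg_round(w, clients)              # w approaches [1, -2]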

Sky Computing: Accelerating Geo-distributed Computing in Federated Learning

hpcaitech/skycomputing 24 Feb 2022

In this paper, we propose Sky Computing, a load-balanced model-parallelism framework that adaptively allocates model weights to devices.

89 GitHub stars
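
A minimal version of load-balanced weight allocation: give each device a contiguous slice of layers whose total cost is roughly proportional to the device's measured speed. This greedy heuristic is an illustrative assumption of mine, far simpler than Sky Computing's actual allocator.

    def allocate_layers(layer_costs, device_speeds):
        """Split layers into one contiguous [start, end) slice per device."""
        n, m = len(layer_costs), len(device_speeds)
        total_cost, total_speed = sum(layer_costs), sum(device_speeds)
        partition, start, acc, dev = [], 0, 0.0, 0
        for i, c in enumerate(layer_costs):
            acc += c
            target = total_cost * device_speeds[dev] / total_speed
            layers_left, devices_left = n - (i + 1), m - (dev + 1)
            # Close the slice when the device met its cost target, or when we
            # must reserve the remaining layers for the remaining devices.
            if dev < m - 1 and (acc >= target or layers_left == devices_left):
                partition.append((start, i + 1))
                start, acc, dev = i + 1, 0.0, dev + 1
        partition.append((start, n))
        return partition

    # A fast middle device gets the larger share of layer cost.
    print(allocate_layers([1, 1, 4, 2, 2, 1, 1], [1, 2, 1]))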

LBCF: A Large-Scale Budget-Constrained Causal Forest Algorithm

www2022paper/www-2022-paper-supplementary-materials 29 Jan 2022

The proposed approach currently serves hundreds of millions of users on the platform and has achieved substantial improvements over recent months.

29 GitHub stars
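
The budget-constrained allocation step can be sketched on top of per-user uplift estimates such as those a causal forest produces: greedily treat users in order of incremental value per unit cost until the budget runs out. This greedy knapsack heuristic is illustrative only; LBCF's actual large-scale solver is not shown.

    import numpy as np

    def allocate_budget(uplift, cost, budget):
        # uplift[i]: estimated incremental value of treating user i
        # cost[i]:   cost of treating user i
        order = np.argsort(-uplift / cost)        # best value-per-cost first
        treated, spent = [], 0.0
        for i in order:
            if uplift[i] <= 0:
                break                             # no value in treating further users
            if spent + cost[i] <= budget:
                treated.append(i)
                spent += cost[i]
        return np.array(treated), spent

    rng = np.random.default_rng(0)
    uplift = rng.normal(1.0, 1.0, size=1000)      # e.g. causal-forest CATE estimates
    cost = rng.uniform(0.5, 2.0, size=1000)
    users, spent = allocate_budget(uplift, cost, budget=200.0)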

MCDS: AI Augmented Workflow Scheduling in Mobile Edge Cloud Computing Systems

imperial-qore/COSCO 14 Dec 2021

Workflow scheduling is a long-studied problem in parallel and distributed computing (PDC), aiming to utilize compute resources efficiently so as to meet users' service requirements.

72 GitHub stars
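
As a baseline for the scheduling problem, the sketch below is a greedy list scheduler: process tasks in topological order and place each on the machine that finishes it earliest (data-transfer costs omitted). MCDS replaces exactly this kind of hand-written heuristic with a learned surrogate plus tree search; all names here are illustrative.

    def schedule(tasks, deps, runtime, n_machines):
        # tasks: task ids in topological order
        # deps[t]: set of predecessor tasks of t
        # runtime[t][m]: execution time of task t on machine m
        machine_free = [0.0] * n_machines
        finish, placement = {}, {}
        for t in tasks:
            # A task is ready once all its predecessors have finished.
            ready = max((finish[p] for p in deps.get(t, ())), default=0.0)
            # Pick the machine with the earliest finish time for this task.
            best_m = min(range(n_machines),
                         key=lambda m: max(machine_free[m], ready) + runtime[t][m])
            start = max(machine_free[best_m], ready)
            finish[t] = start + runtime[t][best_m]
            machine_free[best_m] = finish[t]
            placement[t] = best_m
        return placement, max(finish.values())    # task-to-machine map, makespan

    tasks = ["a", "b", "c", "d"]
    deps = {"c": {"a", "b"}, "d": {"c"}}
    runtime = {"a": [2, 3], "b": [3, 2], "c": [2, 2], "d": [1, 4]}
    print(schedule(tasks, deps, runtime, n_machines=2))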