Distributed Computing
70 papers with code • 0 benchmarks • 1 dataset
Latest papers
Cooperative Coevolution for Non-Separable Large-Scale Black-Box Optimization: Convergence Analyses and Distributed Accelerations
Given the ubiquity of non-separable optimization problems in the real world, this paper analyzes and extends the large-scale version of the well-known cooperative coevolution (CC) framework, a divide-and-conquer approach to black-box optimization, on non-separable functions.
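For intuition, here is a minimal sketch of the CC loop the paper analyzes: partition the variables into groups (here randomly) and optimize each group in turn against a shared context vector. Everything below, including the (1+1)-style inner optimizer and all function names, is an illustrative assumption rather than the paper's actual algorithm.

```python
import numpy as np

def cooperative_coevolution(f, dim, n_groups=4, cycles=50, inner_iters=20, seed=0):
    """Divide-and-conquer black-box minimization: optimize each variable
    group in turn while holding the rest of the context vector fixed.
    Illustrative sketch only; the paper analyzes and extends this scheme."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)                              # shared context vector
    groups = np.array_split(rng.permutation(dim), n_groups)   # random grouping
    for _ in range(cycles):
        for g in groups:
            for _ in range(inner_iters):   # simple (1+1) mutation on this group
                trial = x.copy()
                trial[g] += 0.1 * rng.standard_normal(len(g))
                if f(trial) < f(x):
                    x = trial
    return x, f(x)

# Example on a non-separable quadratic (variables coupled by a triangular matrix).
A = np.triu(np.ones((8, 8)))
f = lambda x: float(np.sum((A @ x) ** 2))
x_best, f_best = cooperative_coevolution(f, dim=8)
print(f_best)
```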
fseval: A Benchmarking Framework for Feature Selection and Feature Ranking Algorithms
The package is open source and can be installed through PyPI.
Scalable Differentially Private Clustering via Hierarchically Separated Trees
We study the private $k$-median and $k$-means clustering problem in $d$ dimensional Euclidean space.
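For intuition, a toy one-dimensional sketch of the tree-based idea: shift the data by a random offset, bucket it into equal cells (a crude stand-in for one level of a hierarchically separated tree), add Laplace noise to the cell counts, and read centers off the heaviest cells. This is only a cartoon under those assumptions; the paper's algorithm and privacy accounting are substantially more refined.

```python
import numpy as np

def dp_centers_1d(points, k=2, levels=6, eps=1.0, seed=0):
    """Toy sketch of tree-based DP clustering on points in [0, 1):
    randomly shifted cells + Laplace-noised counts -> candidate centers."""
    rng = np.random.default_rng(seed)
    shift = rng.uniform(0, 1)
    cells = 2 ** levels
    idx = ((points + shift) % 1.0 * cells).astype(int)       # cell index per point
    counts = np.bincount(idx, minlength=cells).astype(float)
    counts += rng.laplace(scale=1.0 / eps, size=cells)        # eps-DP noisy counts
    top = np.argsort(counts)[-k:]                             # k heaviest cells
    return ((top + 0.5) / cells - shift) % 1.0                # unshifted midpoints

rng = np.random.default_rng(1)
pts = np.concatenate([rng.normal(0.25, 0.02, 500), rng.normal(0.75, 0.02, 500)]) % 1.0
print(dp_centers_1d(pts, k=2))
```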
SE-MoE: A Scalable and Efficient Mixture-of-Experts Distributed Training and Inference System
With the increasing diversity of ML infrastructure, distributed training over heterogeneous computing systems is needed to facilitate the production of big models.
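For context, the core computation that MoE systems distribute is gated expert routing. Below is a toy, single-process top-1 MoE forward pass in NumPy; in a system like SE-MoE the experts are sharded across devices and tokens move between them via all-to-all communication. All names here are illustrative assumptions, not SE-MoE's API.

```python
import numpy as np

def moe_forward(x, W_gate, experts):
    """Toy top-1 mixture-of-experts layer: a gating network scores every
    expert per token, and each token is processed only by its top expert,
    weighted by the gate probability."""
    logits = x @ W_gate                                        # (tokens, n_experts)
    choice = logits.argmax(axis=1)                             # top-1 routing decision
    gate = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    out = np.zeros_like(x)
    for e, W in enumerate(experts):                            # expert = linear map here
        mask = choice == e                                     # tokens sent to expert e
        out[mask] = (x[mask] @ W) * gate[mask, e:e + 1]
    return out

rng = np.random.default_rng(0)
d, n_exp = 16, 4
x = rng.standard_normal((32, d))
W_gate = rng.standard_normal((d, n_exp))
experts = [rng.standard_normal((d, d)) for _ in range(n_exp)]
print(moe_forward(x, W_gate, experts).shape)
```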
Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters
We take natural language processing (NLP) as an example to show how Nebula-I works across different training phases, including: a) pre-training a multilingual language model using two remote clusters; and b) fine-tuning a machine translation model using knowledge distilled from pre-trained models, following the most popular paradigm in recent deep learning.
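Phase (b) relies on standard knowledge distillation. A minimal NumPy sketch of the usual distillation objective follows; the temperature, weighting, and function names are illustrative assumptions, not Nebula-I's actual implementation.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard distillation recipe: blend hard-label cross-entropy with a
    soft cross-entropy against temperature-smoothed teacher probabilities."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = -(p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean() * T * T
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard

rng = np.random.default_rng(0)
s, t = rng.standard_normal((4, 5)), rng.standard_normal((4, 5))
print(distillation_loss(s, t, labels=np.array([0, 1, 2, 3])))
```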
BigDL 2.0: Seamless Scaling of AI Pipelines from Laptops to Distributed Cluster
To address this challenge, we have open-sourced BigDL 2.0 at https://github.com/intel-analytics/BigDL/ under the Apache 2.0 license (combining the original BigDL and Analytics Zoo projects); using BigDL 2.0, users can simply build conventional Python notebooks on their laptops (with possible AutoML support), which can then be transparently accelerated on a single node (with up to 9.6x speedup in our experiments) and seamlessly scaled out to a large cluster (across several hundred servers in real-world use cases).
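The "laptop to cluster" workflow centers on initializing an Orca context. The sketch below follows BigDL's documented Orca API as I understand it; parameter names should be verified against the current docs at the repository above, so treat it as an illustration rather than a verified script.

```python
# Sketch of the BigDL 2.0 laptop-to-cluster workflow via its Orca API.
from bigdl.orca import init_orca_context, stop_orca_context

# Develop locally in a notebook...
sc = init_orca_context(cluster_mode="local", cores=4)

# ...then scale the same code out by changing only the cluster mode,
# e.g. cluster_mode="yarn-client" with num_nodes and per-node cores set.
# (Training code using Orca's Estimator APIs would go here.)

stop_orca_context()
```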
Improving Response Time of Home IoT Services in Federated Learning
Though federated learning is useful for protecting privacy, it suffers from poor end-to-end response times in home IoT services, because IoT devices are usually controlled by remote servers in the cloud.
Sky Computing: Accelerating Geo-distributed Computing in Federated Learning
In this paper, we propose Sky Computing, a load-balanced model-parallelism framework that adaptively allocates model weights to devices.
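One way to picture load-balanced model parallelism is a greedy contiguous partition of layers weighted by device speed. The sketch below illustrates that general idea only; `allocate_layers`, its inputs, and the greedy rule are hypothetical stand-ins, not Sky Computing's actual allocation algorithm.

```python
def allocate_layers(layer_costs, device_speeds):
    """Greedy load-balanced model-parallel allocation: walk the layers in
    order (so partitions stay contiguous) and move to the next device once
    its proportional share of the total work is filled."""
    total = sum(layer_costs)
    speed_sum = sum(device_speeds)
    targets = [total * s / speed_sum for s in device_speeds]  # work share per device
    alloc, d, load = [[] for _ in device_speeds], 0, 0.0
    for i, c in enumerate(layer_costs):
        if load + c > targets[d] and d < len(device_speeds) - 1:
            d, load = d + 1, 0.0                               # advance to next device
        alloc[d].append(i)
        load += c
    return alloc

# Example: 8 layers of varying cost across 3 devices of unequal speed.
print(allocate_layers([4, 2, 2, 6, 3, 1, 5, 2], [1.0, 2.0, 1.0]))
# -> [[0, 1], [2, 3, 4, 5], [6, 7]]: the fastest device gets the most work.
```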
LBCF: A Large-Scale Budget-Constrained Causal Forest Algorithm
The proposed approach currently serves hundreds of millions of users on the platform and has delivered substantial improvements over recent months.
MCDS: AI Augmented Workflow Scheduling in Mobile Edge Cloud Computing Systems
Workflow scheduling is a long-studied problem in parallel and distributed computing (PDC), aiming to efficiently utilize compute resources to meet users' service requirements.
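As context for what learned schedulers improve upon, here is a minimal greedy list scheduler for a workflow DAG, the kind of heuristic baseline methods like MCDS are typically compared against; all names, the cost model, and the tie-breaking rule are illustrative assumptions.

```python
import heapq

def list_schedule(tasks, deps, n_machines):
    """Greedy list scheduling of a workflow DAG: repeatedly place the next
    ready task on the machine that becomes free earliest, respecting
    precedence edges. tasks maps task -> runtime estimate."""
    indeg = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for a, b in deps:                       # edge a -> b: b waits for a
        indeg[b] += 1
        children[a].append(b)
    ready = [t for t in tasks if indeg[t] == 0]
    machines = [(0.0, m) for m in range(n_machines)]   # (free time, machine id)
    heapq.heapify(machines)
    finish = {}
    while ready:
        t = ready.pop(0)
        free, m = heapq.heappop(machines)
        start = max(free, max((finish[p] for p, c in deps if c == t), default=0.0))
        finish[t] = start + tasks[t]
        heapq.heappush(machines, (finish[t], m))
        for c in children[t]:               # release tasks whose deps are done
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return finish

print(list_schedule({"a": 2, "b": 3, "c": 1, "d": 2},
                    [("a", "c"), ("b", "c"), ("c", "d")], n_machines=2))
```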