Search Results for author: Sixing Yu

Found 9 papers, 2 papers with code

Federated Foundation Models: Privacy-Preserving and Collaborative Learning for Large Models

no code implementations · 19 May 2023 · Sixing Yu, J. Pablo Muñoz, Ali Jannesari

Foundation Models (FMs), such as LLaMA, BERT, GPT, ViT, and CLIP, have demonstrated remarkable success in a wide range of applications, driven by their ability to leverage vast amounts of data for pre-training.

Federated Learning · Privacy Preserving · +1
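The entry above pairs federated learning with large pre-trained models. As a point of reference, the sketch below shows a single round of generic FedAvg-style federated fine-tuning in PyTorch: clients update a copy of the shared model on private data and the server averages the resulting weights. This is a minimal illustration of the federated mechanism, not the FedFM framework from the paper; the model, data loaders, and hyperparameters are placeholders.

```python
# Generic FedAvg-style round (illustration only, NOT the paper's FedFM method).
import copy
import torch
from torch import nn

def local_update(global_model, loader, epochs=1, lr=1e-3):
    """Fine-tune a local copy of the global model on a client's private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg(client_states, client_sizes):
    """Weighted average of client parameters; raw data never leaves the clients."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(state[key] * (n / total)
                       for state, n in zip(client_states, client_sizes))
    return avg
```

Only parameter updates, not raw data, are exchanged in this setup, which is the privacy-preserving aspect the title refers to.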

Resource-Aware Heterogeneous Federated Learning using Neural Architecture Search

no code implementations · 9 Nov 2022 · Sixing Yu, Phuong Nguyen, Waqwoya Abebe, Justin Stanley, Pablo Munoz, Ali Jannesari

RaFL allocates resource-aware models to edge devices using Neural Architecture Search (NAS) and allows heterogeneous model architecture deployment by knowledge extraction and fusion.

Federated Learning · Neural Architecture Search · +1
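The abstract above describes matching model architectures to each device's resources. The snippet below sketches only that allocation step, picking the largest candidate that fits a per-device budget; the candidate pool and parameter counts are made up for illustration, and the paper's actual NAS search and knowledge extraction/fusion are not reproduced.

```python
# Hypothetical search space: (name, parameter count in millions).
CANDIDATES = [
    ("tiny",   1.2),
    ("small",  4.8),
    ("medium", 11.0),
    ("large",  25.6),
]

def allocate(device_budgets_m):
    """Map each device's parameter budget (millions) to the largest feasible model."""
    allocation = {}
    for device, budget in device_budgets_m.items():
        feasible = [(name, cost) for name, cost in CANDIDATES if cost <= budget]
        # Fall back to the smallest candidate if nothing fits the budget.
        allocation[device] = (max(feasible, key=lambda c: c[1])[0]
                              if feasible else CANDIDATES[0][0])
    return allocation

print(allocate({"phone": 5.0, "gateway": 30.0, "sensor": 0.5}))
# {'phone': 'small', 'gateway': 'large', 'sensor': 'tiny'}
```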

Enhancing Heterogeneous Federated Learning with Knowledge Extraction and Multi-Model Fusion

no code implementations · 16 Aug 2022 · Duy Phuong Nguyen, Sixing Yu, J. Pablo Muñoz, Ali Jannesari

This method allows efficient multi-model knowledge fusion and the deployment of resource-aware models while preserving model heterogeneity.

Federated Learning · Knowledge Distillation
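As the tags suggest, multi-model fusion in this line of work generally relies on knowledge distillation: heterogeneous client models act as teachers, and a server-side model is trained to match their averaged soft predictions on shared data. Below is a minimal sketch of that generic mechanism in PyTorch; the temperature, uniform teacher weighting, and use of unlabeled public batches are assumptions and may differ from the paper's exact recipe.

```python
# Server-side ensemble distillation sketch (generic mechanism, not the paper's code).
import torch
from torch import nn
import torch.nn.functional as F

def distill(global_model, client_models, public_loader, epochs=1, lr=1e-3, T=2.0):
    """Train a global model to match the clients' averaged softened predictions."""
    opt = torch.optim.Adam(global_model.parameters(), lr=lr)
    for m in client_models:
        m.eval()
    global_model.train()
    for _ in range(epochs):
        for x in public_loader:  # unlabeled shared batches
            with torch.no_grad():
                # Average the teachers' softened outputs ("extracted knowledge").
                teacher = torch.stack(
                    [F.softmax(m(x) / T, dim=-1) for m in client_models]).mean(0)
            student = F.log_softmax(global_model(x) / T, dim=-1)
            loss = F.kl_div(student, teacher, reduction="batchmean") * T * T
            opt.zero_grad()
            loss.backward()
            opt.step()
    return global_model
```

Because only model outputs on shared data are fused, each client can keep its own architecture, which is what preserves model heterogeneity.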

Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning

1 code implementation · 5 Feb 2021 · Sixing Yu, Arya Mazaheri, Ali Jannesari

Model compression is an essential technique for deploying deep neural networks (DNNs) on power and memory-constrained resources.

Graph Embedding · Model Compression · +3
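Per its title, this paper drives pruning with a reinforcement-learning agent acting on a multi-stage graph embedding of the network. The sketch below implements only the final, generic step, zeroing out the lowest-magnitude output channels of a convolution for a given per-layer ratio; the graph encoder and RL policy that choose those ratios (the paper's contribution) are not reproduced, and the example ratios are hypothetical.

```python
# Magnitude-based channel pruning for a given ratio (generic step, not the paper's agent).
import torch
from torch import nn

def prune_channels(conv: nn.Conv2d, ratio: float) -> torch.Tensor:
    """Zero out the output channels of `conv` with the smallest L1 norms."""
    n_prune = int(conv.out_channels * ratio)
    if n_prune == 0:
        return torch.arange(conv.out_channels)
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # per-channel L1 norm
    keep = norms.argsort(descending=True)[: conv.out_channels - n_prune]
    mask = torch.zeros(conv.out_channels)
    mask[keep] = 1.0
    with torch.no_grad():
        conv.weight *= mask.view(-1, 1, 1, 1)
        if conv.bias is not None:
            conv.bias *= mask
    return keep

# Example: per-layer ratios that an RL policy might emit for a toy two-layer network.
layers = [nn.Conv2d(3, 16, 3), nn.Conv2d(16, 32, 3)]
for conv, r in zip(layers, [0.25, 0.5]):
    kept = prune_channels(conv, r)
    print(conv, "kept", len(kept), "channels")
```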

Auto Graph Encoder-Decoder for Neural Network Pruning

no code implementations · ICCV 2021 · Sixing Yu, Arya Mazaheri, Ali Jannesari

We compared our method with rule-based DNN embedding model compression methods to demonstrate its effectiveness.

Model Compression · Network Pruning · +1
