Search Results for author: Xu Yuan

Found 10 papers, 2 papers with code

FedClust: Optimizing Federated Learning on Non-IID Data through Weight-Driven Client Clustering

no code implementations 7 Mar 2024 Md Sirajul Islam, Simin Javaherian, Fei Xu, Xu Yuan, Li Chen, Nian-Feng Tzeng

Clustered federated learning (CFL) addresses this challenge by grouping clients based on the similarity of their data distributions.

Federated Learning
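
The snippet above only says that clients are grouped by the similarity of their data distributions, while the title indicates the grouping is weight-driven. A minimal sketch of one plausible reading is below, assuming clients are clustered by the cosine similarity of their flattened local weight updates using scikit-learn's AgglomerativeClustering; this is an illustration, not the FedClust algorithm.

    # Illustrative sketch (not the FedClust algorithm): cluster clients by the
    # cosine similarity of their flattened local weight updates.
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering  # older scikit-learn uses affinity= below

    rng = np.random.default_rng(0)
    num_clients, dim = 8, 1000
    # Hypothetical flattened weight updates (local model minus global model).
    client_updates = np.stack([rng.normal(size=dim) for _ in range(num_clients)])

    # Pairwise cosine distance between client update vectors.
    unit = client_updates / np.linalg.norm(client_updates, axis=1, keepdims=True)
    cosine_distance = np.clip(1.0 - unit @ unit.T, 0.0, None)

    # Group clients whose updates point in similar directions.
    labels = AgglomerativeClustering(
        n_clusters=2, metric="precomputed", linkage="average"
    ).fit_predict(cosine_distance)
    print("cluster assignment per client:", labels)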

Semantic-Aware Adversarial Training for Reliable Deep Hashing Retrieval

1 code implementation IEEE Transactions on Information Forensics and Security 2023 Xu Yuan, Zheng Zhang, Xunguang Wang, Lin Wu

Further, we formulate, for the first time, the adversarial training of deep hashing as a unified minimax optimization guided by the generated mainstay codes.

Adversarial Attack Adversarial Robustness +2
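
The abstract sentence above describes a minimax formulation guided by "mainstay" codes. A rough, self-contained sketch of such a loop is below, assuming a toy tanh hashing network, a PGD inner step that pushes codes away from a target code, and the sign of the clean code as a stand-in for the mainstay code; none of these choices are taken from the paper.

    # Toy minimax adversarial training for a hashing network (illustrative only).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 16), nn.Tanh())  # 16-bit codes
    opt = torch.optim.SGD(net.parameters(), lr=0.01)
    x = torch.rand(8, 3, 32, 32)  # hypothetical image batch

    for _ in range(3):  # a few outer (min) steps
        with torch.no_grad():
            target_code = torch.sign(net(x))  # stand-in for a "mainstay" code (assumption)
        # Inner maximization: PGD steps that push codes away from the target code.
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(5):
            adv_code = net(x + delta)
            inner_loss = -((adv_code - target_code) ** 2).mean()  # maximize distance
            grad, = torch.autograd.grad(inner_loss, delta)
            delta = (delta - 0.01 * grad.sign()).clamp(-8 / 255, 8 / 255)
            delta = delta.detach().requires_grad_(True)
        # Outer minimization: pull adversarial codes back toward the target code.
        opt.zero_grad()
        outer_loss = ((net(x + delta.detach()) - target_code) ** 2).mean()
        outer_loss.backward()
        opt.step()
    print("final outer loss:", float(outer_loss))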

MMST-ViT: Climate Change-aware Crop Yield Prediction via Multi-Modal Spatial-Temporal Vision Transformer

1 code implementation ICCV 2023 Fudong Lin, Summer Crawford, Kaleb Guillot, Yihe Zhang, Yan Chen, Xu Yuan, Li Chen, Shelby Williams, Robert Minvielle, Xiangming Xiao, Drew Gholson, Nicolas Ashwell, Tri Setiyono, Brenda Tubana, Lu Peng, Magdy Bayoumi, Nian-Feng Tzeng

In this work, we develop a deep learning-based solution, the Multi-Modal Spatial-Temporal Vision Transformer (MMST-ViT), for predicting crop yields at the county level across the United States, accounting for the effects of both short-term meteorological variations during the growing season and long-term climate change on crops.

Contrastive Learning Crop Yield Prediction +1
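
The description above mentions fusing short-term meteorological variations with long-term climate data. A minimal two-branch transformer sketch is given below for orientation only; the input shapes, variable counts, and pooling are assumptions, and the actual MMST-ViT also uses satellite imagery and contrastive pre-training, which this sketch omits.

    # Minimal two-branch transformer sketch for multi-modal yield regression
    # (illustrative; not the MMST-ViT architecture).
    import torch
    import torch.nn as nn

    class TwoBranchYieldModel(nn.Module):
        def __init__(self, d_model=32):
            super().__init__()
            enc = lambda: nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
                num_layers=2,
            )
            self.short_proj = nn.Linear(10, d_model)   # 10 weekly weather variables (assumed)
            self.long_proj = nn.Linear(6, d_model)     # 6 yearly climate variables (assumed)
            self.short_enc, self.long_enc = enc(), enc()
            self.head = nn.Linear(2 * d_model, 1)      # county-level yield

        def forward(self, short_term, long_term):
            s = self.short_enc(self.short_proj(short_term)).mean(dim=1)  # pool over weeks
            l = self.long_enc(self.long_proj(long_term)).mean(dim=1)     # pool over years
            return self.head(torch.cat([s, l], dim=-1)).squeeze(-1)

    model = TwoBranchYieldModel()
    weekly = torch.rand(4, 26, 10)   # batch of 4 counties, 26 growing-season weeks
    yearly = torch.rand(4, 5, 6)     # 5 preceding years of climate summaries
    print(model(weekly, yearly).shape)  # torch.Size([4])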

Backdoor Federated Learning by Poisoning Backdoor-Critical Layers

no code implementations 8 Aug 2023 Haomin Zhuang, Mingxian Yu, Hao Wang, Yang Hua, Jian Li, Xu Yuan

Federated learning (FL) has been widely deployed to enable machine learning training on sensitive data across distributed devices.

Backdoor Attack Federated Learning

Workie-Talkie: Accelerating Federated Learning by Overlapping Computing and Communications via Contrastive Regularization

no code implementations ICCV 2023 Rui Chen, Qiyu Wan, Pavana Prakash, Lan Zhang, Xu Yuan, Yanmin Gong, Xin Fu, Miao Pan

However, practical deployment of FL over mobile devices is very challenging because (i) conventional FL incurs huge training latency on mobile devices due to interleaved local computing and communication of model updates, (ii) training data are heterogeneous across mobile devices, and (iii) mobile devices are heterogeneous in their computing and communication capabilities.

Federated Learning

Hierarchical Multi-Interest Co-Network For Coarse-Grained Ranking

no code implementations 19 Oct 2022 Xu Yuan, Chen Xu, Qiwei Chen, Tao Zhuang, Hongjie Chen, Chao Li, Junfeng Ge

This paper proposes a Hierarchical Multi-Interest Co-Network (HCN) to capture users' diverse interests in the coarse-grained ranking stage.

Accelerating Serverless Computing by Harvesting Idle Resources

no code implementations 28 Aug 2021 Hanfei Yu, Hao Wang, Jian Li, Xu Yuan, Seung-Jong Park

Serverless computing automates fine-grained resource scaling and simplifies the development and deployment of online services with stateless functions.

Multiple-Input Multiple-Output Fusion Network For Generalized Zero-Shot Learning

no code implementations IEEE 2021 Fangming Zhong, Guangze Wang, Zhikui Chen, Xu Yuan, Feng Xia

Generalized zero-shot learning (GZSL), which trains models on data from seen classes and tests them on data from both seen and unseen classes, has attracted considerable attention recently.

Generalized Zero-Shot Learning
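
The GZSL setting mentioned above (train on seen classes, test on both seen and unseen classes) can be made concrete with the small data-split sketch below; the class counts and random split are assumptions, and the MIMO fusion network itself is not sketched.

    # Sketch of the GZSL protocol only: train on seen classes, evaluate on
    # held-out seen-class samples plus all unseen-class samples (counts assumed).
    import numpy as np

    rng = np.random.default_rng(0)
    num_classes, feat_dim, n = 20, 64, 1000
    labels = rng.integers(0, num_classes, size=n)
    features = rng.normal(size=(n, feat_dim))

    seen = rng.choice(num_classes, size=15, replace=False)   # 15 seen classes
    unseen = np.setdiff1d(np.arange(num_classes), seen)      # 5 unseen classes

    train_mask = np.isin(labels, seen) & (rng.random(n) < 0.7)
    X_train, y_train = features[train_mask], labels[train_mask]   # seen classes only
    X_test, y_test = features[~train_mask], labels[~train_mask]   # seen + unseen at test time
    print(len(X_train), "train samples;", len(np.unique(y_test)), "classes appear at test time")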

Asymptotics of solutions with a compactness property for the nonlinear damped Klein-Gordon equation

no code implementations 22 Feb 2021 Raphaël Côte, Xu Yuan

We consider the nonlinear damped Klein-Gordon equation \[ \partial_{tt}u+2\alpha\partial_{t}u-\Delta u+u-|u|^{p-1}u=0 \quad \text{on} \ \ [0,\infty)\times \mathbb{R}^N \] with $\alpha>0$, $2 \le N\le 5$ and energy subcritical exponents $p>2$.

Analysis of PDEs
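
For orientation, the "energy subcritical" condition in the snippet above refers to the standard energy associated with this equation (commonly $p < 1 + \frac{4}{N-2}$ for $N \ge 3$, with no restriction for $N = 2$). The display below restates the equation and its decreasing energy as usually defined, not as quoted from the paper:
\[
\partial_{tt}u + 2\alpha\,\partial_{t}u - \Delta u + u - |u|^{p-1}u = 0,
\qquad (t,x) \in [0,\infty)\times\mathbb{R}^N,
\]
\[
E(u,\partial_t u) = \int_{\mathbb{R}^N} \Big( \tfrac12 |\partial_t u|^2
  + \tfrac12 |\nabla u|^2 + \tfrac12 u^2 - \tfrac{1}{p+1}|u|^{p+1} \Big)\,dx,
\qquad \frac{d}{dt}E = -2\alpha \int_{\mathbb{R}^N} |\partial_t u|^2\, dx .
\]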
