Search Results for author: Kun Yuan

Found 62 papers, 16 papers with code

QNCD: Quantization Noise Correction for Diffusion Models

1 code implementation • 28 Mar 2024 • Huanpeng Chu, Wei Wu, Chengjie Zang, Kun Yuan

Diffusion models have revolutionized image synthesis, setting new benchmarks in quality and creativity.

Enhancing Gait Video Analysis in Neurodegenerative Diseases by Knowledge Augmentation in Vision Language Model

no code implementations • 20 Mar 2024 • Diwei Wang, Kun Yuan, Candice Muller, Frédéric Blanc, Nicolas Padoy, Hyewon Seo

Based on a large-scale pre-trained Vision Language Model (VLM), our model learns and improves visual, textual, and numerical representations of patient gait videos, through a collective learning across three distinct modalities: gait videos, class-specific descriptions, and numerical gait parameters.

Language Modelling

CasSR: Activating Image Power for Real-World Image Super-Resolution

no code implementations • 18 Mar 2024 • Haolan Chen, Jinhua Hao, Kai Zhao, Kun Yuan, Ming Sun, Chao Zhou, Wei Hu

In particular, we develop a cascaded controllable diffusion model that aims to optimize the extraction of information from low-resolution images.

Image Restoration • Image Super-Resolution

XPSR: Cross-modal Priors for Diffusion-based Image Super-Resolution

no code implementations • 8 Mar 2024 • Yunpeng Qu, Kun Yuan, Kai Zhao, Qizhi Xie, Jinhua Hao, Ming Sun, Chao Zhou

Diffusion-based methods, endowed with a formidable generative prior, have received increasing attention in Image Super-Resolution (ISR) recently.

Image Super-Resolution

KVQ: Kwai Video Quality Assessment for Short-form Videos

no code implementations • 11 Feb 2024 • Yiting Lu, Xin Li, Yajing Pei, Kun Yuan, Qizhi Xie, Yunpeng Qu, Ming Sun, Chao Zhou, Zhibo Chen

Short-form UGC video platforms, like Kwai and TikTok, have become an emerging and irreplaceable mainstream media form, thriving on user-friendly engagement and kaleidoscopic content creation.

Video Quality Assessment • Visual Question Answering (VQA)

Asynchronous Diffusion Learning with Agent Subsampling and Local Updates

no code implementations • 8 Feb 2024 • Elsa Rizk, Kun Yuan, Ali H. Sayed

In this work, we examine a network of agents operating asynchronously, aiming to discover an ideal global model that suits individual local datasets.

Federated Learning

Decentralized Bilevel Optimization over Graphs: Loopless Algorithmic Update and Transient Iteration Complexity

no code implementations • 5 Feb 2024 • Boao Kong, Shuchen Zhu, Songtao Lu, Xinmeng Huang, Kun Yuan

In this paper, we introduce a single-loop decentralized SBO (D-SOBA) algorithm and establish its transient iteration complexity, which, for the first time, clarifies the joint influence of network topology and data heterogeneity on decentralized bilevel algorithms.

Bilevel Optimization

Advancing Surgical VQA with Scene Graph Knowledge

2 code implementations • 15 Dec 2023 • Kun Yuan, Manasi Kattel, Joel L. Lavanchy, Nassir Navab, Vinkle Srivastav, Nicolas Padoy

We highlight that the primary limitation in the current surgical VQA systems is the lack of scene knowledge to answer complex queries.

Question Answering • Visual Question Answering

Model-free Test Time Adaptation for Out-Of-Distribution Detection

no code implementations • 28 Nov 2023 • Yifan Zhang, Xue Wang, Tian Zhou, Kun Yuan, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan

We demonstrate the effectiveness of the proposed method through comprehensive experiments on multiple OOD detection benchmarks; extensive empirical studies show that it significantly improves the performance of OOD detection over state-of-the-art methods.

Decision Making • Out-of-Distribution Detection +2

RandCom: Random Communication Skipping Method for Decentralized Stochastic Optimization

no code implementations • 12 Oct 2023 • Luyao Guo, Sulaiman A. Alghunaim, Kun Yuan, Laurent Condat, Jinde Cao

We analyze the performance of RandCom in stochastic non-convex, convex, and strongly convex settings and demonstrate its ability to asymptotically reduce communication overhead by the probability of communication.

Distributed Optimization • Federated Learning
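The mechanism is simple enough to sketch. Below is a minimal, illustrative rendering of random communication skipping, assuming hypothetical `grad` and `neighbors_avg` callables; it is not RandCom's exact update rule, whose ordering and correction terms follow the paper.

```python
import random

def randcom_style_step(x_local, grad, neighbors_avg, lr=0.05, p=0.2):
    """Illustrative sketch of random communication skipping: every
    iteration does local gradient work, but the (expensive) gossip /
    averaging step runs only with probability p, cutting expected
    communication by a factor of p."""
    x_local = x_local - lr * grad(x_local)   # always perform the local update
    if random.random() < p:                  # communicate only w.p. p
        x_local = neighbors_avg(x_local)     # averaging with neighbors
    return x_local
```

In expectation only a p-fraction of iterations incur communication, which is the sense in which overhead is "asymptotically reduced by the probability of communication" above.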

BEVHeight++: Toward Robust Visual Centric 3D Object Detection

no code implementations • 28 Sep 2023 • Lei Yang, Tao Tang, Jun Li, Peng Chen, Kun Yuan, Li Wang, Yi Huang, Xinyu Zhang, Kaicheng Yu

In essence, we regress the height to the ground to achieve a distance-agnostic formulation to ease the optimization process of camera-only perception methods.

3D Object Detection • Autonomous Driving +2

Capturing Co-existing Distortions in User-Generated Content for No-reference Video Quality Assessment

no code implementations • 31 Jul 2023 • Kun Yuan, Zishang Kong, Chuanchuan Zheng, Ming Sun, Xing Wen

Second, the perceptual quality of a video exhibits a multi-distortion distribution, due to the differences in the duration and probability of occurrence for various distortions.

Action Recognition • Blocking +2

Learning Multi-modal Representations by Watching Hundreds of Surgical Video Lectures

1 code implementation • 27 Jul 2023 • Kun Yuan, Vinkle Srivastav, Tong Yu, Joel L. Lavanchy, Pietro Mascagni, Nassir Navab, Nicolas Padoy

SurgVLP constructs a new contrastive learning objective to align video clip embeddings with the corresponding multiple text embeddings by bringing them together within a joint latent space.

Automatic Speech Recognition • Contrastive Learning +6
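The objective above, aligning one clip with several texts, can be sketched as a multi-positive InfoNCE loss. The shapes, names, and single temperature below are assumptions for illustration; SurgVLP's actual objective lives in its released code.

```python
import torch
import torch.nn.functional as F

def multi_text_contrastive_loss(clip_emb, text_emb, temperature=0.07):
    """Illustrative multi-positive contrastive loss.

    clip_emb: (B, D) video clip embeddings
    text_emb: (B, K, D) K text embeddings per clip (e.g. multiple captions)
    """
    clip_emb = F.normalize(clip_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    B, K, D = text_emb.shape

    # similarity of every clip against every text of every sample: (B, B*K)
    logits = clip_emb @ text_emb.reshape(B * K, D).T / temperature

    # positives for clip i are its own K texts: columns i*K .. i*K + K-1
    pos_mask = torch.zeros_like(logits, dtype=torch.bool)
    for i in range(B):
        pos_mask[i, i * K:(i + 1) * K] = True

    # multi-positive InfoNCE: -log( sum over positives / sum over all )
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -log_prob.masked_fill(~pos_mask, float("-inf")).logsumexp(dim=1).mean()
    return loss
```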

Momentum Benefits Non-IID Federated Learning Simply and Provably

no code implementations • 28 Jun 2023 • Ziheng Cheng, Xinmeng Huang, Pengfei Wu, Kun Yuan

When all clients participate in the training process, we demonstrate that incorporating momentum allows FedAvg to converge without relying on the assumption of bounded data heterogeneity even using a constant local learning rate.

Federated Learning
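As a rough illustration of "FedAvg plus momentum with a constant local learning rate", here is a hedged sketch with server-side momentum; the paper's precise algorithm (and where the momentum is applied) may differ, and `client.grad` is a hypothetical stochastic-gradient oracle.

```python
import torch

def fedavg_momentum_round(w_global, clients, momentum, local_steps=5,
                          lr=0.1, beta=0.9):
    """One round of FedAvg with server-side momentum (illustrative sketch)."""
    deltas = []
    for client in clients:                       # full client participation
        w = w_global.clone()
        for _ in range(local_steps):             # constant local learning rate
            w = w - lr * client.grad(w)          # local SGD step
        deltas.append(w_global - w)              # client pseudo-gradient
    avg_delta = torch.stack(deltas).mean(dim=0)  # server aggregation
    momentum.mul_(beta).add_(avg_delta)          # fold into momentum buffer
    return w_global - momentum, momentum
```

With `beta=0` this reduces to plain FedAvg.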

DSGD-CECA: Decentralized SGD with Communication-Optimal Exact Consensus Algorithm

1 code implementation • 1 Jun 2023 • Lisang Ding, Kexin Jin, Bicheng Ying, Kun Yuan, Wotao Yin

Their communication, governed by the communication topology and gossip weight matrices, facilitates the exchange of model updates.

Unbiased Compression Saves Communication in Distributed Optimization: When and How Much?

no code implementations • NeurIPS 2023 • Yutong He, Xinmeng Huang, Kun Yuan

Our results reveal that using independent unbiased compression can reduce the total communication cost by a factor of up to $\Theta(\sqrt{\min\{n, \kappa\}})$ when all local smoothness constants are constrained by a common upper bound, where $n$ is the number of workers and $\kappa$ is the condition number of the functions being minimized.

Distributed Optimization
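For concreteness, a textbook example of the independent unbiased compressors covered by such bounds is rand-$k$ sparsification, sketched below (a standard construction, not code from the paper):

```python
import torch

def rand_k_compress(x, k):
    """Unbiased rand-k compressor: keep k random coordinates of x and
    rescale by d/k so that E[C(x)] = x."""
    d = x.numel()
    flat = x.reshape(-1)
    idx = torch.randperm(d)[:k]      # k coordinates chosen uniformly
    out = torch.zeros_like(flat)
    out[idx] = flat[idx] * (d / k)   # rescaling restores the expectation
    return out.reshape(x.shape)
```

Each coordinate survives with probability $k/d$ and is rescaled by $d/k$, so $\mathbb{E}[C(x)] = x$.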

Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression

no code implementations • 12 May 2023 • Yutong He, Xinmeng Huang, Yiming Chen, Wotao Yin, Kun Yuan

In this paper, we investigate the performance limit of distributed stochastic optimization algorithms employing communication compression.

Stochastic Optimization

AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation

1 code implementation • 25 Apr 2023 • Yi-Fan Zhang, Xue Wang, Kexin Jin, Kun Yuan, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan

In particular, when the adaptation target is a series of domains, the adaptation accuracy of AdaNPC is 50% higher than advanced TTA methods.

Domain Generalization • Test-time Adaptation

Zoom-VQA: Patches, Frames and Clips Integration for Video Quality Assessment

1 code implementation • 13 Apr 2023 • Kai Zhao, Kun Yuan, Ming Sun, Xing Wen

Video quality assessment (VQA) aims to simulate the human perception of video quality, which is influenced by factors ranging from low-level color and texture details to high-level semantic content.

Video Quality Assessment • Visual Question Answering (VQA)

BEVHeight: A Robust Framework for Vision-based Roadside 3D Object Detection

1 code implementation • CVPR 2023 • Lei Yang, Kaicheng Yu, Tao Tang, Jun Li, Kun Yuan, Li Wang, Xinyu Zhang, Peng Chen

In essence, instead of predicting the pixel-wise depth, we regress the height to the ground to achieve a distance-agnostic formulation to ease the optimization process of camera-only perception methods.

3D Object Detection • Autonomous Driving +1

Quality-aware Pre-trained Models for Blind Image Quality Assessment

no code implementations • CVPR 2023 • Kai Zhao, Kun Yuan, Ming Sun, Mading Li, Xing Wen

Blind image quality assessment (BIQA) aims to automatically evaluate the perceived quality of a single image, whose performance has been improved by deep learning-based methods in recent years.

Blind Image Quality Assessment • Self-Supervised Learning

Optimal Complexity in Non-Convex Decentralized Learning over Time-Varying Networks

no code implementations • 1 Nov 2022 • Xinmeng Huang, Kun Yuan

The main difficulties lie in how to gauge the effectiveness when transmitting messages between two nodes via time-varying communications, and how to establish the lower bound when the network size is fixed (which is a prerequisite in stochastic optimization).

Federated Learning • Stochastic Optimization

Revisiting Optimal Convergence Rate for Smooth and Non-convex Stochastic Decentralized Optimization

no code implementations • 14 Oct 2022 • Kun Yuan, Xinmeng Huang, Yiming Chen, Xiaohan Zhang, Yingya Zhang, Pan Pan

While (Lu and Sa, 2021) have recently provided an optimal rate for non-convex stochastic decentralized optimization with weight matrices defined over linear graphs, the optimal rate with general weight matrices remains unclear.

Communication-Efficient Topologies for Decentralized Learning with $O(1)$ Consensus Rate

1 code implementation • 14 Oct 2022 • Zhuoqing Song, Weijian Li, Kexin Jin, Lei Shi, Ming Yan, Wotao Yin, Kun Yuan

In the proposed family, EquiStatic has a degree of $\Theta(\ln(n))$, where $n$ is the network size, and a series of time-dependent one-peer topologies, EquiDyn, has a constant degree of 1.
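To make the "constant degree of 1" concrete, here is a hedged sketch of a time-varying one-peer scheme in the same spirit, where each node contacts a single peer at a power-of-two offset per round; the exact EquiDyn construction in the paper may differ.

```python
import math

def one_peer_exponential_neighbor(i, t, n):
    """At iteration t, node i talks to one peer at a power-of-two offset,
    so every node has degree 1 per round while the union of rounds
    mixes quickly (illustrative, not the paper's exact construction)."""
    tau = max(1, math.ceil(math.log2(n)))   # number of offset phases
    offset = 2 ** (t % tau)                 # offsets 1, 2, 4, ... cycling
    return (i + offset) % n

# Example: with n = 8 nodes, node 0 meets peers 1, 2, 4, 1, 2, 4, ...
peers = [one_peer_exponential_neighbor(0, t, 8) for t in range(6)]
```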

On the Performance of Gradient Tracking with Local Updates

no code implementations • 10 Oct 2022 • Edward Duc Hien Nguyen, Sulaiman A. Alghunaim, Kun Yuan, César A. Uribe

We study the decentralized optimization problem in which a network of $n$ agents cooperatively minimizes the average of a set of heterogeneous non-convex cost functions.

Federated Learning

Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression

no code implementations • 8 Jun 2022 • Xinmeng Huang, Yiming Chen, Wotao Yin, Kun Yuan

We establish a convergence lower bound for algorithms using either unbiased or contractive compressors, in both unidirectional and bidirectional settings.

Distributed Optimization

Heavy-Tail Phenomenon in Decentralized SGD

no code implementations • 13 May 2022 • Mert Gurbuzbalaban, Yuanhan Hu, Umut Simsekli, Kun Yuan, Lingjiong Zhu

To have a more explicit control on the tail exponent, we then consider the case where the loss at each node is a quadratic, and show that the tail-index can be estimated as a function of the step-size, batch-size, and the topological properties of the network of the computational nodes.

Stochastic Optimization

ShowFace: Coordinated Face Inpainting with Memory-Disentangled Refinement Networks

no code implementations • 6 Apr 2022 • Zhuojie Wu, Xingqun Qi, Zijian Wang, Wanting Zhou, Kun Yuan, Muyi Sun, Zhenan Sun

Furthermore, to improve the inter-coordination between corrupted and non-corrupted regions and enhance the intra-coordination within corrupted regions, we design InCo2 Loss, a pair of similarity-based losses that constrain feature consistency.

Disentanglement • Facial Inpainting

CHEX: CHannel EXploration for CNN Model Compression

1 code implementation • CVPR 2022 • Zejiang Hou, Minghai Qin, Fei Sun, Xiaolong Ma, Kun Yuan, Yi Xu, Yen-Kuang Chen, Rong Jin, Yuan Xie, Sun-Yuan Kung

However, conventional pruning methods have limitations: they are restricted to the pruning process only, and they require a fully pre-trained large model.

Image Classification • Instance Segmentation +4

An Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders

no code implementations • NeurIPS 2021 • Xinmeng Huang, Kun Yuan, Xianghui Mao, Wotao Yin

In this paper, we improve the convergence analysis and rates of variance reduction under without-replacement sampling orders for composite finite-sum minimization. Our results are two-fold.

BlueFog: Make Decentralized Algorithms Practical for Optimization and Deep Learning

2 code implementations • 8 Nov 2021 • Bicheng Ying, Kun Yuan, Hanbin Hu, Yiming Chen, Wotao Yin

On mainstream DNN training tasks, BlueFog reaches a much higher throughput and achieves an overall $1.2\times \sim 1.8\times$ speedup over Horovod, a state-of-the-art distributed deep learning package based on Ring-Allreduce.

Exponential Graph is Provably Efficient for Decentralized Deep Training

2 code implementations • NeurIPS 2021 • Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Pan Pan, Wotao Yin

Experimental results on a variety of tasks and models demonstrate that decentralized (momentum) SGD over exponential graphs promises both fast and high-quality training.

Communicate Then Adapt: An Effective Decentralized Adaptive Method for Deep Training

no code implementations • 29 Sep 2021 • Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Yingya Zhang, Pan Pan, Wotao Yin

Decentralized adaptive gradient methods, in which each node averages only with its neighbors, are critical to save communication and wall-clock training time in deep learning tasks.

Decentralized Composite Optimization with Compression

no code implementations • 10 Aug 2021 • Yao Li, Xiaorui Liu, Jiliang Tang, Ming Yan, Kun Yuan

Decentralized optimization and communication compression have exhibited their great potential in accelerating distributed machine learning by mitigating the communication bottleneck in practice.

Effective Model Sparsification by Scheduled Grow-and-Prune Methods

1 code implementation • ICLR 2022 • Xiaolong Ma, Minghai Qin, Fei Sun, Zejiang Hou, Kun Yuan, Yi Xu, Yanzhi Wang, Yen-Kuang Chen, Rong Jin, Yuan Xie

It addresses the shortcomings of the previous works by repeatedly growing a subset of layers to dense and then pruning them back to sparse after some training.

Image Classification

Removing Data Heterogeneity Influence Enhances Network Topology Dependence of Decentralized SGD

no code implementations • 17 May 2021 • Kun Yuan, Sulaiman A. Alghunaim, Xinmeng Huang

For smooth objective functions, the transient stage (which measures the number of iterations the algorithm has to experience before achieving the linear speedup stage) of D-SGD is on the order of ${\Omega}(n/(1-\beta)^2)$ and $\Omega(n^3/(1-\beta)^4)$ for strongly and generally convex cost functions, respectively, where $1-\beta \in (0, 1)$ is a topology-dependent quantity that approaches $0$ for a large and sparse network.

Stochastic Optimization

Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders

no code implementations • 25 Apr 2021 • Xinmeng Huang, Kun Yuan, Xianghui Mao, Wotao Yin

In the highly data-heterogeneous scenario, Prox-DFinito with optimal cyclic sampling can attain a sample-size-independent convergence rate, which, to our knowledge, is the first result matching uniform-i.i.d. sampling with variance reduction.

DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training

1 code implementation • ICCV 2021 • Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, Wotao Yin

Experimental results on a variety of computer vision tasks and models demonstrate that DecentLaM promises both efficient and high-quality training.

Differentiable Network Adaption with Elastic Search Space

no code implementations • 30 Mar 2021 • Shaopeng Guo, Yujie Wang, Kun Yuan, Quanquan Li

In this paper we propose a novel network adaption method called Differentiable Network Adaption (DNA), which can adapt an existing network to a specific computation budget by adjusting the width and depth in a differentiable manner.

Neural Architecture Search

Incorporating Convolution Designs into Visual Transformers

3 code implementations • ICCV 2021 • Kun Yuan, Shaopeng Guo, Ziwei Liu, Aojun Zhou, Fengwei Yu, Wei Wu

Motivated by the success of Transformers in natural language processing (NLP) tasks, several attempts (e.g., ViT and DeiT) have emerged to apply Transformers to the vision domain.

Image Classification

Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch

4 code implementations • ICLR 2021 • Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, Hongsheng Li

In this paper, we are the first to study training an N:M fine-grained structured sparse network from scratch, which can simultaneously maintain the advantages of both unstructured fine-grained sparsity and structured coarse-grained sparsity on specifically designed GPUs.
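The sparsity pattern itself is easy to illustrate. The sketch below builds a 2:4-style magnitude mask (keep the n largest of every m consecutive weights); it shows only the pattern, not the paper's training-from-scratch procedure.

```python
import torch

def nm_prune_mask(weight, n=2, m=4):
    """N:M fine-grained structured sparsity mask: within every group of
    m consecutive weights, keep the n largest by magnitude."""
    flat = weight.reshape(-1, m)                 # groups of m weights
    idx = flat.abs().topk(n, dim=1).indices      # n survivors per group
    mask = torch.zeros_like(flat, dtype=torch.bool)
    mask.scatter_(1, idx, True)
    return mask.reshape(weight.shape)

w = torch.randn(8, 8)
w_sparse = w * nm_prune_mask(w)   # every 4 consecutive weights keep 2
```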

ODE Analysis of Stochastic Gradient Methods with Optimism and Anchoring for Minimax Problems and GANs

no code implementations • 25 Sep 2019 • Ernest K. Ryu, Kun Yuan, Wotao Yin

Despite remarkable empirical success, the training dynamics of generative adversarial networks (GAN), which involves solving a minimax game using stochastic gradients, is still poorly understood.

Diving into Optimization of Topology in Neural Networks

no code implementations • 25 Sep 2019 • Kun Yuan, Quanquan Li, Yucong Zhou, Jing Shao, Junjie Yan

Seeking effective networks has become one of the most crucial and practical areas in deep learning.

Face Recognition • Image Classification +2

ODE Analysis of Stochastic Gradient Methods with Optimism and Anchoring for Minimax Problems

no code implementations • 26 May 2019 • Ernest K. Ryu, Kun Yuan, Wotao Yin

Despite remarkable empirical success, the training dynamics of generative adversarial networks (GAN), which involves solving a minimax game using stochastic gradients, is still poorly understood.

On the Influence of Bias-Correction on Distributed Stochastic Optimization

no code implementations • 26 Mar 2019 • Kun Yuan, Sulaiman A. Alghunaim, Bicheng Ying, Ali H. Sayed

It is still unknown whether, when, and why these bias-correction methods can outperform their traditional counterparts (such as consensus and diffusion) with noisy gradients and constant step-sizes.

Stochastic Optimization

Supervised Learning Under Distributed Features

no code implementations • 29 May 2018 • Bicheng Ying, Kun Yuan, Ali H. Sayed

This work studies the problem of learning under both large datasets and large-dimensional feature space scenarios.

Stochastic Learning under Random Reshuffling with Constant Step-sizes

no code implementations • 21 Mar 2018 • Bicheng Ying, Kun Yuan, Stefan Vlaski, Ali H. Sayed

In empirical risk optimization, it has been observed that stochastic gradient implementations that rely on random reshuffling of the data achieve better performance than implementations that rely on sampling the data uniformly.
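The two sampling orders being compared can be sketched in a few lines (illustrative only):

```python
import random

def samples_uniform(data, steps):
    """i.i.d. sampling: each step draws a sample uniformly with replacement."""
    for _ in range(steps):
        yield random.choice(data)

def samples_reshuffled(data, num_epochs):
    """Random reshuffling: each epoch visits every sample exactly once,
    in a fresh random order -- the regime analyzed in the paper."""
    for _ in range(num_epochs):
        order = random.sample(data, len(data))   # a permutation of the dataset
        yield from order
```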

Variance-Reduced Stochastic Learning under Random Reshuffling

no code implementations • 4 Aug 2017 • Bicheng Ying, Kun Yuan, Ali H. Sayed

First, it resolves this open issue and provides the first theoretical guarantee of linear convergence under random reshuffling for SAGA; the argument is also adaptable to other variance-reduced algorithms.

Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling

no code implementations • 4 Aug 2017 • Kun Yuan, Bicheng Ying, Jiageng Liu, Ali H. Sayed

For such situations, the balanced gradient computation property of AVRG becomes a real advantage in reducing idle time caused by unbalanced local data storage requirements, which is characteristic of other reduced-variance gradient algorithms.

On the Influence of Momentum Acceleration on Online Learning

no code implementations • 14 Mar 2016 • Kun Yuan, Bicheng Ying, Ali H. Sayed

The article examines in some detail the convergence rate and mean-square-error performance of momentum stochastic gradient methods in the constant step-size and slow adaptation regime.
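For reference, the constant step-size momentum (heavy-ball) recursion studied in this setting typically takes the form below, with step size $\mu$, momentum parameter $\beta$, and stochastic gradient $\widehat{\nabla Q}$; the paper's exact notation may differ:

```latex
w_{k+1} = w_k - \mu\, \widehat{\nabla Q}(w_k) + \beta\,(w_k - w_{k-1}), \qquad 0 \le \beta < 1
```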

Online Dual Coordinate Ascent Learning

no code implementations • 24 Feb 2016 • Bicheng Ying, Kun Yuan, Ali H. Sayed

The stochastic dual coordinate-ascent (S-DCA) technique is a useful alternative to the traditional stochastic gradient-descent algorithm for solving large-scale optimization problems due to its scalability to large data sets and strong theoretical guarantees.
