Search Results for author: Chengxu Liu

Found 7 papers, 2 papers with code

Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring

no code implementations 19 Apr 2024 Chengxu Liu, Xuan Wang, Xiangyu Xu, Ruhao Tian, Shuai Li, Xueming Qian, Ming-Hsuan Yang

In particular, we use a motion estimation network to capture motion information from neighborhoods, thereby adaptively estimating spatially-variant motion flow, mask, kernels, weights, and offsets to obtain the MISC Filter.

Deblurring, Motion Estimation
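
To make the idea above more concrete, here is a minimal sketch of spatially-variant separable filtering with per-pixel predicted kernels. This is an illustration of the general technique, not the paper's MISC Filter; the function name, tensor shapes, and the softmax-normalized kernels are assumptions.

```python
# Hypothetical sketch of spatially-variant separable filtering (not the paper's code).
import torch
import torch.nn.functional as F

def separable_adaptive_filter(x, k_h, k_v):
    """Apply per-pixel separable kernels to x.

    x:   (B, C, H, W) input features
    k_h: (B, K, H, W) horizontal kernel weights per pixel (assumed softmax-normalized)
    k_v: (B, K, H, W) vertical kernel weights per pixel
    """
    B, C, H, W = x.shape
    K = k_h.shape[1]
    pad = K // 2

    # Horizontal pass: gather a 1xK neighborhood for every pixel, then weight it.
    cols = F.unfold(x, kernel_size=(1, K), padding=(0, pad))      # (B, C*K, H*W)
    cols = cols.view(B, C, K, H, W)
    x = (cols * k_h.unsqueeze(1)).sum(dim=2)                      # (B, C, H, W)

    # Vertical pass with the second 1D kernel.
    cols = F.unfold(x, kernel_size=(K, 1), padding=(pad, 0))
    cols = cols.view(B, C, K, H, W)
    x = (cols * k_v.unsqueeze(1)).sum(dim=2)
    return x

# Toy usage: in the paper, such kernels (plus masks, weights, and offsets) would come
# from the motion estimation network rather than random tensors.
x = torch.randn(1, 3, 32, 32)
k_h = torch.softmax(torch.randn(1, 5, 32, 32), dim=1)
k_v = torch.softmax(torch.randn(1, 5, 32, 32), dim=1)
print(separable_adaptive_filter(x, k_h, k_v).shape)  # torch.Size([1, 3, 32, 32])
```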

Decoupling Degradations with Recurrent Network for Video Restoration in Under-Display Camera

1 code implementation 8 Mar 2024 Chengxu Liu, Xuan Wang, Yuanting Fan, Shuai Li, Xueming Qian

The pixel array of light-emitting diodes used for display diffracts and attenuates incident light, causing various degradations as the light intensity changes.

Image Restoration, Video Restoration

FSI: Frequency and Spatial Interactive Learning for Image Restoration in Under-Display Cameras

no code implementations ICCV 2023 Chengxu Liu, Xuan Wang, Shuai Li, Yuzhi Wang, Xueming Qian

In this paper, we introduce a new perspective to handle various diffraction artifacts in UDC images by jointly exploring feature restoration in the frequency and spatial domains, and present a Frequency and Spatial Interactive Learning Network (FSI).

Image Restoration
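
As a rough illustration of joint frequency and spatial processing, the following sketch runs a feature map through a spatial convolution branch and an FFT-based frequency branch and fuses the two. It is not the paper's FSI block; the module name, the 1x1 convolutions on the real/imaginary parts, and the residual fusion are assumptions.

```python
# Hypothetical frequency/spatial dual-branch block (an illustration, not the FSI module itself).
import torch
import torch.nn as nn

class FreqSpatialBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Spatial branch: plain local convolution.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Frequency branch: pointwise convs on the real/imaginary parts of the FFT.
        self.freq = nn.Sequential(
            nn.Conv2d(channels * 2, channels * 2, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels * 2, channels * 2, 1),
        )
        self.fuse = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, x):
        B, C, H, W = x.shape
        s = self.spatial(x)

        # Frequency path: FFT -> pointwise mixing -> inverse FFT.
        spec = torch.fft.rfft2(x, norm="ortho")                 # (B, C, H, W//2+1), complex
        spec = torch.cat([spec.real, spec.imag], dim=1)         # (B, 2C, H, W//2+1)
        spec = self.freq(spec)
        real, imag = spec.chunk(2, dim=1)
        f = torch.fft.irfft2(torch.complex(real, imag), s=(H, W), norm="ortho")

        # Simple fusion of the two branches (the paper interleaves them more tightly).
        return x + self.fuse(torch.cat([s, f], dim=1))

# Toy usage
block = FreqSpatialBlock(16)
print(block(torch.randn(2, 16, 64, 64)).shape)  # torch.Size([2, 16, 64, 64])
```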

CSDA: Learning Category-Scale Joint Feature for Domain Adaptive Object Detection

no code implementations ICCV 2023 Changlong Gao, Chengxu Liu, Yujie Dun, Xueming Qian

For better category-level feature alignment, we propose a novel DAOD framework that jointly exploits category and scale information, dubbed CSDA; this design enables effective object learning across different scales.

Object, Object Detection +1

4D LUT: Learnable Context-Aware 4D Lookup Table for Image Enhancement

no code implementations 5 Sep 2022 Chengxu Liu, Huan Yang, Jianlong Fu, Xueming Qian

In particular, we first introduce a lightweight context encoder and a parameter encoder to learn a context map for the pixel-level category and a group of image-adaptive coefficients, respectively.

Ranked #7 on Image Enhancement on MIT-Adobe 5k (SSIM on proRGB metric)

Image Enhancement
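
A hedged sketch of how a learnable context map can act as a fourth lookup coordinate: below, the per-pixel context value blends between a stack of 3D LUTs, which is one way to realise a 4D lookup. This is not the paper's 4D LUT implementation; the function name, the LUT-stack parameterization, and the tent-weight blending are assumptions.

```python
# Hypothetical sketch: a context value as a 4th lookup coordinate (not the paper's 4D LUT code).
import torch
import torch.nn.functional as F

def apply_context_lut(img, context, lut):
    """img:     (B, 3, H, W), RGB in [0, 1]
    context: (B, 1, H, W), pixel-level context map in [0, 1]
    lut:     (N, 3, S, S, S), a stack of N 3D LUTs; blending the stack with the
             context realises the 4th lookup dimension."""
    B, _, H, W = img.shape
    N = lut.shape[0]

    # Trilinear RGB lookup of every pixel in each of the N LUT slices.
    grid = img.permute(0, 2, 3, 1) * 2 - 1                  # (B, H, W, 3) in [-1, 1]
    grid = grid.view(B, 1, H, W, 3)                         # fake depth dim for 5D grid_sample
    outs = []
    for n in range(N):
        slice_n = lut[n:n + 1].expand(B, -1, -1, -1, -1).contiguous()   # (B, 3, S, S, S)
        outs.append(F.grid_sample(slice_n, grid, align_corners=True).view(B, 3, H, W))
    outs = torch.stack(outs, dim=0)                         # (N, B, 3, H, W)

    # Blend along the context axis with tent (piecewise-linear) weights.
    c = context.clamp(0, 1) * (N - 1)                       # (B, 1, H, W)
    idx = torch.arange(N, dtype=img.dtype, device=img.device).view(N, 1, 1, 1, 1)
    weights = (1 - (c.unsqueeze(0) - idx).abs()).clamp(min=0)   # (N, B, 1, H, W)
    return (outs * weights).sum(dim=0)                      # (B, 3, H, W)

# Toy usage: the LUT stack and context map would normally be produced by the
# parameter encoder and context encoder mentioned in the abstract.
img = torch.rand(1, 3, 32, 32)
ctx = torch.rand(1, 1, 32, 32)
lut = torch.rand(4, 3, 17, 17, 17)
print(apply_context_lut(img, ctx, lut).shape)  # torch.Size([1, 3, 32, 32])
```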

TTVFI: Learning Trajectory-Aware Transformer for Video Frame Interpolation

no code implementations 19 Jul 2022 Chengxu Liu, Huan Yang, Jianlong Fu, Xueming Qian

In particular, we formulate the warped features with inconsistent motions as query tokens, and formulate relevant regions in a motion trajectory from two original consecutive frames into keys and values.

Video Frame Interpolation
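
To illustrate the query/key/value formulation described above, here is a minimal cross-attention sketch in which each warped-feature token attends to a handful of tokens gathered along its motion trajectory. It is not the TTVFI code; the class name, the per-location attention, and the residual connection are assumptions.

```python
# Hypothetical sketch of trajectory-style cross-attention (illustrative, not the TTVFI code).
import torch
import torch.nn as nn

class TrajectoryCrossAttention(nn.Module):
    """Queries come from warped intermediate features; keys/values come from
    features sampled along a motion trajectory in the two original frames."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_q = nn.Linear(dim, dim)
        self.to_kv = nn.Linear(dim, dim)

    def forward(self, warped, traj_feats):
        # warped:     (B, N, C)    one token per pixel/patch of the warped frame
        # traj_feats: (B, N, T, C) T candidate tokens along each location's trajectory
        B, N, T, C = traj_feats.shape
        q = self.to_q(warped).reshape(B * N, 1, C)           # one query per location
        kv = self.to_kv(traj_feats).reshape(B * N, T, C)     # its trajectory tokens
        out, _ = self.attn(q, kv, kv)                        # attend within the trajectory
        return out.reshape(B, N, C) + warped                 # residual refinement

# Toy usage: 64 locations, 6 trajectory candidates each.
m = TrajectoryCrossAttention(dim=32)
warped = torch.randn(2, 64, 32)
traj = torch.randn(2, 64, 6, 32)
print(m(warped, traj).shape)  # torch.Size([2, 64, 32])
```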

Learning Trajectory-Aware Transformer for Video Super-Resolution

1 code implementation CVPR 2022 Chengxu Liu, Huan Yang, Jianlong Fu, Xueming Qian

Existing approaches usually align and aggregate video frames from a limited number of adjacent frames (e.g., 5 or 7 frames), which prevents them from achieving satisfactory results.

Video Super-Resolution
