Search Results for author: Yuzhen Niu

Found 6 papers, 1 paper with code

Learning-Based Video Coding with Joint Deep Compression and Enhancement

no code implementations · 29 Nov 2021 · Tiesong Zhao, Weize Feng, Hongji Zeng, Yuzhen Niu, Jiaying Liu

Second, we reuse the DPEG network in both motion compensation and quality enhancement modules, which are further combined with other necessary modules to formulate our JCEVC framework.

Generative Adversarial Network · Motion Compensation · +4
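
A minimal sketch (PyTorch) of the parameter-reuse idea mentioned in the abstract above: one shared enhancement network serves both the motion-compensation and the quality-enhancement modules. The class names and layer sizes below are illustrative assumptions, not the DPEG architecture from the paper.

```python
import torch
import torch.nn as nn

class DPEGNet(nn.Module):
    """Placeholder deep enhancement network (a few residual conv layers)."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual enhancement

class JCEVCSketch(nn.Module):
    """Both modules call the same DPEG instance, so its weights are reused."""
    def __init__(self):
        super().__init__()
        self.dpeg = DPEGNet()

    def motion_compensate(self, warped_reference):
        # refine the motion-compensated prediction with the shared network
        return self.dpeg(warped_reference)

    def enhance(self, decoded_frame):
        # reuse the same network to enhance the reconstructed frame
        return self.dpeg(decoded_frame)
```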

Coarse-To-Fine Person Re-Identification With Auxiliary-Domain Classification and Second-Order Information Bottleneck

no code implementations CVPR 2021 Anguo Zhang, Yueming Gao, Yuzhen Niu, Wenxi Liu, Yongcheng Zhou

Person re-identification (Re-ID) is to retrieve a particular person captured by different cameras, which is of great significance for security surveillance and pedestrian behavior analysis.

Domain Classification · Miscellaneous · +1

HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions

1 code implementation · 3 Jul 2020 · Yuzhen Niu, Jianbin Wu, Wenxi Liu, Wenzhong Guo, Rynson W. H. Lau

To address these two problems, we propose in this paper a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.

HDR Reconstruction · Image Reconstruction
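
A minimal sketch (PyTorch) of a GAN-style fusion setup in the spirit of the abstract above: a generator merges several LDR exposures into one HDR estimate and a discriminator scores realism. The network shapes and exposure handling are assumptions for illustration, not the HDR-GAN architecture itself.

```python
import torch
import torch.nn as nn

class FusionGenerator(nn.Module):
    def __init__(self, n_exposures=3, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * n_exposures, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid(),  # tone-mapped HDR estimate
        )

    def forward(self, ldr_stack):  # ldr_stack: (B, 3 * n_exposures, H, W)
        return self.net(ldr_stack)

class Discriminator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),  # patch-level real/fake scores
        )

    def forward(self, img):
        return self.net(img)

# usage: gen(torch.cat([ldr_low, ldr_mid, ldr_high], dim=1)) -> fused HDR estimate
```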

Over-crowdedness Alert! Forecasting the Future Crowd Distribution

no code implementations9 Jun 2020 Yuzhen Niu, Weifeng Shi, Wenxi Liu, Shengfeng He, Jia Pan, Antoni B. Chan

In this paper, we formulate a novel crowd analysis problem, in which we aim to predict the crowd distribution in the near future given sequential frames of a crowd video without any identity annotations.

Saliency Aggregation: A Data-Driven Approach

no code implementations CVPR 2013 Long Mai, Yuzhen Niu, Feng Liu

Our idea is to use data-driven approaches to saliency aggregation that appropriately consider the performance gaps among individual methods and the performance dependence of each method on individual images.
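
A minimal sketch (NumPy) of the data-driven aggregation idea: per-method weights are fit on images with ground-truth saliency rather than averaging all methods uniformly. The least-squares weighting below is an illustrative stand-in for the learned, image-dependent aggregation described in the paper.

```python
import numpy as np

def fit_aggregation_weights(sal_maps, ground_truth):
    """sal_maps: (n_methods, n_pixels); ground_truth: (n_pixels,) in [0, 1]."""
    A = sal_maps.T                            # pixels x methods
    w, *_ = np.linalg.lstsq(A, ground_truth, rcond=None)
    return np.clip(w, 0, None)                # keep non-negative contributions

def aggregate(sal_maps, weights):
    """Fuse individual saliency maps with the learned weights."""
    fused = np.tensordot(weights, sal_maps, axes=1)
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
```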
