Search Results for author: Mengyang Feng

Found 14 papers, 5 papers with code

DiffusionGAN3D: Boosting Text-guided 3D Generation and Domain Adaptation by Combining 3D GANs and Diffusion Priors

no code implementations • 28 Dec 2023 Biwen Lei, Kai Yu, Mengyang Feng, Miaomiao Cui, Xuansong Xie

Extensive experiments demonstrate that the proposed framework achieves excellent results in both domain adaptation and text-to-avatar tasks, outperforming existing methods in terms of generation quality and efficiency.

3D Generation • Domain Adaptation

DreaMoving: A Human Video Generation Framework based on Diffusion Models

no code implementations • 8 Dec 2023 Mengyang Feng, Jinlin Liu, Kai Yu, Yuan YAO, Zheng Hui, Xiefan Guo, Xianhui Lin, Haolan Xue, Chen Shi, Xiaowen Li, Aojie Li, Xiaoyang Kang, Biwen Lei, Miaomiao Cui, Peiran Ren, Xuansong Xie

In this paper, we present DreaMoving, a diffusion-based controllable video generation framework to produce high-quality customized human videos.

Video Generation

Boosting3D: High-Fidelity Image-to-3D by Boosting 2D Diffusion Prior to 3D Prior with Progressive Learning

no code implementations • 22 Nov 2023 Kai Yu, Jinlin Liu, Mengyang Feng, Miaomiao Cui, Xuansong Xie

After the progressive training, the LoRA learns the 3D information of the generated object and eventually turns into an object-level 3D prior (see the sketch below).

3D Generation • Image to 3D • +1
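
The "no code implementations" note means Boosting3D's progressive training is not public, so the block below is only a minimal sketch of the underlying LoRA mechanism it relies on: a trainable low-rank update on a frozen linear layer, which is the component that would gradually absorb the object-level 3D prior. The class name, rank, and tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank update (LoRA).

    Only A and B are trained; the base weight stays fixed, so the
    adapter can gradually absorb task-specific (here: object-level 3D)
    information during progressive fine-tuning.
    """

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scale * B(Ax): the low-rank term starts at zero
        # (B is zero-initialized) and is learned during fine-tuning.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(320, 320), rank=8)
out = layer(torch.randn(1, 77, 320))  # e.g. a cross-attention projection
```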

Diffusion360: Seamless 360 Degree Panoramic Image Generation based on Diffusion Models

1 code implementation • 22 Nov 2023 Mengyang Feng, Jinlin Liu, Miaomiao Cui, Xuansong Xie

This is a technical report on the 360-degree panoramic image generation task based on diffusion models.

Denoising • Image Generation

Attentive Feedback Network for Boundary-Aware Salient Object Detection

1 code implementation • CVPR 2019 Mengyang Feng, Huchuan Lu, Errui Ding

Recent deep-learning-based salient object detection methods built upon Fully Convolutional Neural Networks (FCNs) achieve gratifying performance.

Object • object-detection • +2

Structured Siamese Network for Real-Time Visual Tracking

no code implementations • ECCV 2018 Yunhua Zhang, Lijun Wang, Jinqing Qi, Dong Wang, Mengyang Feng, Huchuan Lu

In this paper, we circumvent this issue by proposing a local structure learning method, which simultaneously considers the local patterns of the target and their structural relationships for more accurate target tracking.

Real-Time Visual Tracking

Learning to Promote Saliency Detectors

1 code implementation • CVPR 2018 Yu Zeng, Huchuan Lu, Lihe Zhang, Mengyang Feng, Ali Borji

The categories and appearance of salient objects vary from image to image; saliency detection is therefore an image-specific task.

Saliency Detection • Small Data Image Classification • +1

An Unsupervised Game-Theoretic Approach to Saliency Detection

no code implementations • 8 Aug 2017 Yu Zeng, Huchuan Lu, Ali Borji, Mengyang Feng

Saliency maps are generated according to each region's strategy in the Nash equilibrium of the proposed Saliency Game.

object-detection • RGB Salient Object Detection • +2
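
The snippet above assigns each region a saliency value from its strategy in a Nash equilibrium. The construction of the actual Saliency Game is in the paper (no code is released); the toy below only illustrates the equilibrium-finding step, approximating a symmetric equilibrium of a made-up two-strategy game ("salient" vs. "background") with replicator-style dynamics. The payoff values are invented for illustration.

```python
import numpy as np

# Toy 2-strategy game per region: strategy 0 = "salient", 1 = "background".
# The payoff matrix is invented; the paper derives its payoffs from image
# features, which is not reproduced here.
payoff = np.array([[1.0, 0.2],
                   [0.5, 0.6]])

def replicator_equilibrium(payoff, steps=500, lr=0.1):
    """Approximate a symmetric Nash equilibrium by replicator-style dynamics."""
    x = np.full(payoff.shape[0], 1.0 / payoff.shape[0])  # uniform start
    for _ in range(steps):
        fitness = payoff @ x                        # payoff of each pure strategy
        x *= np.exp(lr * (fitness - x @ fitness))   # reinforce above-average play
        x /= x.sum()
    return x

strategy = replicator_equilibrium(payoff)
saliency = strategy[0]  # mass on "salient" read off as the region's saliency
print(f"equilibrium strategy: {strategy}, saliency value: {saliency:.3f}")
```

In this made-up coordination game the dynamics settle on the pure "salient" strategy, so the region's saliency comes out near 1.0; in the paper the equilibrium strategies vary per region and yield the map.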

Hierarchical Cellular Automata for Visual Saliency

1 code implementation • 26 May 2017 Yao Qin, Mengyang Feng, Huchuan Lu, Garrison W. Cottrell

The CCA can act as an efficient pixel-wise aggregation algorithm that integrates state-of-the-art methods, yielding even better results (see the sketch below).

Saliency Detection
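
The exact CCA update rule is in the paper and its released code; the stand-in below only conveys the flavor of synchronous pixel-wise aggregation: fuse the input maps, then let each pixel repeatedly move toward its neighbors' consensus. The neighborhood, mixing weight, and step count are arbitrary choices for this sketch, not the authors' rule.

```python
import numpy as np

def ca_step(s):
    """One synchronous cellular-automaton step: each cell (pixel) moves
    halfway toward the mean of its 4-neighborhood (np.roll wraps at the
    image border, which is acceptable for a toy)."""
    nbr = (np.roll(s, 1, axis=0) + np.roll(s, -1, axis=0) +
           np.roll(s, 1, axis=1) + np.roll(s, -1, axis=1)) / 4.0
    return 0.5 * s + 0.5 * nbr

def aggregate_saliency(maps, steps=5):
    """Fuse several saliency maps: start from their per-pixel mean, then
    let the cellular automaton propagate agreement between neighbors."""
    state = np.mean(np.stack(maps), axis=0)
    for _ in range(steps):
        state = ca_step(state)
    return state

# Placeholder inputs; in practice these would come from different detectors.
fused = aggregate_saliency([np.random.rand(64, 64), np.random.rand(64, 64)])
```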

Vanishing point attracts gaze in free-viewing and visual search tasks

no code implementations • 6 Dec 2015 Ali Borji, Mengyang Feng

In the second experiment, we asked 14 subjects (4 female, mean age 23.07, SD=1.26) to search for a target character (T or L) placed randomly on a 3x3 imaginary grid overlaid on top of an image.

Fixation prediction with a combined model of bottom-up saliency and vanishing point

no code implementations • 6 Dec 2015 Mengyang Feng, Ali Borji, Huchuan Lu

By predicting where humans look in natural scenes, we can understand how they perceive complex natural scenes and prioritize information for further high-level visual processing.
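
Per the title, the model combines a bottom-up saliency map with a vanishing-point cue. One plausible reading (only a sketch; the Gaussian prior, its width, and the mixing weight are assumptions, not the paper's fitted components) is a weighted sum of the saliency map and a Gaussian centered on the detected vanishing point:

```python
import numpy as np

def combine_saliency_vp(saliency, vp_xy, sigma=20.0, w_vp=0.4):
    """Fuse a bottom-up saliency map with a vanishing-point prior.

    The VP prior is a Gaussian centered at the detected vanishing point;
    sigma and the mixing weight w_vp are free parameters (a model like
    this would fit them to fixation data; values here are illustrative).
    """
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    vp_prior = np.exp(-((xs - vp_xy[0])**2 + (ys - vp_xy[1])**2)
                      / (2 * sigma**2))
    fused = (1 - w_vp) * saliency + w_vp * vp_prior
    return fused / fused.max()  # normalize for display/evaluation

fix_map = combine_saliency_vp(np.random.rand(240, 320), vp_xy=(160, 100))
```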
