Search Results for author: Jun Xiang

Found 8 papers, 2 papers with code

FlashAvatar: High-fidelity Head Avatar with Efficient Gaussian Embedding

no code implementations • 3 Dec 2023 • Jun Xiang, Xuan Gao, Yudong Guo, Juyong Zhang

We propose FlashAvatar, a novel and lightweight 3D animatable avatar representation that can reconstruct a digital avatar from a short monocular video sequence in minutes and render high-fidelity, photo-realistic images at 300 FPS on a consumer-grade GPU.

Face Model

Mini-PointNetPlus: a local feature descriptor in deep learning model for 3d environment perception

no code implementations • 25 Jul 2023 • Chuanyu Luo, Nuo Cheng, Sikun Ma, Jun Xiang, Xiaohan Li, Shengguang Lei, Pu Li

The pioneering work PointNet has been widely applied as a local feature descriptor — a fundamental component of deep learning models for 3D perception — to extract features from a point cloud.
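The core idea behind a PointNet-style local feature descriptor is a shared per-point MLP followed by a symmetric (order-invariant) pooling such as max. The sketch below illustrates that idea only; the function name, single-layer MLP, and random weights are hypothetical and do not reproduce the Mini-PointNetPlus architecture from the paper.

```python
import numpy as np

def pointnet_local_feature(points, weights, bias):
    """PointNet-style descriptor sketch: shared per-point MLP + max pooling.

    points:  (N, 3) local point cloud
    weights: (3, D) shared MLP weights, bias: (D,)
    returns: (D,)   permutation-invariant local feature
    """
    per_point = np.maximum(points @ weights + bias, 0.0)  # shared MLP + ReLU, (N, D)
    return per_point.max(axis=0)  # symmetric max pooling over points, (D,)

rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 3))       # toy local neighborhood
w = rng.normal(size=(3, 64))          # illustrative random weights
b = np.zeros(64)
feat = pointnet_local_feature(pts, w, b)
print(feat.shape)  # (64,)
```

Because max pooling is symmetric, shuffling the input points leaves the descriptor unchanged — the property that makes this family of models suitable for unordered point clouds.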

Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video

1 code implementation • 12 Oct 2022 • Xuan Gao, Chenglai Zhong, Jun Xiang, Yang Hong, Yudong Guo, Juyong Zhang

We present a novel semantic model for the human head, defined with a neural radiance field.

A Novel Framework to Jointly Compress and Index Remote Sensing Images for Efficient Content-Based Retrieval

no code implementations • 17 Jan 2022 • Gencer Sumbul, Jun Xiang, Nimisha Thekke Madam, Begüm Demir

We also introduce a two-stage learning strategy with gradient manipulation techniques to obtain image representations that are compatible with both RS image indexing and compression.

Content-Based Image Retrieval • Image Compression • +1
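"Gradient manipulation" for jointly training two objectives (here, indexing and compression) commonly means detecting conflicting task gradients and projecting one onto the normal plane of the other, as in PCGrad. The following is a hedged sketch of that generic technique; the paper's exact strategy may differ, and the variable names are illustrative.

```python
import numpy as np

def project_conflicting(g1, g2):
    """PCGrad-style manipulation: if two task gradients conflict
    (negative dot product), remove from g1 its component along g2.
    Generic sketch, not the paper's specific two-stage strategy."""
    dot = g1 @ g2
    if dot < 0:
        g1 = g1 - dot / (g2 @ g2) * g2  # project onto normal plane of g2
    return g1

g_index = np.array([1.0, 0.0])   # toy gradient of the indexing loss
g_comp = np.array([-1.0, 1.0])   # toy gradient of the compression loss
g_fixed = project_conflicting(g_index, g_comp)
print(g_fixed)  # component conflicting with g_comp removed
```

After projection, the adjusted gradient is orthogonal to the other task's gradient, so a step along it no longer directly increases that task's loss.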

End-to-End Learning Deep CRF models for Multi-Object Tracking

no code implementations • 29 Jul 2019 • Jun Xiang, Ma Chao, Guohan Xu, Jianhua Hou

In this paper, we propose learning deep conditional random field (CRF) networks, aiming to model the assignment costs as unary potentials and the long-term dependencies among detection results as pairwise potentials.

Multi-Object Tracking
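The abstract's formulation — assignment costs as unary potentials and long-term dependencies as pairwise potentials — can be written as a CRF energy over a set of active track-detection assignments. The toy function below illustrates that energy; the dictionaries and costs are hypothetical, and the paper learns these potentials with deep networks rather than fixing them by hand.

```python
def crf_energy(assignment, unary, pairwise):
    """Energy of an assignment set under a simple CRF:
    sum of unary assignment costs plus pairwise costs for every
    pair of jointly active assignments. Illustrative sketch only."""
    energy = sum(unary[i] for i in assignment)  # unary potentials
    for a in assignment:
        for b in assignment:
            if a < b:                            # each pair counted once
                energy += pairwise[a][b]         # pairwise potentials
    return energy

# Toy costs for three candidate track-detection assignments 0, 1, 2.
unary = {0: 0.2, 1: 0.5, 2: 0.1}
pairwise = {0: {1: 0.3, 2: 0.0}, 1: {2: 0.4}}
print(crf_energy({0, 2}, unary, pairwise))  # unary(0) + unary(2) + pairwise(0, 2)
```

Inference in such a model then amounts to searching for the assignment set with minimal energy, which is what the end-to-end learned CRF network approximates.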
