1 code implementation • 22 Nov 2023 • Zhiqin Chen, Qimin Chen, Hang Zhou, Hao Zhang
We present an unsupervised 3D shape co-segmentation method which learns a set of deformable part templates from a shape collection.
1 code implementation • 8 Jun 2023 • Qimin Chen, Zhiqin Chen, Hang Zhou, Hao Zhang
Furthermore, we showcase the ability of our method to learn geometric details and textures from shapes reconstructed from real-world photos.
no code implementations • 6 Mar 2023 • Zhiqin Chen
With recent advances in hardware and rendering techniques, 3D models have become ubiquitous in everyday life.
1 code implementation • CVPR 2023 • Zhiqin Chen, Thomas Funkhouser, Peter Hedman, Andrea Tagliasacchi
Neural Radiance Fields (NeRFs) have demonstrated a remarkable ability to synthesize images of 3D scenes from novel viewpoints.
Ranked #1 on Novel View Synthesis on Mip-NeRF 360
no code implementations • CVPR 2022 • Zhiqin Chen, Kangxue Yin, Sanja Fidler
In this paper, we address the problem of texture representation for 3D shapes for the challenging and underexplored tasks of texture transfer and synthesis.
2 code implementations • 4 Feb 2022 • Zhiqin Chen, Andrea Tagliasacchi, Thomas Funkhouser, Hao Zhang
We introduce neural dual contouring (NDC), a new data-driven approach to mesh reconstruction based on dual contouring (DC).
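As a rough illustration (not the authors' implementation), here is a minimal 2D NumPy sketch of the classic dual contouring backbone that NDC builds on: one vertex per grid cell that the surface crosses, connected across shared sign-change edges. The function name and the mean-of-crossings vertex rule are the hand-crafted baseline; NDC replaces that rule with a learned, data-driven prediction.

```python
import numpy as np

def dual_contour_2d(sdf, n):
    """Classic 2D dual contouring on a regular grid over [-1, 1]^2.

    sdf: callable (x, y) -> signed distance, negative inside.
    n:   number of grid cells per axis.
    Returns {cell: vertex} and a list of cell-pair segments.
    """
    xs = np.linspace(-1.0, 1.0, n + 1)
    grid = np.array([[sdf(x, y) for y in xs] for x in xs])  # corner samples

    def edge_crossing(p0, v0, p1, v1):
        # Linearly interpolate the zero crossing along one cell edge.
        t = v0 / (v0 - v1)
        return p0 + t * (p1 - p0)

    cell_vertex = {}
    for i in range(n):
        for j in range(n):
            corners = [(i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)]
            pts = []
            for (a, b), (c, d) in zip(corners, corners[1:] + corners[:1]):
                v0, v1 = grid[a, b], grid[c, d]
                if (v0 < 0) != (v1 < 0):  # sign change on this edge
                    p0 = np.array([xs[a], xs[b]])
                    p1 = np.array([xs[c], xs[d]])
                    pts.append(edge_crossing(p0, v0, p1, v1))
            if pts:
                # Plain DC places the vertex at the mean edge crossing;
                # NDC instead predicts the vertex with a trained network.
                cell_vertex[(i, j)] = np.mean(pts, axis=0)

    segments = []
    for (i, j) in cell_vertex:
        # Right neighbor shares the vertical edge x = xs[i+1].
        if (i + 1, j) in cell_vertex and (grid[i + 1, j] < 0) != (grid[i + 1, j + 1] < 0):
            segments.append(((i, j), (i + 1, j)))
        # Top neighbor shares the horizontal edge y = xs[j+1].
        if (i, j + 1) in cell_vertex and (grid[i, j + 1] < 0) != (grid[i + 1, j + 1] < 0):
            segments.append(((i, j), (i, j + 1)))
    return cell_vertex, segments
```

Run on a circle SDF, the recovered vertices hug the zero level set and the segments close into a loop, which is the structure NDC's learned variant preserves while improving vertex placement.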
1 code implementation • 27 Jun 2021 • Zhiqin Chen, Andrea Tagliasacchi, Hao Zhang
The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built over a set of planes, where the planes and convexes are both defined by learned network weights.
1 code implementation • 21 Jun 2021 • Zhiqin Chen, Hao Zhang
To tackle these challenges, we re-cast MC from a deep learning perspective by designing tessellation templates better suited to preserving geometric features, and by learning the vertex positions and mesh topologies from training meshes to account for contextual information from nearby cubes.
no code implementations • CVPR 2022 • Fenggen Yu, Zhiqin Chen, Manyi Li, Aditya Sanghi, Hooman Shayani, Ali Mahdavi-Amiri, Hao Zhang
We introduce CAPRI-Net, a neural network for learning compact and interpretable implicit representations of 3D computer-aided design (CAD) models, in the form of adaptive primitive assemblies.
1 code implementation • CVPR 2021 • Zhiqin Chen, Vladimir G. Kim, Matthew Fisher, Noam Aigerman, Hao Zhang, Siddhartha Chaudhuri
During testing, a style code is fed into the generator to condition the refinement.
no code implementations • 5 Aug 2020 • Kangxue Yin, Zhiqin Chen, Siddhartha Chaudhuri, Matthew Fisher, Vladimir G. Kim, Hao Zhang
We introduce COALESCE, the first data-driven framework for component-based shape assembly which employs deep learning to synthesize part connections.
3 code implementations • CVPR 2020 • Zhiqin Chen, Andrea Tagliasacchi, Hao Zhang
The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built on a set of planes.
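A minimal NumPy sketch of the evaluation side of this construction, under the usual BSP-Net convention (a point is inside a half-space when its plane value is non-positive): a max over each convex's member planes gives a convex indicator, and a min over convexes gives their union. The function name and the binary membership matrix are illustrative stand-ins for the learned network weights.

```python
import numpy as np

def bsp_occupancy(points, planes, memberships):
    """Evaluate a BSP-style implicit shape at query points.

    points:      (N, 3) query positions.
    planes:      (P, 4) plane parameters (a, b, c, d) for a*x + b*y + c*z + d.
    memberships: (P, C) binary mask assigning planes to C convexes.
    A point lies inside a convex when every member plane value is
    non-positive; the shape is the union (min) of the convexes.
    """
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    h = homo @ planes.T                                  # (N, P) plane values
    # Mask out non-member planes, then max within each convex.
    big = np.where(memberships.T[None] > 0, h[:, None, :], -np.inf)  # (N, C, P)
    convex = big.max(axis=2)                             # (N, C) convex indicators
    return convex.min(axis=1) <= 0                       # union over convexes
```

With six axis-aligned planes grouped into one convex, this reproduces a box; BSP-Net learns the plane parameters and the plane-to-convex grouping instead of hand-specifying them.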
1 code implementation • ICCV 2019 • Zhiqin Chen, Kangxue Yin, Matthew Fisher, Siddhartha Chaudhuri, Hao Zhang
The unsupervised BAE-NET is trained with a collection of un-segmented shapes, using a shape reconstruction loss, without any ground-truth labels.
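To illustrate the branched-decoder idea in a few lines (random weights, not the trained BAE-NET), each branch outputs a per-part occupancy for a query point; the shape occupancy is the max over branches, and the winning branch index serves as an emergent part label. The function name and shapes here are hypothetical.

```python
import numpy as np

def branched_occupancy(point_feats, branch_weights):
    """BAE-NET-style branched decoder head (illustration only).

    point_feats:    (N, D) per-point features (e.g. code + coordinates).
    branch_weights: (D, B) one output unit per branch/part.
    Returns the per-point shape occupancy (max over branches) and the
    argmax branch index, which acts as an unsupervised part label.
    """
    per_branch = 1.0 / (1.0 + np.exp(-(point_feats @ branch_weights)))  # (N, B)
    occupancy = per_branch.max(axis=1)
    labels = per_branch.argmax(axis=1)
    return occupancy, labels
```

Because the branches must jointly reconstruct the shape, each tends to specialize on a recurring part across the collection, which is how segmentation emerges without ground-truth labels.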
no code implementations • 25 Mar 2019 • Kangxue Yin, Zhiqin Chen, Hui Huang, Daniel Cohen-Or, Hao Zhang
Our network consists of an autoencoder to encode shapes from the two input domains into a common latent space, where the latent codes concatenate multi-scale shape features, resulting in an overcomplete representation.
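A toy NumPy sketch of the multi-scale concatenation idea (untrained random weights; the layer widths and function name are assumptions, not the paper's architecture): per-point features are computed at several widths, each max-pooled to a global summary, and the summaries are stacked so the final latent code is an overcomplete, multi-scale description of the shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def multiscale_code(points, dims=(64, 128, 256)):
    """Toy PointNet-style encoder illustrating multi-scale latent codes.

    points: (N, 3) point cloud. For each width d in dims, a random linear
    layer + ReLU produces per-point features, which are max-pooled into a
    global feature; concatenating all scales yields the overcomplete code.
    """
    feats = points  # (N, 3)
    codes = []
    in_dim = 3
    for d in dims:
        W = rng.standard_normal((in_dim, d)) * 0.1  # hypothetical layer weights
        feats = np.maximum(feats @ W, 0.0)          # ReLU point features (N, d)
        codes.append(feats.max(axis=0))             # max-pool -> scale-d summary
        in_dim = d
    return np.concatenate(codes)                    # code of length sum(dims)
```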
4 code implementations • CVPR 2019 • Zhiqin Chen, Hao Zhang
We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder, called IM-NET, for shape generation, aimed at improving the visual quality of the generated shapes.
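The core of an implicit field decoder can be sketched in a few lines of NumPy (untrained random weights, hidden sizes chosen arbitrarily; only the interface matches the paper's idea): each 3D query point is concatenated with the shape code and pushed through an MLP ending in a sigmoid inside/outside value, so the shape is defined as a continuous field rather than a voxel grid or point set.

```python
import numpy as np

rng = np.random.default_rng(1)

def implicit_decoder(z, points, hidden=(128, 64)):
    """IM-NET-style implicit field decoder forward pass (illustration).

    z:      (Z,) shape code.
    points: (N, 3) 3D query coordinates.
    Each point is concatenated with z and mapped through ReLU layers to a
    sigmoid occupancy in (0, 1); the surface is its 0.5 level set.
    """
    x = np.concatenate([np.repeat(z[None], len(points), axis=0), points], axis=1)
    dim = x.shape[1]
    for h in hidden:
        W = rng.standard_normal((dim, h)) * 0.1  # hypothetical hidden weights
        x = np.maximum(x @ W, 0.0)               # ReLU hidden layer
        dim = h
    w_out = rng.standard_normal((dim, 1)) * 0.1
    return 1.0 / (1.0 + np.exp(-(x @ w_out)))    # (N, 1) occupancy values
```

Because the decoder is queried at arbitrary coordinates, the output resolution is decoupled from training resolution, which is a key reason implicit fields improve visual quality over fixed voxel grids.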
2 code implementations • 22 Mar 2018 • Zili Yi, Zhiqin Chen, Hao Cai, Wendong Mao, Minglun Gong, Hao Zhang
The key feature of BSD-GAN is that it is trained in multiple branches, progressively covering both the breadth and depth of the network as the resolution of the training images increases to reveal finer-scale features.