1 code implementation • ICCV 2023 • Binghui Zuo, Zimeng Zhao, Wenqian Sun, Wei Xie, Zhou Xue, Yangang Wang
Our key idea is to first construct a two-hand interaction prior and then recast the interaction reconstruction task as conditional sampling from that prior.
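The idea of reconstruction as conditional sampling from a prior can be illustrated with a deliberately tiny stand-in: treat a "two-hand pose prior" as a joint Gaussian over left- and right-hand parameters (the paper's actual prior is learned and far richer), then recover the unobserved hand by sampling the prior conditioned on the observed one. All numbers and the Gaussian form here are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior: left/right pose parameters are correlated,
# reflecting the coupling introduced by interaction.
mu = np.array([0.0, 0.0])            # prior mean [left, right]
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])       # strong left-right correlation

# Observe the left hand; condition the joint prior on that observation.
x_left = 1.2
mu_cond = mu[1] + Sigma[1, 0] / Sigma[0, 0] * (x_left - mu[0])
var_cond = Sigma[1, 1] - Sigma[1, 0] ** 2 / Sigma[0, 0]

# "Reconstruct" the right hand by drawing from the conditional distribution.
samples = mu_cond + np.sqrt(var_cond) * rng.standard_normal(1000)
print(round(mu_cond, 3), round(float(samples.mean()), 3))
```

The conditional mean is pulled toward the observation in proportion to the prior correlation; a learned interaction prior plays the same role at much higher dimension.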
no code implementations • ICCV 2023 • Wei Xie, Zimeng Zhao, Shiying Li, Binghui Zuo, Yangang Wang
Based on this representation, our Regional Unwrapping Transformer (RUFormer) learns correlation priors across regions from monocular inputs and predicts the corresponding contact and deformation transformations.
no code implementations • CVPR 2023 • Zimeng Zhao, Binghui Zuo, Zhiyu Long, Yangang Wang
The core of our approach is to first disentangle the bare hand structure from those degraded images and then wrap the appearance onto this structure with a dual adversarial discrimination (DAD) scheme.
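A minimal sketch of the dual-discriminator idea, under loose assumptions (this is not the paper's DAD architecture): a generated result is scored by two separate critics, one judging the hand structure and one judging the appearance, and the generator's adversarial loss sums both terms. The two `disc_*` functions here are hypothetical stand-ins.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on a single probability."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def disc_structure(x):
    """Stand-in discriminator: confidence that the structure looks real."""
    return 1.0 / (1.0 + np.exp(-x[0]))

def disc_appearance(x):
    """Stand-in discriminator: confidence that the appearance looks real."""
    return 1.0 / (1.0 + np.exp(-x[1]))

fake = np.array([0.5, -0.3])  # toy generator output (structure, appearance scores)

# The generator tries to fool BOTH discriminators (target label 1 = "real"),
# so its adversarial loss is the sum of the two terms.
g_loss = bce(disc_structure(fake), 1.0) + bce(disc_appearance(fake), 1.0)
print(round(g_loss, 4))
```

Summing the two adversarial terms forces the generator to satisfy both critics at once, rather than trading structure fidelity against appearance realism.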
no code implementations • 18 Jan 2023 • Wei Xie, Zhipeng Yu, Zimeng Zhao, Binghui Zuo, Yangang Wang
We construct HMDO (Hand Manipulation with Deformable Objects), the first markerless deformable interaction dataset, which records interactive motions of hands and deformable objects.
no code implementations • CVPR 2022 • Zimeng Zhao, Binghui Zuo, Wei Xie, Yangang Wang
Our key idea is to reconstruct the contact pattern directly from monocular images and then optimize it with a physical stability criterion in simulation.
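To give the flavor of a physical stability criterion on a contact pattern (a crude 2D stand-in, not the paper's simulation): given candidate contact points with known normals, solve for non-negative contact force magnitudes that cancel gravity; the leftover net force measures instability and could drive refinement of the predicted contacts. The geometry and the least-squares formulation are illustrative assumptions.

```python
import numpy as np

# Candidate contact normals on a toy 2D object (unit vectors).
normals = np.array([[0.0, 1.0],     # contact pushing straight up
                    [0.6, 0.8],     # oblique contact
                    [-0.6, 0.8]])   # mirrored oblique contact
gravity = np.array([0.0, -9.8])     # external force to balance

# Columns of A are the directions each contact can push along.
A = normals.T                        # shape (2, 3)

# Least-squares force magnitudes that cancel gravity, clipped so contacts
# can only push (no pulling forces).
f, *_ = np.linalg.lstsq(A, -gravity, rcond=None)
f = np.clip(f, 0.0, None)

# Residual net force: zero means this contact pattern can hold the object.
residual = A @ f + gravity
print(f, np.linalg.norm(residual))
```

A contact pattern whose best feasible forces leave a large residual is physically implausible, which is the kind of signal a stability criterion feeds back into the reconstruction.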
no code implementations • ICCV 2021 • Zimeng Zhao, Xi Zhao, Yangang Wang
In this paper, we impose physical constraints on the per-frame estimated motions in both the spatial and temporal domains.
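One simple way to impose a temporal physical constraint on per-frame estimates, shown here as an assumed toy formulation rather than the paper's method: keep the refined trajectory close to the noisy per-frame predictions while penalizing its second differences (acceleration), solved in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 50
t = np.linspace(0, 1, T)
# Noisy per-frame estimates of a single joint coordinate.
noisy = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(T)

lam = 10.0                            # weight of the acceleration penalty
# Second-difference operator D: (D x)[k] = x[k] - 2 x[k+1] + x[k+2]
D = np.zeros((T - 2, T))
for k in range(T - 2):
    D[k, k:k + 3] = [1.0, -2.0, 1.0]

# Minimize ||x - noisy||^2 + lam * ||D x||^2 via the normal equations.
x = np.linalg.solve(np.eye(T) + lam * D.T @ D, noisy)

# The refined track should have lower mean squared acceleration.
print(np.mean((D @ x) ** 2) < np.mean((D @ noisy) ** 2))
```

The same quadratic-penalty pattern extends to spatial constraints (e.g. bone-length or joint-limit terms) by adding further penalty blocks to the objective.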