Search Results for author: Takafumi Taketomi

Found 6 papers, 2 papers with code

Makeup Prior Models for 3D Facial Makeup Estimation and Applications

no code implementations • 26 Mar 2024 • Xingchao Yang, Takafumi Taketomi, Yuki Endo, Yoshihiro Kanamori

Although there is a trade-off between the two prior models (one PCA-based, one StyleGAN2-based), both are applicable to 3D facial makeup estimation and related applications.

Face Reconstruction

SuperNormal: Neural Surface Reconstruction via Multi-View Normal Integration

no code implementations • 8 Dec 2023 • Xu Cao, Takafumi Taketomi

We present SuperNormal, a fast, high-fidelity approach to multi-view 3D reconstruction using surface normal maps. (A toy normal-integration sketch follows this entry.)

3D Reconstruction • Multi-View 3D Reconstruction • +1
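The key primitive here is turning normal maps into geometry. As a point of reference only, below is a minimal single-view normal-integration sketch using the classic Frankot-Chellappa Fourier method; this is a textbook baseline, not SuperNormal's multi-view neural algorithm, and the `normals` layout (H x W x 3 unit normals, z pointing toward the camera) is an assumption for illustration.

```python
import numpy as np

def integrate_normals(normals, eps=1e-8):
    """Least-squares height map whose slopes match a normal map
    (Frankot & Chellappa, 1988). `normals` is H x W x 3, unit length."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    # Slopes implied by unit normals with z pointing toward the camera:
    # z_x = -nx / nz, z_y = -ny / nz.
    p = -nx / np.clip(nz, eps, None)
    q = -ny / np.clip(nz, eps, None)

    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)

    denom = u**2 + v**2
    denom[0, 0] = 1.0              # avoid 0/0 at the DC term
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                  # height is recovered only up to a constant
    return np.real(np.fft.ifft2(Z))
```

Per the title, SuperNormal instead optimizes a neural surface against normal maps from many views, which a single-view Poisson-style solve like this cannot do.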

BlendFace: Re-designing Identity Encoders for Face-Swapping

2 code implementations • ICCV 2023 • Kaede Shiohara, Xingchao Yang, Takafumi Taketomi

The great advancements in generative adversarial networks and face recognition models in computer vision have made it possible to swap identities on images from single sources. (A toy sketch of the identity-encoder role follows this entry.)

Attribute • Disentanglement • +2
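To make the snippet concrete: in a face-swapping pipeline, an identity encoder scores how well the swapped face preserves the source identity. The sketch below illustrates that generic role only; `encode` is a hypothetical stand-in for any face-recognition embedder, and BlendFace's actual contribution, a re-designed and less attribute-entangled encoder, is not reproduced here.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-8):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def identity_loss(encode, source_img, swapped_img):
    """1 - cos(e_src, e_swap): small when the swap preserves identity.
    `encode` maps an image to an identity embedding (hypothetical)."""
    e_src = encode(source_img)
    e_swap = encode(swapped_img)
    return 1.0 - cosine_similarity(e_src, e_swap)
```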

Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition

no code implementations • 26 Feb 2023 • Xingchao Yang, Takafumi Taketomi, Yoshihiro Kanamori

The extracted makeup is well aligned in UV space, from which we build a large-scale makeup dataset and a parametric makeup model for 3D faces. (A toy PCA sketch of such a parametric model follows this entry.)

Inverse Rendering
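Because the extracted makeup layers share a common UV parameterization, a linear parametric model over them is straightforward to build. Below is a minimal PCA sketch under that assumption; the paper's actual parametric makeup model may differ, and the `(N, H*W*3)` texture layout is hypothetical.

```python
import numpy as np

def fit_makeup_pca(textures, n_components=50):
    """Fit a linear makeup model: mean texture + principal components.
    `textures` is an (N, D) array of flattened, UV-aligned makeup layers."""
    mean = textures.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal components.
    _, sigma, vt = np.linalg.svd(textures - mean, full_matrices=False)
    return mean, vt[:n_components], sigma[:n_components]

def synthesize_makeup(mean, components, coeffs):
    """New makeup texture as the mean plus a linear combination of components."""
    return mean + coeffs @ components
```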

BareSkinNet: De-makeup and De-lighting via 3D Face Reconstruction

no code implementations • 19 Sep 2022 • Xingchao Yang, Takafumi Taketomi

We propose BareSkinNet, a novel method that simultaneously removes makeup and lighting influences from a face image.

3D Face Reconstruction • Face Generation

4DComplete: Non-Rigid Motion Estimation Beyond the Observable Surface

1 code implementation • ICCV 2021 • Yang Li, Hikari Takehara, Takafumi Taketomi, Bo Zheng, Matthias Nießner

Tracking non-rigidly deforming scenes using range sensors has numerous applications in computer vision, AR/VR, and robotics.

Motion Estimation
