no code implementations • 13 Mar 2024 • Enric Corona, Andrei Zanfir, Eduard Gabriel Bazavan, Nikos Kolotouros, Thiemo Alldieck, Cristian Sminchisescu
We propose VLOGGER, a method for audio-driven human video generation from a single input image of a person, which builds on the success of recent generative diffusion models.
no code implementations • 10 Jan 2024 • Thiemo Alldieck, Nikos Kolotouros, Cristian Sminchisescu
Score Distillation Sampling (SDS) is a recent but already widely popular method that relies on an image diffusion model to control optimization problems using text prompts.
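The core SDS idea can be sketched in a few lines: perturb the current image with noise, ask a diffusion model to predict that noise, and use the difference between predicted and injected noise as a gradient on the image. Everything below is an illustrative toy, not the paper's implementation: `fake_denoiser`, the weighting `w(t)`, and all shapes are assumptions standing in for a real pretrained, text-conditioned diffusion model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_denoiser(x_noisy, t):
    # Hypothetical stand-in for a text-conditioned diffusion model's
    # noise prediction; a real SDS setup would call a pretrained model here.
    return 0.1 * x_noisy

def sds_gradient(x, t, sigma):
    """One Score Distillation Sampling gradient (sketch).

    SDS adds noise to the current image x, has the diffusion model
    predict that noise, and uses (predicted - true) noise, weighted
    by w(t), as a gradient signal on x.
    """
    eps = rng.standard_normal(x.shape)      # injected noise
    x_noisy = x + sigma * eps               # simplified forward diffusion
    eps_hat = fake_denoiser(x_noisy, t)     # model's noise estimate
    w = 1.0 - t                             # illustrative weighting w(t)
    return w * (eps_hat - eps)

# Gradient descent on a toy "image" driven by the SDS gradient.
x = rng.standard_normal((8, 8))
for step in range(10):
    x -= 0.01 * sds_gradient(x, t=0.5, sigma=0.3)
print(x.shape)
```

In practice the optimized variable is usually not the image itself but the parameters of a differentiable renderer (e.g. a 3D scene), with the SDS gradient backpropagated through the rendering.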
no code implementations • 4 Nov 2023 • Eduard Gabriel Bazavan, Andrei Zanfir, Thiemo Alldieck, Teodor Alexandru Szente, Mihai Zanfir, Cristian Sminchisescu
We present SPHEAR, an accurate, differentiable parametric statistical 3D human head model, enabled by a novel 3D registration method based on spherical embeddings.
no code implementations • 14 Dec 2022 • Mihai Zanfir, Thiemo Alldieck, Cristian Sminchisescu
We present PhoMoH, a neural network methodology to construct generative models of photo-realistic 3D geometry and appearance of human heads including hair, beards, an oral cavity, and clothing.
no code implementations • CVPR 2023 • Enric Corona, Mihai Zanfir, Thiemo Alldieck, Eduard Gabriel Bazavan, Andrei Zanfir, Cristian Sminchisescu
We introduce Structured 3D Features, a model based on a novel implicit 3D representation that pools pixel-aligned image features onto dense 3D points sampled from a parametric, statistical human mesh surface.
no code implementations • CVPR 2022 • Thiemo Alldieck, Mihai Zanfir, Cristian Sminchisescu
We present PHORHUM, a novel, end-to-end trainable, deep neural network methodology for photorealistic 3D human reconstruction given just a monocular RGB image.
no code implementations • NeurIPS 2021 • Hongyi Xu, Thiemo Alldieck, Cristian Sminchisescu
This allows us to robustly fuse information from sparse views and generalize well beyond the poses or views observed in training.
1 code implementation • ICCV 2021 • Thiemo Alldieck, Hongyi Xu, Cristian Sminchisescu
We present imGHUM, the first holistic generative model of 3D human shape and articulated pose, represented as a signed distance function.
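A signed distance function assigns each 3D point its distance to the surface, negative inside and positive outside, with the surface as the zero level set. imGHUM learns such a function for articulated human shape; the analytic sphere below is only a minimal sketch of the representation itself, not the learned model.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, zero on the
    surface, positive outside. Illustrates the SDF representation
    that imGHUM learns for human shape and pose."""
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # inside
                [1.0, 0.0, 0.0],   # on the surface
                [2.0, 0.0, 0.0]])  # outside
print(sphere_sdf(pts))  # → [-1.  0.  1.]
```

A mesh can be recovered from any such function by extracting the zero level set, e.g. with marching cubes.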
1 code implementation • 2 Nov 2020 • Denis Tome, Thiemo Alldieck, Patrick Peluse, Gerard Pons-Moll, Lourdes Agapito, Hernan Badino, Fernando de la Torre
The quantitative evaluation, on synthetic and real-world datasets, shows that our strategy leads to substantial improvements in accuracy over state-of-the-art egocentric approaches.
1 code implementation • CVPR 2020 • Aymen Mir, Thiemo Alldieck, Gerard Pons-Moll
In this paper, we present a simple yet effective method to automatically transfer textures of clothing images (front and back) to 3D garments worn on top of SMPL, in real time.
1 code implementation • CVPR 2020 • Julian Chibane, Thiemo Alldieck, Gerard Pons-Moll
To solve this, we propose Implicit Feature Networks (IF-Nets), which deliver continuous outputs, can handle multiple topologies, and can complete shapes from missing or sparse input data. IF-Nets retain the nice properties of recent learned implicit functions but, critically, also preserve detail when it is present in the input and can reconstruct articulated humans.
1 code implementation • ICCV 2019 • Thiemo Alldieck, Gerard Pons-Moll, Christian Theobalt, Marcus Magnor
From a partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing.
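Applying a displacement map to a smooth base model amounts to offsetting each vertex along its surface normal. The sketch below shows the simplified scalar case (a vector displacement map would offset in a full 3D direction per vertex); all vertex, normal, and displacement values are illustrative assumptions.

```python
import numpy as np

def apply_displacement(vertices, normals, displacement):
    """Offset each base-mesh vertex along its unit normal by a scalar
    displacement, as when baking a displacement map (e.g. wrinkles,
    clothing detail) onto a low-resolution smooth body model."""
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    return vertices + displacement[:, None] * n

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
norms = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 1.0]])  # unnormalized on purpose
disp  = np.array([0.5, -0.2])                         # per-vertex offsets
print(apply_displacement(verts, norms, disp))
# → [[ 0.   0.   0.5]
#    [ 1.   0.  -0.2]]
```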
1 code implementation • CVPR 2019 • Thiemo Alldieck, Marcus Magnor, Bharat Lal Bhatnagar, Christian Theobalt, Gerard Pons-Moll
We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds with a reconstruction accuracy of 5mm.
1 code implementation • 3 Aug 2018 • Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, Gerard Pons-Moll
We present a novel method for detail-preserving human avatar creation from monocular video.
1 code implementation • CVPR 2018 • Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, Gerard Pons-Moll
This paper describes how to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving.
no code implementations • 1 Mar 2017 • Thiemo Alldieck, Marc Kassubeck, Marcus Magnor
Under the assumption that, starting from an initial pose, optical flow constrains subsequent human motion, we exploit flow to find temporally coherent human poses over a motion sequence.