1 code implementation • 3 Aug 2023 • Mihai Fieraru, Mihai Zanfir, Elisabeta Oneata, Alin-Ionut Popa, Vlad Olaru, Cristian Sminchisescu
Understanding 3D human interactions is fundamental for fine-grained scene analysis and behavioural modeling.
no code implementations • NeurIPS 2021 • Mihai Fieraru, Mihai Zanfir, Teodor Szente, Eduard Bazavan, Vlad Olaru, Cristian Sminchisescu
We introduce a novel unified model for self-collisions and interpenetration collisions, based on a mesh approximation computed by applying decimation operators.
no code implementations • CVPR 2021 • Mihai Fieraru, Mihai Zanfir, Silviu Cristian Pirlea, Vlad Olaru, Cristian Sminchisescu
AIFit is able to reconstruct 3D human pose and motion, reliably segment exercise repetitions, and identify in real time the deviations between the standards learnt from trainers and the execution of a trainee.
no code implementations • 18 Dec 2020 • Mihai Fieraru, Mihai Zanfir, Elisabeta Oneata, Alin-Ionut Popa, Vlad Olaru, Cristian Sminchisescu
Monocular estimation of three-dimensional human self-contact is fundamental for detailed scene analysis, including body-language understanding and behaviour modeling.
no code implementations • CVPR 2018 • Elisabeta Marinoiu, Mihai Zanfir, Vlad Olaru, Cristian Sminchisescu
We introduce new, fine-grained action and emotion recognition tasks defined on non-staged videos, recorded during robot-assisted therapy sessions of children with autism.
no code implementations • 20 Sep 2015 • Vlad Olaru, Mihai Florea, Cristian Sminchisescu
This paper presents a framework for implementing parallel versions of the parametric maximum-flow routines widely used in image segmentation algorithms.
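Parametric maximum flow generalizes the classical s-t maximum-flow problem that underlies graph-cut image segmentation. As a point of reference only (this is not the paper's parallel framework), a minimal single-threaded Edmonds–Karp max-flow sketch in Python, on a hypothetical adjacency-matrix graph representation:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow (illustrative sketch, not the parallel method).

    capacity: square list-of-lists with capacity[u][v] >= 0.
    Returns the value of a maximum source-sink flow.
    """
    n = len(capacity)
    # Residual capacities, copied so the input graph is not mutated.
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return flow  # no augmenting path remains
        # Bottleneck capacity along the found path.
        bottleneck = float("inf")
        v = sink
        while v != source:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # Augment: push bottleneck flow forward, add reverse residual capacity.
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
```

In graph-cut segmentation, the min-cut dual of this flow separates foreground from background pixels; the parametric variant re-solves the problem as terminal capacities vary with a parameter.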
1 code implementation • IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume 36, Issue 7, July 2014) • Catalin Ionescu, Dragos Papava, Vlad Olaru, Cristian Sminchisescu
We introduce a new dataset, Human3.6M, of 3.6 million accurate 3D human poses, acquired by recording the performance of 5 female and 6 male subjects under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms.