Motion Estimation

205 papers with code • 0 benchmarks • 9 datasets

Motion Estimation is used to determine the block-wise or pixel-wise motion vectors between two frames.

Source: MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement
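
As a concrete illustration of block-wise motion estimation, the sketch below finds one motion vector per block by exhaustive block matching with a sum-of-absolute-differences criterion between two grayscale frames. The function name, block size, and search range are illustrative assumptions, not taken from any of the papers listed on this page.

```python
import numpy as np

def block_match(prev, curr, block=16, search=8):
    """Estimate one motion vector per block via exhaustive SAD search.

    prev, curr: 2D float arrays (grayscale frames) of the same shape.
    Returns an (H//block, W//block, 2) array of (dy, dx) vectors pointing
    from each block in `curr` to its best match in `prev`.
    """
    H, W = prev.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y0, x0 = by * block, bx * block
            ref = curr[y0:y0 + block, x0:x0 + block]
            best, best_dyx = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > H or x1 + block > W:
                        continue
                    cand = prev[y1:y1 + block, x1:x1 + block]
                    sad = np.abs(ref - cand).sum()  # sum of absolute differences
                    if sad < best:
                        best, best_dyx = sad, (dy, dx)
            vectors[by, bx] = best_dyx
    return vectors
```

Dense (pixel-wise) motion estimation, i.e. optical flow, follows the same idea but assigns a vector to every pixel rather than every block.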

Most implemented papers

Scalable Scene Flow from Point Clouds in the Real World

kth-rpl/deflow 1 Mar 2021

In this work, we introduce a new large-scale dataset for scene flow estimation derived from corresponding tracked 3D objects, which is roughly 1,000× larger than previous real-world datasets in terms of the number of annotated frames.
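
The general recipe for deriving such annotations is to exploit the rigid motion of each tracked box between consecutive frames: points on an object at time t are carried along by the box pose change to t+1, and the displacement is the flow label. The sketch below illustrates that idea; the function name and frame conventions are my own assumptions, not the dataset's actual tooling.

```python
import numpy as np

def flow_from_tracked_box(points_t, pose_t, pose_t1):
    """Derive scene-flow vectors for points on a rigidly tracked object.

    points_t : (N, 3) object points observed at time t (world/ego frame).
    pose_t, pose_t1 : (4, 4) box-to-world transforms at times t and t+1.
    Returns (N, 3) flow vectors = displaced points minus original points.
    """
    # Map points into the box frame at t, then forward with the box pose at t+1.
    to_box = np.linalg.inv(pose_t)
    pts_h = np.c_[points_t, np.ones(len(points_t))]   # homogeneous coordinates
    pts_box = pts_h @ to_box.T                        # box frame at t
    pts_t1 = pts_box @ pose_t1.T                      # world frame at t+1
    return pts_t1[:, :3] - points_t                   # per-point motion
```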

HP-GAN: Probabilistic 3D human motion prediction via GAN

ebarsoum/hpgan 27 Nov 2017

Our model, which we call HP-GAN, learns a probability density function of future human poses conditioned on previous poses.
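
In a conditional-GAN setup of this kind, the generator maps a noise vector plus an encoding of the observed poses to a future pose sequence, so sampling different noise vectors yields different plausible futures for the same past. The minimal PyTorch-style sketch below shows only that conditioning; all layer sizes and names are illustrative assumptions rather than HP-GAN's actual architecture.

```python
import torch
import torch.nn as nn

class PoseGenerator(nn.Module):
    """Toy conditional generator: previous poses + noise -> future poses."""

    def __init__(self, n_joints=17, past=10, future=20, z_dim=128):
        super().__init__()
        self.future, self.n_joints = future, n_joints
        in_dim = past * n_joints * 3 + z_dim          # flattened past poses + noise
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, future * n_joints * 3),
        )

    def forward(self, past_poses, z):
        # past_poses: (B, past, n_joints, 3); z: (B, z_dim)
        cond = past_poses.flatten(1)
        out = self.net(torch.cat([cond, z], dim=1))
        return out.view(-1, self.future, self.n_joints, 3)

# Two different noise samples give two plausible continuations of the same past.
gen = PoseGenerator()
past = torch.randn(4, 10, 17, 3)
futures = gen(past, torch.randn(4, 128))   # shape (4, 20, 17, 3)
```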

GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose

yzcjtr/GeoNet CVPR 2018

We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and ego-motion estimation from videos.

Weakly Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation

Fhrozen/motion_dance 3 Jul 2018

However, applying DNNs to generate dance for a piece of music is challenging, because 1) DNNs need to generate long sequences while mapping the music input, 2) the DNN needs to constrain the motion beat to the music, and 3) DNNs require a considerable amount of hand-crafted data.

Stereo relative pose from line and point feature triplets

alexandervakhitov/sego-paper-code ECCV 2018

In this work, we present two minimal solvers for stereo relative pose estimation.
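
For context, the most widely used minimal solver in this area is the five-point algorithm for monocular relative pose from point correspondences. The sketch below uses OpenCV's implementation of that standard point-based solver; it is not the stereo line-and-point solvers proposed in the paper.

```python
import cv2

def relative_pose(pts1, pts2, K):
    """Recover relative rotation/translation from matched points in two views.

    pts1, pts2 : (N, 2) arrays of corresponding pixel coordinates.
    K          : (3, 3) camera intrinsics matrix.
    """
    # RANSAC around the five-point minimal solver for the essential matrix.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # The cheirality check disambiguates the four possible (R, t) decompositions.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # t is recovered only up to scale in the monocular case
```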

Exploring Simple 3D Multi-Object Tracking for Autonomous Driving

qcraftai/simtrack ICCV 2021

3D multi-object tracking in LiDAR point clouds is a key ingredient for self-driving vehicles.

Micro Expression Generation with Thin-plate Spline Motion Model and Face Parsing

howtonameme/micro-expression MM '22: Proceedings of the 30th ACM International Conference on Multimedia 2022

We (Team: USTC-IAT-United) also compare our method with those of other competitors in MEGC2022, and the expert evaluation results show that ours performs best, which verifies its effectiveness.

The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM

uzh-rpg/rpg_davis_simulator 26 Oct 2016

New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array.

Unsupervised Learning of Depth and Ego-Motion from Video

tinghuiz/SfMLearner CVPR 2017

We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences.
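
The core of this family of methods is a view-synthesis objective: each target-frame pixel is projected into a source frame using the predicted depth and relative camera pose, the source image is sampled at those locations, and the photometric difference between the synthesized and real target image supervises both networks. The numpy sketch below shows only the geometric projection step; the variable names and conventions are mine, not the repository's.

```python
import numpy as np

def project_to_source(depth, K, T_target_to_source):
    """Project every target pixel into the source view using depth and pose.

    depth : (H, W) predicted depth for the target frame.
    K     : (3, 3) camera intrinsics.
    T_target_to_source : (4, 4) relative camera pose.
    Returns (H, W, 2) pixel coordinates in the source frame.
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x HW
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)     # back-project to 3D
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])    # homogeneous 4 x HW
    src = K @ (T_target_to_source @ cam_h)[:3]              # into source camera
    return (src[:2] / src[2:]).T.reshape(H, W, 2)           # perspective divide

# Sampling the source image at the returned coordinates synthesizes the target
# view; an L1/SSIM loss against the real target image trains depth and pose.
```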

CNN-SVO: Improving the Mapping in Semi-Direct Visual Odometry Using Single-Image Depth Prediction

yan99033/CNN-SVO 1 Oct 2018

Reliable feature correspondence between frames is a critical step in visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) algorithms.
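
As a minimal illustration of frame-to-frame feature correspondence (using generic ORB features rather than the paper's CNN-assisted semi-direct pipeline), the sketch below matches keypoints between two frames with OpenCV.

```python
import cv2

def match_frames(img1, img2, max_matches=200):
    """Find ORB keypoint correspondences between two grayscale frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Hamming distance with cross-checking keeps only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    matches = matches[:max_matches]
    pts1 = [kp1[m.queryIdx].pt for m in matches]
    pts2 = [kp2[m.trainIdx].pt for m in matches]
    return pts1, pts2   # corresponding pixel coordinates in each frame
```

These correspondences are the raw material for the relative-pose and depth estimation steps used throughout visual odometry and V-SLAM.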