Motion Estimation
205 papers with code • 0 benchmarks • 9 datasets
Motion Estimation determines the block-wise or pixel-wise motion vectors between two frames.
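As a concrete illustration (not tied to any particular paper below), classical block-wise motion estimation can be sketched as an exhaustive block-matching search that minimizes the sum of absolute differences (SAD) between a block in the current frame and candidate blocks in the previous frame. The block size and search radius here are arbitrary illustrative choices:

```python
import numpy as np

def block_match(prev, curr, block=8, radius=4):
    """Exhaustive block matching: for each block in `curr`, find the
    displacement into `prev` that minimizes the sum of absolute
    differences (SAD). Returns an array of (dy, dx) motion vectors,
    one per block."""
    H, W = curr.shape
    vecs = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = curr[y:y + block, x:x + block].astype(int)
            best_sad, best_v = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    # skip candidates that fall outside the frame
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue
                    cand = prev[yy:yy + block, xx:xx + block].astype(int)
                    sad = np.abs(ref - cand).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_v = sad, (dy, dx)
            vecs[by, bx] = best_v
    return vecs
```

Real codecs and trackers replace the exhaustive search with hierarchical or diamond-search strategies, and learned methods predict dense (pixel-wise) flow directly, but the objective is the same: per-region displacement between two frames.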
Most implemented papers
Scalable Scene Flow from Point Clouds in the Real World
In this work, we introduce a new large-scale dataset for scene flow estimation derived from corresponding tracked 3D objects, which is $\sim$1,000$\times$ larger than previous real-world datasets in terms of the number of annotated frames.
HP-GAN: Probabilistic 3D human motion prediction via GAN
Our model, which we call HP-GAN, learns a probability density function of future human poses conditioned on previous poses.
GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose
We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and ego-motion estimation from videos.
Weakly Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation
However, applying DNNs to generate dance for a piece of music is challenging, because 1) DNNs need to generate long sequences while mapping the music input, 2) the DNN must constrain the motion beat to the music, and 3) DNNs require a considerable amount of hand-crafted data.
Stereo relative pose from line and point feature triplets
In this work, we present two minimal solvers for the stereo relative pose problem.
Exploring Simple 3D Multi-Object Tracking for Autonomous Driving
3D multi-object tracking in LiDAR point clouds is a key ingredient for self-driving vehicles.
Micro Expression Generation with Thin-plate Spline Motion Model and Face Parsing
We (Team USTC-IAT-United) also compare our method with other competitors' in MEGC2022; the expert evaluation results show that ours performs best, verifying its effectiveness.
The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM
New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array.
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences.
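Unsupervised frameworks of this kind typically supervise the depth and ego-motion networks with a view-synthesis (photometric reconstruction) objective. A minimal sketch of such a loss is below, assuming a source frame has already been warped into the target view (the warping itself, driven by the predicted depth, pose, and camera intrinsics, is omitted):

```python
import numpy as np

def photometric_loss(target, warped, mask=None):
    """Mean absolute photometric error between the target frame and a
    source frame warped into the target view. `mask` (optional boolean
    array) restricts the loss to pixels with valid reprojections."""
    err = np.abs(target.astype(float) - warped.astype(float))
    if mask is not None:
        return err[mask].mean()
    return err.mean()
```

Minimizing this error over many video frames forces the depth and pose predictions to be geometrically consistent, without any ground-truth labels.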
CNN-SVO: Improving the Mapping in Semi-Direct Visual Odometry Using Single-Image Depth Prediction
Reliable feature correspondence between frames is a critical step in visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) algorithms.