Optical Flow Estimation
652 papers with code • 10 benchmarks • 34 datasets
Optical Flow Estimation is a computer vision task that involves computing the apparent motion of objects between consecutive frames of a video sequence. The goal of optical flow estimation is to determine the per-pixel (or per-feature) movement in the image, which can be used for applications such as object tracking, motion analysis, and video compression.
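As a toy illustration of the task (a hypothetical sketch, not any specific paper's method), the code below builds a synthetic frame pair in which a bright square moves by a known (dy, dx) and recovers that motion by exhaustive block matching, one of the classical approaches listed below:

```python
import numpy as np

def block_match(prev, curr, top, left, size, search):
    """Find the displacement of the size x size block at (top, left) in `prev`
    by exhaustive search within +/- `search` pixels in `curr`."""
    block = prev[top:top + size, left:left + size]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue  # candidate block would fall outside the frame
            cand = curr[y:y + size, x:x + size]
            err = np.sum((block - cand) ** 2)  # sum of squared differences
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# Synthetic frame pair: a bright 8x8 square shifted by (dy=2, dx=3).
prev = np.zeros((32, 32)); prev[8:16, 8:16] = 1.0
curr = np.zeros((32, 32)); curr[10:18, 11:19] = 1.0
print(block_match(prev, curr, top=8, left=8, size=8, search=4))  # (2, 3)
```

Dense flow methods repeat this kind of matching (or a smarter, learned variant of it) for every pixel rather than a single block.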
Approaches for optical flow estimation include correlation-based, block-matching, feature-tracking, energy-based, and gradient-based methods, and, more recently, deep learning-based models.
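Gradient-based methods such as Lucas-Kanade linearize the brightness-constancy constraint, I_x·u + I_y·v + I_t = 0, and solve for the flow (u, v) by least squares over a window. The sketch below (a minimal single-window NumPy illustration, not a full implementation) recovers a small sub-pixel shift of a smooth Gaussian blob:

```python
import numpy as np

def lucas_kanade(prev, curr):
    """Estimate one (u, v) for the whole window by least squares on the
    linearized brightness-constancy constraint Ix*u + Iy*v = -It."""
    Ix = np.gradient(prev, axis=1)   # spatial gradient in x
    Iy = np.gradient(prev, axis=0)   # spatial gradient in y
    It = curr - prev                 # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Smooth synthetic frames: a Gaussian blob translated by (dx=0.5, dy=0.25) px.
y, x = np.mgrid[0:32, 0:32].astype(float)
prev = np.exp(-((x - 15.0) ** 2 + (y - 15.0) ** 2) / 20.0)
curr = np.exp(-((x - 15.5) ** 2 + (y - 15.25) ** 2) / 20.0)
u, v = lucas_kanade(prev, curr)
print(u, v)  # close to the true shift (0.5, 0.25)
```

Because the constraint is a first-order linearization, this only works for small displacements; practical pipelines handle large motions with coarse-to-fine pyramids or, in current learned models, iterative refinement over correlation volumes.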
Further reading:
Definition source: Devon: Deformable Volume Network for Learning Optical Flow
Image credit: Optical Flow Estimation
Libraries
Use these libraries to find Optical Flow Estimation models and implementations.
Latest papers with no code
FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appearance, Head-pose, and Facial Expression Features
The task of face reenactment is to transfer the head motion and facial expressions from a driving video to the appearance of a source image, which may be of a different person (cross-reenactment).
Table tennis ball spin estimation with an event camera
In table tennis, the combination of high velocity and spin renders traditional low-frame-rate cameras inadequate: motion blur prevents them from quickly and accurately observing the ball's logo to estimate its spin.
Chaos in Motion: Unveiling Robustness in Remote Heart Rate Measurement through Brain-Inspired Skin Tracking
To address these issues, we regard remote heart rate measurement as the process of analyzing the spatiotemporal characteristics of the optical flow signal in the video.
SciFlow: Empowering Lightweight Optical Flow Models with Self-Cleaning Iterations
Optical flow estimation is crucial to a variety of vision tasks.
MemFlow: Optical Flow Estimation and Prediction with Memory
To this end, we present MemFlow, a real-time method for optical flow estimation and prediction with memory.
Salient Sparse Visual Odometry With Pose-Only Supervision
Visual Odometry (VO) is vital for the navigation of autonomous systems, providing accurate position and orientation estimates at reasonable costs.
LoSA: Long-Short-range Adapter for Scaling End-to-End Temporal Action Localization
Temporal Action Localization (TAL) involves localizing and classifying action snippets in an untrimmed video.
BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks
Our attack prototype, named BadPart, is evaluated on both MDE and OFE tasks, utilizing a total of 7 models.
FlowDepth: Decoupling Optical Flow for Self-Supervised Monocular Depth Estimation
To address these issues, existing approaches use additional black-box networks with semantic priors to separate moving objects, and improve the model only at the loss level.
$\mathrm{F^2Depth}$: Self-supervised Indoor Monocular Depth Estimation via Optical Flow Consistency and Feature Map Synthesis
To evaluate the generalization ability of our $\mathrm{F^2Depth}$, we collect a Campus Indoor depth dataset composed of approximately 1500 points selected from 99 images in 18 scenes.