Motion Estimation
208 papers with code • 0 benchmarks • 10 datasets
Motion Estimation is used to determine the block-wise or pixel-wise motion vectors between two frames.
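The block-wise variant can be illustrated with classic exhaustive block matching: for each block of the current frame, search a small window in the previous frame for the displacement that minimizes the sum of absolute differences (SAD). This is a minimal NumPy sketch (function name and parameters are illustrative, not from any paper listed below):

```python
import numpy as np

def block_matching(prev, curr, block=8, search=4):
    """Estimate per-block motion vectors between two grayscale frames
    by exhaustive block matching with a SAD cost.
    Returns an (H/block, W/block, 2) array of (dy, dx) vectors,
    pointing from each block of `curr` to its best match in `prev`."""
    h, w = curr.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            target = curr[y:y + block, x:x + block].astype(np.int32)
            best_sad, best_mv = None, (0, 0)
            # Exhaustively scan the (2*search+1)^2 candidate displacements.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    py, px = y + dy, x + dx
                    if py < 0 or px < 0 or py + block > h or px + block > w:
                        continue  # candidate block falls outside the frame
                    cand = prev[py:py + block, px:px + block].astype(np.int32)
                    sad = np.abs(target - cand).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            vectors[by, bx] = best_mv
    return vectors

# Usage: shift a random frame by (2, 3) pixels and recover the motion.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32), dtype=np.uint8)
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))
mv = block_matching(prev, curr, block=8, search=4)
# Interior blocks (unaffected by the wrap-around at the borders)
# report the displacement (-2, -3) back into the previous frame.
```

Production codecs and learned methods replace this brute-force search with hierarchical, sub-pixel, or network-predicted flow, but the cost-minimization idea is the same.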
Benchmarks
These leaderboards are used to track progress in Motion Estimation
Libraries
Use these libraries to find Motion Estimation models and implementations
Datasets
Latest papers
Loss it right: Euclidean and Riemannian Metrics in Learning-based Visual Odometry
This paper reviews different pose representations and metric functions used in visual odometry (VO) networks.
RMS: Redundancy-Minimizing Point Cloud Sampling for Real-Time Pose Estimation
The typical point cloud sampling methods used in state estimation for mobile robots preserve a high level of point redundancy.
Fully Convolutional Slice-to-Volume Reconstruction for Single-Stack MRI
Here, we propose an SVR method that overcomes the shortcomings of previous work and produces state-of-the-art reconstructions in the presence of extreme inter-slice motion.
GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians
We present GaussianAvatar, an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.
Joint 3D Shape and Motion Estimation from Rolling Shutter Light-Field Images
In this paper, we propose an approach to address the problem of 3D reconstruction of scenes from a single image captured by a light-field camera equipped with a rolling shutter sensor.
USB-NeRF: Unrolling Shutter Bundle Adjusted Neural Radiance Fields
USB-NeRF is able to correct rolling shutter distortions and recover accurate camera motion trajectory simultaneously under the framework of NeRF, by modeling the physical image formation process of a RS camera.
IBVC: Interpolation-driven B-frame Video Compression
Learned B-frame video compression adopts bi-directional motion estimation and motion compensation (MEMC) coding for middle-frame reconstruction.
RaTrack: Moving Object Detection and Tracking with 4D Radar Point Cloud
Mobile autonomy relies on the precise perception of dynamic environments.
Staged Contact-Aware Global Human Motion Forecasting
So far, only Mao et al. (NeurIPS'22) have addressed scene-aware global motion, cascading the prediction of future scene contact points with global motion estimation.
Constrained CycleGAN for Effective Generation of Ultrasound Sector Images of Improved Spatial Resolution
In vitro phantom results demonstrate that CCycleGAN successfully generates images with improved spatial resolution as well as higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) compared with benchmarks.