1 code implementation • 5 Nov 2023 • Azin Jahedi, Maximilian Luz, Marc Rivinius, Andrés Bruhn
Attention-based motion aggregation concepts have recently shown their usefulness in optical flow estimation, in particular when it comes to handling occluded regions.
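The core idea behind attention-based motion aggregation (as in GMA-style approaches) is that occluded pixels can borrow motion information from visible pixels with similar appearance context. A minimal sketch of such an aggregation step, with hypothetical feature shapes chosen for illustration:

```python
import numpy as np

def aggregate_motion(context, motion, temperature=1.0):
    """Aggregate per-pixel motion features via attention over context.

    context: (N, Dc) context features, one row per pixel
    motion:  (N, Dm) motion features to be aggregated

    Occluded pixels receive motion information from pixels with
    similar context features -- the intuition behind attention-based
    motion aggregation.
    """
    # pairwise similarity of all pixels, computed from context features
    scores = context @ context.T / (np.sqrt(context.shape[1]) * temperature)
    # softmax over the last axis yields the attention weights
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    # each pixel's aggregated motion is a weighted sum over all pixels
    return weights @ motion
```

In a real network both feature maps come from learned encoders and the attention is computed per head over 2D feature maps; the sketch only shows the aggregation mechanism itself.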
1 code implementation • 26 Oct 2023 • Erik Scheurer, Jenny Schmalfuss, Alexander Lis, Andrés Bruhn
In this paper, we thoroughly examine the currently available detect-and-remove defenses ILP and LGS for a wide selection of state-of-the-art optical flow methods, and illuminate their side effects on the quality and robustness of the final flow predictions.
1 code implementation • ICCV 2023 • Jenny Schmalfuss, Lukas Mehl, Andrés Bruhn
Current adversarial attacks on motion estimation, or optical flow, optimize small per-pixel perturbations, which are unlikely to appear in the real world.
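The per-pixel perturbations such attacks optimize are typically kept imperceptibly small under an L-infinity bound, e.g. via signed-gradient steps. A minimal FGSM-style sketch (the gradient would come from backpropagating an attack loss through a flow network; here it is simply an input):

```python
import numpy as np

def fgsm_step(image, grad, epsilon=2.0 / 255.0):
    """One signed-gradient step of an L_inf-bounded per-pixel attack.

    image: input frame with values in [0, 1]
    grad:  gradient of the attack loss w.r.t. the image
           (e.g. obtained by backprop through a flow network)

    Each pixel moves by at most epsilon -- exactly the kind of tiny,
    unstructured change that is unlikely to occur in the real world.
    """
    perturbed = image + epsilon * np.sign(grad)
    return np.clip(perturbed, 0.0, 1.0)  # keep a valid image
```

Iterating such steps with projection back into the epsilon-ball gives the common PGD variant.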
2 code implementations • CVPR 2023 • Lukas Mehl, Jenny Schmalfuss, Azin Jahedi, Yaroslava Nalivayko, Andrés Bruhn
While recent methods for motion and stereo estimation recover an unprecedented amount of details, such highly detailed structures are neither adequately reflected in the data of existing benchmarks nor their evaluation methodology.
1 code implementation • 30 Oct 2022 • Azin Jahedi, Maximilian Luz, Lukas Mehl, Marc Rivinius, Andrés Bruhn
In this report, we present our optical flow approach, MS-RAFT+, that won the Robust Vision Challenge 2022.
Ranked #3 on Optical Flow Estimation on Spring
no code implementations • 20 Oct 2022 • Jenny Schmalfuss, Lukas Mehl, Andrés Bruhn
Current adversarial attacks for motion estimation (optical flow) optimize small per-pixel perturbations, which are unlikely to appear in the real world.
1 code implementation • 25 Jul 2022 • Azin Jahedi, Lukas Mehl, Marc Rivinius, Andrés Bruhn
Many classical and learning-based optical flow methods rely on hierarchical concepts to improve both accuracy and robustness.
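The hierarchical (coarse-to-fine) principle mentioned here is common to classical warping schemes and multi-scale networks alike: estimate flow on a coarse pyramid level, then upsample and refine on successively finer levels. A generic driver, sketched with injected `estimate` and `upsample` callables standing in for the method-specific parts:

```python
import numpy as np

def downsample(img):
    """2x2 average pooling as a stand-in for a proper pyramid filter."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def coarse_to_fine(im1, im2, levels, estimate, upsample):
    """Generic coarse-to-fine driver.

    estimate(i1, i2, init_flow) refines the flow at one level,
    upsample(flow, shape) lifts it to the next finer level.
    """
    pyr1, pyr2 = [im1], [im2]
    for _ in range(levels - 1):
        pyr1.append(downsample(pyr1[-1]))
        pyr2.append(downsample(pyr2[-1]))
    flow = None
    for i1, i2 in zip(reversed(pyr1), reversed(pyr2)):  # coarse -> fine
        if flow is not None:
            flow = upsample(flow, i1.shape)  # initialize from coarser level
        flow = estimate(i1, i2, flow)        # refine at this level
    return flow
```

Coarse levels capture large displacements cheaply; fine levels recover detail, which is why the principle benefits both accuracy and robustness.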
1 code implementation • 12 Jul 2022 • Lukas Mehl, Azin Jahedi, Jenny Schmalfuss, Andrés Bruhn
Secondly, and even more importantly, exploiting the specific modeling concepts of RAFT-3D, we propose a U-Net architecture that fuses forward and backward flow estimates and hence makes it possible to integrate temporal information on demand.
Ranked #1 on Scene Flow Estimation on Spring
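The fusion takes a forward flow estimate and a time-reversed (negated) backward estimate as inputs. In the paper this fusion is learned with a U-Net; the fixed confidence-weighted average below is only a hand-crafted stand-in that illustrates which quantities enter the fusion and what it produces:

```python
import numpy as np

def fuse_flows(flow_fwd, flow_bwd_neg, conf_fwd, conf_bwd):
    """Fuse a forward flow estimate with a time-reversed (negated)
    backward estimate via per-pixel confidences.

    flow_fwd, flow_bwd_neg: (H, W, 2) flow fields
    conf_fwd, conf_bwd:     (H, W) non-negative confidence maps

    NOTE: a simplified stand-in -- the actual method learns this
    fusion with a U-Net rather than using fixed weights.
    """
    w = conf_fwd / (conf_fwd + conf_bwd + 1e-8)  # per-pixel weight in [0, 1]
    return w[..., None] * flow_fwd + (1.0 - w)[..., None] * flow_bwd_neg
```

Wherever the forward estimate is unreliable (e.g. at occlusions), the temporally complementary backward estimate can take over.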
no code implementations • 13 May 2022 • Jenny Schmalfuss, Erik Scheurer, Heng Zhao, Nikolaos Karantzas, Andrés Bruhn, Demetrio Labate
Blind inpainting algorithms based on deep learning architectures have shown remarkable performance in recent years, typically outperforming model-based methods both in terms of image quality and run time.
1 code implementation • 24 Mar 2022 • Jenny Schmalfuss, Philipp Scholze, Andrés Bruhn
Recent optical flow methods are almost exclusively judged in terms of accuracy, while their robustness is often neglected.
no code implementations • 10 Jan 2020 • Hui Men, Vlad Hosu, Hanhe Lin, Andrés Bruhn, Dietmar Saupe
This re-ranking not only shows the necessity of visual quality assessment as a further evaluation metric for optical flow and frame interpolation benchmarks; the results also provide the ground truth for designing novel image quality assessment (IQA) methods dedicated to the perceptual quality of interpolated images.
no code implementations • 3 Jun 2018 • Daniel Maurer, Andrés Bruhn
By relating forward and backward motion, these learned models not only make it possible to infer valuable motion information from the backward flow, they also help to improve the performance at occlusions, where a reliable prediction is particularly useful.
Ranked #15 on Optical Flow Estimation on Sintel-clean
no code implementations • 22 May 2015 • Yong Chul Ju, Daniel Maurer, Michael Breuß, Andrés Bruhn
First, we propose a novel variational model that operates directly on the Cartesian depth.
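A variational model of this kind minimizes an energy over the Cartesian depth directly. The generic form below is only a sketch under the assumption of a quadratic brightness data term with reflectance model $\mathcal{R}$ and a quadratic smoothness prior with weight $\alpha$; the concrete terms in the paper may differ:

```latex
E(z) \;=\; \int_{\Omega} \underbrace{\bigl(I(\mathbf{x}) - \mathcal{R}(z, \nabla z)\bigr)^{2}}_{\text{data term}}
\;+\; \alpha\, \underbrace{|\nabla z|^{2}}_{\text{smoothness}} \; d\mathbf{x}
```

Operating on the depth $z$ itself, rather than on an intermediate parametrization, avoids a separate integration step to recover the surface.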