Defocus Blur Detection

7 papers with code • 3 benchmarks • 3 datasets

Defocus blur detection aims to separate in-focus (sharp) regions from out-of-focus (blurred) regions in an image, typically producing a pixel-wise blur segmentation map.

Most implemented papers

Explicit Visual Prompting for Universal Foreground Segmentation

nifangbaage/explicit-visual-prompt 29 May 2023

We take inspiration from the widely used pre-training and then prompt-tuning protocols in NLP and propose a new visual prompting model, named Explicit Visual Prompting (EVP).
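
As a rough illustration of the pre-train-then-prompt-tune idea the abstract alludes to (not the paper's actual architecture), the sketch below freezes a backbone and learns only a small set of prompt tokens; the module names, dimensions, and prompt count are assumptions.

```python
# Minimal sketch: frozen backbone + tunable prompt tokens (illustrative only).
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int, num_prompts: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # pre-trained backbone stays frozen
        # only these prompt tokens (plus any task head) receive gradients
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) patch embeddings fed through the frozen backbone
        feats = self.backbone(tokens)
        prompts = self.prompts.expand(feats.size(0), -1, -1)
        return torch.cat([prompts, feats], dim=1)  # prepend the tunable prompts
```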

Defocus Blur Detection via Depth Distillation

vinthony/depth-distillation ECCV 2020

In detail, we learn the defocus blur from the ground truth and, at the same time, the depth distilled from a well-trained depth estimation network.
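
A minimal sketch of the joint objective this describes, assuming output-level distillation from a frozen depth network; the loss choices and weighting are assumptions, not the paper's exact formulation.

```python
# Joint supervision: annotated blur map + depth distilled from a frozen teacher.
import torch
import torch.nn.functional as F

def distillation_loss(blur_pred, depth_pred, blur_gt, frozen_depth_net, image, lam=0.5):
    # supervised defocus-blur loss against the ground-truth blur map
    loss_blur = F.binary_cross_entropy_with_logits(blur_pred, blur_gt)
    # distillation target: depth predicted by a frozen, well-trained depth model
    with torch.no_grad():
        depth_teacher = frozen_depth_net(image)
    loss_depth = F.l1_loss(depth_pred, depth_teacher)
    return loss_blur + lam * loss_depth
```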

Self-Generated Defocus Blur Detection via Dual Adversarial Discriminators

shangcai1/SG CVPR 2021

The core insight is that a defocus-blurred region (or a focused, clear area) can be arbitrarily pasted onto a realistic fully blurred image (or fully clear image) without changing the judgment that the image is fully blurred (or fully clear).
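
A minimal sketch of the copy-paste operation behind this self-generated training signal: paste a crop from one image into another and record the pasted region as a binary mask. The function name, shapes, and sampling strategy are assumptions.

```python
# Paste a region from source_img into target_img and return the pair mask.
import torch

def paste_region(target_img, source_img, top, left, h, w):
    """target_img, source_img: (C, H, W) tensors of the same size."""
    composite = target_img.clone()
    composite[:, top:top + h, left:left + w] = source_img[:, top:top + h, left:left + w]
    mask = torch.zeros(1, *target_img.shape[1:])
    mask[:, top:top + h, left:left + w] = 1.0   # 1 where the pasted region sits
    return composite, mask
```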

Explicit Visual Prompting for Low-Level Structure Segmentations

nifangbaage/explicit-visual-prompt CVPR 2023

Different from previous visual prompting, which is typically a dataset-level implicit embedding, our key insight is to make the tunable parameters focus on the explicit visual content of each individual image, i.e., the features from frozen patch embeddings and the input's high-frequency components.
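
One way to realize the "high-frequency components" mentioned here is an FFT-based mask that removes low frequencies; the sketch below is a generic version with an assumed cutoff ratio, not necessarily the paper's exact extraction.

```python
# Extract high-frequency components by zeroing a central low-frequency block.
import torch

def high_frequency_components(img: torch.Tensor, ratio: float = 0.25) -> torch.Tensor:
    # img: (C, H, W); shift the spectrum, null the low-frequency square, invert
    C, H, W = img.shape
    freq = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    h, w = int(H * ratio) // 2, int(W * ratio) // 2
    cy, cx = H // 2, W // 2
    freq[:, cy - h:cy + h, cx - w:cx + w] = 0      # remove low frequencies
    return torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1))).real
```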

Depth and DOF Cues Make A Better Defocus Blur Detector

yuxinjin-whu/d-dffnet 20 Jun 2023

We propose a depth feature distillation strategy to obtain depth knowledge from a pre-trained monocular depth estimation model, and a DOF-edge loss to capture the relationship between depth of field (DOF) and depth.
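
A minimal sketch of feature-level depth distillation as described: align the detector's intermediate features with those of a frozen depth teacher. The 1x1 projection and MSE loss are assumptions, not the paper's design.

```python
# Align student features with frozen depth-teacher features via a projection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthFeatureDistiller(nn.Module):
    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        # 1x1 conv so student features match the teacher's channel width
        self.proj = nn.Conv2d(student_dim, teacher_dim, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        s = self.proj(student_feat)
        if s.shape[-2:] != teacher_feat.shape[-2:]:
            s = F.interpolate(s, size=teacher_feat.shape[-2:],
                              mode="bilinear", align_corners=False)
        return F.mse_loss(s, teacher_feat.detach())  # teacher stays frozen
```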

Equipping Computational Pathology Systems with Artifact Processing Pipelines: A Showcase for Computation and Performance Trade-offs

neelkanwal/equipping-computational-pathology-systems-with-artifact-processing-pipeline 12 Mar 2024

We developed DL pipelines using two mixture-of-experts (MoE) models and two multiclass models built from state-of-the-art deep convolutional neural networks (DCNNs) and vision transformers (ViTs).
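
A heavily simplified sketch contrasting the two pipeline styles named here: a single multiclass head versus an MoE assembled from per-artifact binary experts. Neither the expert models nor the fusion rule below are taken from the paper; they are placeholders for illustration.

```python
# Contrast: one shared multiclass head vs. an ensemble of binary experts.
import torch
import torch.nn as nn

class MulticlassHead(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_artifacts: int):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(feat_dim, num_artifacts)

    def forward(self, x):
        return self.head(self.backbone(x))   # one logit per artifact class

class BinaryExpertMoE(nn.Module):
    def __init__(self, experts: nn.ModuleList):
        super().__init__()
        self.experts = experts                # one binary model per artifact type

    def forward(self, x):
        # fuse per-expert probabilities into a (B, num_experts) score matrix
        return torch.stack([torch.sigmoid(e(x)).squeeze(-1) for e in self.experts], dim=-1)
```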