Denoising
1902 papers with code • 5 benchmarks • 20 datasets
Denoising is a task in image processing and computer vision that aims to remove or reduce noise in an image. Noise can be introduced into an image for various reasons, such as camera sensor limitations, lighting conditions, and compression artifacts. The goal of denoising is to recover the original, noise-free image from a noisy observation.
(Image credit: Beyond a Gaussian Denoiser)
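As a minimal illustration of the task, the sketch below corrupts a toy image with additive white Gaussian noise and applies a naive mean filter as a classical baseline denoiser. The image, noise level, and filter are illustrative assumptions, not taken from any of the papers listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean" image: a smooth 2-D gradient (illustrative only).
clean = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)

# Additive white Gaussian noise, a common corruption model in denoising work.
sigma = 0.1
noisy = clean + rng.normal(0.0, sigma, clean.shape)

def box_filter(img, k=3):
    """Naive k x k mean filter: a minimal classical denoiser."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

denoised = box_filter(noisy)

# Denoising should lower the mean-squared error w.r.t. the clean image.
def mse(a, b):
    return float(np.mean((a - b) ** 2))

print(mse(noisy, clean) > mse(denoised, clean))  # True on this smooth image
```

Averaging a 3×3 window cuts the noise variance roughly ninefold, at the cost of blurring fine detail; the papers below are largely about recovering that detail while still suppressing noise.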
Libraries
Use these libraries to find Denoising models and implementations.
Latest papers
NIR-Assisted Image Denoising: A Selective Fusion Approach and A Real-World Benchmark Dataset
Despite the significant progress in image denoising, it is still challenging to restore fine-scale details while removing noise, especially in extremely low-light environments.
Taming Stable Diffusion for Text to 360° Panorama Image Generation
Generative models, e.g., Stable Diffusion, have enabled the creation of photorealistic images from text prompts.
TBSN: Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising
For channel self-attention, we observe that it may leak the blind-spot information when the channel number is greater than the spatial size in the deep layers of multi-scale architectures.
ConsistencyDet: A Robust Object Detector with a Denoising Paradigm of Consistency Model
In the present study, we introduce a novel framework designed to articulate object detection as a denoising diffusion process, which operates on the perturbed bounding boxes of annotated entities.
Masked Modeling Duo: Towards a Universal Audio Pre-training Framework
This study proposes Masked Modeling Duo (M2D), an improved masked prediction SSL, which learns by predicting representations of masked input signals that serve as training signals.
scRDiT: Generating single-cell RNA-seq data by diffusion transformers and accelerating sampling
The method is a neural network constructed based on Denoising Diffusion Probabilistic Models (DDPMs) and Diffusion Transformers (DiTs).
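The DDPM forward (noising) process mentioned above admits a closed form: x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε with ᾱ_t the cumulative product of 1 − β_t. The sketch below shows that closed form on a toy vector; the linear β schedule and data are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule (illustrative values, as in the original DDPM paper).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)  # cumulative product: abar_t

def q_sample(x0, t, eps):
    """Forward diffusion: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.normal(size=(8,))           # toy clean sample (e.g., expression values)
eps = rng.normal(size=x0.shape)      # Gaussian noise the model learns to predict

x_small_t = q_sample(x0, 10, eps)    # early step: still close to x0
x_large_t = q_sample(x0, T - 1, eps) # late step: almost pure noise
```

At t = T − 1, ᾱ_t is nearly zero, so x_t is essentially the noise ε; training teaches the network to predict that ε, and sampling runs the process in reverse.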
Rethinking the Spatial Inconsistency in Classifier-Free Diffusion Guidance
Classifier-Free Guidance (CFG) has been widely used in text-to-image diffusion models, where the CFG scale is introduced to control the strength of text guidance on the whole image space.
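The standard CFG combination applies one global guidance scale to the whole noise prediction, which is the spatial uniformity the paper revisits. A minimal sketch of that combination, with toy arrays standing in for the model's conditional and unconditional predictions:

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, scale):
    """Classifier-free guidance: eps_uncond + scale * (eps_cond - eps_uncond).

    A single scalar `scale` is applied uniformly over all spatial positions.
    """
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy stand-ins for the model's two noise predictions.
eps_uncond = np.zeros((4, 4))
eps_cond = np.ones((4, 4))

guided_1 = cfg_combine(eps_uncond, eps_cond, 1.0)  # scale 1 recovers eps_cond
guided_75 = cfg_combine(eps_uncond, eps_cond, 7.5) # larger scales extrapolate past it
```

With scale > 1, the update extrapolates beyond the conditional prediction, strengthening text adherence uniformly across the image space.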
Taming Transformers for Realistic Lidar Point Cloud Generation
Diffusion Models (DMs) have achieved State-Of-The-Art (SOTA) results in the Lidar point cloud generation task, benefiting from their stable training and iterative refinement during sampling.
Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models
To address this issue, we propose Gaussian Shading, a diffusion model watermarking technique that is both performance-lossless and training-free, while serving the dual purpose of copyright protection and tracing of offending content.
Dual-Scale Transformer for Large-Scale Single-Pixel Imaging
In this paper, we propose a deep unfolding network with hybrid-attention Transformer on Kronecker SPI model, dubbed HATNet, to improve the imaging quality of real SPI cameras.