Denoising

1902 papers with code • 5 benchmarks • 20 datasets

Denoising is a task in image processing and computer vision that aims to remove or reduce noise from an image. Noise can be introduced into an image by various factors, such as camera sensor limitations, lighting conditions, and compression artifacts. The goal of denoising is to recover the original, noise-free image from a noisy observation.

(Image credit: Beyond a Gaussian Denoiser)
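
As a minimal illustration of this setup (independent of the papers listed below), the sketch adds synthetic Gaussian noise to a clean image and applies plain Gaussian smoothing as a baseline denoiser; the image, noise level, and filter width are placeholders, and learned denoisers would replace the smoothing step.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Placeholder "clean" image: a smooth synthetic gradient standing in for a real photo.
x = np.linspace(0.0, 1.0, 128)
clean = np.outer(x, x)

# Noisy observation y = clean + n, with additive Gaussian noise (sigma is an assumed level).
sigma = 0.1
noisy = clean + rng.normal(0.0, sigma, clean.shape)

# Baseline denoiser: Gaussian smoothing. A learned denoiser would replace this step.
denoised = gaussian_filter(noisy, sigma=1.5)

# Peak signal-to-noise ratio (PSNR) against the clean reference, a common denoising metric.
def psnr(ref, est, peak=1.0):
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

print(f"noisy PSNR:    {psnr(clean, noisy):.2f} dB")
print(f"denoised PSNR: {psnr(clean, denoised):.2f} dB")
```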

NIR-Assisted Image Denoising: A Selective Fusion Approach and A Real-World Benchmark Dataset

ronjonxu/naid 12 Apr 2024

Despite the significant progress in image denoising, it is still challenging to restore fine-scale details while removing noise, especially in extremely low-light environments.

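The "selective fusion" in the title suggests blending a restoration of the noisy RGB input with one guided by the cleaner NIR image; the sketch below is only a generic per-pixel weighted fusion under that assumption, not the paper's network, and all names and shapes are placeholders.

```python
import numpy as np

def selective_fusion(rgb_denoised, nir_guided, weight_map):
    """Blend two candidate restorations with a per-pixel weight in [0, 1].

    rgb_denoised: restoration from the noisy RGB image alone, shape (H, W, 3)
    nir_guided:   restoration that leans on the NIR guide image, shape (H, W, 3)
    weight_map:   per-pixel fusion weight, shape (H, W, 1); in a learned system
                  this would be predicted by a network, here it is just an input.
    """
    return weight_map * nir_guided + (1.0 - weight_map) * rgb_denoised

# Toy usage with random stand-in tensors.
rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))
nir = rng.random((64, 64, 3))
w = rng.random((64, 64, 1))
fused = selective_fusion(rgb, nir, w)
print(fused.shape)  # (64, 64, 3)
```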

Taming Stable Diffusion for Text to 360° Panorama Image Generation

faceonlive/ai-research 11 Apr 2024

Generative models, e.g., Stable Diffusion, have enabled the creation of photorealistic images from text prompts.

TBSN: Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising

faceonlive/ai-research 11 Apr 2024

For channel self-attention, we observe that it may leak the blind-spot information when the number of channels is greater than the spatial size in the deep layers of multi-scale architectures.

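To make the observation concrete: in channel self-attention the attention matrix is C x C and is computed by pooling over every spatial position, so its weights carry information from the whole feature map. The shape-level sketch below illustrates just that; it is not the TBSN architecture and does not include the paper's remedy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deep layer of a multi-scale network: the spatial size HW can drop below the channel count C.
H, W, C = 4, 4, 64                      # HW = 16 < C = 64, the regime the paper warns about
feat = rng.standard_normal((H * W, C))  # flattened (spatial, channel) feature map

# Channel self-attention (single head, no projections, shapes only): the attention matrix is
# C x C and each of its entries is computed by summing over every spatial position.
q = k = v = feat
attn = q.T @ k                                       # (C, C), pools over all HW positions
attn = np.exp(attn - attn.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)             # softmax over channels
out = v @ attn.T                                     # (HW, C)

# The mixing weights were computed from the whole feature map, so information from every
# location, including positions a blind-spot network is supposed to hide, can reach each output.
print(attn.shape, out.shape)  # (64, 64) (16, 64)
```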

ConsistencyDet: A Robust Object Detector with a Denoising Paradigm of Consistency Model

tankowa/consistencydet 11 Apr 2024

In the present study, we introduce a novel framework designed to articulate object detection as a denoising diffusion process, which operates on the perturbed bounding boxes of annotated entities.

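The shared idea behind such detectors is to corrupt annotated boxes with the diffusion forward process and train the model to denoise them back. A sketch of that perturbation step is below; the box encoding, scaling, and schedule are generic assumptions rather than details taken from the ConsistencyDet code.

```python
import torch

def perturb_boxes(gt_boxes, t, alphas_cumprod):
    """Diffuse ground-truth boxes (cx, cy, w, h), normalized to [0, 1], to timestep t.

    gt_boxes:        (N, 4) annotated boxes
    t:               integer timestep
    alphas_cumprod:  (T,) cumulative product of the noise schedule
    """
    # Map boxes from [0, 1] to roughly [-1, 1] so they live on the diffusion scale (assumed encoding).
    x0 = gt_boxes * 2.0 - 1.0
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t]
    # Standard DDPM forward step: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise.
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    # The detector is then trained to recover the clean boxes (and labels) from xt.
    return xt

# Toy usage: a linear beta schedule and two annotated boxes.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
gt = torch.tensor([[0.50, 0.50, 0.20, 0.30],
                   [0.25, 0.75, 0.10, 0.10]])
print(perturb_boxes(gt, t=500, alphas_cumprod=alphas_cumprod))
```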

Masked Modeling Duo: Towards a Universal Audio Pre-training Framework

faceonlive/ai-research 9 Apr 2024

This study proposes Masked Modeling Duo (M2D), an improved masked prediction SSL, which learns by predicting representations of masked input signals that serve as training signals.

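Masked-prediction SSL of this kind typically encodes the visible part of the input and trains a predictor to match representations of the masked part produced by a target encoder. The sketch below is a shape-level toy with made-up module names and no momentum update, not the actual M2D implementation.

```python
import torch
import torch.nn as nn

class ToyMaskedPredictor(nn.Module):
    """Shape-level sketch of masked representation prediction (not the M2D model)."""

    def __init__(self, dim=128):
        super().__init__()
        self.online_encoder = nn.Linear(dim, dim)   # stands in for a Transformer encoder
        self.target_encoder = nn.Linear(dim, dim)   # in practice a momentum copy; frozen here
        self.predictor = nn.Linear(dim, dim)

    def forward(self, patches, mask):
        # patches: (B, N, D) patch embeddings of the input signal
        # mask:    (B, N) boolean, True where the patch is hidden from the online encoder
        visible = self.online_encoder(patches * (~mask).unsqueeze(-1).float())
        pred = self.predictor(visible)                       # predictions for every position
        with torch.no_grad():
            target = self.target_encoder(patches)            # representations of the full signal
        # Loss only on masked positions: predicted vs. target representations.
        return ((pred - target) ** 2)[mask].mean()

model = ToyMaskedPredictor()
patches = torch.randn(2, 64, 128)
mask = torch.rand(2, 64) < 0.6
print(model(patches, mask).item())
```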

scRDiT: Generating single-cell RNA-seq data by diffusion transformers and accelerating sampling

faceonlive/ai-research 9 Apr 2024

The method is a neural network built on Denoising Diffusion Probabilistic Models (DDPMs) and Diffusion Transformers (DiTs).

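DDPM training reduces to sampling a timestep, noising the clean data with the closed-form forward process, and regressing the added noise; a DiT plays the role of the noise-prediction network. A minimal sketch of that loss under those assumptions (the placeholder denoiser below is not the scRDiT model):

```python
import torch

def ddpm_training_loss(denoiser, x0, alphas_cumprod):
    """One DDPM training step: predict the noise added to a batch of expression vectors x0.

    denoiser: any network taking (x_t, t) and returning a noise estimate (placeholder here)
    x0:       (B, G) clean single-cell expression profiles (G = number of genes)
    """
    B, T = x0.shape[0], alphas_cumprod.shape[0]
    t = torch.randint(0, T, (B,))                       # random timestep per sample
    a_bar = alphas_cumprod[t].unsqueeze(-1)             # (B, 1)
    noise = torch.randn_like(x0)
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    pred_noise = denoiser(xt, t)
    return ((pred_noise - noise) ** 2).mean()           # simple epsilon-prediction MSE

# Toy usage: an untrained linear "denoiser" that ignores t, just to exercise the function.
T, G = 1000, 2000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
lin = torch.nn.Linear(G, G)
loss = ddpm_training_loss(lambda xt, t: lin(xt), torch.randn(8, G), alphas_cumprod)
print(loss.item())
```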

Rethinking the Spatial Inconsistency in Classifier-Free Diffusion Guidance

faceonlive/ai-research 8 Apr 2024

Classifier-Free Guidance (CFG) has been widely used in text-to-image diffusion models, where the CFG scale is introduced to control the strength of text guidance on the whole image space.

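For reference, standard CFG combines a conditional and an unconditional noise estimate with one scalar scale applied uniformly over the image, which is the spatially constant behavior the paper rethinks. A sketch of that baseline combination (not the paper's spatially adaptive variant):

```python
import torch

def cfg_noise_estimate(model, x_t, t, cond, uncond, guidance_scale=7.5):
    """Classic classifier-free guidance: one scalar scale applied over the whole image.

    model:   noise-prediction network taking (x_t, t, conditioning)
    cond:    text-prompt embedding; uncond: embedding of the empty prompt
    """
    eps_cond = model(x_t, t, cond)
    eps_uncond = model(x_t, t, uncond)
    # eps = eps_uncond + s * (eps_cond - eps_uncond); s is constant across all pixels.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy usage with placeholder tensors and a fake "model" that only has the right shapes.
x_t = torch.randn(1, 4, 64, 64)                         # latent at timestep t
cond, uncond = torch.randn(77, 768), torch.randn(77, 768)
fake_model = lambda x, t, c: x * 0.1 + c.mean()
print(cfg_noise_estimate(fake_model, x_t, t=500, cond=cond, uncond=uncond).shape)
```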

Taming Transformers for Realistic Lidar Point Cloud Generation

faceonlive/ai-research 8 Apr 2024

Diffusion Models (DMs) have achieved State-Of-The-Art (SOTA) results in the Lidar point cloud generation task, benefiting from their stable training and iterative refinement during sampling.

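The iterative refinement refers to the reverse diffusion loop: starting from pure noise, the model repeatedly subtracts its noise estimate and re-injects a small amount of noise until a sample remains. A generic DDPM-style sampling sketch is below; it is not the paper's Lidar-specific pipeline, and the toy denoiser is a placeholder.

```python
import torch

@torch.no_grad()
def ddpm_sample(denoiser, shape, betas):
    """Generic DDPM ancestral sampling loop (simplified; no variance tuning)."""
    alphas = 1.0 - betas
    alphas_cumprod = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                               # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        eps = denoiser(x, t)                             # predicted noise at this step
        a, a_bar = alphas[t], alphas_cumprod[t]
        # Posterior mean: remove the predicted noise, then rescale.
        x = (x - (1.0 - a) / (1.0 - a_bar).sqrt() * eps) / a.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn(shape)  # re-inject a little noise
    return x

# Toy usage: an untrained denoiser and a tiny "point cloud" tensor (N points x 3 coordinates).
betas = torch.linspace(1e-4, 0.02, 50)
sample = ddpm_sample(lambda x, t: torch.zeros_like(x), (1, 1024, 3), betas)
print(sample.shape)
```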

Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models

bsmhmmlf/Gaussian-Shading 7 Apr 2024

To address this issue, we propose Gaussian Shading, a diffusion model watermarking technique that is both performance-lossless and training-free, while serving the dual purpose of copyright protection and tracing of offending content.

Dual-Scale Transformer for Large-Scale Single-Pixel Imaging

gang-qu/hatnet-spi 7 Apr 2024

In this paper, we propose a deep unfolding network with hybrid-attention Transformer on Kronecker SPI model, dubbed HATNet, to improve the imaging quality of real SPI cameras.

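A Kronecker-structured SPI model factorizes the huge sensing matrix into two much smaller matrices acting on the rows and columns of the scene, via vec(Phi_r X Phi_c^T) = (Phi_c ⊗ Phi_r) vec(X); deep unfolding then alternates data-consistency steps with a learned prior. The sketch below only checks that identity on a toy scene with placeholder sizes and does not reproduce HATNet's unfolding or Transformer modules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene X and two small, separable sensing matrices (sizes are arbitrary placeholders).
n = 32                                # scene is n x n
m = 8                                 # m measurements per dimension, m * m in total
X = rng.random((n, n))
Phi_r = rng.standard_normal((m, n))   # acts on the rows of the scene
Phi_c = rng.standard_normal((m, n))   # acts on the columns of the scene

# Kronecker-structured measurement: two small matrix products instead of one (m*m) x (n*n) matrix.
Y = Phi_r @ X @ Phi_c.T               # (m, m)

# Equivalent "flat" model with the full Kronecker sensing matrix (column-major vec convention).
A = np.kron(Phi_c, Phi_r)             # (m*m, n*n)
y_flat = A @ X.flatten(order="F")
print(np.allclose(Y.flatten(order="F"), y_flat))  # True: vec(Phi_r X Phi_c^T) = (Phi_c ⊗ Phi_r) vec(X)
```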