Denoising
1948 papers with code • 5 benchmarks • 20 datasets
Denoising is a task in image processing and computer vision that aims to remove or reduce noise from an image. Noise can be introduced for various reasons, such as camera sensor limitations, lighting conditions, and compression artifacts. The goal of denoising is to recover the underlying noise-free image from a noisy observation.
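As a minimal illustration of the task, the sketch below corrupts a synthetic image with additive Gaussian noise and denoises it with a simple mean (box) filter; real systems replace this filter with learned models such as CNNs or diffusion models. All names here are illustrative, not from any listed paper.

```python
import numpy as np

def add_gaussian_noise(image, sigma=0.1, seed=0):
    """Corrupt a clean image with additive Gaussian noise of std sigma."""
    rng = np.random.default_rng(seed)
    return image + rng.normal(0.0, sigma, image.shape)

def box_denoise(noisy, k=3):
    """Denoise by averaging each pixel over its k x k neighborhood
    (a classical baseline; learned denoisers far outperform it)."""
    pad = k // 2
    padded = np.pad(noisy, pad, mode="reflect")
    out = np.zeros_like(noisy)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + noisy.shape[0], dx:dx + noisy.shape[1]]
    return out / (k * k)

clean = np.full((32, 32), 0.5)            # flat gray test image
noisy = add_gaussian_noise(clean, sigma=0.1)
denoised = box_denoise(noisy)

# Averaging 9 pixels shrinks the noise std by about 3x on smooth regions,
# so the reconstruction error drops relative to the noisy input.
assert np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean()
```

The mean filter trades noise reduction for blur; that trade-off is exactly what the learned methods listed below aim to improve.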
(Image credit: Beyond a Gaussian Denoiser)
Latest papers
EchoScene: Indoor Scene Generation via Information Echo over Scene Graph Diffusion
The scheme ensures that the denoising processes are influenced by a holistic understanding of the scene graph, facilitating the generation of globally coherent scenes.
SSUMamba: Spatial-Spectral Selective State Space Model for Hyperspectral Image Denoising
The SSUMamba can exploit complete global spatial-spectral correlation within a module thanks to the linear space complexity in State Space Model (SSM) computations.
A text-based, generative deep learning model for soil reflectance spectrum simulation in the VIS-NIR (400-2499 nm) bands
To address this, a fully data-driven soil optics generative model (SOGM) for simulation of soil reflectance spectra based on soil property inputs was developed.
Invariant Risk Minimization Is A Total Variation Model
Invariant risk minimization (IRM) is an arising approach to generalize invariant features to different environments in machine learning.
Advancing low-field MRI with a universal denoising imaging transformer: Towards fast and high-quality imaging
Recent developments in low-field (LF) magnetic resonance imaging (MRI) systems present remarkable opportunities for affordable and widespread MRI access.
TheaterGen: Character Management with LLM for Consistent Multi-turn Image Generation
To address this issue, we introduce TheaterGen, a training-free framework that integrates large language models (LLMs) and text-to-image (T2I) models to provide the capability of multi-turn image generation.
TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models
To guide video generation with the additional image input, we propose a "repeat-and-slide" strategy that modulates the reverse denoising process, allowing the frozen diffusion model to synthesize a video frame-by-frame starting from the provided image.
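Several of the papers above operate on the reverse denoising process of a diffusion model. As background for what such a process looks like, here is a generic DDPM-style reverse step in NumPy; the noise predictor is a stand-in (a trained network in practice), and this is not the "repeat-and-slide" strategy itself, only the standard update it modulates.

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_pred, betas, rng):
    """One generic DDPM reverse (denoising) step:
    mean = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps_pred) / sqrt(alpha_t),
    then Gaussian noise with std sqrt(beta_t) is added for t > 0."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    x_prev = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:
        x_prev += np.sqrt(betas[t]) * rng.normal(size=x_t.shape)
    return x_prev

# Toy loop: start from pure noise and apply the reverse chain with a
# zero noise predictor (a real model would predict the added noise).
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 50)       # assumed linear schedule
x = rng.normal(size=(8, 8))
for t in reversed(range(len(betas))):
    x = ddpm_reverse_step(x, t, np.zeros_like(x), betas, rng)
```

Image-conditioning methods like the one above intervene in this loop, altering `x_t` or the predictor's input at each step rather than changing the update rule.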
Denoising: from classical methods to deep CNNs
This paper aims to explore the evolution of image denoising in a pedagogical way.
CutDiffusion: A Simple, Fast, Cheap, and Strong Diffusion Extrapolation Method
Transforming large pre-trained low-resolution diffusion models to cater to higher-resolution demands, i.e., diffusion extrapolation, significantly improves diffusion adaptability.
A Comprehensive Survey for Hyperspectral Image Classification: The Evolution from Conventional to Transformers
Traditional approaches encounter the curse of dimensionality, struggle with feature selection and extraction, lack spatial information consideration, exhibit limited robustness to noise, face scalability issues, and may not adapt well to complex data distributions.