Style Transfer
650 papers with code • 2 benchmarks • 17 datasets
Style Transfer is a technique in computer vision and graphics that generates a new image by combining the content of one image with the visual style of another. The goal is to preserve the recognizable content of the original image while rendering it in the style of the reference image.
(Image credit: A Neural Algorithm of Artistic Style)
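The method credited above optimizes the pixels of a generated image directly: deep features of the result are matched to the content image, while Gram matrices of its features are matched to the style image. Below is a minimal PyTorch sketch of that optimization loop; the VGG layer indices, style weight, and step count are common illustrative choices rather than tuned values, and inputs are assumed to be ImageNet-normalized (1, 3, H, W) tensors.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # conv4_2 (common choice)
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 .. conv5_1 (common choice)

def extract(x):
    # Collect feature maps at the chosen content and style layers.
    content, style = {}, {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content[i] = x
        if i in STYLE_LAYERS:
            style[i] = x
    return content, style

def gram(feat):
    # Channel-by-channel feature correlations, normalized by size.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer(content_img, style_img, steps=300, style_weight=1e6):
    with torch.no_grad():
        c_target, _ = extract(content_img)
        _, s_feats = extract(style_img)
        s_target = {i: gram(f) for i, f in s_feats.items()}
    # Optimize the image itself, starting from the content image.
    img = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        c_feats, s_feats = extract(img)
        c_loss = sum(F.mse_loss(c_feats[i], c_target[i]) for i in CONTENT_LAYERS)
        s_loss = sum(F.mse_loss(gram(s_feats[i]), s_target[i]) for i in STYLE_LAYERS)
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return img.detach()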
Latest papers
Towards Highly Realistic Artistic Style Transfer via Stable Diffusion with Step-aware and Layer-aware Prompt
To address these problems, we propose LSAST, a novel pre-trained diffusion-based artistic style transfer method that generates highly realistic artistic stylized images while preserving the content structure of the input images, without introducing obvious artifacts or disharmonious style patterns.
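LSAST itself is not sketched here; as a rough illustration of the generic diffusion-based stylization setup such methods build on, the snippet below runs a plain img2img pass with Hugging Face diffusers, steering style through the text prompt. The model ID, prompt, and strength value are arbitrary assumptions, and this is not the paper's method.

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Generic diffusion img2img stylization; NOT the LSAST method itself.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

content = Image.open("content.jpg").convert("RGB").resize((512, 512))
# strength trades content preservation against stylization (assumed value).
styled = pipe(
    prompt="an oil painting in the style of Van Gogh",
    image=content,
    strength=0.5,
    guidance_scale=7.5,
).images[0]
styled.save("stylized.jpg")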
Red-Teaming Segment Anything Model
Foundation models have emerged as pivotal tools, tackling many complex tasks through pre-training on vast datasets and subsequent fine-tuning for specific applications.
StainFuser: Controlling Diffusion for Faster Neural Style Transfer in Multi-Gigapixel Histology Images
Stain normalization algorithms aim to transform the color and intensity characteristics of a source multi-gigapixel histology image to match those of a target image, mitigating inconsistencies in the appearance of stains used to highlight cellular components in the images.
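StainFuser tackles this with a conditioned diffusion model; for context, a classical baseline for the same task is Reinhard-style normalization, which simply matches per-channel color statistics of a source tile to a target tile. A minimal sketch follows (Reinhard's original formulation uses the lαβ color space; CIELAB is used here as a common stand-in).

import numpy as np
from skimage import color

def reinhard_normalize(source_rgb, target_rgb):
    # Match per-channel LAB mean/std of source to target (classical baseline).
    src = color.rgb2lab(source_rgb)
    tgt = color.rgb2lab(target_rgb)
    for ch in range(3):
        s_mu, s_sigma = src[..., ch].mean(), src[..., ch].std()
        t_mu, t_sigma = tgt[..., ch].mean(), tgt[..., ch].std()
        src[..., ch] = (src[..., ch] - s_mu) / (s_sigma + 1e-8) * t_sigma + t_mu
    out = np.clip(color.lab2rgb(src), 0, 1)
    return (out * 255).astype(np.uint8)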
Authorship Style Transfer with Policy Optimization
Authorship style transfer aims to rewrite a given text into a specified target style while preserving the meaning of the source text.
MoST: Motion Style Transformer between Diverse Action Contents
While existing motion style transfer methods are effective between two motions with identical content, their performance significantly diminishes when transferring style between motions with different contents.
Doubly Abductive Counterfactual Inference for Text-based Image Editing
Through the lens of this formulation, we find that the crux of text-based image editing (TBIE) is that existing techniques hardly achieve a good trade-off between editability and fidelity, mainly due to overfitting in single-image fine-tuning.
Misalignment-Robust Frequency Distribution Loss for Image Transformation
This paper addresses a common challenge in deep learning-based image transformation methods, such as image enhancement and super-resolution, which rely heavily on paired datasets with precise pixel-level alignment.
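The frequency-domain intuition is easy to sketch: translating an image changes only the phase of its Fourier transform, so a loss on amplitude spectra is largely insensitive to small misalignments. The toy PyTorch loss below compares amplitude spectra with an MSE; the paper's actual loss is a distribution-level distance, so treat this as a simplified stand-in.

import torch
import torch.nn.functional as F

def fft_amplitude_loss(pred, target):
    # Amplitude spectra discard phase, so small translations between
    # pred and target barely change this loss.
    pred_amp = torch.fft.rfft2(pred, norm="ortho").abs()
    target_amp = torch.fft.rfft2(target, norm="ortho").abs()
    return F.mse_loss(pred_amp, target_amp)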
Counterfactual Generation with Identifiability Guarantees
In this work, we tackle the domain-varying dependence between the content and the style variables inherent in the counterfactual generation task.
Visual Style Prompting with Swapping Self-Attention
Despite their remarkable capability, existing models still face challenges in achieving controlled generation with a consistent style, requiring costly fine-tuning or often inadequately transferring the visual elements due to content leakage.
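The swap named in the title can be illustrated schematically: in selected self-attention layers, keys and values computed from a style reference replace those of the image being generated, so content queries attend to style features. The sketch below shows one such attention step with hypothetical projection layers; in the paper this idea operates inside a diffusion U-Net's self-attention blocks.

import torch
import torch.nn.functional as F

def swapped_self_attention(content_h, style_h, to_q, to_k, to_v):
    # content_h, style_h: (B, N, C) hidden states from the same layer.
    q = to_q(content_h)   # queries: generated/content image
    k = to_k(style_h)     # keys: style reference (swapped in)
    v = to_v(style_h)     # values: style reference (swapped in)
    return F.scaled_dot_product_attention(q, k, v)

# Illustrative usage with hypothetical projections and random features:
C = 64
to_q, to_k, to_v = (torch.nn.Linear(C, C) for _ in range(3))
content_h = torch.randn(1, 256, C)
style_h = torch.randn(1, 256, C)
out = swapped_self_attention(content_h, style_h, to_q, to_k, to_v)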
UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models
The rapid advancement of diffusion models (DMs) has not only transformed various real-world industries but has also introduced negative societal concerns, including the generation of harmful content, copyright disputes, and the rise of stereotypes and biases.