Style Transfer

650 papers with code • 2 benchmarks • 17 datasets

Style Transfer is a computer vision and graphics technique for generating a new image that combines the content of one image with the visual style of another: the output preserves the content of the source image (its objects and layout) while adopting the colors and textures of the style reference.

(Image credit: A Neural Algorithm of Artistic Style)
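
The credited paper casts style transfer as an optimization problem: iteratively update an output image so that its deep VGG features match the content image while its Gram-matrix statistics match the style image. A minimal PyTorch sketch of that loop (layer indices and weights are conventional choices, not tuned values):

```python
# Minimal optimization-based style transfer in the spirit of the credited
# paper (Gatys et al., "A Neural Algorithm of Artistic Style").
# Assumes `content` and `style` are (1, 3, H, W) tensors in [0, 1].
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 .. conv5_1
CONTENT_LAYER = 21                  # conv4_2
MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

def features(x):
    """Collect style/content activations from the frozen VGG."""
    x = (x - MEAN) / STD            # ImageNet normalization expected by VGG
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(f):
    """Gram matrix: channel-wise feature correlations that encode style."""
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def transfer(content, style, steps=300, style_weight=1e6):
    content, style = content.to(device), style.to(device)
    style_grams = [gram(f).detach() for f in features(style)[0]]
    content_feat = features(content)[1].detach()
    target = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        s_feats, c_feat = features(target)
        loss = F.mse_loss(c_feat, content_feat) + style_weight * sum(
            F.mse_loss(gram(f), g) for f, g in zip(s_feats, style_grams))
        loss.backward()
        opt.step()
    return target.detach().clamp(0, 1)
```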

Towards Highly Realistic Artistic Style Transfer via Stable Diffusion with Step-aware and Layer-aware Prompt

jamie-cheung/lsast • 17 Apr 2024

We propose LSAST, a novel pre-trained diffusion-based artistic style transfer method that generates highly realistic artistic stylized images while faithfully preserving the content structure of the input, without introducing obvious artifacts or disharmonious style patterns.

Red-Teaming Segment Anything Model

jankowskichristopher/red-teaming-segment-anything-model • 2 Apr 2024

Foundation models have emerged as pivotal tools, tackling many complex tasks through pre-training on vast datasets and subsequent fine-tuning for specific applications.

StainFuser: Controlling Diffusion for Faster Neural Style Transfer in Multi-Gigapixel Histology Images

r-j96/stainfuser • 14 Mar 2024

Stain normalization algorithms aim to transform the color and intensity characteristics of a source multi-gigapixel histology image to match those of a target image, mitigating inconsistencies in the appearance of stains used to highlight cellular components in the images.
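
For orientation, the classic baseline that stain-normalization papers compare against is Reinhard-style color transfer, which matches per-channel statistics in LAB space; the sketch below shows that baseline, not StainFuser's diffusion approach.

```python
# Classic stain/color normalization baseline (Reinhard-style): match each
# LAB channel's mean and std of the source to the target. This is NOT
# StainFuser's diffusion method -- just the standard baseline such methods
# are measured against. Assumes opencv-python (`cv2`) is installed.
import cv2
import numpy as np

def reinhard_normalize(source_bgr: np.ndarray, target_bgr: np.ndarray) -> np.ndarray:
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    src_mu, src_sd = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mu, tgt_sd = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    # Shift and rescale each channel, then convert back to BGR.
    out = (src - src_mu) / (src_sd + 1e-6) * tgt_sd + tgt_mu
    return cv2.cvtColor(np.clip(out, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```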

Authorship Style Transfer with Policy Optimization

isi-nlp/astrapop • 12 Mar 2024

Authorship style transfer aims to rewrite a given text into a specified target style while preserving the meaning of the source.

MoST: Motion Style Transformer between Diverse Action Contents

boeun-kim/most • 10 Mar 2024

While existing motion style transfer methods are effective between two motions with identical content, their performance significantly diminishes when transferring style between motions with different contents.

Doubly Abductive Counterfactual Inference for Text-based Image Editing

xuesong39/dac • 5 Mar 2024

Through the lens of this counterfactual formulation, we find that the crux of text-based image editing (TBIE) is that existing techniques hardly achieve a good trade-off between editability and fidelity, mainly due to overfitting in single-image fine-tuning.

Misalignment-Robust Frequency Distribution Loss for Image Transformation

eezkni/fdl • 28 Feb 2024

This paper addresses a common challenge in deep learning-based image transformation methods such as image enhancement and super-resolution: their heavy reliance on paired datasets with precise pixel-level alignment.
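
One way to see why a frequency-domain loss helps: the Fourier magnitude spectrum is translation-invariant, so comparing spectra tolerates small spatial misalignments that would dominate a pixel-wise loss. A minimal PyTorch illustration of that principle (the paper's exact FDL formulation differs):

```python
# Illustration of the core idea: the Fourier magnitude spectrum is invariant
# to spatial translation, so a loss on magnitudes tolerates small
# misalignments that would dominate a pixel-wise loss. The paper's actual
# Frequency Distribution Loss is more elaborate than this sketch.
import torch
import torch.nn.functional as F

def freq_magnitude_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 distance between amplitude spectra of (B, C, H, W) images."""
    pred_mag = torch.fft.rfft2(pred, norm="ortho").abs()
    target_mag = torch.fft.rfft2(target, norm="ortho").abs()
    return F.l1_loss(pred_mag, target_mag)
```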

Counterfactual Generation with Identifiability Guarantees

hanqi-qi/matte • NeurIPS 2023

In this work, we tackle the domain-varying dependence between the content and the style variables inherent in the counterfactual generation task.

Visual Style Prompting with Swapping Self-Attention

naver-ai/Visual-Style-Prompting • 20 Feb 2024

Despite their remarkable capability, existing models still struggle to achieve controlled generation with a consistent style: they either require costly fine-tuning or transfer visual elements inadequately due to content leakage.
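
Read literally, the title's mechanism swaps what self-attention reads from: queries stay with the generated features while keys and values are drawn from a style reference. A schematic sketch under that reading (hypothetical helper, not the repository's implementation):

```python
# Schematic of "swapped" self-attention as the title suggests: queries come
# from the generated (content) features while keys and values come from a
# style reference's features, so content tokens are re-rendered from style
# statistics. Hypothetical shapes and weights; not the repository's code.
import torch
import torch.nn.functional as F

def swapped_self_attention(content_feats, style_feats, w_q, w_k, w_v):
    """content_feats, style_feats: (B, N, D); w_*: (D, D) projections."""
    q = content_feats @ w_q                      # queries from content
    k, v = style_feats @ w_k, style_feats @ w_v  # keys/values from style
    attn = F.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v
```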

UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models

optml-group/unlearncanvas • 19 Feb 2024

The rapid advancement of diffusion models (DMs) has not only transformed various real-world industries but has also introduced negative societal concerns, including the generation of harmful content, copyright disputes, and the rise of stereotypes and biases.
