Style Transfer

651 papers with code • 2 benchmarks • 17 datasets

Style Transfer is a technique in computer vision and graphics that generates a new image by combining the content of one image with the style of another. The goal is to produce an image that preserves the content of the original while adopting the visual style of the reference image.

(Image credit: A Neural Algorithm of Artistic Style)

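As a point of reference, below is a minimal sketch of the optimization-based formulation popularized by A Neural Algorithm of Artistic Style: a content loss on deep VGG features plus a Gram-matrix style loss, minimized directly over the pixels of the output image. The helper names (extract, gram, transfer), layer indices, loss weight, and step count are illustrative assumptions, and PyTorch with a recent torchvision is assumed to be installed.

```python
# Minimal sketch of optimization-based neural style transfer (Gatys et al.):
# match deep VGG features of a content image and Gram matrices of a style image.
# Layer indices, loss weights, and step counts here are illustrative choices.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights  # requires torchvision >= 0.13

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYER = 21                  # conv4_2 in VGG-19's feature stack
STYLE_LAYERS = [0, 5, 10, 19, 28]   # conv1_1 ... conv5_1

def extract(x, layers):
    """Run x through VGG and collect activations at the requested layer indices."""
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats[i] = x
    return feats

def gram(feat):
    """Gram matrix of a feature map: channel correlations that encode 'style'."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def transfer(content_img, style_img, steps=300, style_weight=1e6):
    """Optimize a copy of the content image toward the content and style targets.

    Both inputs are (1, 3, H, W) tensors, ImageNet-normalized, on `device`.
    """
    target_c = extract(content_img, [CONTENT_LAYER])[CONTENT_LAYER].detach()
    target_s = {i: gram(f).detach() for i, f in extract(style_img, STYLE_LAYERS).items()}
    image = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([image], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        feats = extract(image, [CONTENT_LAYER] + STYLE_LAYERS)
        content_loss = F.mse_loss(feats[CONTENT_LAYER], target_c)
        style_loss = sum(F.mse_loss(gram(feats[i]), target_s[i]) for i in STYLE_LAYERS)
        (content_loss + style_weight * style_loss).backward()
        opt.step()
    return image.detach()
```

Most of the recent papers listed below replace this per-image optimization with feed-forward, diffusion-based, or zero-shot variants, but the content-versus-style decomposition remains the common thread.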

IGUANe: a 3D generalizable CycleGAN for multicenter harmonization of brain MR images

rocavincent/iguane_harmonization 5 Feb 2024

In MRI studies, aggregating imaging data from multiple acquisition sites increases sample size but may introduce site-related variability that hinders consistency in subsequent analyses.

ConRF: Zero-shot Stylization of 3D Scenes with Conditioned Radiation Fields

xingy038/conrf 2 Feb 2024

Most existing works on arbitrary 3D NeRF style transfer require retraining for each individual style condition.

Procedural terrain generation with style transfer

fmerizzi/procedural-terrain-generation-with-style-transfer 28 Jan 2024

In this study, we introduce a new technique for the generation of terrain maps, exploiting a combination of procedural generation and Neural Style Transfer (a generic sketch of the procedural component is shown below).

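As a loose illustration of the procedural half of such a pipeline, the sketch below builds a fractal value-noise heightmap by summing bilinearly upsampled random grids across octaves; a style-transfer pass such as the Gatys-style loss sketched earlier could then restyle the result toward a reference terrain. The helper name fractal_heightmap, resolution, octave count, and persistence are arbitrary assumptions, not the repository's implementation.

```python
# Generic fractal value-noise heightmap: sum bilinearly upsampled random grids
# over several octaves. Illustrative of procedural terrain generation in general,
# not this paper's pipeline; all parameters below are arbitrary.
import numpy as np

def fractal_heightmap(size=256, octaves=6, persistence=0.5, seed=0):
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    amplitude = 1.0
    for octave in range(octaves):
        cells = 2 ** (octave + 2)            # coarse grid gets finer each octave
        coarse = rng.random((cells, cells))
        xs = np.linspace(0, cells - 1, size)
        lo = np.floor(xs).astype(int)
        hi = np.minimum(lo + 1, cells - 1)
        t = xs - lo
        ty, tx = t[:, None], t[None, :]
        # bilinear interpolation of the coarse grid up to (size, size)
        c00 = coarse[np.ix_(lo, lo)]
        c01 = coarse[np.ix_(lo, hi)]
        c10 = coarse[np.ix_(hi, lo)]
        c11 = coarse[np.ix_(hi, hi)]
        fine = (c00 * (1 - ty) * (1 - tx) + c01 * (1 - ty) * tx
                + c10 * ty * (1 - tx) + c11 * ty * tx)
        height += amplitude * fine
        amplitude *= persistence
    height -= height.min()
    return height / height.max()

terrain = fractal_heightmap()                # (256, 256) float array in [0, 1]
```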

CreativeSynth: Creative Blending and Synthesis of Visual Arts based on Multimodal Diffusion

haha-lisa/CreativeSynth 25 Jan 2024

Large-scale text-to-image generative models have made impressive strides, showcasing their ability to synthesize a vast array of high-quality images.

CAT-LLM: Prompting Large Language Models with Text Style Definition for Chinese Article-style Transfer

taozhen1110/cat-llm 11 Jan 2024

Text style transfer is increasingly prominent in online entertainment and social media.

Zero Shot Audio to Audio Emotion Transfer With Speaker Disentanglement

iiscleap/zest 9 Jan 2024

The problem of audio-to-audio (A2A) style transfer involves replacing the style features of the source audio with those of the target audio while preserving the content-related attributes of the source audio (see the generic encoder/decoder sketch below).

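The sketch below illustrates that content/style split in the most generic terms: two small convolutional encoders extract a content representation from the source clip and a pooled style vector from the target clip, and a decoder combines them. The architecture, module names (Encoder, Decoder, content_enc, style_enc), and sizes are hypothetical toy choices for illustration only, not the ZEST model.

```python
# Hypothetical toy content/style split for A2A transfer: encode content from the
# source clip, pool a style vector from the target clip, decode their combination.
# Architecture, sizes, and names are invented for illustration; this is not ZEST.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, dim, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(dim, dim, 5, stride=2, padding=2), nn.ReLU(),
        )

    def forward(self, wav):                  # wav: (batch, 1, samples)
        return self.net(wav)                 # (batch, dim, samples / 4)

class Decoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(2 * dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(dim, 1, 4, stride=2, padding=1),
        )

    def forward(self, content, style):
        # time-pool the style features and broadcast them over the content frames
        style = style.mean(dim=-1, keepdim=True).expand(-1, -1, content.shape[-1])
        return self.net(torch.cat([content, style], dim=1))

content_enc, style_enc, dec = Encoder(), Encoder(), Decoder()
src = torch.randn(1, 1, 16000)               # source utterance supplies the content
tgt = torch.randn(1, 1, 16000)               # target utterance supplies the style
converted = dec(content_enc(src), style_enc(tgt))   # (1, 1, 16000)
```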

Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation

happylittlecat2333/Auffusion 2 Jan 2024

Drawing inspiration from state-of-the-art Text-to-Image (T2I) diffusion models, we introduce Auffusion, a Text-to-Audio (TTA) system that adapts T2I model frameworks to the TTA task by effectively leveraging their inherent generative strengths and precise cross-modal alignment.

Balancing the Style-Content Trade-Off in Sentiment Transfer Using Polarity-Aware Denoising

souro/polarity-denoising-sentiment-transfer 22 Dec 2023

Text sentiment transfer aims to flip the sentiment polarity of a sentence (positive to negative or vice versa) while preserving its sentiment-independent content.

HyperEditor: Achieving Both Authenticity and Cross-Domain Capability in Image Editing via Hypernetworks

Rainbow0204/HyperEditor 21 Dec 2023

Editing real images authentically while also achieving cross-domain editing remains a challenge.

FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning

yeungchenwa/fontdiffuser 19 Dec 2023

Automatic font generation is an imitation task, which aims to create a font library that mimics the style of reference images while preserving the content from source images.
