Style Transfer

645 papers with code • 2 benchmarks • 17 datasets

Style Transfer is a technique in computer vision and graphics that generates a new image by combining the content of one image with the style of another. The goal is to produce an image that preserves the content of the first image while adopting the visual style of the second.

(Image credit: A Neural Algorithm of Artistic Style)
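
The credited paper casts style transfer as an optimization over pixels: match the content image's deep VGG features while matching the style image's Gram matrices (channel-wise feature correlations) across several layers. Below is a minimal PyTorch sketch of that loop, assuming torchvision's pretrained VGG19 and ImageNet-normalized `(1, 3, H, W)` input tensors; the layer indices, step count, and style weight are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Pretrained VGG19 feature extractor (frozen), as in Gatys et al.
features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in features.parameters():
    p.requires_grad_(False)

CONTENT_LAYER = 21                  # conv4_2 in VGG19's feature stack
STYLE_LAYERS = (0, 5, 10, 19, 28)   # conv1_1 .. conv5_1

def extract(x):
    """Collect activations at the content and style layers."""
    content, styles = None, []
    for i, layer in enumerate(features):
        x = layer(x)
        if i == CONTENT_LAYER:
            content = x
        if i in STYLE_LAYERS:
            styles.append(x)
    return content, styles

def gram(feat):
    """Gram matrix: channel-wise feature correlations define 'style'."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer(content_img, style_img, steps=300, style_weight=1e6):
    with torch.no_grad():
        c_target, _ = extract(content_img)
        _, s_feats = extract(style_img)
        s_targets = [gram(f) for f in s_feats]
    x = content_img.clone().requires_grad_(True)  # optimize the pixels directly
    opt = torch.optim.Adam([x], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        c, s_feats = extract(x)
        loss = F.mse_loss(c, c_target)
        loss += style_weight * sum(
            F.mse_loss(gram(f), g) for f, g in zip(s_feats, s_targets))
        loss.backward()
        opt.step()
    return x.detach()
```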

Latest papers with no code

DiffStyler: Diffusion-based Localized Image Style Transfer

no code yet • 27 Mar 2024

Image style transfer aims to imbue digital imagery with the distinctive attributes of style targets, such as colors, brushstrokes, and shapes, while preserving the semantic integrity of the content.
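
DiffStyler's implementation is not available here, but the "localized" aspect can be illustrated with a simple mask-guided composite: a globally stylized image is blended back into the content image only where a region mask is active. A minimal PyTorch sketch; `localized_composite` and the feathering choice are hypothetical helpers, not the paper's method.

```python
import torch
import torch.nn.functional as F

def localized_composite(content, stylized, mask):
    """Blend a stylized image into the content image only inside the
    masked region (hypothetical helper; the mask could come from any
    segmentation model). content/stylized: (B, C, H, W); mask: (B, 1, H, W)."""
    soft = torch.clamp(mask, 0.0, 1.0)
    # Feather the mask boundary slightly so the style region has no hard seam.
    soft = F.avg_pool2d(soft, kernel_size=9, stride=1, padding=4)
    return soft * stylized + (1.0 - soft) * content
```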

AnyV2V: A Plug-and-Play Framework For Any Video-to-Video Editing Tasks

no code yet • 21 Mar 2024

In the second stage, AnyV2V can plug in any existing image-to-video model to perform DDIM inversion and intermediate feature injection, maintaining appearance and motion consistency with the source video.
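
The paper's code is not released, but DDIM inversion itself is a standard operation: run the deterministic DDIM update in reverse to recover the noise latent that would regenerate a given sample. A minimal sketch, assuming a noise-prediction network `eps_model(x, t)` and a scheduler's cumulative-alpha table (both names are placeholders, not AnyV2V's API).

```python
import torch

@torch.no_grad()
def ddim_invert(x0, eps_model, alphas_cumprod, num_steps=50):
    """Deterministically map a clean latent x0 back toward noise by
    running the DDIM update in reverse (illustrative sketch)."""
    T = len(alphas_cumprod)
    timesteps = torch.linspace(0, T - 1, num_steps).long()
    x = x0
    for t_prev, t in zip(timesteps[:-1], timesteps[1:]):
        a_prev, a = alphas_cumprod[t_prev], alphas_cumprod[t]
        eps = eps_model(x, t_prev)
        # Predicted clean sample at the current (less noisy) step.
        x0_pred = (x - (1 - a_prev).sqrt() * eps) / a_prev.sqrt()
        # Re-noise deterministically to the next (more noisy) step.
        x = a.sqrt() * x0_pred + (1 - a).sqrt() * eps
    return x  # approximately the initial noise that generates x0
```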

Implicit Style-Content Separation using B-LoRA

no code yet • 21 Mar 2024

In this paper, we introduce B-LoRA, a method that leverages LoRA (Low-Rank Adaptation) to implicitly separate the style and content components of a single image, facilitating various image stylization tasks.
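
B-LoRA's specific adapter placement is the paper's contribution and is not reproduced here, but the underlying LoRA mechanic, a frozen weight plus a trainable low-rank update, is easy to sketch in PyTorch:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x. Generic LoRA sketch, not the
    B-LoRA training recipe itself."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)     # original weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Example: wrap a projection so only the low-rank factors A and B are trained.
proj = LoRALinear(nn.Linear(768, 768), rank=4)
```

In practice such adapters are attached to attention projections of a pretrained diffusion model; which blocks receive them is what lets B-LoRA separate style from content.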

Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking

no code yet • 21 Mar 2024

In Virtual Reality (VR), adversarial attack remains a significant security threat.

Enhancing Fingerprint Image Synthesis with GANs, Diffusion Models, and Style Transfer Techniques

no code yet • 20 Mar 2024

The comparable WGAN-GP model achieved a slightly higher FID but performed better in the uniqueness assessment, owing to a slightly lower false acceptance rate (FAR) when matched against the training data, which indicates greater creativity.
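
For context, FID compares Gaussian fits to Inception features of the real and generated sets, so lower is better and a "slightly higher FID" means slightly worse fidelity to the real-data distribution. A minimal sketch of the formula, assuming the mean/covariance statistics have already been computed from Inception activations:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet Inception Distance between two Gaussians fitted to
    Inception features: ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real   # numerical noise can add tiny imaginary parts
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```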

LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model

no code yet • 18 Mar 2024

Benefiting from the popularity and scalable usability of the Segment Anything Model (SAM), we first extract different regions according to semantic information and then track them through the video stream to maintain temporal consistency.
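
The attack pipeline itself is not public, but the region-extraction step maps directly onto the official `segment_anything` package. A minimal sketch, assuming the ViT-H checkpoint has been downloaded and `frame_0000.png` is a placeholder frame; tracking the masks across frames, as the paper describes, would require a separate video tracker and is omitted.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load SAM and its automatic mask generator (checkpoint path is an assumption).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

frame = cv2.cvtColor(cv2.imread("frame_0000.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(frame)  # dicts with 'segmentation', 'area', ...

# Pick candidate regions to stylize, e.g. the largest segments.
regions = sorted(masks, key=lambda m: m["area"], reverse=True)[:5]
```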

LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model

no code yet • 18 Mar 2024

Specifically, an inter-layer attention module is designed to encourage information exchange and learning between layers, while a text-guided intra-layer attention module incorporates layer-specific prompts to direct the specific-content generation for each layer.
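
LayerDiff's modules are not released, but the inter-layer idea, letting each layer's feature tokens attend to the tokens of every other layer, can be sketched with a standard multi-head attention block (illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class InterLayerAttention(nn.Module):
    """Let per-layer feature tokens exchange information across layers
    (sketch of the idea, not LayerDiff's exact module)."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, layer_feats):
        # layer_feats: (batch, n_layers, tokens, dim)
        b, n, t, d = layer_feats.shape
        x = layer_feats.reshape(b, n * t, d)   # pool all layers' tokens together
        out, _ = self.attn(x, x, x)            # every token sees every layer
        return out.reshape(b, n, t, d)
```

The text-guided intra-layer module would additionally cross-attend to layer-specific prompt embeddings; that part is omitted here.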

Efficient Domain Adaptation for Endoscopic Visual Odometry

no code yet • 16 Mar 2024

In this work, an efficient neural style transfer framework for endoscopic visual odometry is proposed, which compresses the time from preoperative planning to the testing phase to less than five minutes.

Could We Generate Cytology Images from Histopathology Images? An Empirical Study

no code yet • 16 Mar 2024

Automation in medical imaging is quite challenging due to the unavailability of annotated datasets and the scarcity of domain experts.

A survey of synthetic data augmentation methods in computer vision

no code yet • 15 Mar 2024

Since this is the first paper to explore synthetic data augmentation methods in great detail, we hope to equip readers with the necessary background information and in-depth knowledge of existing methods and their attendant issues.