Search Results for author: Jeongsol Kim

Found 11 papers, 4 papers with code

Generalized Consistency Trajectory Models for Image Manipulation

1 code implementation • 19 Mar 2024 • Beomsu Kim, JaeMin Kim, Jeongsol Kim, Jong Chul Ye

Diffusion-based generative models excel in unconditional generation, as well as on applied tasks such as image editing and restoration.

Denoising • Image Manipulation +2

DreamSampler: Unifying Diffusion Sampling and Score Distillation for Image Manipulation

no code implementations • 18 Mar 2024 • Jeongsol Kim, Geon Yeong Park, Jong Chul Ye

Reverse sampling and score distillation have emerged as the main workhorses in recent years for image manipulation using latent diffusion models (LDMs).

Feature Engineering • Image Manipulation
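The score-distillation half of this recipe can be sketched in a few lines. This is a toy NumPy illustration of a standard SDS-style update, not the paper's actual sampler: the denoiser `eps_pred`, the linear noise schedule, and the step size are stand-ins.

```python
import numpy as np

def sds_step(x, eps_pred, t, lr=0.05, weight=1.0, rng=None):
    """One score-distillation step on an image/latent x.

    Noise x to time t, ask the frozen denoiser for its noise estimate,
    and nudge x along (predicted noise - injected noise). The denoiser
    Jacobian is dropped, as is standard in score distillation."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal(x.shape)       # noise injected into x
    alpha_bar = 1.0 - t                      # toy linear schedule, t in (0, 1)
    x_t = np.sqrt(alpha_bar) * x + np.sqrt(1.0 - alpha_bar) * eps
    grad = weight * (eps_pred(x_t, t) - eps)
    return x - lr * grad
```

With a real LDM, `eps_pred` would be the text-conditioned U-Net and the update would run on the latent rather than the pixel image.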

Regularization by Texts for Latent Diffusion Inverse Solvers

no code implementations • 27 Nov 2023 • Jeongsol Kim, Geon Yeong Park, Hyungjin Chung, Jong Chul Ye

The recent advent of diffusion models has led to significant progress in solving inverse problems, leveraging these models as effective generative priors.

Negation

Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models

1 code implementation • NeurIPS 2023 • Geon Yeong Park, Jeongsol Kim, Beomsu Kim, Sang Wan Lee, Jong Chul Ye

Despite the remarkable performance of text-to-image diffusion models in image generation tasks, recent studies have shown that generated images sometimes fail to capture the intended semantic content of the text prompt, a phenomenon often called semantic misalignment.

Denoising • Image Inpainting

Parallel Diffusion Models of Operator and Image for Blind Inverse Problems

no code implementations • CVPR 2023 • Hyungjin Chung, Jeongsol Kim, Sehui Kim, Jong Chul Ye

We show the efficacy of our method on two representative tasks -- blind deblurring and imaging through turbulence -- achieving state-of-the-art performance while remaining flexible enough to apply to general blind inverse problems when the functional forms are known.

Deblurring

Diffusion Posterior Sampling for General Noisy Inverse Problems

2 code implementations • 29 Sep 2022 • Hyungjin Chung, Jeongsol Kim, Michael T. McCann, Marc L. Klasky, Jong Chul Ye

Diffusion models have recently been studied as powerful generative inverse-problem solvers, owing to their high-quality reconstructions and the ease of combining them with existing iterative solvers.

Deblurring • Retrieval
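The core posterior-sampling idea -- a standard denoising step followed by a data-consistency correction on the measurement residual -- can be sketched as follows. This is only an illustration: `A`, `x0_hat_fn`, and `zeta` are placeholders, and the finite-difference gradient is used here for self-containment where the paper would backpropagate through the denoiser.

```python
import numpy as np

def dps_correction(x_t, x0_hat_fn, A, y, zeta=0.1, h=1e-4):
    """Likelihood-guidance step of diffusion posterior sampling:
    move x_t down the gradient of ||y - A(x0_hat(x_t))||^2.

    The gradient is taken by finite differences to keep this toy
    self-contained; in practice it comes from autodiff."""
    def residual(x):
        r = y - A(x0_hat_fn(x))
        return float(np.dot(r, r))
    grad = np.zeros_like(x_t)
    base = residual(x_t)
    for i in range(x_t.size):
        xp = x_t.copy()
        xp[i] += h
        grad[i] = (residual(xp) - base) / h
    return x_t - zeta * grad
```

Interleaving this correction with ordinary reverse-diffusion steps yields samples that are both plausible under the prior and consistent with the measurement y.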

Federated Split Task-Agnostic Vision Transformer for COVID-19 CXR Diagnosis

no code implementations • NeurIPS 2021 • Sangjoon Park, Gwanghyun Kim, Jeongsol Kim, Boah Kim, Jong Chul Ye

For example, this enables neural network training for COVID-19 diagnosis on chest X-ray (CXR) images without collecting patient CXR data across multiple hospitals.

COVID-19 Diagnosis • Federated Learning

Federated Split Vision Transformer for COVID-19 CXR Diagnosis using Task-Agnostic Training

no code implementations • 2 Nov 2021 • Sangjoon Park, Gwanghyun Kim, Jeongsol Kim, Boah Kim, Jong Chul Ye

For example, this enables neural network training for COVID-19 diagnosis on chest X-ray (CXR) images without collecting patient CXR data across multiple hospitals.

COVID-19 Diagnosis • Federated Learning

Privacy-preserving Task-Agnostic Vision Transformer for Image Processing

1 code implementation • 29 Sep 2021 • Boah Kim, Jeongsol Kim, Jong Chul Ye

Inspired by the recent success of Vision Transformer (ViT), here we present a new distributed learning framework for image processing applications, allowing clients to learn multiple tasks with their private data.

Multi-Task Learning • Privacy Preserving

Optimal Transport driven CycleGAN for Unsupervised Learning in Inverse Problems

no code implementations • 25 Sep 2019 • Byeongsu Sim, Gyutaek Oh, Jeongsol Kim, Chanyong Jung, Jong Chul Ye

To improve on the classical generative adversarial network (GAN), the Wasserstein GAN (W-GAN) was developed as a Kantorovich dual formulation of the optimal transport (OT) problem using the Wasserstein-1 distance.

Computed Tomography (CT) • Generative Adversarial Network +1
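The Kantorovich dual referenced here is the standard Wasserstein-1 duality. Writing mu and nu for the two data distributions and restricting the critic f to 1-Lipschitz functions, it reads:

```latex
W_1(\mu,\nu) \;=\; \sup_{\|f\|_{L}\le 1}\;
  \mathbb{E}_{x\sim\mu}\big[f(x)\big] \;-\; \mathbb{E}_{x\sim\nu}\big[f(x)\big]
```

In W-GAN the supremum over f is approximated by the discriminator network, with the Lipschitz constraint enforced by weight clipping or a gradient penalty.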
