Search Results for author: Dohyun Kwon

Found 8 papers, 1 paper with code

On the Complexity of First-Order Methods in Stochastic Bilevel Optimization

no code implementations · 11 Feb 2024 · Jeongyeol Kwon, Dohyun Kwon, Hanbaek Lyu

We study the complexity of finding stationary points with such a $y^*$-aware oracle: we propose a simple first-order method that converges to an $\epsilon$-stationary point using $O(\epsilon^{-6})$ and $O(\epsilon^{-4})$ accesses to first-order $y^*$-aware oracles.

Bilevel Optimization

Generalized Contrastive Divergence: Joint Training of Energy-Based Model and Diffusion Model through Inverse Reinforcement Learning

no code implementations · 6 Dec 2023 · Sangwoong Yoon, Dohyun Kwon, Himchan Hwang, Yung-Kyun Noh, Frank C. Park

We present Generalized Contrastive Divergence (GCD), a novel objective function for training an energy-based model (EBM) and a sampler simultaneously.

On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation

no code implementations · 4 Sep 2023 · Jeongyeol Kwon, Dohyun Kwon, Stephen Wright, Robert Nowak

When the perturbed lower-level problem uniformly satisfies the small-error proximal error-bound (EB) condition, we propose a first-order algorithm that converges to an $\epsilon$-stationary point of the penalty function using, in total, $O(\epsilon^{-3})$ and $O(\epsilon^{-7})$ accesses to first-order (stochastic) gradient oracles in the deterministic and noisy settings, respectively.

Bilevel Optimization

Complexity of Block Coordinate Descent with Proximal Regularization and Applications to Wasserstein CP-dictionary Learning

no code implementations · 4 Jun 2023 · Dohyun Kwon, Hanbaek Lyu

We consider block coordinate descent methods of Gauss-Seidel type with proximal regularization (BCD-PR), a classical approach to minimizing general nonconvex objectives under constraints, with a wide range of practical applications.
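As a toy illustration of the idea (the two-block objective, the proximal weight `rho`, and the starting point below are our own assumptions; the paper treats a far more general constrained setting), BCD-PR cyclically minimizes each block with a proximal term anchored at the previous iterate:

```python
# BCD-PR sketch on the nonconvex two-block problem
#   min_{x, y} f(x, y) = (x * y - 1)**2
# (illustrative example; not the paper's general setting)

def bcd_pr_step(x, y, rho=1.0):
    # Block 1: x <- argmin_x f(x, y) + (rho/2) * (x - x_old)**2.
    # The subproblem is a strongly convex quadratic in x, so the
    # minimizer is available in closed form.
    x = (2 * y + rho * x) / (2 * y ** 2 + rho)
    # Block 2 (Gauss-Seidel order): the y-update uses the fresh x.
    y = (2 * x + rho * y) / (2 * x ** 2 + rho)
    return x, y

x, y = 3.0, 0.2
for _ in range(200):
    x, y = bcd_pr_step(x, y)
print(x * y)  # approaches 1.0: a stationary point of (x*y - 1)**2
```

The proximal term keeps each subproblem well-posed even though the joint objective is nonconvex, which is the mechanism behind the method's complexity guarantees.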

Dictionary Learning

A Fully First-Order Method for Stochastic Bilevel Optimization

no code implementations · 26 Jan 2023 · Jeongyeol Kwon, Dohyun Kwon, Stephen Wright, Robert Nowak

Specifically, we show that F2SA converges to an $\epsilon$-stationary solution of the bilevel problem after $O(\epsilon^{-7/2})$, $O(\epsilon^{-5/2})$, and $O(\epsilon^{-3/2})$ iterations (each iteration using $O(1)$ samples) when stochastic noise is present in both level objectives, only in the upper-level objective, and not present (deterministic setting), respectively.
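The following single-loop sketch conveys the "fully first-order" idea: only gradients of the two objectives are used, never second derivatives. The toy quadratics, the fixed penalty weight `lam`, and all step sizes are our illustrative assumptions, not the paper's exact F2SA algorithm or schedule:

```python
# Bilevel problem: min_x f(x, y*(x)) with y*(x) = argmin_y g(x, y).
# Here f(x, y) = ((y - 1)**2 + x**2) / 2 and g(x, y) = (y - x)**2 / 2,
# so y*(x) = x and the overall minimizer is x* = 1/2.

def grad_f(x, y):          # gradient of the upper-level objective
    return x, y - 1.0

def grad_g(x, y):          # gradient of the lower-level objective
    return x - y, y - x

lam = 100.0                # penalty weight: larger lam, smaller bias
alpha, alpha_z, beta = 0.01, 0.5, 0.001   # beta * lam kept small for stability
x = y = z = 0.0
for _ in range(5000):
    # y tracks argmin_y [f(x, y) + lam * g(x, y)]
    y -= alpha * (grad_f(x, y)[1] + lam * grad_g(x, y)[1])
    # z tracks argmin_z g(x, z)
    z -= alpha_z * grad_g(x, z)[1]
    # fully first-order surrogate for the hypergradient: no Hessians
    x -= beta * (grad_f(x, y)[0] + lam * (grad_g(x, y)[0] - grad_g(x, z)[0]))

print(x)  # close to 0.5, up to an O(1/lam) penalty bias
```

Running two lower-level iterates (`y` for the penalized problem, `z` for the plain lower-level problem) is what lets the upper-level update approximate the hypergradient without the Hessian-vector products used by implicit-differentiation methods.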

Bilevel Optimization

Score-based Generative Modeling Secretly Minimizes the Wasserstein Distance

1 code implementation · 13 Dec 2022 · Dohyun Kwon, Ying Fan, Kangwook Lee

Specifically, we prove that the Wasserstein distance is upper bounded by the square root of the objective function up to multiplicative constants and a fixed constant offset.
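Read as a formula (the symbols below are our paraphrase of the snippet, not the paper's exact notation), the claimed bound has the shape

```latex
W_2\bigl(p_{\mathrm{data}},\, p_\theta\bigr) \;\le\; C_1 \sqrt{\mathcal{J}(\theta)} \;+\; C_2 ,
```

where $p_\theta$ is the distribution of generated samples, $\mathcal{J}(\theta)$ is the score-based training objective, $C_1$ is a multiplicative constant, and $C_2$ is a fixed constant offset.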

Audio Synthesis · Image Generation

Training Wasserstein GANs without gradient penalties

no code implementations · 27 Oct 2021 · Dohyun Kwon, Yeoneung Kim, Guido Montúfar, Insoon Yang

We propose a stable method to train Wasserstein generative adversarial networks.

Multi-Agent Deep Reinforcement Learning for Cooperative Connected Vehicles

no code implementations · 8 Jan 2020 · Dohyun Kwon, Joongheon Kim

A millimeter-wave (mmWave) base station can offer abundant high-capacity channel resources to connected vehicles, so that their quality-of-service (QoS) in terms of downlink throughput can be greatly improved.

Reinforcement Learning (RL)
