Search Results for author: TaeYoung Kim

Found 13 papers, 0 papers with code

Neural Operators Learn the Local Physics of Magnetohydrodynamics

no code implementations24 Apr 2024 TaeYoung Kim, Youngsoo Ha, Myungjoo Kang

Magnetohydrodynamics (MHD) plays a pivotal role in describing the dynamics of plasmas and conductive fluids; it is essential for understanding phenomena such as the structure and evolution of stars and galaxies, and, through the ideal MHD equations, for modelling plasma motion in nuclear fusion.

Approximating Numerical Fluxes Using Fourier Neural Operators for Hyperbolic Conservation Laws

no code implementations3 Jan 2024 TaeYoung Kim, Myungjoo Kang

To this end, we developed loss functions inspired by established numerical schemes related to conservation laws and approximated numerical fluxes using Fourier neural operators (FNOs).
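
As a rough illustration of the idea in the abstract above, the sketch below pairs a toy 1D spectral layer (standing in for a full FNO) with a conservative finite-volume update driven by the predicted interface fluxes. The class names, network sizes, and the periodic-boundary update are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    class SpectralConv1d(nn.Module):
        # Toy 1D Fourier layer: FFT -> learned mixing of the lowest modes -> inverse FFT.
        def __init__(self, channels, modes):
            super().__init__()
            self.modes = modes
            self.weights = nn.Parameter(
                torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

        def forward(self, x):                          # x: (batch, channels, grid)
            x_ft = torch.fft.rfft(x)
            out_ft = torch.zeros_like(x_ft)
            out_ft[:, :, :self.modes] = torch.einsum(
                "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
            return torch.fft.irfft(out_ft, n=x.size(-1))

    class FluxNet(nn.Module):
        # Maps cell averages u to numerical fluxes at cell interfaces
        # (a hypothetical stand-in for the paper's FNO flux model).
        def __init__(self, modes=16, width=32):
            super().__init__()
            self.lift = nn.Conv1d(1, width, 1)
            self.spectral = SpectralConv1d(width, modes)
            self.local = nn.Conv1d(width, width, 1)
            self.proj = nn.Conv1d(width, 1, 1)

        def forward(self, u):                          # u: (batch, 1, grid)
            h = self.lift(u)
            h = torch.relu(self.spectral(h) + self.local(h))
            return self.proj(h)

    def conservative_step(u, flux_model, dt, dx):
        # Finite-volume update u_i <- u_i - dt/dx * (F_{i+1/2} - F_{i-1/2}),
        # with the numerical flux supplied by the learned operator (periodic boundary).
        F = flux_model(u)
        return u - (dt / dx) * (F - torch.roll(F, shifts=1, dims=-1))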

An Infinite-Width Analysis on the Jacobian-Regularised Training of a Neural Network

no code implementations6 Dec 2023 TaeYoung Kim, Hongseok Yang

The recent theoretical analysis of deep neural networks in their infinite-width limits has deepened our understanding of initialisation, feature learning, and training of those networks, and brought new practical techniques for finding appropriate hyperparameters, learning network weights, and performing inference.
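
The abstract excerpt above focuses on the infinite-width theory; for concreteness, the sketch below shows the standard Jacobian regulariser the title refers to, estimating the squared Frobenius norm of the input-output Jacobian with random projections and adding it to a task loss. The architecture, the coefficient 0.1, and the single-projection estimate are placeholder choices, not the paper's setup.

    import torch
    import torch.nn as nn

    def jacobian_penalty(model, x, n_proj=1):
        # Estimate E_x ||df/dx||_F^2 via random projections:
        # for v ~ N(0, I), E_v ||v^T J||^2 = ||J||_F^2.
        x = x.clone().requires_grad_(True)
        y = model(x)
        penalty = 0.0
        for _ in range(n_proj):
            v = torch.randn_like(y)
            (g,) = torch.autograd.grad(y, x, grad_outputs=v, create_graph=True)
            penalty = penalty + g.pow(2).sum(dim=tuple(range(1, g.dim()))).mean()
        return penalty / n_proj

    model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 3))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    x, target = torch.randn(32, 10), torch.randint(0, 3, (32,))
    loss = nn.functional.cross_entropy(model(x), target) + 0.1 * jacobian_penalty(model, x)
    loss.backward()
    opt.step()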

Virtual Action Actor-Critic Framework for Exploration (Student Abstract)

no code implementations6 Nov 2023 Bumgeun Park, TaeYoung Kim, Quoc-Vinh Lai-Dang, Dongsoo Har

In this paper, a novel actor-critic framework, namely the virtual action actor-critic (VAAC), is proposed to address the challenge of efficient exploration in RL.

Efficient Exploration, Reinforcement Learning (RL)

Enhanced Transformer Architecture for Natural Language Processing

no code implementations17 Oct 2023 Woohyeon Moon, TaeYoung Kim, Bumgeun Park, Dongsoo Har

The Transformer is a state-of-the-art model in the field of natural language processing (NLP).

Translation

Road Redesign Technique Achieving Enhanced Road Safety by Inpainting with a Diffusion Model

no code implementations15 Feb 2023 Sumit Mishra, Medhavi Mishra, TaeYoung Kim, Dongsoo Har

The proposed technique inpaints safe roadway elements into a roadway image with a diffusion model, replacing accident-prone (AP) features.

Image Inpainting
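
A minimal sketch of the kind of diffusion-based inpainting call the abstract above describes, using the Hugging Face diffusers library; the checkpoint, file names, prompt, and mask convention (white = accident-prone region to repaint) are illustrative assumptions, not the authors' pipeline.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    # Hypothetical inputs: a roadway photo and a mask marking accident-prone (AP) regions.
    road = Image.open("roadway.png").convert("RGB").resize((512, 512))
    ap_mask = Image.open("ap_mask.png").convert("L").resize((512, 512))

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16).to("cuda")

    redesigned = pipe(
        prompt="road with clear lane markings, guard rails, and a marked pedestrian crossing",
        image=road,
        mask_image=ap_mask,
    ).images[0]
    redesigned.save("redesigned_roadway.png")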

Off-Policy Reinforcement Learning with Loss Function Weighted by Temporal Difference Error

no code implementations26 Dec 2022 Bumgeun Park, TaeYoung Kim, Woohyeon Moon, Luiz Felipe Vecchietti, Dongsoo Har

We propose a novel method that introduces a weighting factor for each experience when calculating the loss function at the learning stage.

OpenAI Gym, reinforcement-learning +1
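
A minimal sketch of the per-experience weighting idea described above, in a DQN-style setting: TD errors are normalised into weights that scale each sample's contribution to the loss. The normalisation and the DQN setting are assumptions; the paper's exact weighting may differ.

    import torch

    def td_weighted_loss(q_net, target_net, batch, gamma=0.99):
        # batch: (states, actions, rewards, next_states, dones) as tensors.
        s, a, r, s2, done = batch
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * (1.0 - done) * target_net(s2).max(dim=1).values
            td_error = (target - q).abs()
            weights = td_error / (td_error.mean() + 1e-8)   # larger TD error -> larger weight
        return (weights * (q - target).pow(2)).mean()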

Kick-motion Training with DQN in AI Soccer Environment

no code implementations1 Dec 2022 Bumgeun Park, Jihui Lee, TaeYoung Kim, Dongsoo Har

In this paper, we attempt to use the relative coordinate system (RCS), instead of the absolute coordinate system (ACS), as the state for training the kick motion of a robot agent.

Reinforcement Learning (RL)
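
A small sketch of the coordinate change the abstract above refers to: a target position (e.g., the ball) expressed in the robot's local frame instead of the absolute field frame. The frame convention (local x forward, y to the left) and the choice of state components are assumptions.

    import numpy as np

    def to_relative(robot_xy, robot_theta, target_xy):
        # Express a target point in the robot's local frame (x forward, y to the left):
        # translate by the robot position, then rotate by -theta.
        dx, dy = np.asarray(target_xy, dtype=float) - np.asarray(robot_xy, dtype=float)
        c, s = np.cos(-robot_theta), np.sin(-robot_theta)
        return np.array([c * dx - s * dy, s * dx + c * dy])

    # Robot at (1, 2) facing +y; the ball at (1.5, 2) in absolute coordinates
    # becomes roughly [0.0, -0.5] in the robot frame, i.e. 0.5 m to its right.
    ball_rel = to_relative(robot_xy=(1.0, 2.0), robot_theta=np.pi / 2, target_xy=(1.5, 2.0))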

Bounding the Rademacher Complexity of Fourier neural operators

no code implementations12 Sep 2022 TaeYoung Kim, Myungjoo Kang

Using a capacity measure based on these norms, we bound the generalization error of the model.
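
For context, the generic Rademacher-complexity generalization bound has the following form (loss bounded in [0, 1], holding with probability at least 1 - δ over an i.i.d. sample S of size n); the paper's contribution is bounding the capacity term for FNOs via the norms mentioned above, whereas the display below is the standard statement, not the paper's specific result.

    \[
    L(f) \;\le\; \widehat{L}_S(f) \;+\; 2\,\widehat{\mathfrak{R}}_S(\ell \circ \mathcal{F})
    \;+\; 3\sqrt{\frac{\log(2/\delta)}{2n}} \qquad \text{for all } f \in \mathcal{F}.
    \]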

Cluster-based Sampling in Hindsight Experience Replay for Robotic Tasks (Student Abstract)

no code implementations31 Aug 2022 TaeYoung Kim, Dongsoo Har

The proposed sampling strategy groups episodes with different achieved goals by using a cluster model and samples experiences in the manner of HER to create the training batch.

Clustering, Multi-Goal Reinforcement Learning +1
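
A minimal sketch of the sampling strategy described above: episodes are grouped by their final achieved goal with a clustering model (k-means here as an assumed choice), and the training batch draws episodes roughly evenly across clusters before HER-style relabelling. The function name and the balancing rule are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_balanced_episode_ids(achieved_goals, n_clusters=8, batch_episodes=32, seed=0):
        # achieved_goals: (n_episodes, goal_dim) array of each episode's final achieved goal.
        rng = np.random.default_rng(seed)
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(achieved_goals)
        per_cluster = max(1, batch_episodes // n_clusters)
        picked = []
        for c in range(n_clusters):
            members = np.flatnonzero(labels == c)
            if members.size:
                picked.extend(rng.choice(members, size=min(per_cluster, members.size),
                                         replace=False))
        return np.asarray(picked)   # episode indices to relabel and sample HER-style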

Path Planning of Cleaning Robot with Reinforcement Learning

no code implementations17 Aug 2022 Woohyeon Moon, Bumgeun Park, Sarvar Hussain Nengroo, TaeYoung Kim, Dongsoo Har

To address this electricity-consumption issue, efficient path planning for cleaning robots has become important, and many studies have been conducted.

reinforcement-learning, Reinforcement Learning (RL) +1

Two-stage training algorithm for AI robot soccer

no code implementations13 Apr 2021 TaeYoung Kim, Luiz Felipe Vecchietti, Kyujin Choi, Sanem Sariel, Dongsoo Har

Because these two training processes are conducted in series at every timestep, agents can learn how to maximize role rewards and team rewards simultaneously.

Multi-agent Reinforcement Learning, reinforcement-learning +2
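
A rough sketch of the serial two-stage idea described above: at each training step the policy takes one policy-gradient update on the agent-specific role reward and then another on the shared team reward. The categorical policy, loss form, and optimiser are assumptions made for illustration, not the paper's exact algorithm.

    import torch
    import torch.nn as nn

    class CategoricalPolicy(nn.Module):
        def __init__(self, obs_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))

        def forward(self, obs):
            return torch.distributions.Categorical(logits=self.net(obs))

    def two_stage_update(policy, optimizer, obs, actions, role_adv, team_adv):
        # Two policy-gradient updates run in series at every timestep:
        # first on the role-reward advantage, then on the team-reward advantage.
        for adv in (role_adv, team_adv):
            logp = policy(obs).log_prob(actions)
            loss = -(logp * adv.detach()).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()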
