Search Results for author: Ho Bae

Found 9 papers, 3 papers with code

FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models

1 code implementation · 5 Mar 2024 · Younghan Lee, Yungi Cho, Woorim Han, Ho Bae, Yunheung Paek

However, recent research proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model when adversaries, posing as benign clients, are present in a group of clients.

Contrastive Learning · Federated Learning · +1
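The FLGuard snippet describes model poisoning in federated learning: plain federated averaging weights every client update equally, so one adversary posing as a benign client can drag the global model arbitrarily far. A minimal sketch of that failure mode alongside a classic robust aggregator (coordinate-wise median, shown only as an illustrative baseline, not FLGuard's contrastive-ensemble defense; the client updates and scaling factor are made up):

```python
from statistics import mean, median

def fed_avg(updates):
    """Plain federated averaging: coordinate-wise mean of client updates."""
    return [mean(coord) for coord in zip(*updates)]

def fed_median(updates):
    """Coordinate-wise median, a standard Byzantine-robust aggregator
    (illustrative only; FLGuard itself uses an ensemble of contrastive
    models, which is not reproduced here)."""
    return [median(coord) for coord in zip(*updates)]

# Three benign clients agree on roughly the same update direction.
benign = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
# One adversary submits a scaled-up malicious update (factor is hypothetical).
malicious = [[-100.0, -100.0]]

print(fed_avg(benign + malicious))     # the mean is dragged far off course
print(fed_median(benign + malicious))  # the median stays near the benign consensus
```

The single poisoned update flips the sign of the averaged model, while the median remains close to the benign clients' agreement.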

DAFA: Distance-Aware Fair Adversarial Training

1 code implementation · 23 Jan 2024 · Hyungyu Lee, Saehyung Lee, Hyemi Jang, Junsung Park, Ho Bae, Sungroh Yoon

The disparity in accuracy between classes in standard training is amplified during adversarial training, a phenomenon termed the robust fairness problem.

Fairness
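The robust fairness problem the DAFA snippet refers to is typically quantified as the gap between average and worst-class accuracy, a gap that widens under adversarial training. A small sketch of that measurement (the label and prediction arrays are entirely hypothetical):

```python
from collections import defaultdict

def per_class_accuracy(labels, preds):
    """Accuracy computed separately for each class label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, p in zip(labels, preds):
        total[y] += 1
        correct[y] += int(y == p)
    return {c: correct[c] / total[c] for c in total}

# Hypothetical predictions under standard vs. adversarial training.
labels    = [0, 0, 0, 0, 1, 1, 1, 1]
std_preds = [0, 0, 0, 0, 1, 1, 1, 0]  # class 1 slightly worse
adv_preds = [0, 0, 0, 0, 1, 0, 0, 0]  # the class disparity is amplified

for name, preds in [("standard", std_preds), ("adversarial", adv_preds)]:
    acc = per_class_accuracy(labels, preds)
    print(name, acc, "worst-class:", min(acc.values()))
```

Here the worst-class accuracy drops from 0.75 to 0.25 while class 0 is unaffected, which is the kind of amplified disparity fairness-aware adversarial training aims to reduce.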

New Insights for the Stability-Plasticity Dilemma in Online Continual Learning

1 code implementation · 17 Feb 2023 · Dahuin Jung, Dongjin Lee, Sunwon Hong, Hyemi Jang, Ho Bae, Sungroh Yoon

The aim of continual learning is to learn new tasks continuously (i.e., plasticity) without forgetting previously learned knowledge from old tasks (i.e., stability).

Continual Learning

PixelSteganalysis: Pixel-wise Hidden Information Removal with Low Visual Degradation

no code implementations · 28 Feb 2019 · Dahuin Jung, Ho Bae, Hyun-Soo Choi, Sungroh Yoon

We propose a DL-based steganalysis technique that effectively removes secret images by restoring the distribution of the original images.

Steganalysis

AnomiGAN: Generative Adversarial Networks for Anonymizing Private Medical Data

no code implementations · 31 Jan 2019 · Ho Bae, Dahuin Jung, Sungroh Yoon

We compared our method to state-of-the-art techniques and observed that our method preserves the same level of privacy as differential privacy (DP) while achieving better prediction results.

Security and Privacy Issues in Deep Learning

no code implementations · 31 Jul 2018 · Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon

Furthermore, the privacy of the data involved in model training is also threatened by attacks such as the model-inversion attack, or by dishonest service providers of AI applications.

Quantized Memory-Augmented Neural Networks

no code implementations · 10 Nov 2017 · Seongsik Park, Seijoon Kim, Seil Lee, Ho Bae, Sungroh Yoon

In this paper, we identify memory addressing (specifically, content-based addressing) as the main reason for the performance degradation and propose a robust quantization method for MANNs to address the challenge.

Quantization
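Content-based addressing, which the snippet identifies as the source of the degradation, scores each memory slot by a (sharpened) cosine similarity to a query key and softmax-normalizes the scores. Coarse quantization can collapse nearby slots and keys onto the same code, blurring the address distribution. A rough illustration with a crude uniform quantizer (the memory contents, key, sharpening factor, and quantization step are all hypothetical; this is not the paper's robust quantization method):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def address(memory, key, beta=100.0):
    """Content-based addressing: softmax over sharpened cosine similarities."""
    scores = [beta * cosine(row, key) for row in memory]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def quantize(v, step=0.5):
    """Crude uniform quantizer; a stand-in, not the paper's method."""
    return [round(x / step) * step for x in v]

memory = [[0.6, 0.8], [0.8, 0.6]]  # two similar memory slots (hypothetical)
key = [0.7, 0.74]                  # slightly closer to slot 0

w_full = address(memory, key)
w_quant = address([quantize(r) for r in memory], quantize(key))
print(w_full)   # full precision still favors slot 0
print(w_quant)  # quantization collapses the preference to an exact 50/50 tie
```

With full-precision values the address weights favor the correct slot, but after quantization the key becomes equidistant from both slots and the addressing can no longer distinguish them.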

DNA Steganalysis Using Deep Recurrent Neural Networks

no code implementations · 27 Apr 2017 · Ho Bae, Byunghan Lee, Sunyoung Kwon, Sungroh Yoon

We compare our proposed method to various existing methods and biological sequence analysis methods implemented on top of our framework.

Steganalysis
