Search Results for author: Junyoung Byun

Found 9 papers, 4 papers with code

Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup

1 code implementation CVPR 2023 Junyoung Byun, Myung-Joon Kwon, Seungju Cho, Yoonji Kim, Changick Kim

Deep neural networks are widely known to be susceptible to adversarial examples, which can cause incorrect predictions through subtle input modifications.
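The snippet above states the general premise rather than the paper's Clean Feature Mixup method itself. As a minimal, generic illustration of how a subtle input modification can steer a classifier toward a chosen class, here is a single-step targeted FGSM sketch in PyTorch; the toy model, target class, and epsilon are placeholders, not values from the paper.

```python
# Generic targeted FGSM sketch (illustrates "subtle input modifications",
# NOT the paper's Clean Feature Mixup method). Model and epsilon are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)   # clean input in [0, 1]
target = torch.tensor([3])                          # desired (wrong) class
eps = 8 / 255                                       # perturbation budget

loss = nn.functional.cross_entropy(model(x), target)
loss.backward()

# Step *against* the gradient of the targeted loss, then clip to the image range.
x_adv = (x - eps * x.grad.sign()).clamp(0, 1).detach()
print(model(x_adv).argmax(dim=1))  # ideally the target class after the step
```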

Improving the Utility of Differentially Private Clustering through Dynamical Processing

no code implementations 27 Apr 2023 Junyoung Byun, Yujin Choi, Jaewook Lee

This study aims to alleviate the trade-off between utility and privacy in the task of differentially private clustering.

Clustering
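For readers unfamiliar with the utility/privacy trade-off mentioned above, the sketch below shows a textbook Laplace-mechanism centroid update for private clustering. It is not the paper's dynamical-processing method; the data, budget split, and noise scales are illustrative only, and a real implementation needs careful sensitivity and budget accounting.

```python
# Generic differentially private centroid update with the Laplace mechanism
# (textbook-style sketch, NOT the paper's dynamical-processing method).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 2))            # points assumed to lie in [0, 1]^2
labels = rng.integers(0, 3, 200)    # current cluster assignments
eps = 1.0                           # privacy budget for this single step

centroids = []
for k in range(3):
    pts = X[labels == k]
    # Noise scales are illustrative; a rigorous implementation must account
    # for the exact sensitivity of the per-cluster sum and count queries.
    noisy_sum = pts.sum(axis=0) + rng.laplace(scale=2 / eps, size=2)
    noisy_count = max(len(pts) + rng.laplace(scale=2 / eps), 1.0)
    centroids.append(noisy_sum / noisy_count)

print(np.array(centroids))
```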

Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input

2 code implementations CVPR 2022 Junyoung Byun, Seungju Cho, Myung-Joon Kwon, Hee-Seon Kim, Changick Kim

To tackle this limitation, we propose the object-based diverse input (ODI) method that draws an adversarial image on a 3D object and induces the rendered image to be classified as the target class.

Face Verification, Image Augmentation +1
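ODI itself renders the adversarial image onto a 3D object, which is beyond a short snippet. As a rough 2D stand-in in the spirit of the earlier diverse-input (DI) transformation, and explicitly not the paper's method, a random resize-and-pad transform might look like this:

```python
# Much simpler 2D stand-in for input diversification (random resize-and-pad,
# in the spirit of the DI method); the paper's ODI instead renders the
# adversarial image onto a 3D object, which is not reproduced here.
import torch
import torch.nn.functional as F

def diverse_input(x: torch.Tensor, out_size: int = 224) -> torch.Tensor:
    """Randomly resize a batch of images and pad back to out_size x out_size."""
    new_size = torch.randint(int(0.8 * out_size), out_size + 1, (1,)).item()
    x = F.interpolate(x, size=new_size, mode="bilinear", align_corners=False)
    pad_total = out_size - new_size
    left = torch.randint(0, pad_total + 1, (1,)).item()
    top = torch.randint(0, pad_total + 1, (1,)).item()
    return F.pad(x, (left, pad_total - left, top, pad_total - top))

x = torch.rand(2, 3, 224, 224)
print(diverse_input(x).shape)  # torch.Size([2, 3, 224, 224])
```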

Parameter-free HE-friendly Logistic Regression

no code implementations NeurIPS 2021 Junyoung Byun, Woojin Lee, Jaewook Lee

However, current approaches to training machine learning models on encrypted data have relied heavily on hyperparameter selection, which should be avoided owing to the extreme difficulty of conducting validation on encrypted data.

BIG-bench Machine Learning, Privacy Preserving +1
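One common ingredient of HE-friendly logistic regression is replacing the sigmoid with a low-degree polynomial, since homomorphic schemes evaluate polynomials efficiently. The plaintext sketch below fits such a polynomial; it is a generic illustration, not the paper's parameter-free training procedure.

```python
# Plaintext sketch of an "HE-friendly" trick: approximate the non-polynomial
# sigmoid with a low-degree polynomial. Generic illustration only, NOT the
# paper's parameter-free training scheme.
import numpy as np

xs = np.linspace(-8, 8, 1000)
sigmoid = 1 / (1 + np.exp(-xs))
coeffs = np.polyfit(xs, sigmoid, deg=3)     # least-squares degree-3 fit
poly_sigmoid = np.poly1d(coeffs)

print(np.max(np.abs(poly_sigmoid(xs) - sigmoid)))  # worst-case error on [-8, 8]
```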

Geometrically Adaptive Dictionary Attack on Face Recognition

no code implementations 8 Nov 2021 Junyoung Byun, Hyojun Go, Changick Kim

We apply the GADA strategy to two existing attack methods and show overwhelming performance improvement in the experiments on the LFW and CPLFW datasets.

3D Face Alignment, Face Alignment +1

On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks

no code implementations 13 Jan 2021 Junyoung Byun, Hyojun Go, Changick Kim

In this paper, we pay attention to an implicit assumption of query-based black-box adversarial attacks that the target model's output exactly corresponds to the query input.
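A defense based on small input noise, as the title suggests, can be sketched as answering each query on a slightly perturbed copy of the input, so a query-based attacker never sees the output for exactly the image it submitted. The toy model and noise level below are placeholders, not the paper's configuration.

```python
# Minimal sketch of a small-input-noise defense for query-based black-box
# attacks. Model architecture and sigma are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

def defended_predict(x: torch.Tensor, sigma: float = 0.01) -> torch.Tensor:
    """Answer the query on a randomly perturbed copy of the input."""
    noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)
    with torch.no_grad():
        return model(noisy).argmax(dim=1)

x = torch.rand(4, 3, 32, 32)
print(defended_predict(x))
```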

Robust Federated Learning with Noisy Labels

1 code implementation 3 Dec 2020 Seunghan Yang, Hyoungseob Park, Junyoung Byun, Changick Kim

To solve these problems, we introduce a novel federated learning scheme in which the server cooperates with local models to maintain consistent decision boundaries by interchanging class-wise centroids.

Federated Learning, Learning with noisy labels
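A minimal sketch of the class-wise-centroid exchange described above, under the assumption of a generic feature extractor and simple server-side averaging (the actual aggregation in the paper may differ):

```python
# Sketch of exchanging class-wise feature centroids between clients and server.
# Features, classes, and the averaging rule are illustrative placeholders.
import numpy as np

def local_centroids(features: np.ndarray, labels: np.ndarray, num_classes: int):
    """Mean feature vector per class for one client (zeros if a class is absent)."""
    cents = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            cents[c] = features[mask].mean(axis=0)
    return cents

rng = np.random.default_rng(0)
clients = [(rng.random((50, 8)), rng.integers(0, 5, 50)) for _ in range(3)]
per_client = [local_centroids(f, y, num_classes=5) for f, y in clients]
global_centroids = np.mean(per_client, axis=0)   # naive server-side aggregation
print(global_centroids.shape)                    # (5, 8)
```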

BitNet: Learning-Based Bit-Depth Expansion

1 code implementation 10 Oct 2019 Junyoung Byun, Kyujin Shim, Changick Kim

Since insufficient bit-depth may generate annoying false contours or lose detailed visual appearance, bit-depth expansion (BDE) from low bit-depth (LBD) images to high bit-depth (HBD) images is becoming increasingly important.

Decoder, SSIM
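For context, the classical non-learned baselines for bit-depth expansion are zero padding and bit replication; BitNet itself is a learned CNN and is not reproduced here.

```python
# Classical zero-padding and bit-replication baselines for bit-depth expansion
# (4-bit -> 8-bit). These are the simple references a learned method like
# BitNet improves upon, not BitNet itself.
import numpy as np

lbd = np.random.randint(0, 16, size=(4, 4), dtype=np.uint8)        # 4-bit image

zero_pad = (lbd.astype(np.uint16) << 4).astype(np.uint8)           # append zero bits
bit_repl = ((lbd.astype(np.uint16) << 4) | lbd).astype(np.uint8)   # repeat the MSBs

print(lbd)
print(zero_pad)   # prone to false contours; maximum value is only 240
print(bit_repl)   # maps 15 -> 255 and 0 -> 0, using the full dynamic range
```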

Why Does the VQA Model Answer No?: Improving Reasoning through Visual and Linguistic Inference

no code implementations 25 Sep 2019 Seungjun Jung, Junyoung Byun, Kyujin Shim, Changick Kim

Moreover, by modifying the VQA model’s answer through the output of the NLI model, we show that VQA performance increases by 1.1% over the original model.

Common Sense Reasoning, Question Answering +1
