Search Results for author: Jaewon Jung

Found 6 papers, 4 papers with code

Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System

1 code implementation • 11 Mar 2024 • Hongsun Jang, Jaeyong Song, Jaewon Jung, Jaeyoung Park, Youngsok Kim, Jinho Lee

Our work, Smart-Infinity, addresses the storage bandwidth bottleneck of storage-offloaded LLM training using near-storage processing devices on a real system.

Language Modelling • Large Language Model

PeerAiD: Improving Adversarial Distillation from a Specialized Peer Tutor

1 code implementation • 11 Mar 2024 • Jaewon Jung, Hongsun Jang, Jaeyong Song, Jinho Lee

In this situation, adversarial distillation is a promising option that aims to distill the robustness of a teacher network into a small student network.
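The snippet above describes the general adversarial distillation objective rather than PeerAiD's specific peer-tutor loss. A minimal sketch of that generic objective, assuming an additive combination of hard-label cross-entropy on adversarial examples and a temperature-smoothed KL term toward the teacher (the weights `alpha` and `T` are illustrative, not from the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def adversarial_distillation_loss(student_logits_adv, teacher_logits_adv,
                                  labels, alpha=0.5, T=4.0):
    """Generic adversarial-distillation loss (illustrative, not PeerAiD's):
    cross-entropy on adversarial examples plus a KL term that pulls the
    student's adversarial predictions toward the teacher's soft targets."""
    n = student_logits_adv.shape[0]
    p_student = softmax(student_logits_adv)
    ce = -np.log(p_student[np.arange(n), labels] + 1e-12).mean()
    # Soft targets from the (robust) teacher, smoothed by temperature T.
    p_t = softmax(teacher_logits_adv, T)
    p_s = softmax(student_logits_adv, T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1).mean()
    # T**2 rescales the KL gradient magnitude, as is standard in distillation.
    return alpha * ce + (1.0 - alpha) * (T ** 2) * kl
```

When the teacher and student agree exactly, the KL term vanishes and only the cross-entropy on adversarial examples remains.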

Adversarial Robustness

GraNNDis: Efficient Unified Distributed Training Framework for Deep GNNs on Large Clusters

no code implementations • 12 Nov 2023 • Jaeyong Song, Hongsun Jang, Jaewon Jung, Youngsok Kim, Jinho Lee

As the datasets and model sizes used for GNNs grow, an important problem is that it becomes nearly impossible to keep the whole network in GPU memory.

Pipe-BD: Pipelined Parallel Blockwise Distillation

1 code implementation • 29 Jan 2023 • Hongsun Jang, Jaewon Jung, Jaeyong Song, Joonsang Yu, Youngsok Kim, Jinho Lee

However, this results in a high overhead of redundant teacher execution, low GPU utilization, and extra data loading.

Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression

no code implementations • 24 Jan 2023 • Jaeyong Song, Jinkyu Yim, Jaewon Jung, Hongsun Jang, Hyung-Jin Kim, Youngsok Kim, Jinho Lee

Compressing the communication is one way to mitigate the overhead by reducing the inter-node traffic volume; however, existing compression techniques have critical limitations when applied to NLP models with 3D parallelism: 1) only the data-parallelism traffic is targeted, and 2) the existing compression schemes already harm model quality too much.
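Optimus-CC's exact compression schemes are not given in this snippet; a common baseline for the data-parallelism traffic it mentions is top-k gradient sparsification, sketched here under that assumption (the `ratio` parameter and helper names are illustrative):

```python
import numpy as np

def topk_compress(grad, ratio=0.01):
    """Keep only the largest-magnitude entries of a flattened gradient.
    Returns (indices, values) -- the payload actually sent between nodes."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # argpartition gives the indices of the k largest |entries| in O(n).
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_decompress(idx, vals, shape):
    """Rebuild a dense gradient with zeros in the dropped positions."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)
```

With `ratio=0.01` the traffic volume drops roughly 100x; practical schemes typically add error feedback, accumulating the dropped residuals locally so quality is preserved.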

Visual Relationship Detection with Language prior and Softmax

1 code implementation • 16 Apr 2019 • Jaewon Jung, Jongyoul Park

Visual relationship detection is an intermediate image-understanding task that detects two objects in an image and classifies a predicate explaining the relationship between them.
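The title suggests fusing a language prior with a softmax over predicates; a hypothetical sketch of that idea, assuming additive fusion of visual logits and a language-prior score for a given (subject, object) pair (the fusion weight `w` and the example scores are assumptions, not the paper's model):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predicate_probs(visual_logits, language_prior_logits, w=1.0):
    """Fuse visual evidence with a language prior over predicates for one
    (subject, object) pair, then normalize with softmax. The additive
    fusion and weight `w` are illustrative assumptions."""
    return softmax(visual_logits + w * language_prior_logits)

# Hypothetical predicate scores ("on", "ride", "wear") for (person, horse):
visual = np.array([0.2, 1.5, -0.3])
prior = np.array([0.1, 2.0, -1.0])  # language prior favors "ride"
probs = predicate_probs(visual, prior)
```

The language prior sharpens the distribution toward predicates that are plausible for the object pair, which is the role such priors play in this line of work.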

Knowledge Distillation • Relationship Detection • +1
