no code implementations • COLING (CODI, CRAC) 2022 • Damrin Kim, Seongsik Park, Mirae Han, Harksoo Kim
Therefore, we perform anaphora resolution with a two-stage pipeline model.
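The excerpt does not spell out the two stages; assuming the common split of anaphor detection followed by antecedent selection, a minimal pipeline skeleton might look like the following sketch, where the rule-based logic is a toy placeholder for the paper's learned components:

```python
# Minimal two-stage anaphora-resolution skeleton (illustrative only).
# Stage 1 detects anaphors; stage 2 selects an antecedent for each.

PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}

def detect_anaphors(tokens):
    """Stage 1: return indices of candidate anaphors (here: pronouns)."""
    return [i for i, tok in enumerate(tokens) if tok.lower() in PRONOUNS]

def resolve_antecedent(tokens, anaphor_idx):
    """Stage 2: pick an antecedent for one anaphor.
    Toy heuristic: nearest preceding capitalized token."""
    for j in range(anaphor_idx - 1, -1, -1):
        if tokens[j][0].isupper():
            return j
    return None

tokens = "Alice said she would arrive late".split()
for i in detect_anaphors(tokens):
    j = resolve_antecedent(tokens, i)
    print(tokens[i], "->", tokens[j] if j is not None else "?")
```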
no code implementations • 4 Aug 2023 • Jiyong Moon, Junseok Lee, Yunju Lee, Seongsik Park
Therefore, we propose multi-scale patch selection (MSPS) to improve the multi-scale capabilities of existing ViT-based models.
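The details of MSPS are not given in this excerpt; purely as an illustration of selecting patches at multiple scales, the NumPy sketch below tiles an image at several scales, scores patches with a stand-in saliency measure (variance), and keeps the top-k per scale. The scales, the scoring function, and `top_k` are assumptions, not the paper's mechanism.

```python
import numpy as np

def select_patches(image, scales=(16, 32), top_k=4):
    """Illustrative multi-scale patch selection: at each scale, tile the
    image into non-overlapping patches, score them (variance here is a
    stand-in for a learned saliency score), and keep the top-k."""
    selected = []
    for s in scales:
        h, w = image.shape[0] // s, image.shape[1] // s
        patches = [
            (image[i*s:(i+1)*s, j*s:(j+1)*s], (i, j, s))
            for i in range(h) for j in range(w)
        ]
        patches.sort(key=lambda p: p[0].var(), reverse=True)
        selected.extend(meta for _, meta in patches[:top_k])
    return selected  # list of (row, col, scale) patch coordinates

image = np.random.rand(64, 64)
print(select_patches(image))
```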
no code implementations • 1 Aug 2023 • Seongsik Park, Jeonghee Jo, Jongkil Park, YeonJoo Jeong, Jaewook Kim, Suyoun Lee, Joon Young Kwak, Inho Kim, Jong-Keuk Park, Kyeong Seok Lee, Gye Weon Hwang, Hyun Jae Jang
Deep spiking neural networks (SNNs) are promising because they combine the model capacity of deep neural network architectures with the energy efficiency of spike-based operations.
1 code implementation • 14 Mar 2023 • Jiyong Moon, Seongsik Park
One of the key issues in facial expression recognition in the wild (FER-W) is that curating large-scale labeled datasets is challenging due to the inherent complexity and ambiguity of facial images.
no code implementations • CVPR 2022 • Jongwan Kim, Dongjin Lee, Byunggook Na, Seongsik Park, Jeonghee Jo, Sungroh Yoon
In terms of image quality, the LPIPS score improves by up to 12% and the reconstruction speed is 87% higher than that of ET-Net.
1 code implementation • 30 Jan 2022 • Byunggook Na, Jisoo Mok, Seongsik Park, Dongjin Lee, Hyeokjun Choe, Sungroh Yoon
We investigate the design choices used in previous studies in terms of accuracy and number of spikes, and find that they are not best suited for SNNs.
no code implementations • 23 Oct 2021 • Byunggook Na, Jaehee Jang, Seongsik Park, Seijoon Kim, Joonoo Kim, Moon Sik Jeong, Kwang Choon Kim, Seon Heo, Yoonsang Kim, Sungroh Yoon
We implemented large-batch synchronous training of DNNs based on Caffe, a deep learning library.
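As a conceptual sketch of large-batch synchronous training (not the Caffe implementation itself), each worker computes a gradient on its own data shard and the averaged gradient is applied once per step. The NumPy simulation below uses a toy linear model; the worker count and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                       # shared model parameters
X = rng.normal(size=(256, 3))         # toy dataset
y = X @ np.array([1.0, -2.0, 0.5])    # linear targets

NUM_WORKERS, LR = 4, 0.1
shards = np.array_split(np.arange(len(X)), NUM_WORKERS)

for step in range(100):
    # Each worker computes a gradient on its shard (least-squares loss).
    grads = []
    for idx in shards:
        err = X[idx] @ w - y[idx]
        grads.append(X[idx].T @ err / len(idx))
    # Synchronous step: average the gradients (the all-reduce), then update.
    w -= LR * np.mean(grads, axis=0)

print(w)  # approaches [1.0, -2.0, 0.5]
```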
no code implementations • 20 Jul 2021 • Seongsik Park, Harksoo Kim
Sentence-level relation extraction mainly aims to classify the relation between two entities in a sentence.
Ranked #1 on Relation Extraction on Re-TACRED
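As context, sentence-level relation extraction is typically cast as classification over a pair of entity representations; the PyTorch sketch below shows that standard formulation. The encoder producing the entity vectors, the dimensions, and the label set are placeholders, not this paper's architecture.

```python
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    """Standard sentence-level RE head: concatenate the two entity
    representations and classify the relation between them."""
    def __init__(self, hidden=128, num_relations=5):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_relations),
        )

    def forward(self, subj_vec, obj_vec):
        return self.ffn(torch.cat([subj_vec, obj_vec], dim=-1))

model = PairClassifier()
subj, obj = torch.randn(1, 128), torch.randn(1, 128)
print(model(subj, obj).argmax(dim=-1))  # predicted relation id
```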
no code implementations • 14 Jun 2021 • Dongjin Lee, Seongsik Park, Jongwan Kim, Wuhyeong Doh, Sungroh Yoon
On the MNIST dataset, our proposed student SNN achieves up to 0.09% higher accuracy and produces 65% fewer spikes than a student SNN trained with the conventional knowledge distillation method.
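For reference, the "conventional knowledge distillation" baseline mentioned here is usually the Hinton-style loss that matches temperature-softened teacher and student logits; a minimal PyTorch version follows. The temperature and weighting are illustrative, and the paper's SNN-specific variant is not shown.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style knowledge distillation: cross-entropy on hard labels
    plus KL divergence between temperature-softened distributions."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard T^2 gradient rescaling
    return alpha * hard + (1 - alpha) * soft

s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
kd_loss(s, t, y).backward()
```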
no code implementations • 4 Jun 2021 • Seongsik Park, Sungroh Yoon
With TTFS coding, each neuron generates one spike at most, which leads to a significant improvement in energy efficiency.
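TTFS (time-to-first-spike) coding places information in when a neuron fires rather than how often. A common linear variant, in which larger inputs fire earlier and each neuron spikes at most once, can be sketched as follows; the linear value-to-time mapping and the time window are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np

def ttfs_encode(x, t_max=100):
    """Map normalized activations in [0, 1] to single spike times:
    larger values fire earlier; zero activations never fire (NaN).
    The linear mapping is one common TTFS variant."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x > 0, np.round((1.0 - x) * t_max), np.nan)

activations = np.array([1.0, 0.5, 0.01, 0.0])
print(ttfs_encode(activations))  # [ 0. 50. 99. nan]
```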
no code implementations • 22 Apr 2021 • Seongsik Park, Dongjin Lee, Sungroh Yoon
Spiking neural networks (SNNs) have emerged as energy-efficient neural networks that exploit temporal information.
no code implementations • 5 Mar 2021 • Seongsik Park, Harksoo Kim
The proposed model finds n-to-1 subject-object relations using a forward object decoder.
Ranked #1 on Relation Extraction on ACE 2005 (Relation classification F1 metric)
no code implementations • 26 Mar 2020 • Seongsik Park, Seijoon Kim, Byunggook Na, Sungroh Yoon
Spiking neural networks (SNNs) have gained considerable interest due to their energy-efficient characteristics, yet the lack of a scalable training algorithm has restricted their applicability to practical machine learning problems.
no code implementations • 12 Mar 2019 • Seijoon Kim, Seongsik Park, Byunggook Na, Sungroh Yoon
Over the past decade, deep neural networks (DNNs) have demonstrated remarkable performance in a variety of applications.
no code implementations • 10 Sep 2018 • Seongsik Park, Seijoon Kim, Hyeokjun Choe, Sungroh Yoon
Spiking neural networks (SNNs) are considered one of the most promising artificial neural networks due to their energy-efficient computing capability.
no code implementations • 21 May 2018 • Seongsik Park, Jaehee Jang, Seijoon Kim, Sungroh Yoon
Memory-augmented neural networks (MANNs) are designed for question-answering tasks.
no code implementations • 10 Nov 2017 • Seongsik Park, Seijoon Kim, Seil Lee, Ho Bae, Sungroh Yoon
In this paper, we identify memory addressing (specifically, content-based addressing) as the main reason for the performance degradation and propose a robust quantization method for MANNs to address the challenge.
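Content-based addressing (as in NTM-style MANNs) scores each memory row by cosine similarity to a key and normalizes with a sharpened softmax; the sketch below shows why it is sensitive to quantization, since small perturbations of the key or memory can shift the similarity ranking. The uniform rounding shown is only an illustration, not the paper's proposed robust quantization method.

```python
import numpy as np

def content_address(memory, key, beta=10.0):
    """NTM-style content-based addressing: cosine similarity between
    the key and each memory row, sharpened by beta, softmax-normalized."""
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    e = np.exp(beta * sims - np.max(beta * sims))
    return e / e.sum()

def quantize(x, bits=2):
    """Uniform rounding quantization (illustration only)."""
    scale = (2 ** bits - 1) / 2.0
    return np.round(x * scale) / scale

rng = np.random.default_rng(1)
M, k = rng.normal(size=(4, 8)), rng.normal(size=8)
print(content_address(M, k))                      # full precision
print(content_address(quantize(M), quantize(k)))  # weights can shift
```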
no code implementations • 8 Nov 2016 • Seongsik Park, Sang-gil Lee, Hyunha Nam, Sungroh Yoon
To eliminate this workaround, a new class of SNN named deep spiking networks (DSNs) has recently been proposed, which can be trained directly (without a mapping from conventional deep networks) by error backpropagation with stochastic gradient descent.
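As a generic illustration of backpropagating directly through spiking activations (not the DSN formulation itself), a common trick is a hard threshold in the forward pass paired with a surrogate gradient in the backward pass; a minimal PyTorch version:

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass; a boxcar surrogate gradient
    in the backward pass so SGD can flow through the threshold.
    This is a generic direct-training trick, not DSN's exact scheme."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()  # surrogate derivative

v = torch.randn(5, requires_grad=True)
spikes = SpikeFn.apply(v)
spikes.sum().backward()
print(spikes, v.grad)
```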
no code implementations • 6 Oct 2016 • Hyeokjun Choe, Seil Lee, Hyunha Nam, Seongsik Park, Seijoon Kim, Eui-Young Chung, Sungroh Yoon
The second is the popularity of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing.