Search Results for author: Taehee Kim

Found 8 papers, 2 papers with code

Pretraining Vision-Language Model for Difference Visual Question Answering in Longitudinal Chest X-rays

no code implementations 14 Feb 2024 Yeongjae Cho, Taehee Kim, Heejun Shin, Sungzoon Cho, Dongmyung Shin

The model is developed step by step: it is first pretrained on natural images and texts, then trained on longitudinal chest X-ray data.

Tasks: Language Modelling, Question Answering, +1

Fast and accurate sparse-view CBCT reconstruction using meta-learned neural attenuation field and hash-encoding regularization

no code implementations 4 Dec 2023 Heejun Shin, Taehee Kim, Jongho Lee, Se Young Chun, Seungryung Cho, Dongmyung Shin

In the FACT method, we meta-trained a neural network and a hash encoder using a few scans (= 15), and used a new regularization technique to reconstruct the details of the anatomical structure.
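The hash-encoding idea can be sketched as a spatial hash lookup into a small learned feature table. The following is a generic illustration of that lookup, not the paper's implementation; the table size, prime constants, and feature dimension are arbitrary assumptions:

```python
import numpy as np

def hash_encode(coords, table, primes=(1, 2654435761, 805459861)):
    """Map integer 3D grid coordinates to feature vectors via a
    spatial hash (generic sketch, not the FACT implementation)."""
    coords = np.asarray(coords, dtype=np.int64)
    T = table.shape[0]
    # XOR the coordinate components multiplied by large primes, then mod T
    h = np.zeros(coords.shape[:-1], dtype=np.int64)
    for d in range(coords.shape[-1]):
        h ^= coords[..., d] * primes[d]
    return table[h % T]

rng = np.random.default_rng(0)
table = rng.normal(size=(2**14, 2))  # hash table: 16384 entries, 2 features each
feats = hash_encode([[3, 7, 1], [3, 7, 1], [5, 0, 2]], table)
# identical coordinates always hash to identical feature rows
```

In a trainable setting the table entries would be optimized alongside the network; here they are random, since the point is only the coordinate-to-feature mapping.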

Correlation-Driven Multi-Level Multimodal Learning for Anomaly Detection on Multiple Energy Sources

no code implementations 1 May 2023 Taehee Kim, Hyuk-Yoon Kwon

In this paper, we first propose a method for defining anomalies considering not only individual energy sources but also correlations between them.
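One generic way to flag anomalies at the correlation level, rather than per source, is to monitor the windowed correlation between two series and flag windows where it breaks down. This is only an illustrative sketch, not the authors' method; the window size and threshold below are arbitrary:

```python
import numpy as np

def correlation_anomalies(x, y, window=24, threshold=0.5):
    """Flag non-overlapping windows where the correlation between
    two energy series falls below `threshold` (illustrative only)."""
    flags = []
    for start in range(0, len(x) - window + 1, window):
        xs, ys = x[start:start + window], y[start:start + window]
        r = np.corrcoef(xs, ys)[0, 1]
        flags.append(bool(r < threshold))
    return flags

rng = np.random.default_rng(1)
base = rng.normal(size=96)
x = base + 0.1 * rng.normal(size=96)
y = base + 0.1 * rng.normal(size=96)  # normally tracks x closely
y[72:] = rng.normal(size=24)          # final window: correlation breaks
flags = correlation_anomalies(x, y)
```

Note that either series alone can look unremarkable in the final window; only the joint view reveals the break, which is the motivation for correlation-driven detection.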

Tasks: Anomaly Detection, Time Series Anomaly Detection

Reweighting Strategy based on Synthetic Data Identification for Sentence Similarity

1 code implementation COLING 2022 Taehee Kim, ChaeHun Park, Jimin Hong, Radhika Dua, Edward Choi, Jaegul Choo

To analyze this, we first train a classifier that identifies machine-written sentences, and observe that the linguistic features of sentences the classifier flags as machine-written differ significantly from those of human-written sentences.
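The classifier idea can be sketched with simple hand-crafted linguistic features and a from-scratch logistic regression. The features and the tiny labeled corpus below are made-up stand-ins for the richer setup a real study would use, not the paper's actual classifier:

```python
import numpy as np

def features(sentence):
    """Toy linguistic features: average word length and type-token ratio
    (illustrative stand-ins, plus a constant bias term)."""
    words = sentence.lower().split()
    avg_len = sum(len(w) for w in words) / len(words)
    ttr = len(set(words)) / len(words)  # vocabulary diversity
    return np.array([avg_len, ttr, 1.0])

def train_logreg(X, y, lr=0.1, steps=20000):
    """Plain gradient-descent logistic regression, no external libs."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Hypothetical examples; label 1 = "machine-written"
human = ["the cat sat on the mat", "she quickly ran to the old store"]
machine = ["generation generation of the the output",
           "model model produces tokens tokens"]
X = np.stack([features(s) for s in human + machine])
y = np.array([0, 0, 1, 1])
w = train_logreg(X, y)
pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

Inspecting the learned weights then shows which features drive the machine-vs-human decision, mirroring the analysis described in the abstract.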

Tasks: Sentence, Sentence Embedding, +2

AVocaDo: Strategy for Adapting Vocabulary to Downstream Domain

1 code implementation EMNLP 2021 Jimin Hong, Taehee Kim, Hyesu Lim, Jaegul Choo

During the fine-tuning phase of transfer learning, the pretrained vocabulary remains unchanged, while model parameters are updated.
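A common way to realize this kind of vocabulary adaptation is to add domain-specific tokens to the vocabulary and grow the embedding matrix to match, initializing the new rows from the mean of the existing embeddings. This is a generic numpy sketch of that mechanism, not necessarily AVocaDo's exact procedure:

```python
import numpy as np

def extend_vocab(vocab, embeddings, new_tokens):
    """Add domain tokens to a vocabulary and grow the embedding matrix.
    New rows use the mean of existing embeddings, one common heuristic
    (not necessarily what AVocaDo does)."""
    vocab = dict(vocab)
    added = [t for t in new_tokens if t not in vocab]
    for t in added:
        vocab[t] = len(vocab)
    mean_vec = embeddings.mean(axis=0, keepdims=True)
    new_rows = np.repeat(mean_vec, len(added), axis=0)
    return vocab, np.vstack([embeddings, new_rows])

rng = np.random.default_rng(0)
vocab = {"the": 0, "cell": 1, "##s": 2}   # toy subword vocabulary
emb = rng.normal(size=(3, 4))             # toy embedding matrix
vocab2, emb2 = extend_vocab(vocab, emb, ["cytokine", "the"])
# "the" is already known, so only "cytokine" gets a new id and row
```

After extension, the enlarged embedding matrix would be fine-tuned on the downstream domain along with the rest of the model.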

Tasks: Language Modelling, Transfer Learning

Unsupervised Neural Machine Translation for Low-Resource Domains via Meta-Learning

no code implementations ACL 2021 Cheonbok Park, Yunwon Tae, Taehee Kim, Soyoung Yang, Mohammad Azam Khan, Eunjeong Park, Jaegul Choo

To address this issue, this paper presents a novel meta-learning algorithm for unsupervised neural machine translation (UNMT) that trains the model to adapt to another domain by utilizing only a small amount of training data.
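The adapt-with-little-data objective can be illustrated with a Reptile-style meta-update on toy losses: the meta-parameters are repeatedly nudged toward parameters fine-tuned on each domain, so that a few gradient steps suffice to adapt to a new one. This is a generic meta-learning sketch in plain numpy, not the paper's UNMT training loop:

```python
import numpy as np

def sgd_adapt(theta, grad_fn, steps=10, lr=0.1):
    """Inner loop: a few gradient steps on one domain's loss."""
    theta = theta.copy()
    for _ in range(steps):
        theta -= lr * grad_fn(theta)
    return theta

def reptile(theta, domain_grads, meta_lr=0.5, outer_steps=100):
    """Reptile-style outer loop: move theta toward each domain's
    adapted parameters (generic illustration)."""
    for _ in range(outer_steps):
        for grad_fn in domain_grads:
            adapted = sgd_adapt(theta, grad_fn)
            theta = theta + meta_lr * (adapted - theta)
    return theta

# Toy "domains": quadratic losses 0.5 * ||theta - t||^2 with different optima
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
grads = [lambda th, t=t: th - t for t in targets]
theta = reptile(np.zeros(2), grads)
# theta settles between the two domain optima, so inner-loop SGD
# reaches either optimum quickly
```

The design point mirrored here is that the meta-solution is deliberately not optimal for any single domain; it is positioned so that a handful of inner steps closes most of the gap to whichever domain is encountered.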

Tasks: General Knowledge, Meta-Learning, +3
