Search Results for author: Jaejin Cho

Found 14 papers, 7 papers with code

NLCG-Net: A Model-Based Zero-Shot Learning Framework for Undersampled Quantitative MRI Reconstruction

1 code implementation • 22 Jan 2024 • Xinrui Jiang, Yohan Jun, Jaejin Cho, Mengze Gao, Xingwang Yong, Berkin Bilgic

Typical quantitative MRI (qMRI) methods estimate parameter maps after image reconstruction, which is prone to biases and error propagation.

MRI Reconstruction • Zero-Shot Learning

Improved Multi-Shot Diffusion-Weighted MRI with Zero-Shot Self-Supervised Learning Reconstruction

1 code implementation • 9 Aug 2023 • Jaejin Cho, Yohan Jun, Xiaoqing Wang, Caique Kobayashi, Berkin Bilgic

In this study, we introduce a novel msEPI reconstruction approach called zero-MIRID (zero-shot self-supervised learning of Multi-shot Image Reconstruction for Improved Diffusion MRI).

Image Reconstruction • Self-Supervised Learning

Zero-DeepSub: Zero-Shot Deep Subspace Reconstruction for Rapid Multiparametric Quantitative MRI Using 3D-QALAS

1 code implementation • 4 Jul 2023 • Yohan Jun, Yamin Arefeen, Jaejin Cho, Shohei Fujita, Xiaoqing Wang, P. Ellen Grant, Borjan Gagoski, Camilo Jaimes, Michael S. Gee, Berkin Bilgic

Using an ISMRM/NIST system phantom, the accuracy and reproducibility of the T1 and T2 maps estimated using the proposed methods were evaluated by comparing them with reference techniques.

SSL-QALAS: Self-Supervised Learning for Rapid Multiparameter Estimation in Quantitative MRI Using 3D-QALAS

no code implementations • 28 Feb 2023 • Yohan Jun, Jaejin Cho, Xiaoqing Wang, Michael Gee, P. Ellen Grant, Berkin Bilgic, Borjan Gagoski

Conclusion: The proposed SSL-QALAS method enabled rapid reconstruction of multiparametric maps from 3D-QALAS measurements without an external dictionary or labeled ground-truth training data.

Self-Supervised Learning • Transfer Learning
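The SSL-QALAS snippet above describes fitting multiparametric maps without labeled ground truth: the only supervision is the measured signal itself, re-predicted through a physics model. A minimal sketch of that idea, using a toy mono-exponential T2 decay model and plain gradient descent (the actual method uses the 3D-QALAS signal model and a neural network; the signal model, learning rate, and echo times here are illustrative assumptions):

```python
import numpy as np

def simulate(t2, tes):
    """Toy signal model: mono-exponential decay S(TE) = exp(-TE / T2)."""
    return np.exp(-tes / t2)

def self_supervised_fit(measured, tes, t2_init=50.0, lr=500.0, steps=2000):
    """Fit T2 by minimizing || simulate(T2) - measured ||^2.

    No ground-truth T2 is needed: the 'label' is the measured signal,
    reproduced through the physics model.
    """
    t2 = t2_init
    for _ in range(steps):
        pred = simulate(t2, tes)
        resid = pred - measured
        # analytic gradient of the squared-error loss w.r.t. T2
        grad = np.sum(2.0 * resid * pred * tes / t2**2)
        t2 -= lr * grad
    return t2

tes = np.array([10.0, 30.0, 60.0, 100.0])   # echo times in ms (illustrative)
true_t2 = 80.0
measured = simulate(true_t2, tes)            # "acquired" data
est = self_supervised_fit(measured, tes)
```

The same pattern scales to voxel-wise maps: a network predicts parameter maps, the forward model re-simulates the acquisition, and the loss compares simulated to acquired signals.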

3D-EPI Blip-Up/Down Acquisition (BUDA) with CAIPI and Joint Hankel Structured Low-Rank Reconstruction for Rapid Distortion-Free High-Resolution T2* Mapping

no code implementations • 1 Dec 2022 • Zhifeng Chen, Congyu Liao, Xiaozhi Cao, Benedikt A. Poser, Zhongbiao Xu, Wei-Ching Lo, Manyi Wen, Jaejin Cho, Qiyuan Tian, Yaohui Wang, Yanqiu Feng, Ling Xia, Wufan Chen, Feng Liu, Berkin Bilgic

Purpose: This work aims to develop a novel distortion-free 3D-EPI acquisition and image reconstruction technique for fast and robust, high-resolution, whole-brain imaging as well as quantitative T2* mapping.

Image Reconstruction

Wave-Encoded Model-based Deep Learning for Highly Accelerated Imaging with Joint Reconstruction

1 code implementation • 6 Feb 2022 • Jaejin Cho, Borjan Gagoski, Taehyung Kim, Qiyuan Tian, Stephen Robert Frost, Itthi Chatnuntawech, Berkin Bilgic

Purpose: To propose a wave-encoded model-based deep learning (wave-MoDL) strategy for highly accelerated 3D imaging and joint multi-contrast image reconstruction, and further extend this to enable rapid quantitative imaging using an interleaved look-locker acquisition sequence with T2 preparation pulse (3D-QALAS).

Image Reconstruction
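Wave-MoDL builds on MoDL-style model-based deep learning, which unrolls an iteration alternating a learned denoiser with a data-consistency step against the forward model. A minimal 1-D sketch, with a moving-average filter standing in for the trained CNN denoiser and a plain sampling-mask forward model in place of wave encoding (both substitutions, along with all sizes and the regularization weight, are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0                      # piecewise-constant "image"
mask = rng.random(n) < 0.5               # random k-space undersampling
mask[0] = True                           # always keep the DC coefficient
y = mask * np.fft.fft(x_true)            # measured (undersampled) k-space

def denoise(x):
    """Stand-in for the trained CNN denoiser: a simple moving average."""
    k = np.ones(5) / 5.0
    return np.convolve(x, k, mode="same")

def data_consistency(z, y, mask, lam):
    """Closed-form minimizer of ||mask*Fx - y||^2 + lam*||x - z||^2.

    Diagonal in k-space for a sampling-mask forward model; wave-MoDL
    would use the wave-encoded operator here instead.
    """
    Z = np.fft.fft(z)
    X = (y + lam * Z) / (mask + lam)
    return np.real(np.fft.ifft(X))

x = np.real(np.fft.ifft(y))              # zero-filled initialization
for _ in range(10):                       # unrolled iterations
    z = denoise(x)
    x = data_consistency(z, y, mask, lam=0.1)
```

In the trained version the denoiser weights are shared across the unrolled iterations and learned end-to-end through the data-consistency steps.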

Highly Accelerated EPI with Wave Encoding and Multi-shot Simultaneous Multi-Slice Imaging

1 code implementation • 3 Jun 2021 • Jaejin Cho, Congyu Liao, Qiyuan Tian, Zijing Zhang, Jinmin Xu, Wei-Ching Lo, Benedikt A. Poser, V. Andrew Stenger, Jason Stockmann, Kawin Setsompop, Berkin Bilgic

We introduce wave encoded acquisition and reconstruction techniques for highly accelerated echo planar imaging (EPI) with reduced g-factor penalty and image artifacts.

Scan Specific Artifact Reduction in K-space (SPARK) Neural Networks Synergize with Physics-based Reconstruction to Accelerate MRI

no code implementations • 2 Apr 2021 • Yamin Arefeen, Onur Beker, Jaejin Cho, Heng Yu, Elfar Adalsteinsson, Berkin Bilgic

Conclusion: SPARK synergizes with physics-based acquisition and reconstruction techniques to improve accelerated MRI by training scan-specific models to estimate and correct reconstruction errors in k-space.
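The SPARK conclusion above describes the core mechanic: train a scan-specific model on the fully sampled calibration (ACS) region, where the reconstruction error is known, then use it to predict and add back the k-space error everywhere. A toy 1-D sketch in which the "model" is just a least-squares complex scale factor fitted on the ACS error (SPARK uses a small CNN, and the simulated systematic error here is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 128
k_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # "true" k-space
k_recon = 0.8 * k_true          # initial physics-based recon with a toy systematic error

# ACS region: fully sampled center where the true k-space is known,
# so the reconstruction error can be measured directly.
acs = slice(48, 80)
error_acs = k_true[acs] - k_recon[acs]

# Scan-specific "training": least-squares fit of a complex scale factor
# mapping the initial recon to its own error (a CNN in actual SPARK).
w = np.vdot(k_recon[acs], error_acs) / np.vdot(k_recon[acs], k_recon[acs])

# Apply the learned correction across all of k-space.
k_corrected = k_recon + w * k_recon
```

Because the correction is fitted per scan, no external training data is needed; it adapts to whatever residual error the upstream physics-based reconstruction leaves behind.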

Learning Speaker Embedding from Text-to-Speech

1 code implementation • 21 Oct 2020 • Jaejin Cho, Piotr Zelasko, Jesus Villalba, Shinji Watanabe, Najim Dehak

TTS with speaker classification loss improved EER by 0.28% and 0.73% absolute over a model using only speaker classification loss on LibriTTS and VoxCeleb1, respectively.

Classification • General Classification • +2

Transfer learning of language-independent end-to-end ASR with language model fusion

no code implementations • 6 Nov 2018 • Hirofumi Inaguma, Jaejin Cho, Murali Karthick Baskar, Tatsuya Kawahara, Shinji Watanabe

This work explores better adaptation methods to low-resource languages using an external language model (LM) under the framework of transfer learning.

Language Modelling • Transfer Learning
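The snippet above describes adapting end-to-end ASR with an external language model. A standard way to combine the two at decode time is shallow fusion: score each candidate token by the ASR log-probability plus a weighted LM log-probability. A minimal sketch with made-up toy distributions (the fusion weight and vocabularies are illustrative assumptions, not the paper's exact setup):

```python
import math

def shallow_fusion(asr_probs, lm_probs, lam=0.3):
    """Pick the token maximizing log P_asr(t) + lam * log P_lm(t)."""
    return max(asr_probs,
               key=lambda t: math.log(asr_probs[t]) + lam * math.log(lm_probs[t]))

# Toy next-token distributions: the acoustic model slightly prefers the
# homophone "two", while the external LM strongly prefers "too".
asr_probs = {"two": 0.40, "too": 0.35, "to": 0.25}
lm_probs  = {"two": 0.05, "too": 0.80, "to": 0.15}

best = shallow_fusion(asr_probs, lm_probs)   # the LM flips the decision to "too"
```

In beam-search decoding the same fused score is accumulated per hypothesis at every step rather than applied to a single token.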

Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling

no code implementations • 4 Oct 2018 • Jaejin Cho, Murali Karthick Baskar, Ruizhi Li, Matthew Wiesner, Sri Harish Mallidi, Nelson Yalta, Martin Karafiat, Shinji Watanabe, Takaaki Hori

In this work, we attempt to use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages using a transfer learning approach.

Language Modelling • Sequence-To-Sequence Speech Recognition • +2

End-to-end Speech Recognition with Word-based RNN Language Models

no code implementations • 8 Aug 2018 • Takaaki Hori, Jaejin Cho, Shinji Watanabe

This paper investigates the impact of word-based RNN language models (RNN-LMs) on the performance of end-to-end automatic speech recognition (ASR).

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +1
