no code implementations • 27 Apr 2020 • Youngnam Lee, Byung-soo Kim, Dongmin Shin, JungHoon Kim, Jineon Baek, Jinhwan Lee, Youngduck Choi
To that end, we apply a state-of-the-art deep attentive neural network-based score prediction model to Santa, a multi-platform English ITS with approximately 780K users in South Korea that focuses exclusively on the TOEIC (Test of English for International Communication) standardized examination.
no code implementations • 14 Feb 2020 • Youngnam Lee, Dongmin Shin, HyunBin Loh, Jaemin Lee, Piljae Chae, Junghyun Cho, Seoyon Park, Jinhwan Lee, Jineon Baek, Byung-soo Kim, Youngduck Choi
First, we define the concepts of a study session, study session dropout, and the study session dropout prediction task in a mobile learning environment.
5 code implementations • 14 Feb 2020 • Youngduck Choi, Youngnam Lee, Junghyun Cho, Jineon Baek, Byung-soo Kim, Yeongmin Cha, Dongmin Shin, Chan Bae, Jaewe Heo
To the best of our knowledge, this is the first work to suggest an encoder-decoder model for knowledge tracing that applies deep self-attentive layers to exercises and responses separately.
Ranked #2 on Knowledge Tracing on EdNet
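The entry above describes an encoder-decoder design in which self-attention is applied to exercises and responses separately. A minimal sketch of that separation, using plain numpy (this is not the authors' implementation; the embeddings, dimensions, and the single-layer structure are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, causal=False):
    """Scaled dot-product attention; optional causal mask for the decoder."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    if causal:
        # Mask future positions so a response can only attend to earlier ones.
        scores = np.where(np.tril(np.ones_like(scores)) == 1, scores, -1e9)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8

# Hypothetical embeddings: one sequence of exercises, one of past responses.
exercise_emb = rng.normal(size=(seq_len, d_model))
response_emb = rng.normal(size=(seq_len, d_model))

# Encoder: self-attention over the exercise sequence only.
enc_out = attention(exercise_emb, exercise_emb, exercise_emb)

# Decoder: causal self-attention over responses, then attention to the
# encoder output (the exercise representations).
dec_self = attention(response_emb, response_emb, response_emb, causal=True)
dec_out = attention(dec_self, enc_out, enc_out)

# A prediction head would map each decoder position to a correctness
# probability for the corresponding exercise.
print(dec_out.shape)
```

The point of the sketch is only the routing: exercises pass through their own self-attentive encoder, responses through a causally masked decoder that then attends to the encoded exercises.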
no code implementations • 1 Jan 2020 • Youngduck Choi, Youngnam Lee, Junghyun Cho, Jineon Baek, Dongmin Shin, Hangyeol Yu, Yugeun Shim, Seewoo Lee, JongHun Shin, Chan Bae, Byungsoo Kim, Jaewe Heo
However, such methods fail to utilize the full range of student interaction data available and do not model student learning behavior.
1 code implementation • 6 Dec 2019 • Youngduck Choi, Youngnam Lee, Dongmin Shin, Junghyun Cho, Seoyon Park, Seewoo Lee, Jineon Baek, Chan Bae, Byung-soo Kim, Jaewe Heo
With advances in Artificial Intelligence in Education (AIEd) and the ever-growing scale of Interactive Educational Systems (IESs), data-driven approaches have become a common recipe for various tasks such as knowledge tracing and learning path recommendation.
2 code implementations • 26 Jun 2019 • Youngnam Lee, Youngduck Choi, Junghyun Cho, Alexander R. Fabbri, HyunBin Loh, Chanyou Hwang, Yongku Lee, Sang-Wook Kim, Dragomir Radev
Our model outperforms existing approaches on several metrics for predicting user response correctness, notably outperforming other methods on new users without large question-response histories.