no code implementations • 9 Feb 2024 • Ehsan Latif, Gyeong-Geon Lee, Knut Neuman, Tamara Kastorff, Xiaoming Zhai
The advancement of natural language processing has paved the way for automated scoring systems in various languages, such as German (e.g., German BERT [G-BERT]).
no code implementations • 27 Dec 2023 • Gyeong-Geon Lee, Ehsan Latif, Lehong Shi, Xiaoming Zhai
This study compared the classification performance of Gemini Pro and GPT-4V in educational settings.
no code implementations • 20 Dec 2023 • Gyeong-Geon Lee, Seonyeong Mun, Myeong-Kyeong Shin, Xiaoming Zhai
This research aims to demonstrate that AI can function not only as a tool for learning, but also as an intelligent agent with which humans can engage in collaborative learning (CL) to change epistemic practices in science classrooms.
no code implementations • 10 Dec 2023 • Gyeong-Geon Lee, Lehong Shi, Ehsan Latif, Yizhu Gao, Arne Bewersdorff, Matthew Nyaaba, Shuchen Guo, Zihao Wu, Zhengliang Liu, Hui Wang, Gengchen Mai, Tiaming Liu, Xiaoming Zhai
This paper presents a comprehensive examination of how multimodal artificial intelligence (AI) approaches are paving the way towards the realization of Artificial General Intelligence (AGI) in educational contexts.
no code implementations • 30 Nov 2023 • Gyeong-Geon Lee, Ehsan Latif, Xuansheng Wu, Ninghao Liu, Xiaoming Zhai
We found a more balanced accuracy across different proficiency categories when CoT was used with a scoring rubric, highlighting the importance of domain-specific reasoning in enhancing the effectiveness of LLMs in scoring tasks.
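The abstract above pairs chain-of-thought (CoT) prompting with a scoring rubric. As a rough illustration only (the rubric text, prompt wording, and output format below are invented for this sketch, not the paper's actual prompts), a rubric-conditioned CoT prompt might be assembled like this:

```python
# Hypothetical sketch of a rubric-conditioned chain-of-thought scoring prompt.
# The rubric levels and instructions are invented for illustration; the
# paper's actual prompt design may differ.
RUBRIC = (
    "Level 3: Response names the force AND explains the energy transfer.\n"
    "Level 2: Response names the force but not the energy transfer.\n"
    "Level 1: Response does neither."
)

def build_cot_prompt(student_response: str) -> str:
    """Assemble a prompt that asks the LLM to reason step by step
    against each rubric level before emitting a final score."""
    return (
        "You are scoring a student's science response.\n"
        f"Scoring rubric:\n{RUBRIC}\n\n"
        f"Student response: {student_response}\n\n"
        "Think step by step: compare the response against each rubric "
        "level, explain your reasoning, then output the final score on "
        "the last line as 'Score: <level>'."
    )
```

The key idea is that the rubric gives the model domain-specific criteria to reason over, rather than asking for a bare score.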
no code implementations • 21 Nov 2023 • Gyeong-Geon Lee, Xiaoming Zhai
The results of this study show that utilizing GPT-4V for automatic scoring of student-drawn models is promising.
no code implementations • 25 Oct 2023 • Luyang Fang, Gyeong-Geon Lee, Xiaoming Zhai
The average maximum increase observed across two items is 3.5% for accuracy, 30.6% for precision, 21.1% for recall, and 24.2% for F1 score.
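For readers unfamiliar with these four metrics, here is a minimal sketch of how they are computed for a binary scoring task. The labels below are made-up examples, not data from the paper:

```python
# Illustrative computation of accuracy, precision, recall, and F1 for
# binary (correct/incorrect) automatic scoring. Example labels are
# invented; they are not from the study.

def scoring_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = scoring_metrics([1, 0, 1, 1, 0, 1],
                                     [1, 0, 0, 1, 1, 1])
```

Precision and recall can move much more than accuracy when the score distribution is imbalanced, which is consistent with the larger gains reported for those metrics.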
no code implementations • 21 Aug 2023 • Chen Cao, Zijian Ding, Gyeong-Geon Lee, Jiajun Jiao, Jionghao Lin, Xiaoming Zhai
Our study demonstrates the potential of applying large language models to educational practice on STEM subjects.