no code implementations • 10 May 2024 • Hunter McNichols, Jaewook Lee, Stephen Fancsali, Steve Ritter, Andrew Lan
We fine-tune both open-source and proprietary LLMs on real student responses and corresponding ITS-provided feedback.
no code implementations • 1 May 2024 • Jaewook Lee, Digory Smith, Simon Woodhead, Andrew Lan
We conduct a pilot study involving math educators to investigate how the tool can help them simplify the process of crafting high-quality math MCQs.
no code implementations • 1 May 2024 • Hasnain Heickal, Andrew Lan
These methods ask the LLM to generate feedback given the problem statement and a student's (buggy) submission.
no code implementations • 19 Apr 2024 • Alexander Scarlatos, Wanyong Feng, Digory Smith, Simon Woodhead, Andrew Lan
Multiple-choice questions (MCQs) are commonly used across all levels of math education since they can be deployed and graded at a large scale.
1 code implementation • 2 Apr 2024 • Wanyong Feng, Jaewook Lee, Hunter McNichols, Alexander Scarlatos, Digory Smith, Simon Woodhead, Nancy Otero Ornelas, Andrew Lan
Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer and grade, and are a reliable format for assessment and practice.
no code implementations • 3 Mar 2024 • Nigel Fernandez, Alexander Scarlatos, Andrew Lan
Automated teaching assistants and chatbots have significant potential to reduce the workload of human instructors, especially for logistics-related question answering, which is important to students yet repetitive for instructors.
1 code implementation • 2 Mar 2024 • Alexander Scarlatos, Digory Smith, Simon Woodhead, Andrew Lan
Second, we propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
1 code implementation • 1 Mar 2024 • Nischal Ashok Kumar, Andrew Lan
The Socratic method is a way of guiding students toward solving a problem independently without directly revealing the solution to the problem.
1 code implementation • 11 Feb 2024 • Nischal Ashok Kumar, Andrew Lan
The goal of our work is to propose a fully automated approach for test case generation that can accurately measure student knowledge, which is important for two reasons.
no code implementations • 7 Aug 2023 • Hunter McNichols, Wanyong Feng, Jaewook Lee, Alexander Scarlatos, Digory Smith, Simon Woodhead, Andrew Lan
Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer and grade, and are a reliable form of assessment.
1 code implementation • 15 Jun 2023 • Nischal Ashok Kumar, Nigel Fernandez, Zichao Wang, Andrew Lan
Reading comprehension is a crucial skill in many aspects of education, including language learning, cognitive development, and fostering early literacy skills in children.
no code implementations • 1 Jun 2023 • Mengxue Zhang, Neil Heffernan, Andrew Lan
In this paper, we investigate a collection of models that account for the individual preferences and tendencies of each human scorer in the automated scoring task.
no code implementations • 1 Jun 2023 • Mengxue Zhang, Zichao Wang, Zhichao Yang, Weiqi Feng, Andrew Lan
We propose a step-by-step planning approach for intermediate solution generation, which strategically plans the generation of the next solution step based on the MWP and the previous solution steps.
2 code implementations • 23 May 2023 • Alexander Scarlatos, Andrew Lan
Recent developments in large pre-trained language models have enabled unprecedented performance on a variety of downstream tasks.
no code implementations • 11 May 2023 • Jaewook Lee, Andrew Lan
Our approach, an end-to-end pipeline for auto-generating verbal and visual cues, can automatically generate highly memorable cues.
1 code implementation • 11 May 2023 • Nischal Ashok Kumar, Wanyong Feng, Jaewook Lee, Hunter McNichols, Aritra Ghosh, Andrew Lan
In this paper, we take a preliminary step towards solving the problem of causal discovery in knowledge tracing, i.e., finding the underlying causal relationship among different skills from real-world student response data.
1 code implementation • 8 May 2023 • Hunter McNichols, Mengxue Zhang, Andrew Lan
Existing data-driven methods avoid these limitations but specifically require mathematical expressions in student responses to be parsed into syntax trees.
1 code implementation • 15 Feb 2023 • Alexander Scarlatos, Andrew Lan
In this paper, we propose a series of modifications to existing language models to jointly represent and generate text and math: representing mathematical expressions as sequences of node tokens in their operator tree format, using math symbol and tree position embeddings to preserve the semantic and structural properties of mathematical expressions, and using a constrained decoding method to generate mathematically valid expressions.
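The operator-tree representation described above can be sketched in a few lines. This is an illustrative example, not the authors' actual tokenizer: a math expression's operator tree is serialized into a pre-order sequence of node tokens, one token per node.

```python
# Illustrative sketch (not the paper's implementation): serialize a math
# expression's operator tree into a pre-order sequence of node tokens.

def tree_to_tokens(node):
    """Pre-order traversal: each tree node becomes one token."""
    if isinstance(node, tuple):          # internal node: (operator, left, right)
        op, left, right = node
        return [op] + tree_to_tokens(left) + tree_to_tokens(right)
    return [str(node)]                   # leaf: a number or a variable

# (x + 3) * y as an operator tree
expr = ("*", ("+", "x", 3), "y")
print(tree_to_tokens(expr))  # ['*', '+', 'x', '3', 'y']
```

In the paper's full setup, each such node token would additionally carry math-symbol and tree-position embeddings so the model retains the expression's structure.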
no code implementations • 5 Dec 2022 • Yun-Wei Chu, Seyyedali Hosseinalipour, Elizabeth Tenorio, Laura Cruz, Kerrie Douglas, Andrew Lan, Christopher Brinton
Traditional learning-based approaches to student modeling (e.g., predicting grades based on measured activities) generalize poorly to underrepresented/minority student groups due to biases in data availability.
no code implementations • 2 Aug 2022 • Yun-Wei Chu, Seyyedali Hosseinalipour, Elizabeth Tenorio, Laura Cruz, Kerrie Douglas, Andrew Lan, Christopher Brinton
To learn better representations of student activity, we augment our approach with a self-supervised behavioral pretraining methodology that leverages multiple modalities of student behavior (e.g., visits to lecture videos and participation in forums), and include a neural network attention mechanism in the model aggregation stage.
1 code implementation • 30 May 2022 • Mengxue Zhang, Sami Baral, Neil Heffernan, Andrew Lan
In this paper, we study the problem of automatic short answer grading for students' responses to math questions and propose a novel framework for this task.
1 code implementation • 19 May 2022 • Nigel Fernandez, Aritra Ghosh, Naiming Liu, Zichao Wang, Benoît Choffin, Richard Baraniuk, Andrew Lan
Our approach, in-context BERT fine-tuning, produces a single shared scoring model for all items with a carefully-designed input structure to provide contextual information on each item.
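The kind of input structure described above can be sketched as follows. This is a hypothetical illustration of the general idea, not the paper's exact format: the shared scorer sees the item's context alongside the student response in a single sequence, so one model can score any item.

```python
# Hypothetical sketch of an in-context scoring input (the paper's exact
# input structure may differ): item context and the student response are
# packed into one sequence for a single shared scoring model.

def build_scoring_input(question, reference_answer, student_response,
                        sep="[SEP]"):
    # The item context (question + reference answer) supplies the
    # per-item information a shared model needs to score any item.
    return f"{question} {sep} {reference_answer} {sep} {student_response}"

text = build_scoring_input(
    "What is 2 + 2?",       # hypothetical item
    "4",
    "The answer is 4.",
)
print(text)
```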
1 code implementation • 28 Apr 2022 • Alexander Scarlatos, Christopher Brinton, Andrew Lan
One can use process data for many downstream tasks such as learning outcome prediction and automatically delivering personalized intervention.
1 code implementation • 21 Feb 2022 • Naiming Liu, Zichao Wang, Richard G. Baraniuk, Andrew Lan
In education applications, knowledge tracing refers to the problem of estimating students' time-varying concept/skill mastery level from their past responses to questions and predicting their future performance.
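One simple, classical instance of the knowledge-tracing problem described above is Bayesian Knowledge Tracing (BKT), sketched below for context. The parameter values are illustrative, not taken from the paper.

```python
# Hedged sketch of classical Bayesian Knowledge Tracing (BKT): estimate a
# student's mastery probability from a sequence of correct/incorrect
# responses. Guess/slip/learn parameters below are made up for illustration.

def bkt_update(p_mastery, correct, guess=0.2, slip=0.1, learn=0.3):
    """One BKT step: Bayes update on the observed response, then learning."""
    if correct:
        post = p_mastery * (1 - slip) / (
            p_mastery * (1 - slip) + (1 - p_mastery) * guess)
    else:
        post = p_mastery * slip / (
            p_mastery * slip + (1 - p_mastery) * (1 - guess))
    return post + (1 - post) * learn   # chance of learning after practice

p = 0.4                                 # initial mastery estimate
for response in [True, True, False, True]:   # a toy response sequence
    p = bkt_update(p, response)
print(round(p, 3))                      # updated mastery estimate
```

Modern knowledge-tracing models replace this per-skill two-state model with learned neural representations, but the estimation problem is the same.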
no code implementations • 8 Dec 2021 • Aritra Ghosh, Saayan Mitra, Andrew Lan
In sequential recommender system applications, it is important to develop models that can capture users' evolving interest over time to successfully recommend future items that they are likely to interact with.
2 code implementations • 17 Aug 2021 • Aritra Ghosh, Andrew Lan
Computerized adaptive testing (CAT) refers to a form of testing that is personalized to each student/test taker.
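The core CAT loop can be sketched with a standard item-selection rule. This is an illustrative example under a 1PL (Rasch) IRT model, not the paper's method: pick the item with the highest Fisher information at the current ability estimate. The item bank and ability value are made up.

```python
# Illustrative CAT item selection under a Rasch (1PL) IRT model: choose the
# item whose Fisher information is highest at the current ability estimate.
# Item difficulties and the ability value below are hypothetical.
import math

def p_correct(theta, difficulty):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def fisher_information(theta, difficulty):
    p = p_correct(theta, difficulty)
    return p * (1 - p)   # maximized when difficulty is close to theta

def select_item(theta, difficulties):
    return max(range(len(difficulties)),
               key=lambda i: fisher_information(theta, difficulties[i]))

difficulties = [-2.0, -0.5, 0.4, 1.5]    # hypothetical item bank
print(select_item(0.5, difficulties))    # picks the item nearest in difficulty
```

After each response, the ability estimate is updated and the next item is selected the same way, which is what makes the test adaptive.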
no code implementations • 25 Apr 2021 • Mengxue Zhang, Zichao Wang, Richard Baraniuk, Andrew Lan
Feedback on student answers and even during intermediate steps in their solutions to open-ended questions is an important element in math education.
2 code implementations • 19 Apr 2021 • Aritra Ghosh, Jay Raspat, Andrew Lan
Knowledge tracing refers to a family of methods that estimate each student's knowledge component/skill mastery level from their past responses to questions.
1 code implementation • 19 Apr 2021 • Aritra Ghosh, Andrew Lan
One common class of methods that mitigates the impact of label noise can be viewed as supervised robust methods: one can simply replace the CCE loss with a loss that is robust to label noise, or re-weight training samples, down-weighting those with higher loss values.
Ranked #28 on Image Classification on Clothing1M
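The loss-swap idea above can be made concrete with a small numerical sketch. This is an illustration of the general principle, not the paper's method: mean absolute error (MAE) between the one-hot label and the predicted distribution is bounded, so a confidently mislabeled sample cannot dominate training the way it can under categorical cross-entropy (CCE). The probability values are made up.

```python
# Illustrative comparison of CCE vs a bounded robust loss (MAE) on a
# confidently mislabeled sample; probability values are hypothetical.
import math

def cce(probs, label):
    """Categorical cross-entropy: unbounded as p_label -> 0."""
    return -math.log(probs[label])

def mae(probs, label):
    """L1 distance between the one-hot label and the prediction; bounded by 2."""
    return sum(abs((1.0 if i == label else 0.0) - p)
               for i, p in enumerate(probs))

confident_wrong = [0.98, 0.01, 0.01]   # model sure of class 0, label says 2
print(cce(confident_wrong, 2))         # large: this sample dominates under CCE
print(mae(confident_wrong, 2))         # at most 2: noisy labels hurt less
```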
2 code implementations • 19 Apr 2021 • Aritra Ghosh, Andrew Lan
Consequently, several recently proposed methods, such as Meta-Weight-Net (MW-Net), use a small number of unbiased, clean samples to learn a weighting function that downweights samples that are likely to have corrupted labels under the meta-learning framework.
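The reweighting idea above can be sketched with a toy stand-in for the learned weighting function. This is not the actual MW-Net (which meta-learns a small network on clean validation data); it is just a fixed monotone function showing the mechanism: samples with higher loss, which are more likely to carry corrupted labels, contribute less to the training objective.

```python
# Toy stand-in for a learned sample-weighting function (not the actual
# MW-Net): weights decrease monotonically with the per-sample loss, so
# likely-mislabeled (high-loss) samples are down-weighted.
import math

def weight_fn(loss, temperature=1.0):
    # In MW-Net this mapping is a small network meta-learned on a handful
    # of clean samples; here it is a fixed decreasing function.
    return math.exp(-loss / temperature)

losses = [0.1, 0.5, 4.0]                 # last sample looks label-corrupted
weights = [weight_fn(l) for l in losses]
objective = sum(w * l for w, l in zip(weights, losses))  # reweighted loss
print(weights)
```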
1 code implementation • IEEE International Conference on Data Mining Workshops (ICDM Workshops) 2021 • Shalini Pandey, Andrew Lan, George Karypis, Jaideep Srivastava
The projection operation learns to estimate the future embeddings of students and threads.
no code implementations • 19 Jan 2021 • Setareh Maghsudi, Andrew Lan, Jie Xu, Mihaela van der Schaar
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to ultimately meet her desired goal.
no code implementations • 10 Jan 2021 • Shalini Pandey, Andrew Lan, George Karypis, Jaideep Srivastava
The projection operation learns to estimate the future embeddings of students and threads.
no code implementations • 27 May 2020 • Zichao Wang, Yi Gu, Andrew Lan, Richard Baraniuk
We propose VarFA, a variational inference factor analysis framework that extends existing factor analysis models for educational data mining to efficiently output uncertainty estimation in the model's estimated factors.