1 code implementation • 18 May 2023 • Wanyong Feng, Aritra Ghosh, Stephen Sireci, Andrew S. Lan
Computerized adaptive testing (CAT) is a form of personalized testing that accurately measures students' knowledge levels while reducing test length.
no code implementations • 28 Oct 2021 • Yun-Wei Chu, Elizabeth Tenorio, Laura Cruz, Kerrie Douglas, Andrew S. Lan, Christopher G. Brinton
Our methodology for predicting in-video quiz performance is based on three key ideas we develop.
no code implementations • EMNLP 2021 • Zichao Wang, Andrew S. Lan, Richard G. Baraniuk
We study the problem of generating arithmetic math word problems (MWPs) given a math equation that specifies the mathematical computation and a context that specifies the problem scenario.
1 code implementation • 11 Dec 2020 • Aritra Ghosh, Andrew S. Lan
This paper details our solutions to Tasks 1 & 2 of the NeurIPS 2020 Education Challenge. Knowledge tracing, a family of methods for estimating each student's mastery of skills/knowledge components from their past responses to assessment questions, is useful for progress monitoring, personalization, and helping teachers deliver targeted feedback that improves student learning outcomes.
1 code implementation • 24 Jul 2020 • Aritra Ghosh, Neil Heffernan, Andrew S. Lan
We also conduct several case studies and show that AKT exhibits excellent interpretability and thus has potential for automated feedback and personalization in real-world educational settings.
no code implementations • 25 May 2020 • Shashank Sonkar, Andrew E. Waters, Andrew S. Lan, Phillip J. Grimaldi, Richard G. Baraniuk
Knowledge tracing (KT) models, e.g., the deep knowledge tracing (DKT) model, track an individual learner's acquisition of skills over time by examining the learner's performance on questions related to those skills.
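The DKT model referenced here is a recurrent neural network, but the simplest point of comparison is classical Bayesian Knowledge Tracing (BKT), which tracks mastery of a single skill as a two-state hidden Markov model. A minimal sketch, with illustrative slip/guess/learn parameters (not values from the paper):

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: Bayesian posterior on
    P(skill known) after observing one response, followed by the
    learning transition. Parameter values here are illustrative."""
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    # Learner may acquire the skill between opportunities.
    return posterior + (1 - posterior) * p_learn
```

A correct answer raises the mastery estimate and an incorrect one lowers it, which is the core behavior every KT model, deep or classical, shares.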
no code implementations • Findings of the Association for Computational Linguistics 2020 • Tsung-Yen Yang, Andrew S. Lan, Karthik Narasimhan
Learning representations of spatial references in natural language is a key challenge in tasks like autonomous navigation and robotic manipulation.
1 code implementation • 28 Jan 2020 • Ramina Ghods, Andrew S. Lan, Tom Goldstein, Christoph Studer
To address this issue, a variety of methods that rely on random parameter initialization or knowledge distillation have been proposed in the past.
no code implementations • 21 May 2019 • Indu Manickam, Andrew S. Lan, Gautam Dasarathy, Richard G. Baraniuk
We apply this framework to the last two months of the election period for a group of 47,508 Twitter users and demonstrate that both liberal and conservative users became more polarized over time.
no code implementations • ICML 2018 • Andrew S. Lan, Mung Chiang, Christoph Studer
The Rasch model is widely used for item response analysis in applications ranging from recommender systems to psychology, education, and finance.
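The Rasch model itself is compact: the probability of a correct response depends only on the difference between a user's latent ability and the item's difficulty, passed through a logistic link. A minimal sketch:

```python
import math

def rasch_prob(theta, b):
    """Rasch (1PL) model: probability that a user with ability theta
    responds correctly to an item with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When ability equals difficulty, `rasch_prob` returns 0.5; higher ability or lower difficulty pushes the probability toward 1.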
no code implementations • ICML 2018 • Ramina Ghods, Andrew S. Lan, Tom Goldstein, Christoph Studer
Phase retrieval refers to the problem of recovering real- or complex-valued vectors from magnitude measurements.
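For the real-valued case, a generic amplitude-flow-style gradient descent illustrates the problem setup; this is an illustrative sketch, not the algorithm proposed in the paper, and the dimensions and step size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 40
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = np.abs(A @ x_true)          # magnitude-only measurements

# Gradient descent on 0.5/m * sum((|a_i^T x| - y_i)^2).
# The global sign of x is unrecoverable from magnitudes alone,
# so reconstruction error is measured up to sign.
x = 0.1 * rng.standard_normal(n)
res0 = np.linalg.norm(np.abs(A @ x) - y)
for _ in range(2000):
    z = A @ x
    x -= 0.5 * (A.T @ (z - y * np.sign(z))) / m
res = np.linalg.norm(np.abs(A @ x) - y)
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
```

The sign ambiguity in the last line is intrinsic to phase retrieval: `x_true` and `-x_true` produce identical magnitude measurements.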
no code implementations • 1 Feb 2018 • Andrew S. Lan, Mung Chiang, Christoph Studer
We showcase the efficacy of our methods on a number of synthetic and real-world datasets, demonstrating that linearized binary regression finds potential use in a variety of inference, estimation, signal processing, and machine learning applications involving binary-valued observations or measurements.
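The binary-observation setting can be illustrated with a generic linearized (one-bit) estimator; this is a sketch of the general idea, not the paper's specific method, and the dimensions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 500
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = np.sign(A @ x_true)         # binary-valued observations

# Linearized estimate: simply correlate the measurement vectors with
# the observed signs. For Gaussian A this recovers the direction of
# x_true; the magnitude is lost in the sign operation.
x_hat = A.T @ y
cos = (x_hat @ x_true) / (np.linalg.norm(x_hat) * np.linalg.norm(x_true))
```

With enough measurements the cosine similarity between `x_hat` and `x_true` approaches 1, showing that a linear estimator can be effective even though the forward model is nonlinear.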
no code implementations • 24 Mar 2017 • Joshua J. Michalenko, Andrew S. Lan, Richard G. Baraniuk
An important, yet largely unstudied, problem in student data analysis is to detect misconceptions from students' responses to open-response questions.
no code implementations • 18 Jan 2015 • Andrew S. Lan, Divyanshu Vats, Andrew E. Waters, Richard G. Baraniuk
Our data-driven framework for mathematical language processing (MLP) leverages solution data from a large number of learners to evaluate the correctness of their solutions, assign partial-credit scores, and provide feedback to each learner on the likely locations of any errors.
no code implementations • 18 Dec 2014 • Andrew S. Lan, Christoph Studer, Richard G. Baraniuk
The recently proposed SPARse Factor Analysis (SPARFA) framework for personalized learning performs factor analysis on ordinal or binary-valued (e.g., correct/incorrect) graded learner responses to questions.
no code implementations • 18 Dec 2014 • Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk
SPARse Factor Analysis (SPARFA) is a novel framework for machine learning-based learning analytics, which estimates a learner's knowledge of the concepts underlying a domain, and content analytics, which estimates the relationships among a collection of questions and those concepts.
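The SPARFA generative model posits that binary graded responses arise from a sparse, nonnegative question–concept association matrix, learners' concept knowledge, and each question's intrinsic difficulty. A toy generative sketch (dimensions, sparsity level, and distributions are illustrative assumptions, not values from the papers):

```python
import numpy as np

rng = np.random.default_rng(1)
Q, N, K = 20, 30, 3             # questions, learners, concepts

# SPARFA-style model: P(Y_ij = 1) = sigmoid(w_i^T c_j - mu_i),
# with W sparse and nonnegative.
W = rng.random((Q, K)) * (rng.random((Q, K)) < 0.4)  # question-concept links
C = rng.standard_normal((K, N))                      # learner concept knowledge
mu = rng.standard_normal((Q, 1))                     # intrinsic question difficulty
P = 1.0 / (1.0 + np.exp(-(W @ C - mu)))
Y = (rng.random((Q, N)) < P).astype(int)             # binary graded responses
```

SPARFA's estimation task runs this model in reverse: given only `Y`, jointly recover sparse nonnegative `W`, the knowledge matrix `C`, and difficulties `mu`.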
no code implementations • 19 Dec 2013 • Andrew S. Lan, Christoph Studer, Richard G. Baraniuk
We propose SPARFA-Trace, a new machine learning-based framework for time-varying learning and content analytics for education applications.
no code implementations • 8 May 2013 • Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk
In order to better interpret the estimated latent concepts, SPARFA relies on a post-processing step that utilizes user-defined tags (e.g., topics or keywords) available for each question.
no code implementations • 22 Mar 2013 • Andrew S. Lan, Andrew E. Waters, Christoph Studer, Richard G. Baraniuk
We estimate these factors given the graded responses to a collection of questions.