no code implementations • COLING 2020 • Shashank Sonkar, Andrew E. Waters, Richard G. Baraniuk
Word embedding models learn semantically rich vector representations of words and are widely used to initialize natural language processing (NLP) models.
no code implementations • 25 May 2020 • Shashank Sonkar, Andrew E. Waters, Andrew S. Lan, Phillip J. Grimaldi, Richard G. Baraniuk
Knowledge tracing (KT) models, e.g., the deep knowledge tracing (DKT) model, track an individual learner's acquisition of skills over time by examining the learner's performance on questions related to those skills.
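As a concrete illustration of the skill-tracking idea (though not of DKT's recurrent architecture), here is a minimal sketch of the classic Bayesian knowledge tracing update; the `guess`, `slip`, and `transit` parameter values are illustrative, not taken from the paper:

```python
def bkt_update(p_know, correct, guess=0.2, slip=0.1, transit=0.15):
    """One Bayesian knowledge tracing step: Bayes posterior on mastery
    given the observed response, then the learning-transition step."""
    if correct:
        post = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        post = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return post + (1 - post) * transit

# Trace a learner's mastery estimate across a sequence of graded responses.
p = 0.3  # prior probability the skill is already mastered
for obs in [1, 1, 0, 1, 1]:
    p = bkt_update(p, obs)
    print(round(p, 3))
```

The mastery estimate rises after correct responses, drops after incorrect ones, and drifts upward via the transition term, which is the qualitative behavior KT models share.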
no code implementations • 18 Jan 2015 • Andrew S. Lan, Divyanshu Vats, Andrew E. Waters, Richard G. Baraniuk
Our data-driven framework for mathematical language processing (MLP) leverages solution data from a large number of learners to evaluate the correctness of their solutions, assign partial-credit scores, and provide feedback to each learner on the likely locations of any errors.
no code implementations • 12 Jan 2015 • Ryan Ning, Andrew E. Waters, Christoph Studer, Richard G. Baraniuk
In this work, we propose a novel methodology for unordered categorical IRT that we call SPRITE (short for stochastic polytomous response item model) that: (i) analyzes both ordered and unordered categories, (ii) offers interpretable outputs, and (iii) provides improved data fitting compared to existing models.
no code implementations • 18 Dec 2014 • Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk
SPARse Factor Analysis (SPARFA) is a novel framework for machine learning-based learning analytics, which estimates a learner's knowledge of the concepts underlying a domain, and content analytics, which estimates the relationships among a collection of questions and those concepts.
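The two estimation tasks in this abstract can be illustrated as a logistic low-rank factorization, Y ≈ sigmoid(W C + mu), where W holds question-concept loadings (content analytics) and C holds learner concept knowledge (learning analytics). The alternating projected-gradient loop below is a simplified stand-in for SPARFA's actual inference algorithms, run on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, N, K = 20, 30, 3  # questions, learners, latent concepts

# Synthetic binary graded-response data from a planted model.
W_true = np.abs(rng.normal(size=(Q, K)))    # nonnegative question-concept loadings
C_true = rng.normal(size=(K, N))            # learner concept knowledge
mu_true = rng.normal(size=(Q, 1))           # intrinsic question difficulty
P = 1 / (1 + np.exp(-(W_true @ C_true + mu_true)))
Y = (rng.random((Q, N)) < P).astype(float)  # observed correct/incorrect grades

# Alternating projected gradient descent on the logistic likelihood,
# with a nonnegativity projection on W.
W = np.abs(rng.normal(size=(Q, K))) * 0.1
C = rng.normal(size=(K, N)) * 0.1
mu = np.zeros((Q, 1))
lr = 0.1
for _ in range(2000):
    Z = W @ C + mu
    G = 1 / (1 + np.exp(-Z)) - Y                 # gradient of logistic loss wrt Z
    W = np.maximum(W - lr * (G @ C.T) / N, 0.0)  # keep loadings nonnegative
    C = C - lr * (W.T @ G) / Q
    mu = mu - lr * G.mean(axis=1, keepdims=True)

P_hat = 1 / (1 + np.exp(-(W @ C + mu)))
print("mean train accuracy:", ((P_hat > 0.5) == (Y > 0.5)).mean())
```

The nonnegativity constraint on W mirrors the interpretability requirement in SPARFA: a question can only draw on a concept, not be "negatively composed" of it.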
no code implementations • 8 May 2013 • Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk
In order to better interpret the estimated latent concepts, SPARFA relies on a post-processing step that utilizes user-defined tags (e.g., topics or keywords) available for each question.
no code implementations • 22 Mar 2013 • Andrew S. Lan, Andrew E. Waters, Christoph Studer, Richard G. Baraniuk
We estimate these factors given the graded responses to a collection of questions.
no code implementations • NeurIPS 2011 • Andrew E. Waters, Aswin C. Sankaranarayanan, Richard Baraniuk
We consider the problem of recovering a matrix $\mathbf{M}$ that is the sum of a low-rank matrix $\mathbf{L}$ and a sparse matrix $\mathbf{S}$ from a small set of linear measurements of the form $\mathbf{y} = \mathcal{A}(\mathbf{M}) = \mathcal{A}({\bf L}+{\bf S})$.
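One simple way to illustrate this recovery problem is projected gradient descent that alternates a truncated-SVD (rank) projection for $\mathbf{L}$ with a hard-thresholding (sparsity) projection for $\mathbf{S}$. This is a hedged sketch of the general approach on synthetic data, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r, s, p = 10, 10, 1, 5, 60  # matrix size, rank, sparsity, #measurements
d = m * n

# Planted low-rank plus sparse matrix M = L + S.
L_true = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
S_flat = np.zeros(d)
S_flat[rng.choice(d, s, replace=False)] = 3 * rng.normal(size=s)
M_true = L_true + S_flat.reshape(m, n)

# Compressive linear measurements y = A(M), with A a random Gaussian map.
A = rng.normal(size=(p, d)) / np.sqrt(p)
y = A @ M_true.ravel()

def rank_proj(X, r):
    """Project onto rank-r matrices via truncated SVD."""
    U, sv, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * sv[:r]) @ Vt[:r]

def sparse_proj(X, s):
    """Keep the s largest-magnitude entries, zero the rest."""
    flat = X.ravel().copy()
    keep = np.argsort(np.abs(flat))[-s:]
    out = np.zeros_like(flat)
    out[keep] = flat[keep]
    return out.reshape(X.shape)

# Step size below 1 / (joint Lipschitz constant 2 * ||A||_2^2).
eta = 0.5 / np.linalg.norm(A, 2) ** 2
L = np.zeros((m, n))
S = np.zeros((m, n))
res0 = np.linalg.norm(y)
for _ in range(300):
    G = (A.T @ (y - A @ (L + S).ravel())).reshape(m, n)  # gradient direction
    L = rank_proj(L + eta * G, r)
    S = sparse_proj(S + eta * G, s)

res = np.linalg.norm(y - A @ (L + S).ravel())
print("relative measurement residual:", res / res0)
```

The alternating rank/sparsity projections capture the structural priors that make the problem well-posed even though the measurement count is far below the ambient dimension.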