no code implementations • 4 Feb 2024 • Neisarg Dave, Daniel Kifer, C. Lee Giles, Ankur Mali
We sample datasets from $7$ Tomita and $4$ Dyck grammars and train $4$ RNN cells on them: LSTM, GRU, O2RNN, and MIRNN.
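The sampling setup above can be illustrated with a minimal sketch. This is not the paper's code; the helper names (`is_tomita1`, `is_dyck1`, `sample_labeled`) are hypothetical, and only two of the grammars are shown: Tomita 1 (strings over {0,1} containing only 1s) and Dyck-1 (balanced strings over one bracket pair).

```python
import random


def is_tomita1(s: str) -> bool:
    """Tomita grammar 1: accepts strings over {0,1} made up entirely of 1s."""
    return all(ch == "1" for ch in s)


def is_dyck1(s: str) -> bool:
    """Dyck-1: balanced strings over a single bracket pair '(' and ')'."""
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:  # closing bracket with no open partner
            return False
    return depth == 0


def sample_labeled(alphabet: str, classifier, n: int, max_len: int = 12, seed: int = 0):
    """Draw n random strings and label each with grammar membership."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        length = rng.randint(1, max_len)
        s = "".join(rng.choice(alphabet) for _ in range(length))
        data.append((s, classifier(s)))
    return data
```

A dataset for one grammar would then be `sample_labeled("01", is_tomita1, 1000)`, with the (string, label) pairs fed to each RNN cell as a binary classification task.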
1 code implementation • WS 2018 • Chen Liang, Xiao Yang, Neisarg Dave, Drew Wham, Bart Pursel, C. Lee Giles
We investigate how machine learning models, specifically ranking models, can be used to select useful distractors for multiple-choice questions.