no code implementations • 31 Jan 2024 • Andreas Opedal, Alessandro Stolfo, Haruki Shirakami, Ying Jiao, Ryan Cotterell, Bernhard Schölkopf, Abulhair Saparov, Mrinmaya Sachan
We find evidence that LLMs, with and without instruction tuning, exhibit human-like biases in both the text-comprehension and solution-planning steps of the solving process, but not in the final step, which relies on the problem's arithmetic expressions (solution execution).
no code implementations • 27 Nov 2023 • Andreas Opedal, Eleftheria Tsipidi, Tiago Pimentel, Ryan Cotterell, Tim Vieira
The left-corner transformation (Rosenkrantz and Lewis, 1970) is used to remove left recursion from context-free grammars, which is an important step towards making the grammar parsable top-down with simple techniques.
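For intuition about why left recursion must be removed before top-down parsing, the sketch below shows the standard textbook elimination of *direct* left recursion (A → Aα | β becomes A → βA′, A′ → αA′ | ε). This is not the left-corner transformation itself, which is a different and more general grammar transformation; the function name and grammar encoding here are illustrative assumptions.

```python
def remove_direct_left_recursion(nt, productions):
    """Eliminate direct left recursion for one nonterminal `nt`.
    `productions` is a list of right-hand sides, each a list of symbols.
    A -> A alpha | beta  becomes  A -> beta A', A' -> alpha A' | eps.
    NOTE: classic textbook rewrite for intuition only -- NOT the
    left-corner transformation discussed in the paper."""
    recursive = [rhs[1:] for rhs in productions if rhs and rhs[0] == nt]
    non_recursive = [rhs for rhs in productions if not rhs or rhs[0] != nt]
    if not recursive:
        return {nt: productions}  # nothing to do
    fresh = nt + "'"  # fresh continuation nonterminal
    return {
        nt: [beta + [fresh] for beta in non_recursive],
        fresh: [alpha + [fresh] for alpha in recursive] + [[]],  # [] = epsilon
    }

# E -> E + T | T  becomes  E -> T E',  E' -> + T E' | eps
transformed = remove_direct_left_recursion("E", [["E", "+", "T"], ["T"]])
```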
1 code implementation • 6 Jul 2023 • Andreas Opedal, Ran Zmigrod, Tim Vieira, Ryan Cotterell, Jason Eisner
This paper provides a reference description, in the form of a deduction system, of Earley's (1970) context-free parsing algorithm with various speed-ups.
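The deduction-system presentation can be grounded by a minimal chart-based Earley recognizer. The sketch below is a plain textbook implementation of the three inference rules (predict, scan, complete) with none of the paper's speed-ups, and it does not handle nullable rules; the item encoding and names are assumptions for illustration.

```python
def earley_recognize(grammar, start, tokens):
    """Plain chart-based Earley recognizer (predict/scan/complete).
    Items are (lhs, rhs, dot, origin); no nullable handling, no speed-ups."""
    chart = [set() for _ in range(len(tokens) + 1)]
    chart[0].add(("γ", (start,), 0, 0))  # augmented start item
    for i in range(len(tokens) + 1):
        agenda = list(chart[i])
        while agenda:
            lhs, rhs, dot, origin = agenda.pop()
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in grammar:  # PREDICT: expand a nonterminal after the dot
                    for prod in grammar[sym]:
                        item = (sym, prod, 0, i)
                        if item not in chart[i]:
                            chart[i].add(item)
                            agenda.append(item)
                elif i < len(tokens) and tokens[i] == sym:  # SCAN a terminal
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
            else:  # COMPLETE: advance items waiting on this finished nonterminal
                for p_lhs, p_rhs, p_dot, p_org in list(chart[origin]):
                    if p_dot < len(p_rhs) and p_rhs[p_dot] == lhs:
                        item = (p_lhs, p_rhs, p_dot + 1, p_org)
                        if item not in chart[i]:
                            chart[i].add(item)
                            agenda.append(item)
    return ("γ", (start,), 1, 0) in chart[len(tokens)]

# Tiny ambiguous grammar: S -> S + S | n
g = {"S": [("S", "+", "S"), ("n",)]}
```

Note that, unlike naive top-down parsing, Earley's algorithm handles the left-recursive rule S → S + S directly.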
1 code implementation • 7 Jun 2023 • Andreas Opedal, Niklas Stoehr, Abulhair Saparov, Mrinmaya Sachan
In this paper, we consolidate previous work on categorizing and representing math story problems and develop MathWorld, a graph-based semantic formalism specific to the domain of math story problems.
1 code implementation • 14 Sep 2022 • Clemente Pasti, Andreas Opedal, Tiago Pimentel, Tim Vieira, Jason Eisner, Ryan Cotterell
It shows, by a simple construction, that the intersection of a context-free language and a regular language is itself context-free.
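The classical result referenced here is usually shown via the Bar-Hillel product construction, which pairs each grammar nonterminal with a source and target automaton state. The paper presents its own construction; the sketch below is a minimal illustrative version of the classical one (all encodings and names are assumptions), with a brute-force enumerator to inspect the resulting language.

```python
from itertools import product
from collections import deque

def bar_hillel(productions, start, delta, q0, finals, states):
    """Classical Bar-Hillel product of a CFG and a DFA (illustrative sketch).
    New nonterminals are triples (q, A, q'); a terminal survives only if it
    matches a DFA transition between the adjacent states in the sequence."""
    new_prods = {}
    nonterminals = {lhs for lhs, _ in productions}
    for lhs, rhs in productions:
        for seq in product(states, repeat=len(rhs) + 1):
            new_rhs, ok = [], True
            for i, sym in enumerate(rhs):
                if sym in nonterminals:
                    new_rhs.append((seq[i], sym, seq[i + 1]))
                elif delta.get((seq[i], sym)) == seq[i + 1]:
                    new_rhs.append(sym)
                else:
                    ok = False
                    break
            if ok:
                key = (seq[0], lhs, seq[len(rhs)])
                new_prods.setdefault(key, []).append(new_rhs)
    return new_prods, [(q0, start, qf) for qf in finals]

def enumerate_strings(prods, starts, max_len):
    """Brute-force all terminal strings derivable up to length max_len
    (assumes no epsilon rules, so sentential forms only grow)."""
    out, seen = set(), set()
    queue = deque((s,) for s in starts)
    while queue:
        form = queue.popleft()
        if len(form) > max_len or form in seen:
            continue
        seen.add(form)
        # leftmost nonterminal = leftmost tuple-valued symbol
        idx = next((i for i, s in enumerate(form) if isinstance(s, tuple)), None)
        if idx is None:
            out.add("".join(form))
            continue
        for rhs in prods.get(form[idx], []):
            queue.append(form[:idx] + tuple(rhs) + form[idx + 1:])
    return out

# CFG for {a^n b^n : n >= 1} intersected with "exactly one 'a'":
cfg = [("S", ["a", "S", "b"]), ("S", ["a", "b"])]
states = ["q0", "q1", "q2"]  # q2 = dead state (two or more a's)
delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q2", ("q1", "b"): "q1",
         ("q2", "a"): "q2", ("q2", "b"): "q2"}
new_prods, starts = bar_hillel(cfg, "S", delta, "q0", ["q1"], states)
print(enumerate_strings(new_prods, starts, 6))  # → {'ab'}
```

The only string of the form a^n b^n with exactly one 'a' is "ab", so the intersection grammar generates exactly that.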
1 code implementation • ACL 2022 • Daphna Keidar, Andreas Opedal, Zhijing Jin, Mrinmaya Sachan
We analyze the semantic change and frequency shift of slang words and compare them to those of standard, non-slang words.
1 code implementation • ICLR Workshop GTRL 2021 • Cristina Guzman, Daphna Keidar, Tristan Meynier, Andreas Opedal, Niklas Stoehr
We first learn the generative BA parameters in a supervised fashion using a Graph Neural Network (GNN) and a Random Forest Regressor, by minimizing the squared loss between the true generative parameters and the latent variables.
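As a toy illustration of this supervised setup, the sketch below substitutes a one-feature least-squares fit (pure standard library) for the paper's GNN and Random Forest Regressor: it recovers the Barabási–Albert (BA) attachment parameter m from a graph's mean degree by minimizing squared loss. The generator, the feature, and all names are assumptions for illustration, not the paper's pipeline.

```python
import random

def ba_degrees(n, m, rng):
    """Degree sequence of a BA-style preferential-attachment graph:
    each new node attaches to m distinct nodes sampled proportional to
    degree (via the repeated-nodes trick). Illustrative sketch only."""
    degree = [0] * n
    repeated = []               # each node appears once per incident edge
    targets = set(range(m))     # the first new node attaches to the seed nodes
    for source in range(m, n):
        for t in targets:
            degree[source] += 1
            degree[t] += 1
        repeated.extend(targets)
        repeated.extend([source] * m)
        targets = set()
        while len(targets) < m:  # resample until m distinct targets
            targets.add(rng.choice(repeated))
    return degree

def mean_degree(deg):
    return sum(deg) / len(deg)

# Supervised training data: (true parameter m, observed mean degree).
rng = random.Random(0)
train = [(m, mean_degree(ba_degrees(300, m, rng)))
         for m in (1, 2, 3, 4, 5) for _ in range(5)]

# Least-squares fit of m ≈ w * mean_degree (closed form, no intercept),
# i.e. minimizing the squared loss between true and predicted parameters.
w = sum(m * x for m, x in train) / sum(x * x for _, x in train)

def predict_m(deg):
    return w * mean_degree(deg)
```

Since a BA graph's mean degree is close to 2m, the fitted slope lands near 0.5 and the recovered parameter is close to the true one.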