Search Results for author: Luheng Wang

Found 1 paper, 1 paper with code

Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models

1 code implementation • Findings (ACL) 2022 • Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster

We find that pre-trained seq2seq models generalize hierarchically when performing syntactic transformations, whereas models trained from scratch on syntactic transformations do not.
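The finding rests on a simple probe: give a model a declarative sentence with two auxiliaries and check whether its question output fronts the main-clause auxiliary (the hierarchical rule) or the linearly first auxiliary (the surface rule). Below is a minimal sketch of that probe, not the authors' released code; the checkpoint name is a placeholder, and the real experiments fine-tune pre-trained seq2seq models such as T5 on transformation data before evaluating.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint: a model fine-tuned on question formation is assumed.
MODEL_NAME = "t5-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Declarative with two auxiliaries: the linearly first "has" (inside the
# relative clause) is NOT the one a hierarchical rule would front.
source = "the dog that has eaten has barked ."

# The two competing generalizations the probe distinguishes:
hierarchical = "has the dog that has eaten barked ?"  # front the MAIN-clause auxiliary
linear = "has the dog that eaten has barked ?"        # front the FIRST auxiliary

inputs = tokenizer(source, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True)

if prediction == hierarchical:
    print("hierarchical generalization")
elif prediction == linear:
    print("linear (surface) generalization")
else:
    print(f"other output: {prediction}")
```

Ambiguous training examples (one auxiliary only) are consistent with both rules, so sentences like the one above are what reveal which rule a model actually learned.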

Task: Inductive Bias
