Search Results for author: Joe Pater

Found 5 papers, 0 papers with code

Seq2Seq Models with Dropout can Learn Generalizable Reduplication

no code implementations WS 2018 Brandon Prickett, Aaron Traylor, Joe Pater

Natural language reduplication can pose a challenge to neural models of language, and has been argued to require variables (Marcus et al., 1999).

Learning opacity in Stratal Maximum Entropy Grammar

no code implementations 7 Mar 2017 Aleksei Nazarov, Joe Pater

Opaque phonological patterns are sometimes claimed to be difficult to learn; specific hypotheses have been advanced about the relative difficulty of particular kinds of opaque processes (Kiparsky 1971, 1973), and the kind of data that will be helpful in learning an opaque pattern (Kiparsky 2000).

Learning Theory
