Search Results for author: Robert C. Berwick

Found 7 papers, 0 papers with code

Parallel Algorithms for Exact Enumeration of Deep Neural Network Activation Regions

no code implementations • 29 Feb 2024 • Sabrina Drammis, Bowen Zheng, Karthik Srinivasan, Robert C. Berwick, Nancy A. Lynch, Robert Ajemian

Our work has three main contributions: (1) we present a novel algorithm framework and parallel algorithms for region enumeration; (2) we implement one of our algorithms on a variety of network architectures and experimentally show how the number of regions dictates runtime; and (3) we show, using our algorithm's output, how the dimension of a region's affine transformation impacts further partitioning of the region by deeper layers.
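
For intuition about what exact enumeration involves, below is a minimal brute-force sketch for a one-hidden-layer ReLU network: it tests every activation sign pattern with a linear-program feasibility check. This is an illustrative baseline only, not the paper's algorithm framework or its parallel algorithms; the function name, the bounding box, and the tolerance are our own assumptions.

```python
# Brute-force enumeration of activation regions for a one-hidden-layer
# ReLU network via LP feasibility tests. Naive baseline, not the
# parallel framework of the paper.
import itertools
import numpy as np
from scipy.optimize import linprog

def enumerate_regions(W, b, box=10.0, eps=1e-9):
    """Feasible ReLU sign patterns of x -> relu(W @ x + b) on [-box, box]^d.

    Each hidden unit i splits input space along W[i] @ x + b[i] = 0;
    an activation region is a nonempty cell of that hyperplane arrangement.
    """
    n_units, n_in = W.shape
    regions = []
    for pattern in itertools.product((1.0, -1.0), repeat=n_units):
        s = np.array(pattern)
        # s_i * (W[i] @ x + b[i]) >= eps  rewritten as  A_ub @ x <= b_ub
        A_ub = -(s[:, None] * W)
        b_ub = s * b - eps
        res = linprog(np.zeros(n_in), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(-box, box)] * n_in, method="highs")
        if res.status == 0:  # LP feasible -> the cell is nonempty
            regions.append(pattern)
    return regions

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 2)), rng.normal(size=4)
print(len(enumerate_regions(W, b)))  # at most 1 + 4 + C(4,2) = 11 in 2-D
```

The loop visits all 2^n sign patterns, which is exactly the blow-up that motivates the smarter, parallel enumeration strategies the abstract describes.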

Syntax-semantics interface: an algebraic model

no code implementations • 10 Nov 2023 • Matilde Marcolli, Robert C. Berwick, Noam Chomsky

We extend our formulation of Merge and Minimalism in terms of Hopf algebras to an algebraic model of a syntactic-semantic interface.
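
As rough orientation, the Merge-as-Hopf-algebra picture this abstract builds on can be glossed schematically as follows; the notation here is our paraphrase of the authors' earlier formulation, not the paper's own definitions.

```latex
% Schematic gloss (our paraphrase): Merge via a coproduct on workspaces.
% T ranges over syntactic objects (binary rooted trees); the coproduct
% extracts accessible terms T_v and quotients out the deeper copy.
\[
  \Delta(T) \,=\, \sum_{v} T_v \otimes T/T_v,
  \qquad
  \mathrm{Merge}_{\alpha,\beta}
  \,=\, \sqcup \,\circ\, (\mathfrak{B} \otimes \mathrm{id})
        \,\circ\, \delta_{\alpha,\beta} \,\circ\, \Delta,
\]
```

Here \(\mathfrak{B}\) grafts two extracted trees at a fresh root and \(\delta_{\alpha,\beta}\) selects the terms matching \(\alpha\) and \(\beta\); the present paper extends this algebraic structure on syntax toward a model of the semantic side.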

Old and New Minimalism: a Hopf algebra comparison

no code implementations • 17 Jun 2023 • Matilde Marcolli, Robert C. Berwick, Noam Chomsky

In this paper we compare some old formulations of Minimalism, in particular Stabler's computational minimalism, and Chomsky's new formulation of Merge and Minimalism, from the point of view of their mathematical description in terms of Hopf algebras.
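
For readers unfamiliar with the "old" side of the comparison, here is a toy rendition of the core of Stabler's computational minimalism: merge is driven by feature checking, with a selector feature =f cancelling a category feature f. This is a schematic fragment under our own simplifications (no movement or licensee features, naive linearization), not code from the paper.

```python
# Toy Stabler-style feature-driven merge. An expression is a pair
# (feature list, string yield); merge succeeds iff the first feature of
# the selector is "=" + the first feature of the selectee, and the
# selector projects.
def merge(selector, selectee):
    feats_a, yield_a = selector
    feats_b, yield_b = selectee
    if feats_a and feats_b and feats_a[0] == "=" + feats_b[0]:
        # both checked features are cancelled; the selector projects
        return (feats_a[1:], yield_a + " " + yield_b)
    raise ValueError(f"feature mismatch: {feats_a} vs {feats_b}")

the = (["=n", "d"], "the")
dog = (["n"], "dog")
barks = (["=d", "v"], "barks")
dp = merge(the, dog)       # (['d'], 'the dog')
print(merge(barks, dp))    # (['v'], 'barks the dog')
```

Chomsky's newer formulation, by contrast, treats Merge as pure unordered set formation over workspaces, which is what makes the Hopf-algebra comparison in the paper possible.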

Evaluating Universal Dependency Parser Recovery of Predicate Argument Structure via CompChain Analysis

no code implementations • Joint Conference on Lexical and Computational Semantics 2021 • Sagar Indurkhya, Beracah Yankama, Robert C. Berwick

We analyzed the distribution of compchains in three UD English treebanks (EWT, GUM, and LinES), revealing that these treebanks are sparse with respect to sentences whose predicate-argument structure includes predicate-argument embedding.
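
As a concrete (and deliberately rough) illustration, the sketch below approximates a compchain as a maximal chain of clausal-complement (ccomp/xcomp) edges from the root of a CoNLL-U dependency tree; the paper's precise compchain definition may differ, and the helper name is ours.

```python
# Extract chains of clausal embedding from a CoNLL-U sentence.
# Chains of length 1 indicate no predicate-argument embedding -- the
# sparsity the abstract reports.
def clausal_chains(conllu_sentence):
    heads, rels = {}, {}
    for line in conllu_sentence.strip().splitlines():
        if line.startswith("#"):
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:
            continue  # skip multiword token ranges and empty nodes
        idx, head, rel = int(cols[0]), int(cols[6]), cols[7]
        heads[idx], rels[idx] = head, rel
    # children attached by clausal-complement relations, keyed by head
    clausal = {}
    for idx, head in heads.items():
        if rels[idx] in ("ccomp", "xcomp"):
            clausal.setdefault(head, []).append(idx)
    def walk(node):
        if node not in clausal:
            return [[node]]
        return [[node] + tail for c in clausal[node] for tail in walk(c)]
    roots = [i for i, h in heads.items() if h == 0]
    return [chain for r in roots for chain in walk(r)]

EXAMPLE = """\
1\tMary\t_\t_\t_\t_\t2\tnsubj\t_\t_
2\tthinks\t_\t_\t_\t_\t0\troot\t_\t_
3\tJohn\t_\t_\t_\t_\t4\tnsubj\t_\t_
4\tleft\t_\t_\t_\t_\t2\tccomp\t_\t_
"""
print(clausal_chains(EXAMPLE))  # [[2, 4]]: 'thinks' embeds 'left'
```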

Task: Dependency Parsing

On the Computational Power of RNNs

no code implementations • 14 Jun 2019 • Samuel A. Korsky, Robert C. Berwick

Recent neural network architectures such as the basic recurrent neural network (RNN) and the Gated Recurrent Unit (GRU) have gained prominence as end-to-end learning architectures for natural language processing tasks.
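
To make the architectures under discussion concrete, here is a minimal NumPy GRU cell using the standard Cho et al. (2014) gating equations; parameter names and sizes are generic placeholders, not anything from the paper.

```python
# Minimal NumPy GRU cell (Cho et al. 2014 update convention):
#   z_t = sigmoid(Wz x + Uz h + bz)               update gate
#   r_t = sigmoid(Wr x + Ur h + br)               reset gate
#   h~  = tanh(Wh x + Uh (r_t * h) + bh)          candidate state
#   h_t = (1 - z_t) * h + z_t * h~                interpolated new state
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)
    r = sigmoid(Wr @ x + Ur @ h + br)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 3, 5
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_h, d_in), (d_h, d_h), d_h] * 3]
h = np.zeros(d_h)
for x in rng.normal(size=(4, d_in)):  # run over a length-4 input sequence
    h = gru_step(x, h, params)
print(h.shape)  # (5,)
```

The gates are what distinguish the GRU from the basic RNN cell, and questions about what such gated state updates can and cannot compute are the subject of the paper.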
