Search Results for author: Veit Elser

Found 9 papers, 4 papers with code

A logical word embedding for learning grammar

1 code implementation • 28 Apr 2023 • Sean Deyo, Veit Elser

We introduce the logical grammar embedding (LGE), a model inspired by pregroup and categorial grammars that enables unsupervised inference of lexical categories and syntactic rules from a corpus of text.

A transparent approach to data representation

1 code implementation • 27 Apr 2023 • Sean Deyo, Veit Elser

We use a binary attribute representation (BAR) model to describe a data set of Netflix viewers' ratings of movies.


Learning grammar with a divide-and-concur neural network

no code implementations • 18 Jan 2022 • Sean Deyo, Veit Elser

We implement a divide-and-concur iterative projection approach to context-free grammar inference.


Avoiding Traps in Nonconvex Problems

no code implementations • 9 Jun 2021 • Sean Deyo, Veit Elser

Iterative projection methods may become trapped at non-solutions when the constraint sets are nonconvex.
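Such a trap can be seen in a small hypothetical example (not the paper's own): let A be the x-axis and B the two-point set {(-1, 0), (1, 1)}, so the only common point is (-1, 0). Plain alternating projections can cycle at a non-solution, while a difference-map update (here with beta = 1) escapes:

```python
import numpy as np

# Toy nonconvex feasibility problem (illustrative, not from the paper):
# A is the x-axis, B is the two-point set {(-1, 0), (1, 1)}.
# The unique point in A ∩ B is (-1, 0).
B = np.array([[-1.0, 0.0], [1.0, 1.0]])

def P_A(x):
    return np.array([x[0], 0.0])            # nearest point on the x-axis

def P_B(x):
    return B[np.argmin(np.linalg.norm(B - x, axis=1))]  # nearest point of B

# Alternating projections: cycles at (1, 0), which is NOT in B.
x = np.array([1.0, 0.5])
for _ in range(20):
    x = P_A(P_B(x))
trapped = x                                  # stuck at the non-solution (1, 0)

# Difference map (beta = 1): x <- x + P_A(2 P_B(x) - x) - P_B(x).
y = np.array([1.0, 0.5])
for _ in range(20):
    y = y + P_A(2 * P_B(y) - y) - P_B(y)
solution = P_B(y)                            # reaches the true solution (-1, 0)
```

From the same starting point, the alternating scheme bounces between (1, 1) and (1, 0) forever, while the difference map drifts until the nearest point of B flips to (-1, 0) and then fixes there.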

Reconstructing cellular automata rules from observations at nonconsecutive times

1 code implementation • 3 Dec 2020 • Veit Elser

Recent experiments by Springer and Kenyon have shown that a deep neural network can be trained to predict the action of $t$ steps of Conway's Game of Life automaton given millions of examples of this action on random initial states.
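For reference, one step of the rule the network must learn to compose $t$ times fits in a few lines; this is a standard NumPy implementation with periodic boundaries, not the paper's code:

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a periodic 0/1 grid."""
    # count live neighbors by summing the eight shifted copies of the grid
    n = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # a cell lives next step iff it has 3 neighbors,
    # or it is alive now and has 2 neighbors
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

# sanity check: the "blinker" oscillator returns to itself after 2 steps
blinker = np.zeros((5, 5), dtype=np.uint8)
blinker[2, 1:4] = 1
```

Predicting $t$ steps amounts to learning the $t$-fold composition of this map from input/output pairs alone.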

Learning Without Loss

1 code implementation • 29 Oct 2019 • Veit Elser

The same approach is very successful in phase retrieval, where signals are reconstructed from magnitude constraints and general characteristics (sparsity, support, etc.).
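Phase retrieval has exactly this projection structure: one projection imposes the measured Fourier magnitudes, the other the signal's general characteristics. A minimal error-reduction sketch (Gerchberg-Saxton style, assuming a known support; illustrative only, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# ground-truth real signal supported on the first 8 of 32 samples
true = np.zeros(32)
true[:8] = rng.normal(size=8)
mags = np.abs(np.fft.fft(true))            # magnitude-only measurements

def P_mag(x):
    # magnitude constraint: keep the phases of X, impose the measured |X|
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(mags * np.exp(1j * np.angle(X))))

def P_supp(x):
    # support constraint: zero everything outside the known support
    y = np.zeros_like(x)
    y[:8] = x[:8]
    return y

x = rng.normal(size=32)
err0 = np.linalg.norm(P_mag(x) - x)        # initial constraint violation
for _ in range(200):
    x = P_supp(P_mag(x))                   # alternate the two projections
err = np.linalg.norm(P_mag(x) - x)         # non-increasing under this scheme
```

For this simple alternating scheme the constraint violation is non-increasing; in hard instances it stagnates, which is where difference-map-style iterations earn their keep.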


Monotone Learning with Rectified Wire Networks

no code implementations • 10 May 2018 • Veit Elser, Dan Schmidt, Jonathan Yedidia

This constraint is simply that the value of the output node associated with the correct class should be zero.

A network that learns Strassen multiplication

no code implementations • 26 Jan 2016 • Veit Elser

We study neural networks whose only non-linear components are multipliers, to test a new training rule in a context where the precise representation of data is paramount.
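The target the network must discover is Strassen's classic identity: multiplying 2×2 matrices with 7 products instead of 8. For reference, the standard formulas (not the paper's learned network) are:

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices (tuples of row tuples) with 7 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # the four entries of the product are linear combinations of m1..m7
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))
```

Applied recursively to block matrices, the 7-multiplication trick gives the O(n^2.81) algorithm; a multiplier-only network has just enough non-linearity to represent these products.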

An Improved Three-Weight Message-Passing Algorithm

no code implementations • 8 May 2013 • Nate Derbinsky, José Bento, Veit Elser, Jonathan S. Yedidia

We describe how the powerful "Divide and Concur" algorithm for constraint satisfaction can be derived as a special case of a message-passing version of the Alternating Direction Method of Multipliers (ADMM) algorithm for convex optimization. We then improve the ADMM/DC message-passing algorithm by giving messages three distinct weights: "certain" and "no opinion" weights, in addition to the standard weight used in ADMM/DC.
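The divide-and-concur idea underneath can be sketched on a hypothetical toy problem (the paper's message weights are not modeled here): give each constraint its own replica of the variable, project every replica onto its constraint ("divide"), then force the replicas to agree by averaging ("concur"):

```python
import numpy as np

# Hypothetical toy problem: find a scalar x in [1, 3] ∩ [2, 5].
constraints = [(1.0, 3.0), (2.0, 5.0)]
replicas = np.array([0.0, 10.0])           # one replica per constraint

for _ in range(50):
    # divide: project each replica onto its own constraint set
    divided = np.array([min(max(r, lo), hi)
                        for r, (lo, hi) in zip(replicas, constraints)])
    # concur: replace all replicas with their common average
    replicas = np.full_like(replicas, divided.mean())

x = replicas[0]                            # a point satisfying both constraints
```

The three-weight scheme refines the concur step: a "certain" message pins the average to one replica's value and a "no opinion" message drops a replica from the average entirely, which the uniform averaging above cannot express.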
