Search Results for author: Brendon Boldt

Found 8 papers, 2 papers with code

Mathematically Modeling the Lexicon Entropy of Emergent Language

1 code implementation • 28 Nov 2022 • Brendon Boldt, David Mortensen

We formulate a stochastic process, FiLex, as a mathematical model of lexicon entropy in deep learning-based emergent language systems.
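
As a rough illustration of what "lexicon entropy" measures here, the sketch below computes the Shannon entropy of the empirical distribution over message types emitted by a sender agent; the function name and the toy data are illustrative and not taken from the paper.

```python
import math
from collections import Counter

def lexicon_entropy(messages):
    """Shannon entropy (in bits) of the empirical distribution over message types."""
    counts = Counter(messages)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy usage: messages emitted by a sender agent during a signaling game.
messages = ["aa", "ab", "aa", "ba", "aa", "ab"]
print(f"Lexicon entropy: {lexicon_entropy(messages):.3f} bits")
```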

Modeling Emergent Lexicon Formation with a Self-Reinforcing Stochastic Process

1 code implementation • 22 Jun 2022 • Brendon Boldt, David Mortensen

We introduce FiLex, a self-reinforcing stochastic process which models finite lexicons in emergent language experiments.
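
For intuition about what a self-reinforcing process over a finite lexicon looks like, here is a minimal Polya-urn-style sketch in which each drawn token becomes more likely in later draws; the parameters and the reinforcement rule are illustrative assumptions, not the paper's exact FiLex definition.

```python
import random
from collections import Counter

def self_reinforcing_draws(lexicon_size, n_draws, prior_weight=1.0, seed=0):
    """Draw tokens from a finite lexicon where each draw increases that
    token's future probability (rich-get-richer reinforcement)."""
    rng = random.Random(seed)
    weights = [prior_weight] * lexicon_size
    draws = []
    for _ in range(n_draws):
        token = rng.choices(range(lexicon_size), weights=weights, k=1)[0]
        weights[token] += 1.0  # reinforcement: used tokens become more likely
        draws.append(token)
    return draws

counts = Counter(self_reinforcing_draws(lexicon_size=10, n_draws=1000))
print(counts.most_common(5))  # a few tokens typically come to dominate
```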

Recommendations for Systematic Research on Emergent Language

no code implementations • 22 Jun 2022 • Brendon Boldt, David Mortensen

Emergent language is unique among fields within the discipline of machine learning for its open-endedness: it does not obviously present well-defined problems to be solved.

Shaped Rewards Bias Emergent Language

no code implementations • 29 Sep 2021 • Brendon Boldt, Yonatan Bisk, David R Mortensen

The second is shaped rewards, which are designed specifically to make the task easier to learn by introducing biases into the learning process.

Task: Inductive Bias
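
To make the contrast concrete, the toy sketch below compares a plain task-success reward with a shaped reward that adds a hand-designed bonus; the signaling-game setup and the bonus term are invented for illustration and are not from the paper.

```python
def unshaped_reward(guess, target):
    """Reward only for solving the task: 1 if the receiver's guess is correct."""
    return 1.0 if guess == target else 0.0

def shaped_reward(guess, target, message, target_attributes):
    """Adds a hand-designed bonus when the message mentions a target attribute,
    which makes learning easier but biases which protocols emerge."""
    bonus = 0.1 if any(attr in message for attr in target_attributes) else 0.0
    return unshaped_reward(guess, target) + bonus

print(shaped_reward(guess="red-ball", target="red-ball",
                    message="red", target_attributes=["red", "ball"]))
```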

Case Study: Deontological Ethics in NLP

no code implementations • NAACL 2021 • Shrimai Prabhumoye, Brendon Boldt, Ruslan Salakhutdinov, Alan W Black

Recent work in natural language processing (NLP) has focused on ethical challenges such as understanding and mitigating bias in data and algorithms; identifying objectionable content like hate speech, stereotypes and offensive language; and building frameworks for better system design and data handling practices.

Task: Ethics

Detecting Compromised Implicit Association Test Results Using Supervised Learning

no code implementations • 3 Sep 2019 • Brendon Boldt, Zack While, Eric Breimer

An implicit association test is a human psychological test used to measure subconscious associations.
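
As a rough sketch of how supervised learning could flag compromised test results, the example below trains a classifier on hypothetical per-session reaction-time features; the features, the synthetic data, and the model choice are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical features per test session: mean reaction time (ms), reaction-time
# variance, and error rate. Labels: 1 = compromised, 0 = genuine.
rng = np.random.default_rng(0)
genuine = np.column_stack([rng.normal(700, 50, 200),
                           rng.normal(90, 10, 200),
                           rng.uniform(0.0, 0.1, 200)])
compromised = np.column_stack([rng.normal(950, 120, 200),
                               rng.normal(200, 40, 200),
                               rng.uniform(0.05, 0.3, 200)])
X = np.vstack([genuine, compromised])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```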

Using LSTMs to Model the Java Programming Language

no code implementations • 26 Aug 2019 • Brendon Boldt

Recurrent neural networks (RNNs), specifically long short-term memory networks (LSTMs), can model natural language effectively.
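
For a sense of what such a model looks like, here is a minimal PyTorch sketch of an LSTM next-token language model over source-code tokens; the layer sizes and vocabulary size are illustrative, and the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

class TokenLSTM(nn.Module):
    """Predicts the next source-code token from the preceding tokens."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)  # logits over the next token at each position

# Toy usage: a batch of 2 sequences of 10 token ids from a 500-token vocabulary.
model = TokenLSTM(vocab_size=500)
logits = model(torch.randint(0, 500, (2, 10)))
print(logits.shape)  # torch.Size([2, 10, 500])
```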

Precise but Natural Specification for Robot Tasks

no code implementations • 6 Mar 2018 • Ivan Gavran, Brendon Boldt, Eva Darulova, Rupak Majumdar

We present Flipper, a natural language interface for describing high-level robot task specifications, which are compiled into robot actions.
