Systematic Generalization
61 papers with code • 0 benchmarks • 7 datasets
Libraries
Use these libraries to find Systematic Generalization models and implementations.

Most implemented papers
Measuring Systematic Generalization in Neural Proof Generation with Transformers
We observe that models that are not trained to generate proofs are better at generalizing to problems based on longer proofs.
Systematic Generalization on gSCAN: What is Nearly Solved and What is Next?
We analyze the grounded SCAN (gSCAN) benchmark, which was recently proposed to study systematic generalization for grounded language understanding.
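Benchmarks like SCAN and gSCAN test systematic generalization by holding out specific compositions at training time. A minimal toy sketch of a SCAN-style "add primitive" split (illustrative only, not the code of any listed paper; command and action names follow the original SCAN convention):

```python
# Toy SCAN-style setup: commands such as "jump twice" map to action
# sequences such as ["JUMP", "JUMP"].
ACTIONS = {"walk": "WALK", "run": "RUN", "look": "LOOK", "jump": "JUMP"}

def interpret(command):
    """Map a toy command like 'jump twice' to its action sequence."""
    verb, *mods = command.split()
    seq = [ACTIONS[verb]]
    if mods == ["twice"]:
        seq *= 2
    elif mods == ["thrice"]:
        seq *= 3
    return seq

commands = [f"{v} {m}".strip()
            for v in ACTIONS
            for m in ("", "twice", "thrice")]

# Systematic split: the model sees the held-out primitive only in
# isolation during training; all of its compositions go to the test set.
held_out = "jump"
train = [c for c in commands if not (held_out in c and c != held_out)]
test = [c for c in commands if held_out in c and c != held_out]
```

A model that learns the compositional rule from "walk twice", "run twice", etc. should execute "jump twice" correctly despite never having seen it; memorization-based models typically fail this split.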
VIMA: General Robot Manipulation with Multimodal Prompts
We show that a wide spectrum of robot manipulation tasks can be expressed with multimodal prompts, interleaving textual and visual tokens.
Compositional generalization in a deep seq2seq model by separating syntax and semantics
Standard methods in deep learning for natural language processing fail to capture the compositional structure of human language that allows for systematic generalization outside of the training distribution.
Capacity, Bandwidth, and Compositionality in Emergent Language Learning
In this paper, we investigate the learning biases that affect the efficacy and compositionality of emergent languages.
Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation
We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions.
Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering
State-of-the-art models in grounded question answering often do not explicitly perform decomposition, leading to difficulties in generalization to out-of-distribution examples.
Compositional Networks Enable Systematic Generalization for Grounded Language Understanding
Recent work has shown that while deep networks can mimic some human language abilities when presented with novel sentences, systematic variation uncovers the limitations in the language-understanding abilities of networks.
Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks
Neural networks (NNs) whose subnetworks implement reusable functions are expected to offer numerous advantages, including compositionality through efficient recombination of functional building blocks, interpretability, and resistance to catastrophic interference.
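The general idea behind differentiable weight masks can be sketched in a few lines (an assumption-laden toy version with per-weight sigmoid masking, not the paper's implementation): mask logits are trained while the pretrained weights stay frozen, and the weights that survive masking indicate which subnetwork implements a given function.

```python
import numpy as np

# Frozen, "pretrained" weights of a single linear layer (toy stand-in).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))

# Learnable per-weight mask logits; initialized so the soft mask starts
# below 0.5 and gradients can push individual weights in or out.
logits = np.full_like(W, -1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def masked_forward(x):
    mask = sigmoid(logits)   # soft mask in (0, 1), differentiable w.r.t. logits
    return x @ (W * mask).T  # only masked weights contribute to the output
```

During probing, only `logits` would receive gradient updates; thresholding the converged mask at 0.5 yields a binary subnetwork whose function can then be inspected in isolation.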
CURI: A Benchmark for Productive Concept Learning Under Uncertainty
Humans can learn and reason under substantial uncertainty in a space of infinitely many concepts, including structured relational concepts ("a scene with objects that have the same color") and ad-hoc categories defined through goals ("objects that could fall on one's head").