Systematic Generalization

61 papers with code • 0 benchmarks • 7 datasets

Most implemented papers

Measuring Systematic Generalization in Neural Proof Generation with Transformers

NicolasAG/SGinPG NeurIPS 2020

We observe that models not trained to generate proofs generalize better to problems based on longer proofs.

Systematic Generalization on gSCAN: What is Nearly Solved and What is Next?

LauraRuis/groundedSCAN EMNLP 2021

We analyze the grounded SCAN (gSCAN) benchmark, which was recently proposed to study systematic generalization for grounded language understanding.

VIMA: General Robot Manipulation with Multimodal Prompts

vimalabs/VIMABench 6 Oct 2022

We show that a wide spectrum of robot manipulation tasks can be expressed with multimodal prompts, interleaving textual and visual tokens.
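
The interface idea is that a single prompt sequence freely mixes words and images. Below is a minimal, hypothetical sketch of such an interleaved prompt; the `ImageToken` container and the example prompt are illustrative assumptions, not the VIMA API.

```python
from dataclasses import dataclass
from typing import List, Union

import numpy as np

@dataclass
class ImageToken:
    """A visual token, e.g. an object crop or a goal-scene frame (hypothetical container)."""
    pixels: np.ndarray  # (H, W, 3) uint8 image crop

# A multimodal prompt is an ordered interleaving of text and image tokens.
PromptToken = Union[str, ImageToken]

def make_prompt() -> List[PromptToken]:
    # "Rearrange to this setup: <scene image>" — textual words and a visual
    # token interleaved in one sequence, in the style of VIMA prompts.
    goal_scene = ImageToken(pixels=np.zeros((64, 64, 3), dtype=np.uint8))
    return ["rearrange", "to", "this", "setup", ":", goal_scene]
```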

Compositional generalization in a deep seq2seq model by separating syntax and semantics

jlrussin/syntactic_attention 22 Apr 2019

Standard methods in deep learning for natural language processing fail to capture the compositional structure of human language that allows for systematic generalization outside of the training distribution.
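
The paper's central idea is to route "where to attend" and "what to retrieve" through separate streams. The PyTorch sketch below is one way to realize that separation, assuming a recurrent syntactic stream and context-free semantic embeddings; the module and parameter names are hypothetical, not the repo's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamAttention(nn.Module):
    """Sketch: attention weights come from a contextual 'syntactic' stream,
    while the retrieved values are context-free 'semantic' word embeddings."""

    def __init__(self, vocab_size: int, dim: int):
        super().__init__()
        self.sem_embed = nn.Embedding(vocab_size, dim)  # what a word means
        self.syn_embed = nn.Embedding(vocab_size, dim)
        self.syn_rnn = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.query = nn.Linear(2 * dim, 2 * dim)

    def forward(self, tokens: torch.Tensor, dec_state: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T) token ids; dec_state: (B, 2*dim) decoder query state
        syn, _ = self.syn_rnn(self.syn_embed(tokens))      # contextual: (B, T, 2*dim)
        scores = torch.einsum("bd,btd->bt", self.query(dec_state), syn)
        attn = F.softmax(scores, dim=-1)                   # where to attend (syntax)
        values = self.sem_embed(tokens)                    # what to retrieve (semantics)
        return torch.einsum("bt,btd->bd", attn, values)    # (B, dim)
```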

Capacity, Bandwidth, and Compositionality in Emergent Language Learning

backpropper/cbc-emecom 24 Oct 2019

In this paper, we investigate the learning biases that affect the efficacy and compositionality of emergent languages.

Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation

atticusg/MoNLI EMNLP (BlackboxNLP) 2020

We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions.
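
As a concrete illustration of method (3), a structural probe fits a small classifier on frozen hidden states to test whether a property, such as lexical entailment, is linearly decodable. A minimal sketch, assuming pre-extracted hidden states; the function name and training loop are illustrative, not from the MoNLI codebase.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_probe(hidden: torch.Tensor, labels: torch.Tensor, classes: int) -> nn.Linear:
    """Fit a linear probe on frozen hidden states (N, D) against labels (N,)."""
    probe = nn.Linear(hidden.shape[-1], classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = F.cross_entropy(probe(hidden), labels)  # only the probe is trained
        loss.backward()
        opt.step()
    return probe
```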

Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering

benbogin/glt-grounded-latent-trees-qa 1 Jul 2020

State-of-the-art models in grounded question answering often do not explicitly perform decomposition, which makes it difficult for them to generalize to out-of-distribution examples.

Compositional Networks Enable Systematic Generalization for Grounded Language Understanding

ylkuo/compositional-gscan Findings (EMNLP) 2021

Recent work has shown that while deep networks can mimic some human language abilities when presented with novel sentences, systematic variation uncovers the limits of their language understanding.

Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks

RobertCsordas/modules ICLR 2021

Neural networks (NNs) whose subnetworks implement reusable functions are expected to offer numerous advantages, including compositionality through efficient recombination of functional building blocks, interpretability, and resistance to catastrophic interference.
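
The core mechanism is to freeze a trained network and learn a per-weight gate by gradient descent; the weights whose gates survive for a given subtask form that subtask's module. A minimal sketch of the idea for a single linear layer follows (the paper samples binary masks via a Gumbel-sigmoid with a sparsity regularizer; the plain sigmoid here is a simplification, and the class name is hypothetical).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Wrap a trained nn.Linear: freeze its weights and learn per-weight
    mask logits instead. Training only the logits on one subtask reveals
    which of the frozen weights that subtask relies on."""

    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach(), requires_grad=False)
        self.bias = nn.Parameter(linear.bias.detach(), requires_grad=False)
        # Positive init so every weight starts "on".
        self.mask_logits = nn.Parameter(torch.full_like(self.weight, 2.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.mask_logits)  # soft gate in [0, 1] per weight
        return F.linear(x, self.weight * mask, self.bias)
```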

CURI: A Benchmark for Productive Concept Learning Under Uncertainty

facebookresearch/productive_concept_learning 6 Oct 2020

Humans can learn and reason under substantial uncertainty in a space of infinitely many concepts, including structured relational concepts ("a scene with objects that have the same color") and ad-hoc categories defined through goals ("objects that could fall on one's head").
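
Such relational concepts are easy to state as executable predicates over a symbolic scene; what the benchmark targets is inferring them from few examples under uncertainty. A tiny sketch of the two example concepts, assuming a hypothetical list-of-dicts scene encoding:

```python
def same_color(scene: list[dict]) -> bool:
    """Structured relational concept: every object shares one color."""
    return len({obj["color"] for obj in scene}) <= 1

def falling_hazards(scene: list[dict]) -> list[dict]:
    """Ad-hoc, goal-defined category: objects that could fall on one's head,
    operationalized here (an assumption) as anything above 1.7 m."""
    return [obj for obj in scene if obj["z"] > 1.7]

scene = [{"color": "red", "z": 0.1}, {"color": "red", "z": 2.0}]
print(same_color(scene))       # True
print(falling_hazards(scene))  # [{'color': 'red', 'z': 2.0}]
```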