Novel Concepts

51 papers with code • 0 benchmarks • 0 datasets

Measures the ability of models to uncover an underlying concept that unites several ostensibly disparate entities which, ideally, would rarely co-occur in training data. This provides a limited test of a model's ability to creatively construct the abstraction needed to make sense of a situation it cannot have memorized during training.

Source: BIG-bench
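
As a rough illustration, an item of this kind asks for the hidden property shared by a set of otherwise unrelated things. The example below is hypothetical (not drawn from the benchmark); the "input"/"target_scores" field names follow the usual BIG-bench JSON task convention, but the content is purely illustrative.

```python
# Hypothetical item in the spirit of the task -- not an actual benchmark example.
# BIG-bench JSON tasks typically pair an input prompt with scored answer choices.
example = {
    "input": "What do a tornado, a whirlpool, and a spiral galaxy have in common?",
    "target_scores": {
        "They all rotate around a central point.": 1,
        "They are all found underwater.": 0,
        "They are all man-made.": 0,
    },
}
```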

Most implemented papers

PaLM: Scaling Language Modeling with Pathways

lucidrains/CoCa-pytorch Google Research 2022

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM).

Dynamic Few-Shot Visual Learning without Forgetting

gidariss/FewShotWithoutForgetting CVPR 2018

In this context, the goal of our work is to devise a few-shot visual learning system that, at test time, can efficiently learn novel categories from only a few training examples while not forgetting the initial categories on which it was trained (here called base categories).

Revisit Systematic Generalization via Meaningful Learning

shininglab/systematic-generalization-via-meaningful-learning 14 Mar 2020

Humans can systematically generalize to novel compositions of existing concepts.

DER: Dynamically Expandable Representation for Class Incremental Learning

Rhyssiyan/DER-ClassIL.pytorch CVPR 2021

We address the problem of class incremental learning, which is a core step towards achieving adaptive vision intelligence.

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

allenai/dolma 2021

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

Training Compute-Optimal Large Language Models

karpathy/llama2.c 29 Mar 2022

We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
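
The paper's headline finding is that, for a fixed compute budget, model size and training tokens should be scaled in roughly equal proportion. The sketch below is a back-of-the-envelope illustration using the common C ≈ 6ND FLOPs approximation and the approximate 70B-parameter / 1.4T-token Chinchilla configuration; the function names and code are illustrative, not the paper's.

```python
# Rough compute-optimal sizing sketch (illustrative, not the paper's code).
# Uses the common approximation C ~= 6 * N * D training FLOPs.

def flops(n_params: float, n_tokens: float) -> float:
    """Approximate training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

def tokens_for_budget(n_params: float, compute_budget: float) -> float:
    """Tokens needed to spend a given FLOPs budget at a given model size."""
    return compute_budget / (6.0 * n_params)

# Approximate Chinchilla configuration: 70B parameters trained on 1.4T tokens.
n, d = 70e9, 1.4e12
c = flops(n, d)  # roughly 6e23 FLOPs
print(f"budget ~{c:.2e} FLOPs")
print(f"tokens at 70B params: {tokens_for_budget(70e9, c):.2e}")
```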

Is CLIP the main roadblock for fine-grained open-world perception?

lorebianchi98/fg-clip 4 Apr 2024

Modern applications increasingly demand flexible computer vision models that adapt to novel concepts not encountered during training.

Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images

mjhucla/TF-mRNN ICCV 2015

In particular, we propose a transposed weight sharing scheme, which not only improves performance on image captioning, but also makes the model more suitable for the novel concept learning task.
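
The transposed weight-sharing idea is closely related to tying the word-embedding matrix to the output projection, so that a newly learned word embedding is immediately usable when decoding. Below is a minimal PyTorch-style sketch of that kind of tying, offered as an assumption-laden illustration; the class and variable names are not the authors' implementation.

```python
import torch
import torch.nn as nn

class TiedDecoder(nn.Module):
    """Minimal sketch of sharing the embedding matrix with the output layer."""

    def __init__(self, vocab_size: int, hidden_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.proj = nn.Linear(hidden_dim, vocab_size, bias=False)
        # nn.Linear stores its weight as (out, in) = (vocab_size, hidden_dim),
        # so sharing parameters applies the embedding matrix in transposed form.
        self.proj.weight = self.embed.weight

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.proj(hidden)  # logits over the vocabulary
```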

Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data

LisaAnne/DCC CVPR 2016

Current deep caption models can only describe objects contained in paired image-sentence corpora, even though they are pre-trained on large object recognition datasets such as ImageNet.

Zero-Shot Object Detection: Learning to Simultaneously Recognize and Localize Novel Concepts

salman-h-khan/ZSD_Release 16 Mar 2018

We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as part of a complex scene, warranting both the "recognition" and "localization" of an unseen category.