Referring expression generation

12 papers with code • 0 benchmarks • 1 dataset

Generate referring expressions that unambiguously identify target objects or entities in context.

Most implemented papers

Modeling Context in Referring Expressions

lichengunc/refer 31 Jul 2016

Humans refer to objects in their environments all the time, especially in dialogue with other people.

Kosmos-2: Grounding Multimodal Large Language Models to the World

microsoft/unilm 26 Jun 2023

We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world.

NeuralREG: An end-to-end approach to referring expression generation

ThiagoCF05/NeuralREG ACL 2018

Traditionally, Referring Expression Generation (REG) models first decide on the form and then on the content of references to discourse entities in text, typically relying on features such as salience and grammatical function.
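
For readers unfamiliar with that two-step setup, here is a minimal, purely illustrative sketch of a pipeline that first picks the referential form (pronoun, proper name, or description) and then selects its content. The salience rule and entity fields below are invented for illustration and are not taken from the NeuralREG paper.

```python
# Toy illustration of the traditional two-step REG pipeline the paper contrasts
# with its end-to-end model: first choose the referential *form*, then the *content*.
# The rules and entity fields are invented for illustration.

def choose_form(entity, recently_mentioned):
    """Form decision: salient, recently mentioned entities can be pronominalised."""
    if recently_mentioned:
        return "pronoun"
    return "description" if entity.get("needs_disambiguation") else "proper_name"

def choose_content(entity, form):
    """Content decision: pick the surface string for the chosen form."""
    if form == "pronoun":
        return entity["pronoun"]
    if form == "proper_name":
        return entity["name"]
    return f'the {entity["role"]} {entity["name"]}'

entity = {"name": "Marie Curie", "pronoun": "she", "role": "physicist",
          "needs_disambiguation": True}
form = choose_form(entity, recently_mentioned=False)
print(choose_content(entity, form))  # -> "the physicist Marie Curie"
```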

Enriching the WebNLG corpus

ThiagoCF05/webnlg WS 2018

This paper describes the enrichment of the WebNLG corpus (Gardent et al., 2017a, b), with the aim of further extending its usefulness as a resource for evaluating common NLG tasks, including Discourse Ordering, Lexicalization and Referring Expression Generation.

Referring Expression Generation Using Entity Profiles

mcao610/ProfileREG IJCNLP 2019

Referring Expression Generation (REG) is the task of generating contextually appropriate references to entities.

Improving Quality and Efficiency in Plan-based Neural Data-to-Text Generation

AmitMY/chimera WS 2019

We follow the step-by-step approach to neural data-to-text generation we proposed in Moryossef et al. (2019), in which the generation process is divided into a text-planning stage followed by a plan-realization stage.
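
To make the two-stage split concrete, the following is a minimal sketch of a plan-then-realize pipeline over RDF-style triples; the example triples and template realizer are assumptions made for illustration and do not reflect the chimera implementation.

```python
# Minimal illustration of a plan-then-realize split for data-to-text
# (a sketch of the general idea, not the chimera code).

triples = [
    ("John_Doe", "birthPlace", "London"),
    ("John_Doe", "occupation", "Engineer"),
]

def text_plan(triples):
    """Text planning: decide which triples to verbalise and in what order.
    Here we simply keep the input order and plan a single sentence."""
    return [triples]  # one sentence containing both facts

def realize(plan):
    """Plan realization: turn each planned sentence into a surface string
    using toy templates keyed by the predicate."""
    templates = {
        "birthPlace": "{s} was born in {o}",
        "occupation": "{s} works as an {o}",
    }
    sentences = []
    for sentence_plan in plan:
        clauses = [templates[p].format(s=s.replace("_", " "), o=o)
                   for (s, p, o) in sentence_plan]
        sentences.append(" and ".join(clauses) + ".")
    return " ".join(sentences)

print(realize(text_plan(triples)))
# -> "John Doe was born in London and John Doe works as an Engineer."
```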

Pento-DIARef: A Diagnostic Dataset for Learning the Incremental Algorithm for Referring Expression Generation from Examples

clp-research/pento-diaref 24 May 2023

NLP tasks are typically defined extensionally through datasets containing example instantiations (e.g., pairs of image i and text t), but motivated intensionally through capabilities invoked in verbal descriptions of the task (e.g., "t is a description of i, for which the content of i needs to be recognised and understood").
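
The Incremental Algorithm the dataset targets is the classic Dale and Reiter procedure: walk through a preference-ordered list of attributes and add an attribute to the description whenever it rules out at least one remaining distractor, stopping once the target is uniquely identified. Below is a minimal sketch with an invented example scene; attribute names and values are illustrative only.

```python
def incremental_algorithm(target, distractors, preferred_attributes):
    """Dale & Reiter-style Incremental Algorithm (sketch).

    `target` and each distractor are dicts mapping attribute names
    (e.g. "shape", "colour", "position") to values; `preferred_attributes`
    fixes the order in which attributes are considered.
    Returns the (attribute, value) pairs selected for the description.
    """
    description = []
    remaining = list(distractors)

    for attr in preferred_attributes:
        value = target.get(attr)
        if value is None:
            continue
        # Distractors sharing this attribute value are NOT ruled out.
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:  # the attribute has discriminatory power, so keep it
            description.append((attr, value))
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:  # target is uniquely identified
            break

    return description


# Example: pick out a piece in a Pentomino-like scene (illustrative values).
target = {"shape": "T", "colour": "red", "position": "top left"}
distractors = [
    {"shape": "T", "colour": "blue", "position": "top left"},
    {"shape": "X", "colour": "red", "position": "bottom right"},
]
print(incremental_algorithm(target, distractors, ["shape", "colour", "position"]))
# -> [('shape', 'T'), ('colour', 'red')]
```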

Whether you can locate or not? Interactive Referring Expression Generation

superhero-7/ireg 19 Aug 2023

Referring Expression Generation (REG) aims to generate unambiguous Referring Expressions (REs) for objects in a visual scene, with a dual task of Referring Expression Comprehension (REC) to locate the referred object.
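
A hedged sketch of the generate-then-verify loop this dual formulation suggests: a candidate expression is accepted only if a comprehension model locates the intended region. The `reg_model` and `rec_model` objects are hypothetical stand-ins for illustration, not the ireg API.

```python
# Sketch of the generate-then-verify idea behind interactive REG:
# an REC model checks whether a candidate expression actually picks out
# the intended object, and generation is retried otherwise.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def interactive_reg(image, target_box, reg_model, rec_model,
                    max_rounds=3, iou_threshold=0.5):
    """Generate an RE, verify it with REC, and retry if it is ambiguous."""
    expression = None
    for _ in range(max_rounds):
        # Generate a (new) candidate referring expression for the target region.
        expression = reg_model.generate(image, target_box, previous=expression)
        # Ask the comprehension model where that expression points.
        predicted_box = rec_model.locate(image, expression)
        if iou(predicted_box, target_box) >= iou_threshold:
            return expression  # REC found the right object; accept the RE
    return expression  # best effort after max_rounds attempts
```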

Collecting Visually-Grounded Dialogue with A Game Of Sorts

willemsenbram/a-game-of-sorts LREC 2022

We address these concerns by introducing a collaborative image ranking task, a grounded agreement game we call "A Game Of Sorts".

GLaMM: Pixel Grounding Large Multimodal Model

mbzuai-oryx/groundingLMM 6 Nov 2023

In this work, we present Grounding LMM (GLaMM), the first model that can generate natural language responses seamlessly intertwined with corresponding object segmentation masks.