Visually Grounding Instruction for History-Dependent Manipulation

16 Dec 2020 · Hyemin Ahn, Obin Kwon, Kyoungdo Kim, Dongheui Lee, Songhwai Oh

This paper emphasizes the importance of a robot's ability to refer to its task history when it executes a series of pick-and-place manipulations by following text instructions given one by one. The advantage of referring to the manipulation history is twofold: (1) instructions that omit details or use co-referential expressions can be interpreted, and (2) the visual information of objects occluded by previous manipulations can be inferred.
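The history-referencing idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual model (which grounds instructions visually); all class and variable names below are hypothetical, and co-reference is resolved with a naive "most recent object" heuristic purely for illustration.

```python
# Hypothetical sketch: resolving a co-referential pick-and-place
# instruction by consulting a stored manipulation history.
# This is NOT the paper's method; names and logic are illustrative.

class ManipulationHistory:
    """Records past pick-and-place actions so that later instructions
    (e.g. "now put it on the tray") can refer back to them."""

    def __init__(self):
        self.actions = []  # list of (object_name, target_location)

    def record(self, obj, target):
        self.actions.append((obj, target))

    def resolve(self, phrase):
        # Naive heuristic: co-referential expressions refer to the
        # most recently manipulated object.
        if phrase in ("it", "that one", "the same object"):
            if not self.actions:
                raise ValueError("no history to resolve reference")
            return self.actions[-1][0]
        return phrase  # already an explicit object name


history = ManipulationHistory()
history.record("blue block", "left bin")
resolved = history.resolve("it")  # refers back to "blue block"
```

Without the history, the second instruction ("put it ...") would be ambiguous; with it, the omitted referent is recoverable, which is the first advantage the abstract highlights.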



Methods used in the Paper


METHOD                  TYPE
GAN Least Squares Loss  Loss Functions
Residual Connection     Skip Connections
Tanh Activation         Activation Functions
ReLU                    Activation Functions
PatchGAN                Discriminators
Convolution             Convolutions
Instance Normalization  Normalization
Cycle Consistency Loss  Loss Functions
Sigmoid Activation      Activation Functions
Leaky ReLU              Activation Functions
Batch Normalization     Normalization
Residual Block          Skip Connection Blocks
CycleGAN                Generative Models