Search Results for author: Seung Byum Seo

Found 3 papers, 2 papers with code

MM-GATBT: Enriching Multimodal Representation Using Graph Attention Network

1 code implementation • NAACL (ACL) 2022 • Seung Byum Seo, Hyoungwook Nam, Payam Delgosha

While there have been advances in Natural Language Processing (NLP), their success has mainly come from applying the self-attention mechanism to single or multiple modalities.

Graph Attention • Graph Representation Learning
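
As a rough illustration of the graph-attention idea named in the title (not the authors' MM-GATBT model itself), the following PyTorch sketch implements a single generic GAT-style layer; the dimensions, class name, and toy ring graph are assumptions for demonstration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head GAT-style layer: each node attends to its graph neighbors."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)        # node feature projection
        self.attn = nn.Parameter(torch.randn(2 * out_dim) * 0.1)  # attention scoring vector a

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency with self-loops
        h = self.proj(x)                                           # (N, out_dim)
        N = h.size(0)
        # e_ij = LeakyReLU(a^T [h_i || h_j]) for every node pair
        pairs = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                           h.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(pairs @ self.attn, negative_slope=0.2)    # (N, N) logits
        e = e.masked_fill(adj == 0, float("-inf"))                 # keep only real edges
        alpha = torch.softmax(e, dim=-1)                           # attention coefficients
        return alpha @ h                                           # aggregated node embeddings

# Toy usage: 4 nodes in a ring with self-loops, 8-dim inputs, 16-dim outputs.
x = torch.randn(4, 8)
adj = (torch.eye(4) + torch.roll(torch.eye(4), 1, 0) + torch.roll(torch.eye(4), -1, 0)) > 0
out = GraphAttentionLayer(8, 16)(x, adj)
print(out.shape)  # torch.Size([4, 16])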

Neural Attention Memory

no code implementations • 18 Feb 2023 • Hyoungwook Nam, Seung Byum Seo

We propose a novel perspective on the attention mechanism by reinventing it as a memory architecture for neural networks, namely Neural Attention Memory (NAM).

Few-Shot Learning • Zero-shot Generalization
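
As a loose illustration of viewing attention as an explicit memory (the general idea the abstract describes, not the paper's actual NAM operations), the sketch below builds a toy key-value memory with a differentiable outer-product write and a linear read; all names and sizes are illustrative assumptions.

import torch
import torch.nn.functional as F

def memory_write(M, k, v):
    # Store value v under key k as a rank-1 (outer-product) update of memory M.
    # M: (d_k, d_v), k: (d_k,), v: (d_v,)
    return M + torch.outer(k, v)

def memory_read(M, q):
    # Address the memory with query q; the linear read q^T M returns a blend of
    # stored values weighted by how well q matches each stored key.
    return q @ M

d_k, d_v = 8, 16
M = torch.zeros(d_k, d_v)
keys = F.normalize(torch.randn(3, d_k), dim=-1)  # unit-norm keys
values = torch.randn(3, d_v)
for k, v in zip(keys, values):
    M = memory_write(M, k, v)

# Reading with a stored key roughly recovers its value, up to interference from
# the other stored pairs (exact only when the keys are mutually orthogonal).
recovered = memory_read(M, keys[0])
print(F.cosine_similarity(recovered, values[0], dim=0).item())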

I-BERT: Inductive Generalization of Transformer to Arbitrary Context Lengths

1 code implementation • 18 Jun 2020 • Hyoungwook Nam, Seung Byum Seo, Vikram Sharma Mailthody, Noor Michael, Lan Li

The model inductively generalizes on a variety of algorithmic tasks where state-of-the-art Transformer models fail to do so.

Language Modelling • Masked Language Modeling
