Search Results for author: Amirhossein Kazemnejad

Found 4 papers, 2 papers with code

The Impact of Positional Encoding on Length Generalization in Transformers

2 code implementations · NeurIPS 2023 · Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, Siva Reddy

In this paper, we conduct a systematic empirical study comparing the length generalization performance of decoder-only Transformers under five position encoding approaches: Absolute Position Embedding (APE), T5's Relative PE, ALiBi, Rotary, and no positional encoding (NoPE).

Tasks: Position
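As background for the schemes compared above, here is a minimal illustrative sketch (not code from the paper or its released implementations; function names, shapes, and defaults are assumptions) of how two of them inject position information into a decoder-only attention layer: sinusoidal APE adds position vectors to token embeddings, while ALiBi adds a distance-proportional bias to the attention logits; NoPE skips both and relies on the causal mask alone.

```python
import numpy as np

def sinusoidal_ape(seq_len: int, d_model: int) -> np.ndarray:
    """Absolute sinusoidal position embeddings, added to token embeddings.
    Assumes d_model is even."""
    pos = np.arange(seq_len)[:, None]            # (seq_len, 1)
    dim = np.arange(0, d_model, 2)[None, :]      # (1, d_model // 2)
    angles = pos / (10000 ** (dim / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def alibi_bias(seq_len: int, num_heads: int) -> np.ndarray:
    """ALiBi: per-head linear penalties added to attention logits,
    with no position embeddings at all."""
    # Geometric head slopes, as in the ALiBi paper (exact for power-of-2 head counts).
    slopes = 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)
    distance = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]  # j - i
    # Only past positions matter in a causal decoder; the penalty grows with distance.
    bias = slopes[:, None, None] * np.minimum(distance, 0)  # (heads, query, key)
    return bias

# NoPE would use neither function: token order is conveyed only by the causal mask.
```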

Paraphrase Generation by Learning How to Edit from Samples

no code implementations · ACL 2020 · Amirhossein Kazemnejad, Mohammadreza Salehi, Mahdieh Soleymani Baghshah

With its novel editor module, the model paraphrases the input sequence by editing it according to the relations extracted from a retrieved pair of sentences.

Tasks: Paraphrase Generation, Retrieval, +1