Sentence

3396 papers with code • 0 benchmarks • 0 datasets

Most implemented papers

Neural Machine Translation by Jointly Learning to Align and Translate

graykode/nlp-tutorial 1 Sep 2014

Neural machine translation is a recently proposed approach to machine translation.

Convolutional Neural Networks for Sentence Classification

PaddlePaddle/PaddleNLP EMNLP 2014

We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks.
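
A minimal sketch of that architecture in PyTorch: convolutions of several widths over word embeddings, max-over-time pooling, then a linear classifier. Hyperparameters here are illustrative, and in practice the embedding table would be initialized from pre-trained vectors such as word2vec.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Kim (2014)-style CNN sketch; sizes are illustrative, not the paper's exact setup."""
    def __init__(self, vocab_size, embed_dim=300, num_classes=2,
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        # In practice, load pre-trained word vectors into this table.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Max-over-time pooling per filter width, then concatenate.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

logits = TextCNN(vocab_size=10000)(torch.randint(0, 10000, (8, 20)))
```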

Show and Tell: A Neural Image Caption Generator

karpathy/neuraltalk CVPR 2015

Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions.

Sequence to Sequence Learning with Neural Networks

bentrevett/pytorch-seq2seq NeurIPS 2014

Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.
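
A compact PyTorch sketch of that encoder-decoder shape: one multilayer LSTM summarizes the source into its final (hidden, cell) state, and a second LSTM decodes conditioned on that fixed-size state. Dimensions and vocabulary sizes are illustrative assumptions; the paper additionally reverses source sentences and decodes with beam search.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Sutskever et al. (2014)-style sketch with illustrative sizes."""
    def __init__(self, src_vocab, tgt_vocab, embed_dim=256, hidden_dim=512, layers=4):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, embed_dim)
        self.tgt_embed = nn.Embedding(tgt_vocab, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, layers, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, layers, batch_first=True)
        self.out = nn.Linear(hidden_dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # state = (h, c): the fixed-dimensional summary of the source.
        _, state = self.encoder(self.src_embed(src_ids))
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), state)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab)

logits = Seq2Seq(8000, 8000)(torch.randint(0, 8000, (4, 15)),
                             torch.randint(0, 8000, (4, 12)))
```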

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

UKPLab/sentence-transformers IJCNLP 2019

However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT.
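
Sentence-BERT avoids this by encoding each sentence independently into a fixed-size vector, so similarity search reduces to cheap vector comparisons. A minimal usage sketch with the paper's UKPLab/sentence-transformers library; the checkpoint name is just one commonly used example, not the paper's original model.

```python
from sentence_transformers import SentenceTransformer, util

# Encode every sentence once; "all-MiniLM-L6-v2" is one example checkpoint.
model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = ["A man is eating food.",
             "A man is eating a piece of bread.",
             "The girl is carrying a baby."]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities over the precomputed embeddings --
# no further BERT forward passes are needed at search time.
scores = util.cos_sim(embeddings, embeddings)
```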

A Structured Self-attentive Sentence Embedding

jadore801120/attention-is-all-you-need-pytorch 9 Mar 2017

This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention.
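
In matrix form, the paper computes r attention distributions A = softmax(W_s2 tanh(W_s1 H^T)) over the BiLSTM states H and takes the matrix M = A H as the sentence embedding. A hedged PyTorch sketch, with sizes following the paper's notation but otherwise illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentiveEmbedding(nn.Module):
    """Lin et al. (2017)-style sketch: r attention hops over BiLSTM states."""
    def __init__(self, vocab_size, embed_dim=100, hidden=150, d_a=350, r=30):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.W_s1 = nn.Linear(2 * hidden, d_a, bias=False)
        self.W_s2 = nn.Linear(d_a, r, bias=False)

    def forward(self, token_ids):                  # (batch, n)
        H, _ = self.bilstm(self.embed(token_ids))  # (batch, n, 2*hidden)
        # Each of the r hops is a softmax over the n token positions.
        A = F.softmax(self.W_s2(torch.tanh(self.W_s1(H))), dim=1)  # (batch, n, r)
        return A.transpose(1, 2) @ H               # M: (batch, r, 2*hidden)

M = SelfAttentiveEmbedding(5000)(torch.randint(0, 5000, (2, 40)))
```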

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

google-research/ALBERT ICLR 2020

Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.

Language Models are Few-Shot Learners

openai/gpt-3 NeurIPS 2020

By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.

Effective Approaches to Attention-based Neural Machine Translation

philipperemy/keras-attention-mechanism EMNLP 2015

Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English-to-German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.
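
The attention variants compared in the paper reduce to a few scoring functions, including the "dot" and "general" scores. A hedged PyTorch sketch of those two; tensor names and sizes here are illustrative assumptions, not the paper's code.

```python
import torch

# Hypothetical tensors: decoder state h_t (batch, dim) and top encoder
# states h_s (batch, src_len, dim); W is a learned matrix.
batch, src_len, dim = 4, 10, 512
h_t = torch.randn(batch, dim)
h_s = torch.randn(batch, src_len, dim)
W = torch.randn(dim, dim)

# Luong et al. (2015) scoring functions.
score_dot = torch.bmm(h_s, h_t.unsqueeze(2)).squeeze(2)           # (batch, src_len)
score_general = torch.bmm(h_s, (h_t @ W).unsqueeze(2)).squeeze(2)

# Alignment weights are a softmax over source positions; the context
# vector is the weighted sum of encoder states.
weights = torch.softmax(score_general, dim=1)
context = torch.bmm(weights.unsqueeze(1), h_s).squeeze(1)         # (batch, dim)
```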

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation

NVIDIA/DeepLearningExamples 26 Sep 2016

To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder.
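
A rough structural sketch of that connectivity in PyTorch: the attention query comes from the bottom decoder layer, is scored against the top encoder layer's outputs, and the resulting context feeds the upper decoder layers. Layer counts, sizes, the scoring function, and all names here are illustrative assumptions, not the GNMT implementation.

```python
import torch
import torch.nn as nn

class GNMTStyleDecoderSketch(nn.Module):
    """Illustrative sketch of GNMT's decoder-to-encoder attention wiring."""
    def __init__(self, dim=512, upper_layers=3):
        super().__init__()
        self.bottom = nn.LSTM(dim, dim, batch_first=True)
        self.attn = nn.Linear(dim, dim, bias=False)  # assumed "general"-style score
        self.upper = nn.ModuleList(
            nn.LSTM(2 * dim, dim, batch_first=True) for _ in range(upper_layers)
        )

    def forward(self, tgt_embeds, enc_top):  # enc_top: top encoder layer outputs
        # Attention query is the *bottom* decoder layer's output, so the
        # upper stack can consume a context computed once per step.
        q, _ = self.bottom(tgt_embeds)                   # (batch, tgt_len, dim)
        scores = self.attn(q) @ enc_top.transpose(1, 2)  # (batch, tgt_len, src_len)
        context = torch.softmax(scores, dim=-1) @ enc_top
        x = torch.cat([q, context], dim=-1)              # context feeds every upper layer
        for lstm in self.upper:
            h, _ = lstm(x)
            x = torch.cat([h, context], dim=-1)
        return x

out = GNMTStyleDecoderSketch()(torch.randn(2, 7, 512), torch.randn(2, 11, 512))
```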