Long Short-Term Memory-Networks for Machine Reading

EMNLP 2016 · Jianpeng Cheng, Li Dong, Mirella Lapata

In this paper we address the question of how to render sequence-level networks better at handling structured input. We propose a machine reading simulator which processes text incrementally from left to right and performs shallow reasoning with memory and attention. The reader extends the Long Short-Term Memory architecture with a memory network in place of a single memory cell. This enables adaptive memory usage during recurrence with neural attention, offering a way to weakly induce relations among tokens. The system is initially designed to process a single sequence, but we also demonstrate how to integrate it with an encoder-decoder architecture. Experiments on language modeling, sentiment analysis, and natural language inference show that our model matches or outperforms the state of the art.
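To make the recurrence concrete, below is a minimal NumPy sketch of one step of an LSTMN-style cell as described in the abstract: the single memory cell is replaced by tapes of all previous hidden and memory states, an intra-attention step produces weighted summaries of those tapes, and the usual LSTM gates are conditioned on the summaries instead of the last state. All function and parameter names (`lstmn_step`, `W_h`, `W_x`, `W_ht`, `v`, the gate matrices) are illustrative assumptions, not identifiers from the authors' code; treat this as a sketch of the idea rather than a reference implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def lstmn_step(x_t, H, C, h_tilde_prev, params):
    """One LSTMN-style step (illustrative): attend over all previous hidden and
    memory states, then apply LSTM gating to the attention-weighted summaries.

    x_t          : (d_in,) current input embedding
    H, C         : lists of previous hidden states / memory cells, each (d_h,)
    h_tilde_prev : (d_h,)  previous attention-weighted hidden summary
    params       : dict of weight arrays (hypothetical names)
    """
    d_h = h_tilde_prev.shape[0]

    if H:  # intra-attention over the memory tape
        scores = np.array([
            params["v"] @ np.tanh(
                params["W_h"] @ h_i + params["W_x"] @ x_t + params["W_ht"] @ h_tilde_prev
            )
            for h_i in H
        ])
        s = softmax(scores)
        h_tilde = sum(w * h_i for w, h_i in zip(s, H))  # adaptive hidden summary
        c_tilde = sum(w * c_i for w, c_i in zip(s, C))  # adaptive memory summary
    else:  # first token: nothing to attend to yet
        h_tilde = np.zeros(d_h)
        c_tilde = np.zeros(d_h)

    # Standard LSTM gates, but conditioned on the summaries rather than h_{t-1}, c_{t-1}.
    z = np.concatenate([h_tilde, x_t])
    i = sigmoid(params["W_i"] @ z)
    f = sigmoid(params["W_f"] @ z)
    o = sigmoid(params["W_o"] @ z)
    c_hat = np.tanh(params["W_c"] @ z)
    c_t = f * c_tilde + i * c_hat
    h_t = o * np.tanh(c_t)

    # Append to the tapes so every past token stays addressable by later attention.
    return h_t, c_t, h_tilde, H + [h_t], C + [c_t]

# Toy usage with random weights and three token embeddings.
rng = np.random.default_rng(0)
d_in, d_h = 8, 16
params = {
    "v": rng.standard_normal(d_h),
    "W_h": rng.standard_normal((d_h, d_h)),
    "W_x": rng.standard_normal((d_h, d_in)),
    "W_ht": rng.standard_normal((d_h, d_h)),
    **{k: rng.standard_normal((d_h, d_h + d_in)) for k in ("W_i", "W_f", "W_o", "W_c")},
}
H, C, h_tilde = [], [], np.zeros(d_h)
for x_t in rng.standard_normal((3, d_in)):
    h_t, c_t, h_tilde, H, C = lstmn_step(x_t, H, C, h_tilde, params)
```

Unlike a standard LSTM, which compresses all history into a single cell, the tapes here grow with sequence length; the attention weights over them are what the paper describes as weakly inducing relations among tokens.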

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Natural Language Inference | SNLI | 450D LSTMN with deep attention fusion | % Test Accuracy | 86.3 | #56 |
| Natural Language Inference | SNLI | 450D LSTMN with deep attention fusion | % Train Accuracy | 88.5 | #54 |
| Natural Language Inference | SNLI | 450D LSTMN with deep attention fusion | Parameters | 3.4m | #4 |
| Natural Language Inference | SNLI | 300D LSTMN with deep attention fusion | % Test Accuracy | 85.7 | #67 |
| Natural Language Inference | SNLI | 300D LSTMN with deep attention fusion | % Train Accuracy | 87.3 | #57 |
| Natural Language Inference | SNLI | 300D LSTMN with deep attention fusion | Parameters | 1.7m | #4 |
