Rethinking Complex Neural Network Architectures for Document Classification

Neural network models for many NLP tasks have grown increasingly complex in recent years, making training and deployment more difficult. A number of recent papers have questioned the necessity of such architectures and found that well-executed, simpler models are quite effective. We show that this is also the case for document classification: in a large-scale reproducibility study of several recent neural models, we find that a simple BiLSTM architecture with appropriate regularization yields accuracy and F1 that are competitive with or exceed the state of the art on four standard benchmark datasets. Surprisingly, our simple model is able to achieve these results without attention mechanisms. While these regularization techniques, borrowed from language modeling, are not novel, to our knowledge we are the first to apply them in this context. Our work provides an open-source platform and the foundation for future work in document classification.
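
The abstract describes a single, simple architecture: a bidirectional LSTM with regularization techniques borrowed from language modeling. As a rough illustration only, and not the authors' implementation, here is a minimal PyTorch sketch of such a classifier. All hyperparameters, the max-pooling step, and the use of plain dropout in place of language-model regularizers such as weight dropping are assumptions.

```python
# Minimal sketch of a regularized BiLSTM document classifier.
# Layer sizes, dropout rates, and pooling are illustrative assumptions,
# not the paper's reported configuration.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=256,
                 num_classes=10, embed_dropout=0.1, output_dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Dropout on embeddings; a stand-in for the embedding-level
        # regularization used in language modeling.
        self.embed_dropout = nn.Dropout(embed_dropout)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.output_dropout = nn.Dropout(output_dropout)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of word indices
        x = self.embed_dropout(self.embedding(token_ids))
        hidden_states, _ = self.lstm(x)        # (batch, seq_len, 2 * hidden_dim)
        pooled, _ = hidden_states.max(dim=1)   # max-pool over the time dimension
        return self.fc(self.output_dropout(pooled))

model = BiLSTMClassifier(vocab_size=30000)
logits = model(torch.randint(0, 30000, (8, 120)))  # dummy batch of 8 documents
print(logits.shape)  # torch.Size([8, 10])
```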

Results from the Paper


Task                     Dataset        Model                     Metric    Value  Global Rank
Document Classification  IMDb-M         LSTM-reg (single model)   Accuracy  52.8   #2
Document Classification  Reuters-21578  LSTM-reg (single model)   F1        87.0   #4
Text Classification      Yelp-5         LSTM-reg (single model)   Accuracy  68.7   #5

Methods