Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering

25 Jul 2017 · Yi Tay, Luu Anh Tuan, Siu Cheung Hui

The dominant neural architectures in question answer retrieval are based on recurrent or convolutional encoders configured with complex word matching layers. Given that recent architectural innovations are mostly new word interaction layers or attention-based matching mechanisms, it seems to be a well-established fact that these components are mandatory for good performance. Unfortunately, the memory and computation costs incurred by these complex mechanisms are undesirable for practical applications. As such, this paper tackles the question of whether it is possible to achieve competitive performance with simple neural architectures. We propose a simple but novel deep learning architecture for fast and efficient question-answer ranking and retrieval. More specifically, our proposed model, HyperQA, is a parameter-efficient neural network that outperforms other parameter-intensive models such as Attentive Pooling BiLSTMs and Multi-Perspective CNNs on multiple QA benchmarks. The novelty behind HyperQA is a pairwise ranking objective that models the relationship between question and answer embeddings in hyperbolic space instead of Euclidean space. This empowers our model with a self-organizing ability and enables automatic discovery of latent hierarchies while learning embeddings of questions and answers. Our model requires no feature engineering, no similarity matrix matching, no complicated attention mechanisms, and no over-parameterized layers, yet it outperforms or remains competitive with many models that have these functionalities on multiple benchmarks.
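The core idea, scoring question-answer pairs by their distance in hyperbolic space and training with a pairwise ranking objective, can be sketched as follows. This is a minimal NumPy illustration under the Poincaré ball model, not the paper's implementation; the function names, the margin value, and the hinge form of the loss are assumptions for illustration.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    # Distance in the Poincare ball model of hyperbolic space:
    # d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    # Assumes ||u|| < 1 and ||v|| < 1 (points lie inside the unit ball).
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / (denom + eps))

def pairwise_hinge_loss(q, a_pos, a_neg, margin=1.0):
    # Pairwise ranking objective: push the correct answer's embedding
    # closer (in hyperbolic distance) to the question than a negative sample.
    return max(0.0, margin + poincare_distance(q, a_pos) - poincare_distance(q, a_neg))
```

In a full model the question and answer embeddings would be produced by the encoder and constrained to the unit ball; here they are just small vectors, so the distance stays well-defined.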


Datasets

TrecQA, WikiQA, YahooCQA, SemEvalCQA
Results from the Paper


Task                Dataset     Model    Metric Name  Metric Value  Global Rank
Question Answering  SemEvalCQA  HyperQA  P@1          0.809         # 1
Question Answering  SemEvalCQA  HyperQA  MAP          0.795         # 1
Question Answering  TrecQA      HyperQA  MAP          0.770         # 10
Question Answering  TrecQA      HyperQA  MRR          0.825         # 9
Question Answering  WikiQA      HyperQA  MAP          0.712         # 10
Question Answering  WikiQA      HyperQA  MRR          0.727         # 10
Question Answering  YahooCQA    HyperQA  P@1          0.683         # 2
Question Answering  YahooCQA    HyperQA  MRR          0.801         # 3
Question Answering  YahooCQA    LSTM     P@1          0.465         # 6
Question Answering  YahooCQA    LSTM     MRR          0.669         # 6
Question Answering  YahooCQA    CNN      P@1          0.413         # 7
Question Answering  YahooCQA    CNN      MRR          0.632         # 7

Methods


No methods listed for this paper.