"The Sum of Its Parts": Joint Learning of Word and Phrase Representations with Autoencoders

18 Jun 2015 · Rémi Lebret, Ronan Collobert

Recently, there has been a lot of effort to represent words in continuous vector spaces. Those representations have been shown to capture both semantic and syntactic information about words. However, distributed representations of phrases remain a challenge. We introduce a novel model that jointly learns word vector representations and their summation. Word representations are learned from word co-occurrence statistics. To embed sequences of words (i.e., phrases) of different lengths into a common semantic space, we propose averaging their word vector representations. In contrast with previous methods, which observed compositionality a posteriori through simple summation, we simultaneously train words to sum while preserving as much information as possible from the original vectors. We evaluate the quality of the word representations on several classical word evaluation tasks, and we introduce a novel task to evaluate the quality of the phrase representations. While our distributed representations compete with other methods of learning word representations on the word evaluations, we show that they perform better on the phrase evaluation. Such phrase representations could benefit many tasks in natural language processing.
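
To make the joint objective concrete, here is a minimal sketch of the idea described in the abstract: an autoencoder reconstructs each word's co-occurrence statistics from its embedding, while a second term asks the *average* of a phrase's word embeddings to still reconstruct each member word. This is not the authors' code; the class and function names (`JointAutoencoder`, `joint_loss`), the sigmoid encoder, the mean-squared-error loss (standing in for the paper's Hellinger-based reconstruction of square-rooted co-occurrence probabilities), and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointAutoencoder(nn.Module):
    """Illustrative autoencoder over word co-occurrence vectors (assumed names)."""

    def __init__(self, vocab_dim: int, embed_dim: int = 100):
        super().__init__()
        self.encoder = nn.Linear(vocab_dim, embed_dim)  # co-occurrence stats -> embedding
        self.decoder = nn.Linear(embed_dim, vocab_dim)  # embedding -> co-occurrence stats

    def forward(self, x):
        codes = torch.sigmoid(self.encoder(x))
        return self.decoder(codes), codes

def joint_loss(model, word_cooc, phrase_word_ids, alpha=1.0):
    """word_cooc: (V, D) matrix of co-occurrence statistics.
    phrase_word_ids: list of index tensors, one per phrase.
    alpha: assumed weighting between the two terms."""
    recon, codes = model(word_cooc)
    # Term 1: classic autoencoder reconstruction of each word's statistics.
    word_loss = ((recon - word_cooc) ** 2).mean()

    # Term 2: compositional objective. The mean of a phrase's word codes,
    # pushed through the same decoder, should reconstruct the statistics of
    # every word in the phrase, which "trains words to sum".
    phrase_loss = 0.0
    for ids in phrase_word_ids:
        avg = codes[ids].mean(dim=0, keepdim=True)           # (1, embed_dim)
        phrase_loss = phrase_loss + ((model.decoder(avg) - word_cooc[ids]) ** 2).mean()
    phrase_loss = phrase_loss / max(len(phrase_word_ids), 1)

    return word_loss + alpha * phrase_loss

# Toy usage with random data (50 words, 50 context columns, hypothetical phrases):
model = JointAutoencoder(vocab_dim=50, embed_dim=10)
stats = torch.rand(50, 50)
phrases = [torch.tensor([0, 3, 7]), torch.tensor([2, 5])]
loss = joint_loss(model, stats, phrases)
loss.backward()
```

Because both terms share the encoder and decoder, the word embeddings are shaped during training so that averaging loses as little information as possible, rather than summation being checked only after the fact.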
