Word embeddings have been widely adopted across many NLP applications, and learning them from large unlabeled corpora has been shown to improve a wide range of natural language tasks.
In this paper, we investigate the task of learning word embeddings from very sparse data in an incremental, cognitively plausible way.
Existing approaches for learning word embeddings often assume that each word occurs frequently enough in the corpus that its representation can be accurately estimated from its contexts.
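As a hedged illustration (not the paper's method), the assumption above can be made concrete with a minimal context-window count: each word's representation is estimated from the words that co-occur near it, so rare words yield very sparse count vectors.

```python
# Minimal sketch: estimate a word's "representation" as counts of the
# words appearing within a fixed window around it. Function name and
# window size are illustrative assumptions, not from the paper.
from collections import Counter, defaultdict

def context_counts(tokens, window=2):
    """Count context words within `window` positions of each token."""
    counts = defaultdict(Counter)
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[word][tokens[j]] += 1
    return counts

tokens = "the cat sat on the mat".split()
counts = context_counts(tokens)
# A word seen only once ("cat") gets at most 2*window context counts,
# which is why sparse data makes such estimates unreliable.
```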
Knowledge graphs are structured representations of facts: nodes represent entities and labeled edges represent the relationships between them.
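A common concrete encoding of this structure, sketched here under the usual convention (and not tied to any specific system in the text), stores each fact as a (head, relation, tail) triple:

```python
# Hedged sketch: a knowledge graph as a list of (head, relation, tail)
# triples. Entity and relation names below are illustrative examples.
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
]

def neighbors(graph, entity):
    """Return (relation, tail) pairs for edges leaving `entity`."""
    return [(r, t) for h, r, t in graph if h == entity]
```

Querying `neighbors(triples, "Paris")` returns the outgoing edge `("capital_of", "France")`, i.e. the fact attached to that node.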
Our model family consists of a latent-variable generative model and a discriminative labeler.