Discrete Word Embedding for Logical Natural Language Understanding
We propose an unsupervised neural model for learning a discrete embedding of words. Unlike existing discrete embeddings, our binary embedding supports vector arithmetic operations similar to continuous embeddings. Our embedding represents each word as a set of propositional statements describing a transition rule in the classical/STRIPS planning formalism. This makes the embedding directly compatible with symbolic, state-of-the-art classical planning solvers.
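To illustrate what "vector arithmetic on a binary embedding" can mean, here is a toy sketch. The vectors and the XOR-based analogy operation below are illustrative assumptions, not the paper's learned embedding or its exact formulation: over GF(2), addition and subtraction coincide with XOR, so the classic "a − b + c" analogy query can be posed as a bitwise XOR followed by a nearest-neighbor lookup under Hamming distance.

```python
import numpy as np

# Hypothetical 8-bit binary embeddings, hand-picked for illustration only;
# the paper learns such vectors unsupervised from text.
emb = {
    "king":  np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8),
    "man":   np.array([1, 0, 0, 1, 0, 0, 0, 0], dtype=np.uint8),
    "woman": np.array([0, 1, 0, 1, 0, 0, 0, 0], dtype=np.uint8),
    "queen": np.array([0, 1, 1, 1, 0, 0, 1, 0], dtype=np.uint8),
}

def analogy(a: str, b: str, c: str) -> str:
    # Over GF(2), "a - b + c" reduces to a XOR b XOR c.
    target = emb[a] ^ emb[b] ^ emb[c]
    # Return the vocabulary word nearest to the target in Hamming distance.
    return min(emb, key=lambda w: int(np.sum(emb[w] ^ target)))

print(analogy("king", "man", "woman"))  # -> "queen" with these toy vectors
```

With these hand-picked vectors the query lands exactly on "queen" (Hamming distance 0); in a learned embedding the nearest-neighbor step is what makes the query robust to inexact matches.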