Generating program code for domain-specific tasks
The main experimental result in this paper is that a single Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with weak supervision.
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer.
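A minimal sketch of this view: a word problem is solved by a short program of arithmetic operations executed in sequence. The problem text, the operation set, and the numbers below are all illustrative inventions, not taken from any dataset.

```python
# "Anna has 5 boxes with 12 pencils each and gives away 7 pencils.
#  How many pencils does she have left?" -- as a program of arithmetic ops.
program = [
    ("mul", 5, 12),   # 5 boxes * 12 pencils = 60
    ("sub", 60, 7),   # give away 7 pencils -> 53
]

ops = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b,
}

def execute(program):
    # Run each operation in order; the last result is the final answer.
    result = None
    for op, a, b in program:
        result = ops[op](a, b)
    return result

print(execute(program))  # 53
```

A learned model would have to map the natural-language problem to such an operation sequence; here the sequence is written by hand.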
The system builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages.
Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation.
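The distinction can be made concrete with a toy sketch in which hand-written functions stand in for the neural networks (the names, the doubling task, and the fixed "generated" program are all illustrative):

```python
# Input/output examples for a trivial task: double every list element.
io_examples = [([1, 2], [2, 4]), ([3], [6])]

# (1) Neural program synthesis: the model, conditioned on the I/O
#     examples, emits an *explicit* program that is then executed.
def synthesize(examples):
    # A real model would search or decode; we return a fixed program.
    return "lambda xs: [2 * x for x in xs]"

program = eval(synthesize(io_examples))
print(program([5, 7]))  # [10, 14]

# (2) Neural program induction: the model maps new inputs to outputs
#     directly; the "program" exists only implicitly in its parameters.
def induce(examples, new_input):
    # Stands in for a learned forward pass conditioned on the examples.
    return [2 * x for x in new_input]

print(induce(io_examples, [5, 7]))  # [10, 14]
```

The practical difference is that the synthesized program is an inspectable, reusable artifact, whereas the induced behavior is available only through the network itself.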
Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.
We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates.
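A rough sketch of the neural-predicate idea, assuming the classic MNIST-addition example from the DeepProbLog paper: a network's softmax output supplies the probabilities of the ground atoms digit(Img, 0..9), and probabilistic inference marginalizes over them. The logits below are invented stand-ins for a CNN's output, and the summation is a plain Python rendering of what the logic engine would compute.

```python
import math

def softmax(logits):
    # Standard numerically-stable softmax.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Pretend these logits came from a CNN applied to two digit images,
# one depicting a 3 and one depicting a 5 (illustrative values).
probs1 = softmax([2.0 if d == 3 else 0.0 for d in range(10)])
probs2 = softmax([2.0 if d == 5 else 0.0 for d in range(10)])

def prob_sum(p1, p2, s):
    # P(d1 + d2 = s), treating the two predictions as independent --
    # the marginalization over digit/2 atoms that inference performs.
    return sum(p1[a] * p2[b]
               for a in range(10) for b in range(10) if a + b == s)

print(round(prob_sum(probs1, probs2, 8), 3))  # 8 = 3 + 5 is most probable
```

In DeepProbLog itself this coupling is declared in the logic program rather than coded by hand, so the network can be trained end-to-end from the truth of queries like addition(Img1, Img2, 8).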
Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule.
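A toy illustration of the clustering step, under strong simplifying assumptions: demonstrations are (observation, action) pairs, the rule family is a single threshold, and we keep the largest subset of demonstrations the rule reproduces exactly. All data and the rule family are invented for this sketch; the paper's method handles full trajectories and richer rules.

```python
# Each demonstration is one (observation, action) pair (illustrative data);
# the fourth demo is deliberately inconsistent with any threshold rule.
demos = [
    (0.2, "left"), (0.9, "right"), (0.4, "left"),
    (0.3, "right"), (0.7, "right"),
]

def rule(threshold):
    # The simple, interpretable decision rule family.
    return lambda obs: "right" if obs >= threshold else "left"

def consistent_subset(threshold):
    # Demonstrations the rule describes exactly.
    r = rule(threshold)
    return [(obs, act) for obs, act in demos if r(obs) == act]

# Search the rule family for the largest consistent cluster of demos.
best = max((consistent_subset(t / 10) for t in range(11)), key=len)
print(len(best))  # 4 of the 5 demos fit one threshold rule
```

The remaining demonstrations, which no simple rule explains, would then fall back to the learned imitation policy.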