Generating program code for domain-specific tasks
The main experimental result in this paper is that a single Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with weak supervision.
Solving algebraic word problems requires executing a series of arithmetic operations (a program) to obtain a final answer.
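As a rough illustration (the operation set and the quantity-extraction step below are invented for this sketch, not taken from the paper), such a program can be represented as an ordered list of arithmetic operations over quantities read from the problem text:

```python
# Minimal sketch: an algebraic word problem solved by executing a small
# program, i.e., an ordered list of arithmetic operations.
# Problem: "Ann has 3 boxes of 12 pens and gives away 7. How many remain?"

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b,
}

def execute(program, quantities):
    """Run each (op, i, j) step on a growing list of values.

    Indices i and j refer to positions in the value list, which is
    extended with each step's result so later steps can reuse it.
    """
    vals = list(quantities)
    for op, i, j in program:
        vals.append(OPS[op](vals[i], vals[j]))
    return vals[-1]

quantities = [3, 12, 7]              # numbers extracted from the text
program = [("mul", 0, 1),            # 3 * 12 = 36 pens in total
           ("sub", 3, 2)]            # 36 - 7 = 29 pens remain
print(execute(program, quantities))  # -> 29
```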
To improve learning performance, we explore the idea of forgetting, where a learner can additionally remove programs from its background knowledge (BK).
INDUCTIVE LOGIC PROGRAMMING · MULTI-TASK LEARNING · PROGRAM INDUCTION
In this approach, a program induction system (the learner) is given a set of tasks and initial background knowledge.
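A minimal sketch of that loop, assuming hypothetical helpers `learn_task` and `utility` in place of a real ILP solver, might look like this:

```python
# Sketch of multi-task program induction with forgetting: the learner adds
# solved programs to its background knowledge (BK) for reuse, and prunes
# (forgets) low-utility programs to keep the search space small.
# `learn_task` and `utility` are invented stand-ins, not the paper's system.

def multi_task_learn(tasks, initial_bk, learn_task, utility, bk_capacity=100):
    bk = set(initial_bk)
    solutions = {}
    for task in tasks:
        program = learn_task(task, bk)      # search for a program using BK
        if program is not None:
            solutions[task] = program
            bk.add(program)                 # remember: reuse in later tasks
        if len(bk) > bk_capacity:           # forget: drop low-utility programs
            bk = set(sorted(bk, key=utility, reverse=True)[:bk_capacity])
    return solutions, bk

# Toy usage: "tasks" are target integers; a "program" is an int we can reuse.
toy_learn = lambda task, bk: task if task in bk or task <= 1 else None
solved, bk = multi_task_learn([1, 2, 1, 3], {2, 3}, toy_learn,
                              utility=lambda p: p, bk_capacity=2)
print(solved, bk)  # -> {1: 1, 2: 2, 3: 3} {2, 3}  (program 1 was forgotten)
```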
The system builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages.
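One plausible reading of "neural networks guiding the search" is best-first enumeration over a small domain-specific language (DSL) in which a learned model scores each candidate expansion; the toy DSL and the uniform scorer below are illustrative assumptions, not the paper's actual system:

```python
# Sketch: best-first enumeration over a tiny DSL, guided by a scoring model.
# `score` would be a neural network conditioned on the task; here it is a
# hypothetical callable returning a log-probability for each primitive.
import heapq
import itertools

SEMANTICS = {"inc": lambda x: x + 1,      # a toy DSL of unary integer
             "dec": lambda x: x - 1,      # functions, composed in sequence
             "double": lambda x: 2 * x}

def run(program, x):
    for prim in program:
        x = SEMANTICS[prim](x)
    return x

def guided_search(examples, score, max_len=4):
    """Expand partial programs in order of model score; return the first
    program consistent with all (input, output) examples."""
    counter = itertools.count()  # tie-breaker so heapq never compares lists
    frontier = [(0.0, next(counter), [])]  # (neg log-prob, id, program)
    while frontier:
        neg_lp, _, prog = heapq.heappop(frontier)
        if prog and all(run(prog, x) == y for x, y in examples):
            return prog
        if len(prog) < max_len:
            for prim in SEMANTICS:
                heapq.heappush(
                    frontier,
                    (neg_lp - score(prim, prog), next(counter), prog + [prim]))
    return None

# Example: learn f(x) = 2x + 1 from two I/O pairs with a uniform "model".
uniform = lambda prim, prog: -1.0   # stands in for a trained network
print(guided_search([(1, 3), (4, 9)], uniform))  # -> ['double', 'inc']
```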
Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation.
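The interface difference between the two approaches can be made concrete. In the sketch below, `synthesizer` and `inducer` are stand-ins for trained networks (the hand-written lambdas are toy substitutes, not real models); only approach (1) yields an explicit, inspectable program:

```python
def neural_program_synthesis(synthesizer, interpreter, io_examples, new_input):
    """(1) The network emits an explicit program; an interpreter executes it."""
    program = synthesizer(io_examples)          # e.g. a string in some DSL
    return interpreter(program, new_input)

def neural_program_induction(inducer, io_examples, new_input):
    """(2) The network maps the new input to an output directly; the
    'program' exists only as a latent representation inside the network."""
    return inducer(io_examples, new_input)

# Toy usage with hand-written stand-ins for the trained networks:
synth = lambda ex: "reverse"                    # pretends to predict a program
interp = lambda prog, s: s[::-1] if prog == "reverse" else s
induce = lambda ex, s: s[::-1]                  # pretends to map input->output
print(neural_program_synthesis(synth, interp, [("ab", "ba")], "cde"))  # edc
print(neural_program_induction(induce, [("ab", "ba")], "cde"))         # edc
```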
Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.
KNOWLEDGE BASE QUESTION ANSWERING · META REINFORCEMENT LEARNING · PROGRAM INDUCTION
We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates.
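As a conceptual sketch only (plain Python, not DeepProbLog's actual syntax or API), a neural predicate lets a classifier's output distribution enter probabilistic inference; the hand-rolled sum below is in the spirit of the paper's MNIST-addition example, with `toy_net` as an invented stand-in classifier:

```python
# Conceptual sketch of a neural predicate: a network supplies the probability
# of a logical fact, and a tiny hand-rolled inference step combines such
# facts under the rule: addition(X, Y, Z) holds if digit(X) + digit(Y) == Z.

def addition_probability(net, img_x, img_y, z):
    """P(addition(img_x, img_y, z)) = sum over digit pairs (a, b) with
    a + b == z of net(img_x)[a] * net(img_y)[b].
    `net` is a hypothetical classifier returning a distribution over 0..9."""
    px, py = net(img_x), net(img_y)
    return sum(px[a] * py[b]
               for a in range(10) for b in range(10) if a + b == z)

# Toy stand-in network: always predicts a near-certain digit.
def toy_net(img):
    dist = [0.01] * 10
    dist[img] = 0.91            # pretend the "image" is its own label
    return dist

print(round(addition_probability(toy_net, 3, 5, 8), 3))  # -> 0.829
```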
Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule.
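A hedged sketch of the clustering step, with `fit_rule` and `accuracy` as hypothetical stand-ins for the paper's actual components, is below; the greedy loop keeps a demonstration only while a simple rule fit to the subset still explains every member:

```python
# Sketch: greedily grow a subset of demonstrations that a simple,
# high-performing decision rule can describe accurately.

def find_describable_subset(demos, fit_rule, accuracy, threshold=0.95):
    """Add demonstrations while a rule fit to the subset keeps explaining
    every member above `threshold` accuracy."""
    subset = []
    for demo in demos:
        candidate = subset + [demo]
        rule = fit_rule(candidate)          # e.g. a shallow decision tree
        if all(accuracy(rule, d) >= threshold for d in candidate):
            subset = candidate
    return subset, fit_rule(subset) if subset else None

# Toy usage: demos are (x, label); the "rule" is a threshold on x.
fit = lambda ds: sum(x for x, _ in ds) / len(ds)     # mean as threshold
acc = lambda t, d: float((d[0] >= t) == d[1])        # per-demo accuracy
subset, rule = find_describable_subset(
    [(5, True), (6, True), (1, False), (4, True)], fit, acc)
print(len(subset), rule)
```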
From a prototype tree we form program instances, which we evaluate on a given problem.
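The following sketch, with an invented tree shape and primitive set, shows how one prototype with alternative primitives per slot expands into concrete program instances that are then scored on a problem:

```python
# Sketch: a "prototype" whose structure is fixed and whose leaf slots hold
# alternative primitives; instances are formed by picking one primitive per
# slot and are then scored on the problem. Slots here are illustrative only.
from itertools import product

# Prototype for programs of the form: lambda x: <then> if <cond> else <else>
slots = {
    "cond": ["x > 0", "x % 2 == 0"],
    "then": ["x + 1", "x * 2"],
    "else": ["0", "-x"],
}

def instances():
    for cond, then, other in product(slots["cond"], slots["then"],
                                     slots["else"]):
        src = f"lambda x: ({then}) if ({cond}) else ({other})"
        yield src, eval(src)  # toy example only; eval is unsafe in general

def score(fn, examples):
    return sum(fn(x) == y for x, y in examples)

examples = [(2, 4), (3, -3), (-4, -8)]
best = max(instances(), key=lambda sf: score(sf[1], examples))
print(best[0], score(best[1], examples))  # a perfect instance scores 3
```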