FEW-SHOT LEARNING WITH WEAK SUPERVISION
Few-shot meta-learning methods aim to learn the common structure shared across a set of tasks in order to facilitate learning new tasks from small amounts of data. However, given only a few training examples, many tasks are ambiguous. Such ambiguity can be mitigated with side information in the form of weak labels, which are often readily available. In this paper, we propose a Bayesian gradient-based meta-learning algorithm that incorporates weak labels to reduce task ambiguity and improve performance. Our approach is cast in the framework of amortized variational inference and trained by optimizing a variational lower bound. The proposed method is competitive with state-of-the-art methods and achieves significant performance gains in settings where weak labels are available.
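The abstract does not spell out the objective, but a variational lower bound of the kind it describes typically combines an expected log-likelihood term with a KL divergence between an amortized task posterior and a meta-learned prior. The sketch below is a minimal illustration under that assumption, with a weak-label likelihood term added alongside the few-shot one; all function names and the diagonal-Gaussian parameterization are hypothetical, not taken from the paper.

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL(q || p) between diagonal Gaussians, summed over dimensions.
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def variational_lower_bound(log_lik_fewshot, log_lik_weak,
                            mu_q, logvar_q, mu_p, logvar_p):
    # Hypothetical ELBO: expected log-likelihood of the few-shot labels,
    # plus a term for the weak labels, minus the KL from the amortized
    # task posterior q to the meta-learned prior p.
    return (log_lik_fewshot + log_lik_weak
            - gaussian_kl(mu_q, logvar_q, mu_p, logvar_p))

# Toy usage: a 3-dimensional task-parameter posterior and prior.
mu_q, logvar_q = np.ones(3), np.zeros(3)
mu_p, logvar_p = np.zeros(3), np.zeros(3)
bound = variational_lower_bound(-2.0, -1.0, mu_q, logvar_q, mu_p, logvar_p)
```

In this toy setting the weak-label term simply adds extra evidence to the bound, which is one way side information could reduce task ambiguity: it shifts the posterior without requiring additional strongly labeled examples.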