Search Results for author: Jacob Walker

Found 11 papers, 4 papers with code

Video as the New Language for Real-World Decision Making

no code implementations • 27 Feb 2024 • Sherry Yang, Jacob Walker, Jack Parker-Holder, Yilun Du, Jake Bruce, Andre Barreto, Pieter Abbeel, Dale Schuurmans

Moreover, we demonstrate how, like language models, video generation can serve as planners, agents, compute engines, and environment simulators through techniques such as in-context learning, planning, and reinforcement learning.

Decision Making In-Context Learning +2

Investigating the role of model-based learning in exploration and transfer

no code implementations • 8 Feb 2023 • Jacob Walker, Eszter Vértes, Yazhe Li, Gabriel Dulac-Arnold, Ankesh Anand, Théophane Weber, Jessica B. Hamrick

Our results show that intrinsic exploration combined with environment models presents a viable direction towards agents that are self-supervised and able to generalize to novel reward functions.

Transfer Learning

Procedural Generalization by Planning with Self-Supervised World Models

no code implementations • ICLR 2022 • Ankesh Anand, Jacob Walker, Yazhe Li, Eszter Vértes, Julian Schrittwieser, Sherjil Ozair, Théophane Weber, Jessica B. Hamrick

One of the key promises of model-based reinforcement learning is the ability to generalize using an internal model of the world to make predictions in novel environments and tasks.

Ranked #1 on Meta-Learning on ML10 (meta-test success rate, zero-shot)

Benchmarking Meta-Learning +2

Predicting Video with VQVAE

1 code implementation • 2 Mar 2021 • Jacob Walker, Ali Razavi, Aäron van den Oord

In recent years, the task of video prediction (forecasting future video given past video frames) has attracted attention in the research community.

Video Generation Video Prediction
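This paper builds on VQ-VAE, which compresses frames into discrete codes before predicting them. As a rough illustration only (not the authors' implementation), the core vector-quantization step maps each latent vector to its nearest codebook entry; the function name and toy values below are made up for the sketch:

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry, the
    discretization step at the heart of a VQ-VAE.
    latents: (N, D) array, codebook: (K, D) array.
    Returns (indices, quantized), with quantized[i] == codebook[indices[i]]."""
    # Squared Euclidean distance from every latent to every code: (N, K).
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]

# Toy usage: 3 latents quantized against a 4-entry codebook in 2-D.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
latents = np.array([[0.1, -0.2], [0.9, 0.95], [0.2, 0.8]])
idx, q = vector_quantize(latents, codebook)
```

A prior model can then predict future frames as sequences of these discrete indices rather than raw pixels.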

Representation Learning via Invariant Causal Mechanisms

2 code implementations • 15 Oct 2020 • Jovana Mitrovic, Brian McWilliams, Jacob Walker, Lars Buesing, Charles Blundell

Self-supervised learning has emerged as a strategy to reduce the reliance on costly supervised signals by pretraining representations using only unlabeled data.

Contrastive Learning Out-of-Distribution Generalization +3
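The method is trained with a contrastive objective (plus an invariance regularizer not shown here). As a toy sketch of the contrastive part only, assuming L2-normalized embeddings and an illustrative temperature, an InfoNCE-style loss scores each anchor against its own augmented positive versus the rest of the batch:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Toy InfoNCE-style contrastive loss: each anchor should be most
    similar to its own positive (same image, different augmentation)
    and dissimilar to the other samples in the batch.
    anchors, positives: (N, D) L2-normalized embeddings."""
    logits = anchors @ positives.T / temperature        # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs.diagonal().mean()                 # matching pairs sit on the diagonal

# Aligned pairs should score a lower loss than mismatched ones.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
aligned = info_nce_loss(emb, emb)
mismatched = info_nce_loss(emb, np.roll(emb, 1, axis=0))
```

The paper's contribution is the causal framing and the added invariance penalty across augmentations, which this sketch omits.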

The Pose Knows: Video Forecasting by Generating Pose Futures

1 code implementation • ICCV 2017 • Jacob Walker, Kenneth Marino, Abhinav Gupta, Martial Hebert

First, we explicitly model the high-level structure of active objects in the scene (humans) and use a VAE to model their possible future movements in pose space.

Human Pose Forecasting Video Prediction

An Uncertain Future: Forecasting from Static Images using Variational Autoencoders

no code implementations • 25 Jun 2016 • Jacob Walker, Carl Doersch, Abhinav Gupta, Martial Hebert

We show that our method is able to successfully predict events in a wide variety of scenes and can produce multiple different predictions when the future is ambiguous.
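A VAE produces multiple predictions for an ambiguous future by decoding different latent samples. As a minimal sketch of that sampling step (the standard reparameterization trick, assumed here rather than taken from the paper's exact model):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Standard VAE reparameterization: z = mu + sigma * eps.
    Feeding different z draws to a decoder yields different plausible
    futures for the same input image."""
    eps = rng.normal(size=np.shape(mu))
    return mu + np.exp(0.5 * np.asarray(log_var)) * eps

rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)   # unit-Gaussian latent for the sketch
z1 = reparameterize(mu, log_var, rng)
z2 = reparameterize(mu, log_var, rng)    # a second, different sample
```

Because the noise enters additively, gradients flow through `mu` and `log_var` during training while inference stays stochastic.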

Dense Optical Flow Prediction from a Static Image

no code implementations • ICCV 2015 • Jacob Walker, Abhinav Gupta, Martial Hebert

Because our CNN model makes no assumptions about the underlying scene, it can predict future optical flow on a diverse set of scenarios.

Motion Prediction Optical Flow Estimation
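A dense flow field assigns every pixel a 2-D displacement. The toy forward warp below (an illustration of what such a predicted flow map encodes, not the paper's CNN) moves image content according to the field:

```python
import numpy as np

def warp_forward(img, flow):
    """Apply a dense flow field (dy, dx per pixel) by splatting each
    pixel to its predicted destination, accumulating contributions.
    img: (H, W) array, flow: (H, W, 2) integer displacements."""
    out = np.zeros_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            dy, dx = flow[y, x]
            ny, nx = y + int(dy), x + int(dx)
            if 0 <= ny < H and 0 <= nx < W:  # drop pixels that leave the frame
                out[ny, nx] += img[y, x]
    return out

# One bright pixel predicted to move down 1 row and right 2 columns.
img = np.zeros((4, 4)); img[1, 1] = 1.0
flow = np.zeros((4, 4, 2)); flow[1, 1] = (1, 2)
out = warp_forward(img, flow)
```

In the paper the flow field itself is the CNN's output, predicted from a single static frame.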

Patch to the Future: Unsupervised Visual Prediction

no code implementations • CVPR 2014 • Jacob Walker, Abhinav Gupta, Martial Hebert

In this paper we present a conceptually simple but surprisingly powerful method for visual prediction which combines the effectiveness of mid-level visual elements with temporal modeling.

Hallucination
