no code implementations • 24 Jul 2023 • Reuben Tan, Matthias De Lange, Michael Iuzzolino, Bryan A. Plummer, Kate Saenko, Karl Ridgeway, Lorenzo Torresani
To alleviate this issue, we propose Multiscale Video Pretraining (MVP), a novel self-supervised pretraining approach that learns robust representations for forecasting by predicting contextualized representations of future video clips over multiple timescales.
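The idea of predicting future clip representations at several timescales can be illustrated with a toy sketch. This is not the MVP method itself: the linear prediction heads, the mean-pooled "contextualized" targets, and the timescale values are all hypothetical stand-ins for the paper's learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_future(clip_feats, start, scale):
    """Average the features of the next `scale` clips -- a toy stand-in
    for the contextualized future-clip targets used in pretraining."""
    return clip_feats[start:start + scale].mean(axis=0)

# Toy setup: 16 clips with 8-dim features; one linear "head" per timescale.
clip_feats = rng.normal(size=(16, 8))
timescales = [1, 2, 4]  # hypothetical: predict 1, 2, and 4 clips ahead
heads = {s: rng.normal(scale=0.1, size=(8, 8)) for s in timescales}

current = clip_feats[3]  # representation of the current clip
losses = []
for s in timescales:
    pred = current @ heads[s]                    # predicted future representation
    target = aggregate_future(clip_feats, 4, s)  # aggregated actual future
    losses.append(float(((pred - target) ** 2).mean()))

total_loss = sum(losses)  # sum of per-timescale prediction losses
```

In the real method the heads and the video encoder would be trained jointly to drive this multi-timescale prediction loss down.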
1 code implementation • 11 Jul 2023 • Matthias De Lange, Hamid Eghbalzadeh, Reuben Tan, Michael Iuzzolino, Franziska Meier, Karl Ridgeway
We introduce an evaluation framework that directly exploits the user's data stream with new metrics to measure the adaptation gain over the population model, online generalization, and hindsight performance.
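One of the named metrics, adaptation gain over the population model, can be rendered as a small sketch. The function name and the 0/1-correctness encoding below are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def adaptation_gain(adapted_correct, population_correct):
    """Mean per-step improvement of a user-adapted model over the shared
    population model, evaluated on the same user data stream."""
    adapted = np.asarray(adapted_correct, dtype=float)
    population = np.asarray(population_correct, dtype=float)
    return float((adapted - population).mean())

# Toy stream of per-step correctness (1 = correct prediction).
adapted = [0, 1, 1, 1, 1, 1]       # adapted model improves after warm-up
population = [1, 1, 0, 0, 1, 0]    # static population model
gain = adaptation_gain(adapted, population)  # positive => adaptation helped
```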
no code implementations • 4 Oct 2021 • Satoshi Tsutsui, Ruta Desai, Karl Ridgeway
We are particularly interested in learning egocentric video representations that benefit from the head motion generated by users' daily activities, which can be obtained easily from the IMU sensors embedded in AR/VR devices.
no code implementations • 25 Sep 2019 • Tyler R. Scott, Karl Ridgeway, Michael C. Mozer
We propose a probabilistic method that treats embeddings as random variables.
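Treating an embedding as a random variable rather than a point can be sketched with a diagonal Gaussian per item: higher predicted variance inflates expected distances, letting the model express uncertainty. The sampling scheme and distance below are a generic illustration, not the paper's specific formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_embedding(mean, log_var, n_samples=200):
    """Treat an embedding as a diagonal Gaussian and draw samples from it."""
    std = np.exp(0.5 * log_var)
    return mean + std * rng.normal(size=(n_samples, mean.shape[0]))

def expected_sq_distance(mean_a, log_var_a, mean_b, log_var_b):
    """Monte Carlo estimate of the expected squared distance between two
    stochastic embeddings."""
    a = sample_embedding(mean_a, log_var_a)
    b = sample_embedding(mean_b, log_var_b)
    return float(((a - b) ** 2).sum(axis=1).mean())

mean = np.zeros(4)
# Same means, different uncertainty: variance alone changes the distance.
d_certain = expected_sq_distance(mean, np.full(4, -4.0), mean, np.full(4, -4.0))
d_uncertain = expected_sq_distance(mean, np.full(4, 1.0), mean, np.full(4, 1.0))
```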
1 code implementation • NeurIPS 2018 • Tyler Scott, Karl Ridgeway, Michael C. Mozer
We hope our results will motivate a unification of research in weight transfer, deep metric learning, and few-shot learning.
no code implementations • ICLR 2019 • Karl Ridgeway, Michael C. Mozer
We present a domain-independent method that permits the open-ended recombination of style of one image with the content of another.
2 code implementations • 22 May 2018 • Tyler R. Scott, Karl Ridgeway, Michael C. Mozer
We hope our results will motivate a unification of research in weight transfer, deep metric learning, and few-shot learning.
3 code implementations • NeurIPS 2018 • Karl Ridgeway, Michael C. Mozer
Deep-embedding methods aim to discover representations of a domain that make explicit the domain's class structure and thereby support few-shot learning.
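A minimal sketch of why such embeddings support few-shot learning: if classes form tight clusters in the embedding space, a new class can be classified from a handful of examples by comparing queries to class means. The nearest-class-mean rule below is one standard instantiation, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def few_shot_predict(support, support_labels, queries):
    """Nearest-class-mean classification in an embedding space: each class
    is summarized by the mean of its few support embeddings."""
    classes = np.unique(support_labels)
    prototypes = np.stack(
        [support[support_labels == c].mean(axis=0) for c in classes]
    )
    dists = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy embedding space: two well-separated classes, 5 support shots each.
class0 = rng.normal(loc=0.0, size=(5, 3))
class1 = rng.normal(loc=5.0, size=(5, 3))
support = np.vstack([class0, class1])
labels = np.array([0] * 5 + [1] * 5)
queries = np.vstack(
    [rng.normal(loc=0.0, size=(2, 3)), rng.normal(loc=5.0, size=(2, 3))]
)
preds = few_shot_predict(support, labels, queries)
```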
no code implementations • 15 Dec 2016 • Karl Ridgeway
Supervised inductive biases are constraints on the learned representations derived from additional information associated with the observations.
1 code implementation • 19 Nov 2015 • Jake Snell, Karl Ridgeway, Renjie Liao, Brett D. Roads, Michael C. Mozer, Richard S. Zemel
We propose instead to use a loss function that is better calibrated to human perceptual judgments of image quality: the multiscale structural-similarity score (MS-SSIM).
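To make the loss concrete, here is a simplified single-scale SSIM computed from global image statistics; the actual MS-SSIM used in the paper operates over local windows at multiple scales, so this is only a sketch of the core similarity term.

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Simplified single-scale SSIM using global statistics of two images
    in [0, 1] (real MS-SSIM uses local windows at multiple scales)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )

rng = np.random.default_rng(0)
img = rng.random((32, 32))
noisy = np.clip(img + 0.05 * rng.normal(size=img.shape), 0.0, 1.0)

loss = 1.0 - ssim_global(img, noisy)  # SSIM-based reconstruction loss
```

Identical images score exactly 1, so the loss is 0 for a perfect reconstruction and grows as structural agreement degrades.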