Search Results for author: Xunyu Lin

Found 5 papers, 4 papers with code

Early Action Recognition with Action Prototypes

no code implementations • 11 Dec 2023 • Guglielmo Camporese, Alessandro Bergamo, Xunyu Lin, Joseph Tighe, Davide Modolo

For example, on early recognition observing only the first 10% of each video, our method improves the SOTA by +2.23 Top-1 accuracy on Something-Something-v2, +3.55 on UCF-101, +3.68 on SSsub21, and +5.03 on EPIC-Kitchens-55, where prior work used either multi-modal inputs (e.g. optical flow) or batched inference.

Action Recognition · Optical Flow Estimation

Generation of Virtual Dual Energy Images from Standard Single-Shot Radiographs using Multi-scale and Conditional Adversarial Network

1 code implementation • 22 Oct 2018 • Bo Zhou, Xunyu Lin, Brendan Eck, Jun Hou, David L. Wilson

Dual-energy (DE) chest radiographs provide greater diagnostic information than standard radiographs by separating the image into bone and soft tissue, revealing suspicious lesions which may otherwise be obstructed from view.

Disentangling Motion, Foreground and Background Features in Videos

1 code implementation • 13 Jul 2017 • Xunyu Lin, Victor Campos, Xavier Giro-i-Nieto, Jordi Torres, Cristian Canton Ferrer

This paper introduces an unsupervised framework to extract semantically rich features for video representation.

Decomposing Motion and Content for Natural Video Sequence Prediction

1 code implementation • 25 Jun 2017 • Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee

To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos.

Ranked #1 on Video Prediction on KTH (Cond metric)

Future Prediction · Video Prediction

Learning to Generate Long-term Future via Hierarchical Prediction

2 code implementations • ICML 2017 • Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, Honglak Lee

To avoid the compounding errors inherent in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally construct the future frames from a single observed past frame and the predicted high-level structure, without conditioning on any of the pixel-level predictions.
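The three-stage pipeline in the abstract can be sketched as follows. This is a minimal illustrative mock-up, not the paper's implementation: every function below is a hypothetical stand-in (the paper estimates human pose landmarks, evolves them with a learned sequence model, and renders pixels with a generative network; here each stage is a toy substitute, with structure prediction replaced by simple linear extrapolation).

```python
def estimate_structure(frame):
    """Stand-in for the high-level structure estimator (e.g. pose landmarks).
    Here: the mean intensity of each row, a crude 1-D summary of the frame."""
    return [sum(row) / len(row) for row in frame]

def predict_structure(past_structures, horizon):
    """Stand-in for the learned structure predictor: linearly extrapolate
    the last two observed structures, one step per future frame."""
    prev, last = past_structures[-2], past_structures[-1]
    delta = [b - a for a, b in zip(prev, last)]
    future, current = [], last
    for _ in range(horizon):
        current = [c + d for c, d in zip(current, delta)]
        future.append(current)
    return future

def render_frame(reference_frame, structure):
    """Stand-in for the frame generator: shift each row of the single
    reference frame so its mean matches the predicted structure."""
    return [
        [pixel + (target - sum(row) / len(row)) for pixel in row]
        for row, target in zip(reference_frame, structure)
    ]

def hierarchical_predict(frames, horizon):
    """The pipeline from the abstract: estimate structure in the inputs,
    predict its evolution, then construct each future frame from one past
    frame plus predicted structure -- never from earlier pixel predictions."""
    structures = [estimate_structure(f) for f in frames]
    future_structures = predict_structure(structures, horizon)
    last_frame = frames[-1]  # the single observed frame used for rendering
    return [render_frame(last_frame, s) for s in future_structures]
```

Note that `render_frame` only ever sees the last *observed* frame, which is the point of the hierarchy: pixel errors cannot compound across prediction steps, because recursion happens only in the low-dimensional structure space.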

Video Prediction
