no code implementations • 2 Aug 2019 • Mohammad Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Lars Petersson, Stephen Gould, Amirhossein Habibian
In this paper, we introduce an approach to stochastically combine the root of variations with previous pose information, which forces the model to take the noise into account.
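The paper's exact conditioning scheme is not given here, so the following is only a minimal illustrative sketch of the general idea of stochastically combining a sampled noise vector with previous-pose information so the decoder cannot ignore it; the function name, embedding, and mixing rule are all our own assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_condition(prev_pose, latent_dim=8):
    """Hypothetical sketch: modulate a toy pose embedding with a random
    latent so every sample of the noise changes the conditioning code."""
    z = rng.standard_normal(latent_dim)          # random source of variation
    pose_feat = np.tanh(prev_pose[:latent_dim])  # toy previous-pose embedding
    return z * pose_feat + pose_feat             # noise multiplicatively perturbs the pose code

cond = stochastic_condition(np.ones(16))
print(cond.shape)  # (8,)
```

Because the latent multiplies the pose features rather than being appended to them, repeated calls with the same pose yield different conditioning vectors, which is the property the sentence above alludes to.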
no code implementations • 22 Oct 2018 • Mohammad Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, Lars Andersson
Action anticipation is critical in scenarios where one needs to react before the action is finalized.
no code implementations • ECCV 2018 • Fatemeh Sadat Saleh, Mohammad Sadegh Aliakbarian, Mathieu Salzmann, Lars Petersson, Jose M. Alvarez
Our approach builds on the observation that foreground and background classes are not affected in the same manner by the domain shift, and thus should be treated differently.
no code implementations • ICCV 2017 • Fatemeh Sadat Saleh, Mohammad Sadegh Aliakbarian, Mathieu Salzmann, Lars Petersson, Jose M. Alvarez
Our experiments demonstrate the benefits of our classifier heatmaps and of our two-stream architecture on challenging urban scene datasets and on the YouTube-Objects benchmark, where we obtain state-of-the-art results.
no code implementations • 6 Jun 2017 • Fatemeh Sadat Saleh, Mohammad Sadegh Aliakbarian, Mathieu Salzmann, Lars Petersson, Jose M. Alvarez, Stephen Gould
We then show how to obtain multi-class masks by fusing foreground/background masks with information extracted from a weakly-supervised localization network.
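As a toy illustration of this kind of fusion (not the paper's actual procedure; the function and the argmax-based assignment are our assumptions), one can restrict per-class localization heatmaps to the foreground region and label each foreground pixel with its highest-scoring class:

```python
import numpy as np

def fuse_masks(fg_mask, class_heatmaps):
    """Hypothetical fusion: keep only foreground pixels and assign each one
    to the class whose weak-localization heatmap scores highest there."""
    scores = class_heatmaps * fg_mask[None]   # zero out background responses
    labels = scores.argmax(axis=0) + 1        # object classes are 1..C
    labels[fg_mask == 0] = 0                  # 0 = background
    return labels

fg = np.array([[1, 1],
               [0, 1]])
heat = np.array([[[0.9, 0.1], [0.5, 0.2]],   # class-1 heatmap
                 [[0.1, 0.8], [0.4, 0.7]]])  # class-2 heatmap
labels = fuse_masks(fg, heat)  # foreground pixels become 1 or 2, background stays 0
```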
1 code implementation • ICCV 2017 • Mohammad Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, Lars Andersson
In contrast to the widely studied problem of recognizing an action given a complete sequence, action anticipation aims to identify the action from only partially available videos.
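To make the partial-observation setting concrete (this is only a schematic sketch under our own naming, not the architecture proposed in the paper), anticipation can be framed as predicting the action class from the per-frame scores of only the frames seen so far:

```python
import numpy as np

def anticipate(frame_scores, observed):
    """Toy anticipation: predict the action class from only the first
    `observed` frames by averaging their per-frame class scores."""
    partial = frame_scores[:observed]          # portion of the video seen so far
    return int(partial.mean(axis=0).argmax())  # predicted class index

# Per-frame scores for a 4-frame clip over 3 classes; early frames are
# ambiguous, later frames increasingly favour class 2.
scores = np.array([[0.4, 0.3, 0.3],
                   [0.4, 0.2, 0.4],
                   [0.1, 0.1, 0.8],
                   [0.0, 0.1, 0.9]])
early = anticipate(scores, 1)  # prediction from one frame
full = anticipate(scores, 4)   # prediction from the whole clip
```

The gap between the early and full predictions is exactly what anticipation methods try to close: committing to the correct class before the sequence is complete.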
no code implementations • 17 Nov 2016 • Mohammad Sadegh Aliakbarian, Fatemehsadat Saleh, Basura Fernando, Mathieu Salzmann, Lars Petersson, Lars Andersson
We outperform the state-of-the-art methods that, like ours, rely only on RGB frames as input, for both action recognition and anticipation.