1 code implementation • NeurIPS 2023 • Guoxi Huang, Hongtao Fu, Adrian G. Bors
With the same level of computational complexity as ViT-Base and ViT-Large, we instantiate ViTs that are 4.5$\times$ and 2$\times$ deeper, dubbed ViT-S-54 and ViT-B-48.
1 code implementation • ICCV 2023 • Fei Ye, Adrian G. Bors
In this paper, we propose a novel and effective framework for TFCL, which dynamically expands the architecture of a Dynamic Expansion Model (DEM) through a self-assessment mechanism that uses the diversity of knowledge among existing experts as the expansion signal.
1 code implementation • ICCV 2023 • Fei Ye, Adrian G. Bors
Despite promising achievements by the Variational Autoencoder (VAE) mixtures in continual learning, such methods ignore the redundancy among the probabilistic representations of their components when performing model expansion, leading to mixture components learning similar tasks.
no code implementations • 23 Nov 2022 • Guoxi Huang, Adrian G. Bors
The static appearance of video may impede the ability of a deep neural network to learn motion-relevant features in video action recognition.
no code implementations • 12 Oct 2022 • Fei Ye, Adrian G. Bors
This paper develops a new theoretical analysis framework which provides generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model.
1 code implementation • 20 Jul 2022 • Fei Ye, Adrian G. Bors
Due to their inference, data representation and reconstruction properties, Variational Autoencoders (VAE) have been successfully used in continual learning classification tasks.
no code implementations • TIP 2022 • Guoxi Huang, Adrian G. Bors
Through experiments we show that the proposed MBPM can be used as a plug-in module in various CNN backbone architectures, significantly boosting their performance.
no code implementations • 11 Jul 2022 • Fei Ye, Adrian G. Bors
In this paper, we address a more challenging and realistic setting in CL, namely the Task-Free Continual Learning (TFCL) in which a model is trained on non-stationary data streams with no explicit task information.
1 code implementation • 25 Mar 2022 • Fei Ye, Adrian G. Bors
In this article, we provide the appendix for Lifelong Generative Modelling Using Dynamic Expansion Graph Model.
1 code implementation • 15 Dec 2021 • Fei Ye, Adrian G. Bors
In this paper we study the forgetting behaviour of VAEs using a joint GR and ENA methodology, by deriving an upper bound on the negative marginal log-likelihood.
1 code implementation • ICCV 2021 • Fei Ye, Adrian G. Bors
Recent research efforts in lifelong learning propose to grow a mixture of models to adapt to an increasing number of tasks.
1 code implementation • 9 Jul 2021 • Fei Ye, Adrian G. Bors
While the Student module is trained on a newly given database, the Teacher module reminds the Student of the information learnt in the past.
no code implementations • 9 Jul 2021 • Fei Ye, Adrian G. Bors
In this paper, we propose a new continuously learning generative model, called the Lifelong Twin Generative Adversarial Networks (LT-GANs).
no code implementations • 9 Jul 2021 • Fei Ye, Adrian G. Bors
Learning disentangled and interpretable representations is an important step towards accomplishing comprehensive data representations on the manifold.
1 code implementation • 9 Jul 2021 • Fei Ye, Adrian G. Bors
The mixing coefficients in the mixture control the contribution of each expert to the goal representation.
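The weighting scheme above can be sketched minimally as a softmax gate over experts. This is an illustrative mixture-of-experts combination, not the authors' actual model; the expert representations and gate logits here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_output(expert_outputs, gate_logits):
    """Combine expert outputs with softmax mixing coefficients.

    expert_outputs: (K, D) array, one representation per expert.
    gate_logits:    (K,) unnormalized scores, one per expert.
    """
    logits = gate_logits - gate_logits.max()        # numerical stability
    coeffs = np.exp(logits) / np.exp(logits).sum()  # mixing coefficients sum to 1
    return coeffs @ expert_outputs                  # weighted sum over experts

experts = rng.normal(size=(3, 4))    # 3 experts, 4-dim representations (toy data)
logits = np.array([2.0, 0.5, -1.0])  # gate favours the first expert
y = mixture_output(experts, logits)
```

Because the coefficients are normalized, raising one expert's gate logit shifts the goal representation toward that expert's output.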
2 code implementations • 29 Mar 2021 • Guoxi Huang, Adrian G. Bors
We design a trainable Motion Band-Pass Module (MBPM) for separating busy information from quiet information in raw video data.
Ranked #15 on Action Recognition on UCF101
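The busy/quiet separation can be illustrated with a crude temporal filter: frame differencing keeps fast-changing ("busy") content while temporal averaging keeps the static ("quiet") appearance. This is only a hand-built sketch of the idea, not the trainable MBPM itself.

```python
import numpy as np

def temporal_band_pass(frames):
    """Crude temporal split of a grayscale clip.

    frames: (T, H, W) video clip.
    Returns (busy, quiet): frame differences of shape (T-1, H, W)
    capturing fast temporal change, and the temporal mean of shape
    (H, W) capturing static appearance.
    """
    busy = np.diff(frames, axis=0)  # high temporal frequencies
    quiet = frames.mean(axis=0)     # static, low-frequency appearance
    return busy, quiet

# toy clip: constant background plus one bright pixel moving left to right
T, H, W = 4, 5, 5
clip = np.ones((T, H, W))
for t in range(T):
    clip[t, 2, t] += 5.0            # the moving ("busy") part
busy, quiet = temporal_band_pass(clip)
```

In the toy clip, the static background differences out to zero in `busy`, while the moving pixel survives; the learnable MBPM plays an analogous filtering role end to end.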
1 code implementation • ECCV 2020 • Fei Ye, Adrian G. Bors
The proposed model supports many downstream tasks that traditional generative replay methods cannot, including interpolation and inference across different data domains.
1 code implementation • 17 Jul 2020 • Guoxi Huang, Adrian G. Bors
Convolutional Neural Networks (CNNs) model long-range dependencies by deeply stacking convolution operations with small window sizes, which makes optimization difficult.
Ranked #32 on Action Recognition on Something-Something V1
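The cost of stacking small windows can be made concrete: with stride-1 layers, each extra k x k convolution grows the receptive field by only k-1 pixels, so covering long-range dependencies requires depth linear in the range. A minimal calculation:

```python
def receptive_field(num_layers, kernel=3):
    """Receptive field of a stack of stride-1 conv layers with equal
    kernel size: each layer adds (kernel - 1) pixels of context."""
    return 1 + num_layers * (kernel - 1)

# even 10 stacked 3x3 convolutions only see a 21-pixel window
rf = receptive_field(10)
```

This linear growth is why very deep stacks are needed for global context, and why training them becomes difficult.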
no code implementations • 6 May 2020 • Cameron Kyle-Davidson, Adrian G. Bors, Karla K. Evans
The VMS model is based upon the results of memory experiments conducted on human observers, and provides a 2D map of memorability.
no code implementations • 11 Feb 2020 • Guoxi Huang, Adrian G. Bors
In this paper, we propose a new video representation learning method, named Temporal Squeeze (TS) pooling, which extracts the essential movement information from a long sequence of video frames and maps it into a small set of images, named Squeezed Images.
Ranked #43 on Action Recognition on UCF101 (using extra training data)
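The squeezing step can be sketched as a weighted temporal pooling: T frames are collapsed into M images via per-image weights over time. In the actual method such weights would be learned; here they are random and purely illustrative, so this is an assumption-laden sketch rather than the published TS pooling operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_squeeze(frames, weights):
    """Squeeze T frames into M "squeezed images" by weighted temporal
    pooling.

    frames:  (T, H, W) video clip.
    weights: (M, T) unnormalized temporal weights, one row per output
             image (learned in the real method, random here).
    Returns: (M, H, W).
    """
    # softmax over time: each squeezed image is a convex combination of frames
    w = np.exp(weights - weights.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('mt,thw->mhw', w, frames)

frames = rng.normal(size=(16, 8, 8))  # a 16-frame toy clip
weights = rng.normal(size=(2, 16))    # squeeze 16 frames into 2 images
squeezed = temporal_squeeze(frames, weights)
```

Since each output pixel is a convex combination over time, the squeezed images stay within the per-pixel range of the input frames while summarizing the whole sequence.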
no code implementations • 5 Mar 2019 • Erdem Akagunduz, Adrian G. Bors, Karla K. Evans
The memorability of an image is a characteristic determined by human observers' ability to remember images they have seen.