Search Results for author: Adrian G. Bors

Found 21 papers, 12 papers with code

Masked Image Residual Learning for Scaling Deeper Vision Transformers

1 code implementation NeurIPS 2023 Guoxi Huang, Hongtao Fu, Adrian G. Bors

With the same level of computational complexity as ViT-Base and ViT-Large, we instantiate 4.5$\times$ and 2$\times$ deeper ViTs, dubbed ViT-S-54 and ViT-B-48.

Object Detection +3

Self-Evolved Dynamic Expansion Model for Task-Free Continual Learning

1 code implementation ICCV 2023 Fei Ye, Adrian G. Bors

In this paper, we propose a novel and effective framework for TFCL, which dynamically expands the architecture of a DEM model through a self-assessment mechanism evaluating the diversity of knowledge among existing experts as expansion signals.

Continual Learning Transfer Learning
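
As a minimal sketch of the kind of diversity-driven expansion signal the abstract describes (not the paper's exact criterion), one can compare the incoming data against each existing expert and expand only when none of them covers it well; the embeddings and threshold below are hypothetical placeholders.

```python
import numpy as np

def should_expand(new_embedding, expert_embeddings, threshold=1.0):
    """Return True if the incoming data looks sufficiently different from every
    existing expert, i.e. the knowledge diversity justifies adding a new expert.
    `new_embedding` and `expert_embeddings` are hypothetical mean feature vectors."""
    if not expert_embeddings:
        return True  # no experts yet: always expand
    distances = [np.linalg.norm(new_embedding - e) for e in expert_embeddings]
    return min(distances) > threshold  # no existing expert is close to this data
```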

Wasserstein Expansible Variational Autoencoder for Discriminative and Generative Continual Learning

1 code implementation ICCV 2023 Fei Ye, Adrian G. Bors

Despite promising achievements by the Variational Autoencoder (VAE) mixtures in continual learning, such methods ignore the redundancy among the probabilistic representations of their components when performing model expansion, leading to mixture components learning similar tasks.

Continual Learning
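
As a rough illustration of measuring redundancy between probabilistic components (not the paper's exact procedure), the squared 2-Wasserstein distance between two diagonal-Gaussian posteriors has the closed form below; a small distance would flag two mixture components as learning similar tasks.

```python
import numpy as np

def w2_sq_diag_gaussians(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between N(mu1, diag(sigma1^2)) and
    N(mu2, diag(sigma2^2)); small values indicate redundant components."""
    return float(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2))
```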

Dynamic Appearance: A Video Representation for Action Recognition with Joint Training

no code implementations 23 Nov 2022 Guoxi Huang, Adrian G. Bors

The static appearance of video may impede the ability of a deep neural network to learn motion-relevant features in video action recognition.

Action Recognition Temporal Action Localization +1

Task-Free Continual Learning via Online Discrepancy Distance Learning

no code implementations 12 Oct 2022 Fei Ye, Adrian G. Bors

This paper develops a new theoretical analysis framework which provides generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model.

Continual Learning Generalization Bounds
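
For orientation only, bounds of this kind typically take the generic discrepancy-distance form below (in the spirit of domain-adaptation theory), with $\mathcal{S}$ standing for the distribution of visited samples and $\mathcal{T}$ for the full training information; the paper's actual statement for the task-free setting differs in its terms and assumptions.

```latex
% generic discrepancy-distance generalization bound (illustrative form only)
\mathcal{R}_{\mathcal{T}}(h) \;\le\; \mathcal{R}_{\mathcal{S}}(h)
  \;+\; \mathrm{disc}(\mathcal{S}, \mathcal{T}) \;+\; \lambda,
\qquad
\lambda = \min_{h' \in \mathcal{H}}
  \bigl( \mathcal{R}_{\mathcal{S}}(h') + \mathcal{R}_{\mathcal{T}}(h') \bigr)
```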

Continual Variational Autoencoder Learning via Online Cooperative Memorization

1 code implementation 20 Jul 2022 Fei Ye, Adrian G. Bors

Due to their inference, data representation and reconstruction properties, Variational Autoencoders (VAE) have been successfully used in continual learning classification tasks.

Continual Learning Memorization

BQN: Busy-Quiet Net Enabled by Motion Band-Pass Module for Action Recognition

no code implementations TIP 2022 Guoxi Huang, Adrian G. Bors

Through experiments we show that the proposed MBPM can be used as a plug-in module in various CNN backbone architectures, significantly boosting their performance.

Action Recognition

Learning an evolved mixture model for task-free continual learning

no code implementations 11 Jul 2022 Fei Ye, Adrian G. Bors

In this paper, we address a more challenging and realistic setting in CL, namely the Task-Free Continual Learning (TFCL) in which a model is trained on non-stationary data streams with no explicit task information.

Continual Learning

Supplemental Material: Lifelong Generative Modelling Using Dynamic Expansion Graph Model

1 code implementation 25 Mar 2022 Fei Ye, Adrian G. Bors

In this article, we provide the appendix for Lifelong Generative Modelling Using Dynamic Expansion Graph Model.

Lifelong Generative Modelling Using Dynamic Expansion Graph Model

1 code implementation 15 Dec 2021 Fei Ye, Adrian G. Bors

In this paper we study the forgetting behaviour of VAEs using a joint GR and ENA methodology, by deriving an upper bound on the negative marginal log-likelihood.
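
For context, the standard upper bound on the negative marginal log-likelihood of a single VAE (the negative ELBO) is reproduced below; the paper derives its bound in a lifelong, dynamically expanding setting on top of this basic inequality.

```latex
% standard single-model bound; the paper's lifelong bound builds on this idea
-\log p_{\theta}(x) \;\le\;
  \mathbb{E}_{q_{\phi}(z \mid x)}\!\left[ -\log p_{\theta}(x \mid z) \right]
  \;+\; \mathrm{KL}\!\left( q_{\phi}(z \mid x) \,\|\, p(z) \right)
```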

Lifelong Infinite Mixture Model Based on Knowledge-Driven Dirichlet Process

1 code implementation ICCV 2021 Fei Ye, Adrian G. Bors

Recent research efforts in lifelong learning propose to grow a mixture of models to adapt to an increasing number of tasks.

Lifelong Teacher-Student Network Learning

1 code implementation 9 Jul 2021 Fei Ye, Adrian G. Bors

While the Student module is trained on a newly given database, the Teacher module reminds the Student of the information learnt in the past.

Generative Adversarial Network
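
As a loose sketch of the Teacher "reminding" the Student about the past (not the paper's exact training procedure), a generative-replay step can mix samples replayed by a frozen Teacher with the new database; `teacher.sample()` and the reconstruction-style loss below are hypothetical placeholders.

```python
import torch

def lifelong_step(student, optimizer, teacher, new_batch, loss_fn):
    """One illustrative training step: replayed samples from the frozen Teacher
    stand in for past databases, so the Student is optimized on old and new data."""
    with torch.no_grad():
        replay = teacher.sample(new_batch.size(0))   # hypothetical sampling API
    mixed = torch.cat([new_batch, replay], dim=0)
    loss = loss_fn(student(mixed), mixed)            # e.g. a reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```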

Lifelong Twin Generative Adversarial Networks

no code implementations 9 Jul 2021 Fei Ye, Adrian G. Bors

In this paper, we propose a new continuously learning generative model, called the Lifelong Twin Generative Adversarial Networks (LT-GANs).

Knowledge Distillation

InfoVAEGAN: learning joint interpretable representations by information maximization and maximum likelihood

no code implementations 9 Jul 2021 Fei Ye, Adrian G. Bors

Learning disentangled and interpretable representations is an important step towards accomplishing comprehensive data representations on the manifold.

Representation Learning

Lifelong Mixture of Variational Autoencoders

1 code implementation 9 Jul 2021 Fei Ye, Adrian G. Bors

The mixing coefficients in the mixture control the contributions of each expert to the goal representation.
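
The weighting referred to above is the standard mixture form, in which the mixing coefficients $\pi_k$ are non-negative, sum to one, and set each expert's contribution:

```latex
p(x) \;=\; \sum_{k=1}^{K} \pi_k \, p_k(x),
\qquad \sum_{k=1}^{K} \pi_k = 1, \quad \pi_k \ge 0
```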

Busy-Quiet Video Disentangling for Video Classification

2 code implementations 29 Mar 2021 Guoxi Huang, Adrian G. Bors

We design a trainable Motion Band-Pass Module (MBPM) for separating busy information from quiet information in raw video data.

Action Classification Action Recognition In Videos +3
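
A minimal sketch of what a trainable motion band-pass module might look like: a depthwise temporal convolution initialised as a second-difference band-pass filter and left trainable. This is an illustrative stand-in, not the paper's exact MBPM design.

```python
import torch
import torch.nn as nn

class MotionBandPass(nn.Module):
    """Illustrative band-pass module: a depthwise temporal convolution, initialised
    with a [-1, 2, -1] second-difference kernel, splits the input clip into a
    motion-heavy ('busy') part and a slowly varying ('quiet') residual."""
    def __init__(self, channels):
        super().__init__()
        self.filter = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1),
                                padding=(1, 0, 0), groups=channels, bias=False)
        with torch.no_grad():
            k = torch.tensor([-1.0, 2.0, -1.0]).view(1, 1, 3, 1, 1)
            self.filter.weight.copy_(k.repeat(channels, 1, 1, 1, 1))

    def forward(self, x):          # x: (batch, channels, time, height, width)
        busy = self.filter(x)      # temporally band-passed (motion) component
        quiet = x - busy           # remaining static/slow component
        return busy, quiet
```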

Learning latent representations across multiple data domains using Lifelong VAEGAN

1 code implementation ECCV 2020 Fei Ye, Adrian G. Bors

The proposed model supports many downstream tasks that traditional generative replay methods can not, including interpolation and inference across different data domains.

Representation Learning
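
As an illustration of the cross-domain interpolation mentioned above (not the paper's specific procedure), interpolation in a shared latent space can be sketched as follows, assuming hypothetical `encode`/`decode` methods on the model:

```python
import torch

def latent_interpolation(model, x_a, x_b, steps=8):
    """Decode a linear path between the latent codes of two inputs, possibly drawn
    from different data domains; `encode`/`decode` are hypothetical model methods."""
    z_a, z_b = model.encode(x_a), model.encode(x_b)
    alphas = torch.linspace(0.0, 1.0, steps)
    return torch.stack([model.decode((1 - a) * z_a + a * z_b) for a in alphas])
```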

Region-based Non-local Operation for Video Classification

1 code implementation 17 Jul 2020 Guoxi Huang, Adrian G. Bors

Convolutional Neural Networks (CNNs) model long-range dependencies by deeply stacking convolution operations with small window sizes, which makes optimization difficult.

Action Classification Action Recognition In Videos +4
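
For contrast with the stacked small-kernel convolutions described above, a generic (non-region-based) non-local operation aggregates features from all positions in a single step; the sketch below follows the standard embedded-Gaussian form rather than the paper's region-based variant.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Generic non-local operation: every position attends to every other position,
    so long-range dependencies are modelled in one layer instead of many."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv1d(channels, reduced, 1)
        self.phi = nn.Conv1d(channels, reduced, 1)
        self.g = nn.Conv1d(channels, reduced, 1)
        self.out = nn.Conv1d(reduced, channels, 1)

    def forward(self, x):                      # x: (batch, channels, positions)
        q = self.theta(x).transpose(1, 2)      # (B, N, C')
        k = self.phi(x)                        # (B, C', N)
        attn = torch.softmax(q @ k, dim=-1)    # (B, N, N) pairwise affinities
        v = self.g(x).transpose(1, 2)          # (B, N, C')
        y = (attn @ v).transpose(1, 2)         # (B, C', N)
        return x + self.out(y)                 # residual connection
```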

Generating Memorable Images Based on Human Visual Memory Schemas

no code implementations 6 May 2020 Cameron Kyle-Davidson, Adrian G. Bors, Karla K. Evans

The VMS model is based upon the results of memory experiments conducted on human observers, and provides a 2D map of memorability.

Learning spatio-temporal representations with temporal squeeze pooling

no code implementations 11 Feb 2020 Guoxi Huang, Adrian G. Bors

In this paper, we propose a new video representation learning method, named Temporal Squeeze (TS) pooling, which can extract the essential movement information from a long sequence of video frames and map it into a set of few images, named Squeezed Images.

Ranked #43 on Action Recognition on UCF101 (using extra training data)

Action Recognition Classification +3
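
A small sketch of the general idea of squeezing a long frame sequence into a few images via learned temporal weights; the actual TS pooling operator in the paper is defined differently, so treat this purely as an illustration.

```python
import torch
import torch.nn as nn

class TemporalSqueeze(nn.Module):
    """Illustrative temporal squeeze: learn K soft weightings over the T input
    frames and collapse the clip into K 'squeezed images'."""
    def __init__(self, num_frames, num_squeezed=2):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(num_squeezed, num_frames))

    def forward(self, x):                       # x: (batch, T, C, H, W)
        w = self.weights.softmax(dim=1)         # (K, T), each row sums to one
        return torch.einsum('kt,btchw->bkchw', w, x)   # (batch, K, C, H, W)
```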

Defining Image Memorability using the Visual Memory Schema

no code implementations 5 Mar 2019 Erdem Akagunduz, Adrian G. Bors, Karla K. Evans

Memorability of an image is a characteristic determined by the human observers' ability to remember images they have seen.

Transfer Learning
