1 code implementation • 12 Mar 2024 • Mark D. McDonnell, Dong Gong, Ehsan Abbasnejad, Anton van den Hengel
We show here that the combination of a large language model and an image generation model can similarly provide useful premonitions as to how a continual learning challenge might develop over time.
1 code implementation • NeurIPS 2023 • Mark D. McDonnell, Dong Gong, Amin Parvaneh, Ehsan Abbasnejad, Anton van den Hengel
In this paper, we propose a concise and effective approach for CL with pre-trained models.
1 code implementation • 16 Jul 2019 • Mark D. McDonnell, Hesham Mostafa, Runchun Wang, Andre van Schaik
We found, in experiments with wide residual networks on the ImageNet, CIFAR-10 and CIFAR-100 image classification datasets, that BN layers do not consistently offer a significant advantage.
Ranked #94 on Image Classification on CIFAR-100 (using extra training data)
no code implementations • 8 Oct 2018 • Victor Stamatescu, Mark D. McDonnell
Convolutional Neural Networks (CNNs) are a class of artificial neural networks whose computational blocks use convolution, together with other linear and non-linear operations, to perform classification or regression.
5 code implementations • ICLR 2018 • Mark D. McDonnell
Using wide residual networks as our main baseline, our approach simplifies existing methods that binarize weights by applying the sign function during training: for each layer we apply a scaling factor with a constant, unlearned value equal to the layer-specific standard deviation used for initialization.
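The scaling rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the layer shape, the He-style initialization, and the single forward pass are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy fully connected "layer": full-precision latent weights are kept,
# initialized with a He-style standard deviation.
fan_in, fan_out = 256, 128
std = np.sqrt(2.0 / fan_in)                    # layer-specific init std
W = rng.standard_normal((fan_in, fan_out)) * std

# Binarized forward pass: sign(W) scaled by the constant, unlearned
# per-layer factor equal to the initialization standard deviation.
W_bin = std * np.sign(W)

x = rng.standard_normal(fan_in)
y = x @ W_bin                                  # uses only {-std, +std} weights
```

In training schemes of this kind, gradients typically flow to the latent full-precision weights (e.g. via a straight-through estimator) while the forward pass uses the binarized, scaled copy; only the forward scaling step is sketched here.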
no code implementations • 21 Apr 2017 • Sebastien C. Wong, Victor Stamatescu, Adam Gatt, David Kearney, Ivan Lee, Mark D. McDonnell
We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general-purpose object recognition system with the ability to detect and track a variety of object types.
no code implementations • 28 Sep 2016 • Sebastien C. Wong, Adam Gatt, Victor Stamatescu, Mark D. McDonnell
In this paper we investigate the benefit of augmenting data with synthetically created samples when training a machine learning classifier.
no code implementations • 16 Mar 2015 • Mark D. McDonnell, Tony Vladusich
We present a neural network architecture and training method designed to enable very rapid training and low implementation complexity.
Ranked #24 on Image Classification on MNIST
no code implementations • 29 Dec 2014 • Mark D. McDonnell, Migel D. Tissera, Tony Vladusich, André van Schaik, Jonathan Tapson
Our close-to-state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier warrant greater consideration, either as a standalone method for simpler problems or as the final classification stage in deep neural networks applied to more difficult problems.
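The core of an ELM-style classifier of the kind discussed above can be sketched briefly: a fixed random hidden layer followed by output weights solved in closed form. This is a generic illustration, not the paper's code; the toy data, hidden-layer size, ReLU nonlinearity, and ridge parameter are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 10 features, 3 classes with one-hot targets.
X = rng.standard_normal((200, 10))
y = rng.integers(0, 3, size=200)
T = np.eye(3)[y]

# ELM: input weights and biases are random and never trained.
n_hidden = 64
W = rng.standard_normal((10, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.maximum(X @ W + b, 0.0)          # hidden activations

# Only the output weights are learned, via ridge-regularized
# least squares in closed form (no iterative optimization).
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)

pred = np.argmax(H @ beta, axis=1)
train_acc = (pred == y).mean()
```

Because training reduces to a single linear solve, fitting is very fast, which is the "rapid training and low implementation complexity" property the listing refers to.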
1 code implementation • 11 Nov 2013 • Brett A. Schmerl, Mark D. McDonnell
This holds whether the firing dynamics in the model are phasic (SBSR can occur due to channel noise) or tonic (ISR can occur due to channel noise).
Neurons and Cognition • Subcellular Processes