Search Results for author: Alexander Ororbia

Found 34 papers, 6 papers with code

Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems

no code implementations · 23 Mar 2024 · Viet Dung Nguyen, Reynold Bailey, Gabriel J. Diaz, Chengyi Ma, Alexander Fix, Alexander Ororbia

As a remedy, we use dimensionality-reduction techniques to measure the overlap between the target eye images and the synthetic training data, and to prune the training dataset in a manner that maximizes this distribution overlap.

Dimensionality Reduction · Domain Adaptation · +2

Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps

no code implementations · 19 Feb 2024 · Hitesh Vaidya, Travis Desell, Ankur Mali, Alexander Ororbia

The major challenge that makes crafting such a system difficult is known as catastrophic forgetting - an agent, such as one based on artificial neural networks (ANNs), struggles to retain previously acquired knowledge when learning from new samples.

Class Incremental Learning · Dimensionality Reduction · +2

A Review of Neuroscience-Inspired Machine Learning

no code implementations · 16 Feb 2024 · Alexander Ororbia, Ankur Mali, Adam Kohan, Beren Millidge, Tommaso Salvatori

As a result, it accommodates hardware and scientific modeling, e.g., learning with physical systems and non-differentiable behavior.

Minimally Supervised Learning using Topological Projections in Self-Organizing Maps

no code implementations · 12 Jan 2024 · Zimeng Lyu, Alexander Ororbia, Rui Li, Travis Desell

In this work, we introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs), which significantly reduces the required number of labeled data points to perform parameter prediction, effectively exploiting information contained in large unlabeled datasets.

Decision Making · Parameter Prediction · +1
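The abstract above builds on the standard SOM machinery: unlabeled data self-organizes the map, and a few labeled points are then projected onto their best-matching units so labels propagate topologically. The paper's actual projection method is more involved; the following is only a minimal sketch of that generic ingredient, with all sizes, rates, and helper names being assumptions for illustration.

```python
import random
import math

random.seed(0)

# Toy 1-D self-organizing map over 2-D inputs: unlabeled data trains the
# map, then one labeled point per cluster is attached to its best-matching
# unit (BMU) so unlabeled queries inherit labels from nearby units.
GRID = 10            # number of map units (assumed toy size)
LR, SIGMA = 0.5, 2.0 # learning rate and neighborhood width (assumptions)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def bmu(weights, x):
    # Index of the unit whose weight vector is closest to x.
    return min(range(len(weights)), key=lambda i: dist2(weights[i], x))

def train_som(data, epochs=20):
    weights = [[random.random(), random.random()] for _ in range(GRID)]
    for _ in range(epochs):
        for x in data:
            b = bmu(weights, x)
            for i, w in enumerate(weights):
                # Gaussian neighborhood over the 1-D map topology.
                h = math.exp(-((i - b) ** 2) / (2 * SIGMA ** 2))
                for d in range(2):
                    w[d] += LR * h * (x[d] - w[d])
    return weights

# Two well-separated unlabeled clusters; only one labeled example each.
data = [[random.gauss(0.2, 0.05), random.gauss(0.2, 0.05)] for _ in range(50)]
data += [[random.gauss(0.8, 0.05), random.gauss(0.8, 0.05)] for _ in range(50)]
weights = train_som(data)

labeled = {0: "low", 99: "high"}  # indices of the few labeled points
unit_label = {bmu(weights, data[i]): y for i, y in labeled.items()}

def predict(x):
    # Label a query by its nearest labeled map unit (in weight space).
    u = min(unit_label, key=lambda i: dist2(weights[i], x))
    return unit_label[u]

print(predict([0.25, 0.2]), predict([0.75, 0.8]))
```

With two labeled points out of one hundred, the map still assigns cluster-consistent labels, which is the parameter-saving effect the abstract describes at scale.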

Mortal Computation: A Foundation for Biomimetic Intelligence

no code implementations · 16 Nov 2023 · Alexander Ororbia, Karl Friston

This review motivates and synthesizes research efforts in neuroscience-inspired artificial intelligence and biomimetic computing in terms of mortal computation.

A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization

no code implementations · 14 Oct 2023 · Alexander Ororbia, Mary Alexandria Kelly

Over the last few years, large neural generative models, capable of synthesizing semantically rich passages of text or producing complex images, have emerged as a popular representation of what has come to be known as "generative artificial intelligence" (generative AI).

On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks

no code implementations · 26 Sep 2023 · Ankur Mali, Alexander Ororbia, Daniel Kifer, Lee Giles

In this work, we extend the theoretical foundation for the $2^{nd}$-order recurrent network ($2^{nd}$ RNN) and prove there exists a class of $2^{nd}$ RNNs that is Turing-complete with bounded time.
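A second-order RNN differs from a first-order one in that its weights form a tensor coupling the previous state and the current input multiplicatively: s_t[i] = sigmoid(sum_{j,k} W[i][j][k] * s_{t-1}[j] * x_t[k]). The sketch below is not code from the paper; it hand-sets W to realize the parity automaton, the classic construction showing that second-order RNNs encode finite automata exactly, which is the starting point for hierarchy results like the one above.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 8.0  # large gain pushes sigmoid outputs toward crisp 0/1 states

# States: index 0 = "even number of ones", 1 = "odd". Inputs are one-hot.
# W[i][j][k] > 0 iff the DFA moves from state j on symbol k to state i.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
W = [[[H if delta[(j, k)] == i else -H for k in range(2)]
      for j in range(2)] for i in range(2)]

def step(s, x):
    # Second-order state transition: state and input interact multiplicatively.
    return [sigmoid(sum(W[i][j][k] * s[j] * x[k]
                        for j in range(2) for k in range(2)))
            for i in range(2)]

def accepts_odd_parity(bits):
    s = [1.0, 0.0]  # start in the "even" state
    for b in bits:
        x = [1.0, 0.0] if b == 0 else [0.0, 1.0]
        s = step(s, x)
    return s[1] > s[0]  # read out the "odd" state unit

print(accepts_odd_parity([1, 0, 1, 1]))  # three ones, so the count is odd
```

Because the transition table sits directly in the weight tensor, the network tracks the automaton's state to within sigmoid saturation error, which is why stability and extraction results for these networks are cleaner than for first-order RNNs.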

Backpropagation-Free 4D Continuous Ant-Based Neural Topology Search

1 code implementation · 11 May 2023 · AbdElRahman ElSaid, Karl Ricanek, Zimeng Lyu, Alexander Ororbia, Travis Desell

Continuous Ant-based Topology Search (CANTS) is a previously introduced nature-inspired neural architecture search (NAS) algorithm based on ant colony optimization (ACO).

Neural Architecture Search

Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of Spiking Neural Systems

no code implementations · 30 Mar 2023 · Alexander Ororbia

We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel and adapt their synaptic efficacies without the use of feedback pathways.

Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting

no code implementations · 20 Feb 2023 · Zimeng Lyu, Alexander Ororbia, Travis Desell

Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods, including online linear regression, fixed long short-term memory (LSTM) and gated recurrent unit (GRU) models trained online, as well as state-of-the-art, online ARIMA strategies.

Neural Architecture Search · Time Series · +1

The Predictive Forward-Forward Algorithm

1 code implementation · 4 Jan 2023 · Alexander Ororbia, Ankur Mali

We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
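The PFF algorithm couples a predictive (generative) circuit with a forward-forward representation circuit. The sketch below shows only the generic forward-forward ingredient (Hinton, 2022) that the name refers to, not the paper's full method: each layer is trained with a purely local rule so that its "goodness" (sum of squared activations) is high on positive data and low on negative data. All sizes, data, and rates are assumptions for illustration.

```python
import math
import random

random.seed(1)

THETA, LR = 2.0, 0.05  # goodness threshold and step size (assumptions)
# Positive initialization so units start active (a toy convenience).
W = [[random.uniform(0.1, 0.5) for _ in range(2)] for _ in range(2)]

def forward(x):
    # One ReLU layer; in forward-forward there is no backward pass at all.
    return [max(0.0, sum(W[i][j] * x[j] for j in range(2))) for i in range(2)]

def goodness(h):
    return sum(v * v for v in h)

def ff_update(x, positive):
    h = forward(x)
    # Logistic probability that this sample is "positive", from goodness.
    p = 1.0 / (1.0 + math.exp(-(goodness(h) - THETA)))
    err = p - (1.0 if positive else 0.0)  # local, layer-wise error signal
    for i in range(2):
        if h[i] > 0:  # ReLU gate: only active units adapt
            for j in range(2):
                W[i][j] -= LR * err * 2 * h[i] * x[j]

pos = [[1.0, 0.1], [0.9, 0.0]]  # assumed toy "positive" inputs
neg = [[0.1, 1.0], [0.0, 0.9]]  # assumed toy "negative" inputs
for _ in range(200):
    for x in pos:
        ff_update(x, True)
    for x in neg:
        ff_update(x, False)

print(goodness(forward(pos[0])) > THETA, goodness(forward(neg[0])) < THETA)
```

The credit assignment is entirely local to the layer, which is what makes forward-forward-style rules attractive as a biologically plausible alternative to backprop; PFF adds a predictive/generative pathway on top of this.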

Convolutional Neural Generative Coding: Scaling Predictive Coding to Natural Images

no code implementations · 22 Nov 2022 · Alexander Ororbia, Ankur Mali

In this work, we develop convolutional neural generative coding (Conv-NGC), a generalization of predictive coding to the case of convolution/deconvolution-based computation.

Image Denoising
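Underneath Conv-NGC is the predictive coding scheme that neural generative coding generalizes: a latent state z predicts the input through generative weights W, and the mismatch e = x - Wz drives both an iterative settling of z and a Hebbian-like update of W, with no backpropagated gradients. The following is a fully-connected toy of that loop (all names, sizes, and step sizes are assumptions); Conv-NGC itself replaces W with convolution/deconvolution.

```python
import random

random.seed(2)

D_IN, D_LAT = 4, 2
GAMMA, ETA, K = 0.1, 0.05, 30  # inference rate, learning rate, settling steps

W = [[random.uniform(-0.3, 0.3) for _ in range(D_LAT)] for _ in range(D_IN)]

def predict(z):
    # Top-down generative prediction of the input from the latent state.
    return [sum(W[i][j] * z[j] for j in range(D_LAT)) for i in range(D_IN)]

def settle_and_learn(x):
    z = [0.0] * D_LAT
    for _ in range(K):  # E-step: iteratively infer the latent state
        e = [x[i] - p for i, p in enumerate(predict(z))]
        for j in range(D_LAT):
            z[j] += GAMMA * sum(W[i][j] * e[i] for i in range(D_IN))
    e = [x[i] - p for i, p in enumerate(predict(z))]
    for i in range(D_IN):  # M-step: local, Hebbian-like weight update
        for j in range(D_LAT):
            W[i][j] += ETA * e[i] * z[j]
    return sum(v * v for v in e)  # squared prediction error after settling

x = [1.0, 0.5, -0.5, -1.0]
errs = [settle_and_learn(x) for _ in range(50)]
print(round(errs[0], 3), round(errs[-1], 4))
```

Across repeated presentations the prediction error shrinks, showing the two local loops (state settling and weight adaptation) jointly minimizing the same error signal, the property that carries over to the convolutional setting.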

A Neural Active Inference Model of Perceptual-Motor Learning

no code implementations · 16 Nov 2022 · Zhizhuo Yang, Gabriel J. Diaz, Brett R. Fajen, Reynold Bailey, Alexander Ororbia

The active inference framework (AIF) is a promising new computational framework grounded in contemporary neuroscience that can produce human-like behavior through reward-based learning.

Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems

no code implementations · 19 Sep 2022 · Alexander Ororbia, Ankur Mali

In this article, we propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC), designing an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards, embodying the principles of planning-as-inference.

Reinforcement Learning (RL)

Maze Learning using a Hyperdimensional Predictive Processing Cognitive Architecture

no code implementations · 31 Mar 2022 · Alexander Ororbia, M. Alex Kelly

We present the COGnitive Neural GENerative system (CogNGen), a cognitive architecture that combines two neurobiologically-plausible, computational models: predictive processing and hyperdimensional/vector-symbolic models.

Reinforcement Learning (RL) · +1

An Empirical Analysis of Recurrent Learning Algorithms In Neural Lossy Image Compression Systems

no code implementations · 27 Jan 2022 · Ankur Mali, Alexander Ororbia, Daniel Kifer, Lee Giles

Recent advances in deep learning have resulted in image compression algorithms that outperform JPEG and JPEG 2000 on the standard Kodak benchmark.

Image Compression

Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder

no code implementations · 27 Jan 2022 · Ankur Mali, Alexander Ororbia, Daniel Kifer, Lee Giles

In light of this, we propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends, an approach we call Neural JPEG.

Image Compression · MS-SSIM · +2

Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay

no code implementations · 9 Dec 2021 · Hitesh Vaidya, Travis Desell, Alexander Ororbia

A lifelong learning agent is able to continually learn from potentially infinite streams of sensory pattern data.

Dimensionality Reduction

FBERT: A Neural Transformer for Identifying Offensive Content

no code implementations · Findings (EMNLP) 2021 · Diptanu Sarkar, Marcos Zampieri, Tharindu Ranasinghe, Alexander Ororbia

Transformer-based models such as BERT, XLNET, and XLM-R have achieved state-of-the-art performance across various NLP tasks including the identification of offensive language and hate speech, an important problem in social media.

Language Identification · XLM-R

Backprop-Free Reinforcement Learning with Active Neural Generative Coding

no code implementations · 10 Jul 2021 · Alexander Ororbia, Ankur Mali

In humans, perceptual awareness facilitates the fast recognition and extraction of information from sensory input.

Q-Learning · Reinforcement Learning · +1

Towards a Predictive Processing Implementation of the Common Model of Cognition

no code implementations · 15 May 2021 · Alexander Ororbia, M. A. Kelly

In this article, we present a cognitive architecture that is built from powerful yet simple neural models.

WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans

1 code implementation · SemEval 2021 · Tharindu Ranasinghe, Diptanu Sarkar, Marcos Zampieri, Alexander Ororbia

In recent years, the widespread use of social media has led to an increase in the generation of toxic and offensive content on online platforms.

Toxic Spans Detection

Recognizing and Verifying Mathematical Equations using Multiplicative Differential Neural Units

no code implementations · 7 Apr 2021 · Ankur Mali, Alexander Ororbia, Daniel Kifer, C. Lee Giles

Two particular tasks that test this type of reasoning are (1) mathematical equation verification, which requires determining whether trigonometric and linear algebraic statements are valid identities or not, and (2) equation completion, which entails filling in a blank within an expression to make it true.

Mathematical Reasoning

The Neural Coding Framework for Learning Generative Models

no code implementations · 7 Dec 2020 · Alexander Ororbia, Daniel Kifer

Neural generative models can be used to learn complex probability distributions from data, to sample from them, and to produce probability density estimates.

Continuous Ant-Based Neural Topology Search

no code implementations · 21 Nov 2020 · AbdElRahman ElSaid, Joshua Karns, Zimeng Lyu, Alexander Ororbia, Travis Desell

This work introduces a novel, nature-inspired neural architecture search (NAS) algorithm based on ant colony optimization, Continuous Ant-based Neural Topology Search (CANTS), which utilizes synthetic ants that move over a continuous search space based on the density and distribution of pheromones, and is strongly inspired by how ants move in the real world.

Neural Architecture Search · Time Series · +1

Recognizing Long Grammatical Sequences Using Recurrent Networks Augmented With An External Differentiable Stack

no code implementations · 4 Apr 2020 · Ankur Mali, Alexander Ororbia, Daniel Kifer, Clyde Lee Giles

In this paper, we improve the memory-augmented RNN with important architectural and state updating mechanisms that ensure that the model learns to properly balance the use of its latent states with external memory.

Language Modelling · Machine Translation · +1

Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment

no code implementations · 10 Feb 2020 · Alexander Ororbia, Ankur Mali, Daniel Kifer, C. Lee Giles

Training deep neural networks on large-scale datasets requires significant hardware resources whose costs (even on cloud platforms) put them out of reach of smaller organizations, groups, and individuals.

The Neural State Pushdown Automata

no code implementations · 7 Sep 2019 · Ankur Mali, Alexander Ororbia, C. Lee Giles

The NSPDA is also compared to a classical analog stack neural network pushdown automaton (NNPDA) as well as a wide array of first and second-order RNNs with and without external memory, trained using different learning algorithms.

Incremental Learning · Tensor Networks

Spiking Neural Predictive Coding for Continual Learning from Data Streams

no code implementations · 23 Aug 2019 · Alexander Ororbia

For energy-efficient computation in specialized neuromorphic hardware, we present spiking neural coding, an instantiation of a family of artificial neural models grounded in the theory of predictive coding.

Continual Learning

Lifelong Neural Predictive Coding: Learning Cumulatively Online without Forgetting

no code implementations · 25 May 2019 · Alexander Ororbia, Ankur Mali, Daniel Kifer, C. Lee Giles

In lifelong learning systems based on artificial neural networks, one of the biggest obstacles is the inability to retain old knowledge as new information is encountered.

Investigating Recurrent Neural Network Memory Structures using Neuro-Evolution

1 code implementation · 6 Feb 2019 · Alexander Ororbia, AbdElRahman ElSaid, Travis Desell

This paper presents a new algorithm, Evolutionary eXploration of Augmenting Memory Models (EXAMM), which is capable of evolving recurrent neural networks (RNNs) using a wide variety of memory structures, such as Delta-RNN, GRU, LSTM, MGU and UGRNN cells.

Time Series · Time Series Analysis

Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations

1 code implementation · 17 Oct 2018 · Alexander Ororbia, Ankur Mali, C. Lee Giles, Daniel Kifer

We compare our model and learning procedure to other back-propagation through time alternatives (which also tend to be computationally expensive), including real-time recurrent learning, echo state networks, and unbiased online recurrent optimization.

Continual Learning · Language Modelling · +1
