Search Results for author: Matthew Evanusa

Found 7 papers, 2 papers with code

ProtoVAE: Prototypical Networks for Unsupervised Disentanglement

no code implementations · 16 May 2023 · Vaishnavi Patil, Matthew Evanusa, Joseph JaJa

Generative modeling and self-supervised learning have in recent years made great strides towards learning from data in a completely unsupervised way.

Disentanglement · Metric Learning · +1

DOT-VAE: Disentangling One Factor at a Time

no code implementations · 19 Oct 2022 · Vaishnavi Patil, Matthew Evanusa, Joseph JaJa

One promising approach to this endeavour is Disentanglement, which aims to learn the underlying generative latent factors of the data, called the factors of variation, and to encode them in disjoint latent representations.

Disentanglement
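
For intuition about the disentanglement objective described above (disjoint latent coordinates for independent factors of variation), a minimal NumPy sketch follows; it illustrates the target representation only, not the DOT-VAE method, and the factor names and sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical ground-truth factors of variation for a toy dataset.
size = rng.uniform(0.5, 2.0, 1000)
angle = rng.uniform(0.0, np.pi, 1000)
factors = np.stack([size, angle], axis=1)          # shape (1000, 2)

# An idealized disentangled code: each factor occupies its own latent coordinate,
# plus two extra "nuisance" dimensions that carry no factor information.
z = np.concatenate([factors, rng.normal(size=(1000, 2))], axis=1)   # shape (1000, 4)

# Disentanglement check: |correlation| between each latent dimension and each factor.
# A disentangled code yields roughly one large entry per factor column.
corr = np.array([[abs(np.corrcoef(z[:, i], factors[:, j])[0, 1])
                  for j in range(factors.shape[1])]
                 for i in range(z.shape[1])])
print(np.round(corr, 2))
```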

Hybrid Backpropagation Parallel Reservoir Networks

no code implementations · 27 Oct 2020 · Matthew Evanusa, Snehesh Shrestha, Michelle Girvan, Cornelia Fermüller, Yiannis Aloimonos

In many real-world applications, fully-differentiable RNNs such as LSTMs and GRUs have been widely deployed to solve time series learning tasks.

EEG · Emotion Recognition · +4

Deep Reservoir Networks with Learned Hidden Reservoir Weights using Direct Feedback Alignment

no code implementations · 13 Oct 2020 · Matthew Evanusa, Cornelia Fermüller, Yiannis Aloimonos

Deep Reservoir Computing has emerged as a new paradigm for deep learning that combines the reservoir computing principle of maintaining random pools of recurrently connected neurons with hierarchical deep learning.

Time Series · Time Series Prediction
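
The "random pools of neurons" principle mentioned above can be illustrated with a minimal echo state network sketch: a fixed random recurrent reservoir plus a trained linear readout. This is a generic reservoir computing example, not the paper's deep-reservoir architecture or its Direct Feedback Alignment training; the sizes and constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N_RES, N_IN, WASHOUT = 200, 1, 50                  # illustrative sizes

# Fixed random reservoir ("random pool of neurons"): these weights are never trained.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, N_IN)."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy one-step-ahead prediction task on a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
X = run_reservoir(u[:-1])[WASHOUT:]
y = u[1:][WASHOUT:]

# Only the linear readout is learned (ridge regression), the classic ESN recipe.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N_RES), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```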

A Deep 2-Dimensional Dynamical Spiking Neuronal Network for Temporal Encoding trained with STDP

no code implementations · 1 Sep 2020 · Matthew Evanusa, Cornelia Fermüller, Yiannis Aloimonos

Here we show that a large, deep, layered SNN with dynamical, chaotic activity mimicking the mammalian cortex, trained with biologically inspired learning rules such as STDP, is capable of encoding information from temporal data.
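
A minimal sketch of a pair-based STDP update, the kind of biologically inspired rule the abstract refers to, is given below; the time constants and amplitudes are illustrative textbook values, not the paper's settings.

```python
import numpy as np

# Pair-based STDP: a weight is potentiated when the presynaptic spike precedes
# the postsynaptic spike, and depressed otherwise. Constants are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # ms

def stdp_dw(delta_t):
    """Weight change for a spike time difference delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:    # pre before post: potentiation (LTP)
        return A_PLUS * np.exp(-delta_t / TAU_PLUS)
    else:              # post before pre: depression (LTD)
        return -A_MINUS * np.exp(delta_t / TAU_MINUS)

# Apply the rule to one synapse given spike trains (times in ms, all-to-all pairing).
pre_spikes = [10.0, 50.0, 90.0]
post_spikes = [12.0, 45.0, 95.0]
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w = np.clip(w + stdp_dw(t_post - t_pre), 0.0, 1.0)
print("updated weight:", round(w, 3))
```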

Event-based attention and tracking on neuromorphic hardware

1 code implementation · 9 Jul 2019 · Alpha Renner, Matthew Evanusa, Yulia Sandamirskaya

We present a fully event-driven vision and processing system for selective attention and tracking, realized on the neuromorphic processor Loihi, interfaced with the event-based Dynamic Vision Sensor (DAVIS).

Object · Object Tracking
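
As a rough, frame-free illustration of event-driven tracking (not the Loihi/DAVIS system described in the paper), the sketch below nudges an attended location toward each synthetic event that falls inside its attention window; the event stream, radius, and gain are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic DVS-style event stream: each event is an (x, y) location generated
# around an object that drifts to the right. Real DAVIS events would replace this.
T = 1000
obj_x = 20.0 + 0.08 * np.arange(T)
events = np.stack([obj_x + rng.normal(0, 1.5, T),
                   60.0 + rng.normal(0, 1.5, T)], axis=1)

# Event-driven attention/tracking sketch: the attended location is updated only by
# events that fall inside the current attention window, one event at a time.
pos = np.array([20.0, 60.0])
RADIUS, GAIN = 8.0, 0.05        # illustrative constants
for ev in events:
    if np.linalg.norm(ev - pos) < RADIUS:
        pos += GAIN * (ev - pos)   # move attention toward the event
print("final tracked position:", np.round(pos, 1))   # should end near x ~ 100
```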

Network Deconvolution

5 code implementations · ICLR 2020 · Chengxi Ye, Matthew Evanusa, Hua He, Anton Mitrokhin, Tom Goldstein, James A. Yorke, Cornelia Fermüller, Yiannis Aloimonos

Convolution is a central operation in Convolutional Neural Networks (CNNs); it applies a kernel to overlapping regions shifted across the image.

Image Classification
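
To make the quoted description concrete, the sketch below applies a kernel to overlapping image regions via an im2col view, then shows one rough way to decorrelate ("deconvolve") those regions before the kernel is applied; this is a hedged illustration, not necessarily the authors' exact procedure, and the helper names and constants are invented.

```python
import numpy as np

def im2col(img, k):
    """Collect all overlapping k x k regions of a 2-D image as rows."""
    H, W = img.shape
    patches = [img[i:i + k, j:j + k].ravel()
               for i in range(H - k + 1) for j in range(W - k + 1)]
    return np.array(patches)                      # shape: (num_positions, k*k)

def conv2d(img, kernel):
    """Convolution as described above: slide the kernel over overlapping regions."""
    k = kernel.shape[0]
    out = im2col(img, k) @ kernel.ravel()
    side = img.shape[0] - k + 1
    return out.reshape(side, side)

rng = np.random.default_rng(3)
img = rng.normal(size=(8, 8))
kernel = np.ones((3, 3)) / 9.0                    # simple box filter
print(conv2d(img, kernel).shape)                  # (6, 6)

# Rough idea behind decorrelating the input (a sketch, not necessarily the
# authors' exact procedure): whiten the overlapping patches so their features
# are uncorrelated before the kernel is applied.
X = im2col(img, 3)
cov = np.cov(X, rowvar=False) + 1e-5 * np.eye(X.shape[1])
eigval, eigvec = np.linalg.eigh(cov)
whiten = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T   # inverse square root of cov
X_white = (X - X.mean(axis=0)) @ whiten
print("max |cov - I|:", np.abs(np.cov(X_white, rowvar=False) - np.eye(9)).max())
```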
