no code implementations • Ying Ma, Jose Principe
An increasing number of neural memory networks have been developed, leading to the need for a systematic approach to analyze and compare their underlying memory capabilities.
no code implementations • 28 Jul 2023 • Ran Dou, Jose Principe
We propose a new perspective to analyze the hidden state space based on an eigen decomposition of the weight matrix.
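A minimal sketch of this style of analysis (not the authors' exact procedure): eigendecompose a recurrent weight matrix and inspect the eigenvalue magnitudes, which indicate decaying versus expanding directions of the linearized hidden-state dynamics. The matrix here is random stand-in data.

```python
import numpy as np

# Hypothetical recurrent weight matrix W_hh (hidden_size x hidden_size); a
# trained RNN's recurrent weights would be used in practice.
rng = np.random.default_rng(0)
W_hh = rng.normal(scale=1.0 / np.sqrt(64), size=(64, 64))

# Each eigenvalue describes how the linearized recurrence scales (|lambda|)
# and rotates (arg(lambda)) its eigendirection in hidden state space.
eigvals, _ = np.linalg.eig(W_hh)

for lam in sorted(eigvals, key=np.abs, reverse=True)[:5]:
    regime = "expanding" if abs(lam) > 1 else "decaying"
    print(f"|lambda| = {abs(lam):.3f} ({regime}), angle = {np.angle(lam):.3f} rad")
```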
no code implementations • 28 Jul 2023 • Ran Dou, Jose Principe
In this paper, we propose a new event memory architecture (MemNet) for recurrent neural networks, which is universal for different types of time series data such as scalar, multivariate or symbolic.
1 code implementation • 16 Jan 2023 • Hongming Li, Shujian Yu, Jose Principe
We propose causal recurrent variational autoencoder (CR-VAE), a novel generative model that is able to learn a Granger causal graph from a multivariate time series x and incorporates the underlying causal mechanism into its data generation process.
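CR-VAE itself is a generative model; as a rough point of reference only, a classical pairwise Granger causality test on a multivariate series can be run as below. This is a sketch using statsmodels, not the CR-VAE method, and the toy data are assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Toy bivariate series in which the "driver" series leads the "target" by one step.
rng = np.random.default_rng(0)
driver = rng.normal(size=500)
target = np.roll(driver, 1) + 0.1 * rng.normal(size=500)
data = np.column_stack([target, driver])  # column order: (effect, candidate cause)

# Tests whether column 1 Granger-causes column 0 for lags 1..2.
results = grangercausalitytests(data, maxlag=2)
p_value = results[1][0]["ssr_ftest"][1]
print(f"p-value at lag 1: {p_value:.4g}")
```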
no code implementations • 27 Feb 2022 • Leila Kalantari, Jose Principe, Kathryn E. Sieving
We seek to develop simultaneous segmentation and classification of notes from audio recordings in the presence of outliers.
no code implementations • 29 Aug 2021 • Leila Kalantari, Jose Principe, Kathryn E. Sieving
MDD-KM provides uncertainty quantification and can be deployed to build classification systems for the realistic scenario where out-of-distribution (OOD) samples are present among the test data.
1 code implementation • 12 May 2020 • Shiyu Duan, Shujian Yu, Jose Principe
By redefining the conventional notions of layers, we present an alternative view on finitely wide, fully trainable deep neural networks as stacked linear models in feature spaces, leading to a kernel machine interpretation.
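To illustrate this viewpoint with a simplified sketch (not the paper's construction): the output layer of a two-layer network is a linear model in the feature space defined by the hidden layer, so the hidden activations induce a kernel through their inner products. The weights and data below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))      # toy inputs
W1 = rng.normal(size=(10, 32))      # hidden-layer weights (assumed already trained)

def phi(x):
    """Feature map defined by the hidden layer: phi(x) = tanh(x @ W1)."""
    return np.tanh(x @ W1)

# The output layer is a linear model on phi(X); equivalently, its predictions
# live in the span of kernel evaluations k(x, x') = phi(x) @ phi(x').
features = phi(X)
K = features @ features.T           # Gram matrix of the layer-induced kernel
print(K.shape)                      # (100, 100)
```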
no code implementations • 25 Sep 2019 • Kristoffer Wickstrøm, Sigurd Løkse, Michael Kampffmeyer, Shujian Yu, Jose Principe, Robert Jenssen
In this paper, we propose an IP analysis using the new matrix-based Rényi's entropy coupled with tensor kernels over convolutional layers, leveraging the power of kernel methods to represent properties of the probability distribution independently of the dimensionality of the data.
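A minimal sketch of the matrix-based Rényi entropy estimator this analysis builds on (the tensor-kernel extension over convolutional feature maps is omitted): form a Gram matrix, normalize it to unit trace, and take a power of its eigenvalue spectrum. The Gaussian kernel and bandwidth below are assumptions.

```python
import numpy as np

def matrix_renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Matrix-based Renyi alpha-entropy (in bits) from a Gaussian Gram matrix."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2 * sigma**2))   # Gaussian kernel Gram matrix
    A = K / np.trace(K)                      # normalize to unit trace
    eigvals = np.linalg.eigvalsh(A)
    eigvals = eigvals[eigvals > 1e-12]       # discard numerical zeros
    return np.log2(np.sum(eigvals**alpha)) / (1.0 - alpha)

rng = np.random.default_rng(0)
print(matrix_renyi_entropy(rng.normal(size=(50, 8))))
```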
no code implementations • ICLR 2019 • Shiyu Duan, Shujian Yu, Yun-Mei Chen, Jose Principe
Moreover, unlike backpropagation, which turns models into black boxes, the optimal hidden representation enjoys an intuitive geometric interpretation, making the dynamics of learning in a deep kernel network simple to understand.
no code implementations • 1 May 2018 • Ying Ma, Jose Principe
The taxonomy includes all the popular memory networks: the vanilla recurrent neural network (RNN), long short-term memory (LSTM), the neural stack, and the neural Turing machine, along with their variants.
1 code implementation • ICLR 2019 • Shiyu Duan, Shujian Yu, Yun-Mei Chen, Jose Principe
With this method, we obtain a counterpart of any given NN that is powered by kernel machines instead of neurons.
no code implementations • 3 May 2017 • Zheng Cao, Shujian Yu, Bing Ouyang, Fraser Dalgleish, Anni Vuorenkoski, Gabriel Alsenas, Jose Principe
Depending on the quantity and properties of acquired imagery, the animals are characterized as either features (shape, color, texture, etc.)
no code implementations • NeurIPS 2010 • Sohan Seth, Park Il, Austin Brockmeier, Mulugeta Semework, John Choi, Joseph Francis, Jose Principe
However, these statistics do not fully describe a point process and thus the tests can be misleading.