1 code implementation • 4 Jan 2024 • Abien Fred Agarap, Arnulfo P. Azcarraga
We improve on these aforementioned ways of combining a group of neural networks by using a k-Winners-Take-All (kWTA) activation function that acts as the combination method for the outputs of each sub-network in the ensemble.
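A minimal sketch of the kWTA idea, assuming the simplest variant (keep the k largest activations, zero the rest) applied to averaged sub-network outputs; the ensemble setup and names here are illustrative, not the authors' implementation:

```python
import numpy as np

def kwta(activations, k):
    """k-Winners-Take-All: keep the k largest activations, zero the rest."""
    activations = np.asarray(activations, dtype=float)
    out = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]   # indices of the k largest values
    out[winners] = activations[winners]
    return out

# Hypothetical two-member ensemble: average the sub-network class scores,
# then apply kWTA as the combination step.
sub_network_outputs = np.array([
    [0.1, 0.7, 0.2],   # sub-network 1 class scores
    [0.2, 0.5, 0.3],   # sub-network 2 class scores
])
combined = kwta(sub_network_outputs.mean(axis=0), k=1)
```

With k=1 only the strongest averaged class score survives, so the ensemble's prediction is the index of the single nonzero entry.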
no code implementations • 23 Jul 2021 • Abien Fred Agarap
We define disentanglement as how far apart class-different data points are from each other, relative to the distances among class-similar data points.
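Under this definition, one plausible sketch is the ratio of the mean inter-class pairwise distance to the mean intra-class pairwise distance; the function name and exact ratio form are assumptions, not the paper's metric:

```python
import numpy as np

def disentanglement_score(features, labels):
    """Hypothetical score: mean distance between class-different points
    divided by mean distance between class-similar points.
    Larger values indicate better-separated (more disentangled) classes."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    # pairwise Euclidean distances via broadcasting
    diff = features[:, None, :] - features[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    same = labels[:, None] == labels[None, :]
    upper = np.triu(np.ones_like(same, dtype=bool), k=1)  # count each pair once
    intra = dists[same & upper].mean()    # class-similar pairs
    inter = dists[~same & upper].mean()   # class-different pairs
    return inter / intra

# Two tight clusters far apart should score well above 1.
score = disentanglement_score([[0, 0], [0, 1], [10, 0], [10, 1]],
                              [0, 0, 1, 1])
```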
1 code implementation • 5 Jun 2020 • Abien Fred Agarap, Arnulfo P. Azcarraga
Deep clustering algorithms combine representation learning and clustering by jointly optimizing a clustering loss and a non-clustering loss.
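The joint objective can be sketched as a weighted sum of the two terms; the reconstruction term as the non-clustering loss, a k-means-style term as the clustering loss, and the weight `beta` are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    """Non-clustering term: autoencoder mean squared reconstruction error."""
    return np.mean((x - x_hat) ** 2)

def clustering_loss(z, centroids, assignments):
    """Clustering term: mean squared distance of latent codes to their centroids."""
    return np.mean(np.sum((z - centroids[assignments]) ** 2, axis=1))

def joint_loss(x, x_hat, z, centroids, assignments, beta=0.1):
    """Jointly optimized objective: non-clustering loss + beta * clustering loss."""
    return reconstruction_loss(x, x_hat) + beta * clustering_loss(z, centroids, assignments)

# Toy check: codes sit exactly on their centroid, so only the
# reconstruction term contributes.
loss = joint_loss(np.ones((2, 3)), np.zeros((2, 3)),
                  np.zeros((2, 2)), np.zeros((1, 2)), np.array([0, 0]))
```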
Ranked #1 on Image Clustering on EMNIST-Balanced
1 code implementation • 8 May 2018 • Abien Fred Agarap
Understanding customer sentiments is of paramount importance in marketing strategies today.
1 code implementation • 22 Mar 2018 • Abien Fred Agarap
We introduce the use of rectified linear units (ReLU) as the classification function in a deep neural network (DNN).
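A minimal sketch of ReLU at the output layer: class scores are rectified and the prediction is the argmax of the rectified scores. The logit values are made up for illustration:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x) element-wise."""
    return np.maximum(0.0, x)

# Hypothetical final-layer logits for a 3-class problem; instead of Softmax,
# the scores pass through ReLU and the predicted class is their argmax.
logits = np.array([-1.2, 0.3, 2.5])
scores = relu(logits)               # negative logits are clamped to zero
prediction = int(np.argmax(scores))
```

Note that, unlike Softmax, the rectified scores are not normalized into a probability distribution; only their ranking matters for the prediction.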
1 code implementation • 31 Dec 2017 • Abien Fred Agarap
We envision an intelligent anti-malware system that utilizes the power of deep learning (DL) models.
Ranked #2 on Malware Classification on Malimg Dataset (Accuracy metric)
2 code implementations • 10 Dec 2017 • Abien Fred Agarap
Empirical results show that the CNN-SVM model achieved a test accuracy of ~99.04% on the MNIST dataset (LeCun, Cortes, and Burges, 2010).
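A CNN-SVM hybrid typically swaps the softmax cross-entropy at the output layer for an SVM-style hinge loss; a minimal sketch of an L2-SVM (squared hinge) loss, with made-up scores, assuming one-vs-rest ±1 targets:

```python
import numpy as np

def l2_svm_loss(scores, target, margin=1.0):
    """Squared hinge loss over one-vs-rest targets in {-1, +1}:
    sum of max(0, margin - y * score)^2 across classes."""
    y = -np.ones_like(scores)
    y[target] = 1.0
    return np.sum(np.maximum(0.0, margin - y * scores) ** 2)

# Scores that satisfy the margin on every class incur zero loss...
loss_separable = l2_svm_loss(np.array([2.0, -2.0, -2.0]), target=0)
# ...while all-zero scores violate the margin everywhere.
loss_zero = l2_svm_loss(np.zeros(3), target=0)
```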
1 code implementation • 20 Nov 2017 • Abien Fred Agarap
The hyper-parameters used for all the classifiers were manually assigned.
5 code implementations • 10 Sep 2017 • Abien Fred Agarap
Conventionally, like most neural networks, both of the aforementioned RNN variants employ the Softmax function in their final output layer for prediction, and the cross-entropy function for computing their loss.
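The conventional output layer described above can be sketched as follows; the logit values are illustrative:

```python
import numpy as np

def softmax(logits):
    """Softmax over a 1-D array of logits."""
    z = logits - logits.max()   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(probs, target):
    """Cross-entropy loss: negative log-probability of the true class."""
    return -np.log(probs[target])

probs = softmax(np.array([2.0, 1.0, 0.1]))  # hypothetical final-layer logits
loss = cross_entropy(probs, target=0)
```

The Softmax output sums to one, so it can be read as a class distribution; the cross-entropy loss shrinks toward zero as the probability of the true class approaches one.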
Ranked #1 on Intrusion Detection on 20NewsGroups