RMDL: Random Multimodel Deep Learning for Classification

3 May 2018 · Kamran Kowsari, Mojtaba Heidarysafa, Donald E. Brown, Kiana Jafari Meimandi, Laura E. Barnes

The continually increasing number of complex datasets each year necessitates ever-improving machine learning methods for robust and accurate categorization of these data. This paper introduces Random Multimodel Deep Learning (RMDL): a new ensemble, deep learning approach for classification. Deep learning models have achieved state-of-the-art results across many domains. RMDL solves the problem of finding the best deep learning structure and architecture while simultaneously improving robustness and accuracy through ensembles of deep learning architectures. RMDL can accept a variety of input data, including text, video, images, and symbolic data. This paper describes RMDL and shows test results for image and text data including MNIST, CIFAR-10, WOS, Reuters, IMDB, and 20Newsgroups. These test results show that RMDL produces consistently better performance than standard methods over a broad range of data types and classification problems.
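
To make the ensemble idea concrete, below is a minimal, illustrative sketch (not the authors' released RMDL package): each ensemble member is a feed-forward network whose depth and width are drawn at random, and the final label is decided by majority vote over the members' predictions. The helper names `build_random_dnn` and `rmdl_style_ensemble`, and all hyperparameter ranges, are assumptions chosen for illustration.

```python
# Illustrative sketch of a random multimodel ensemble in the spirit of RMDL.
# Not the authors' implementation; helper names and ranges are hypothetical.
import numpy as np
from tensorflow import keras

def build_random_dnn(input_dim, n_classes, rng):
    """Build one dense network with randomly drawn depth and width."""
    n_layers = rng.integers(1, 4)      # random depth: 1-3 hidden layers (assumed range)
    n_units = rng.integers(64, 513)    # random width per layer (assumed range)
    model = keras.Sequential([keras.Input(shape=(input_dim,))])
    for _ in range(n_layers):
        model.add(keras.layers.Dense(n_units, activation="relu"))
        model.add(keras.layers.Dropout(0.5))
    model.add(keras.layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def rmdl_style_ensemble(x_train, y_train, x_test, n_models=3, n_classes=10, seed=0):
    """Train several randomly configured models and majority-vote their predictions."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_models):
        model = build_random_dnn(x_train.shape[1], n_classes, rng)
        model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
        votes.append(model.predict(x_test, verbose=0).argmax(axis=1))
    votes = np.stack(votes)            # shape: (n_models, n_samples)
    # Majority vote: most frequent class label across ensemble members.
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes).argmax(), axis=0, arr=votes)
```

The full RMDL approach also mixes DNN, CNN, and RNN members and randomizes further training choices; this sketch keeps only the random-architecture and majority-voting aspects described in the abstract.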

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Text Classification | 20NEWS | RMDL (15 RDLs) | Accuracy | 87.91 | 6 |
| Image Classification | CIFAR-10 | RMDL (30 RDLs) | Percentage correct | 91.21 | 173 |
| Hierarchical Text Classification of Blurbs (GermEval 2019) | LOCAL DATASET | RMDL (15 RDLs) | Accuracy (%) | 90.79 | 1 |
| Unsupervised Pre-training | Measles | RMDL | Accuracy (%) | 0.1 | 5 |
| Image Classification | MNIST | RMDL (30 RDLs) | Percentage error | 0.18 | 5 |
| Image Classification | MNIST | RMDL (30 RDLs) | Accuracy | 99.82 | 5 |
| Unsupervised Pre-training | UCI measles | RMDL (3 RDLs) | Sensitivity | 0.8739 | 2 |
| Unsupervised Pre-training | UCI measles | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | 1 |
| Unsupervised Pre-training | UCI measles | | Sensitivity | 89.1 | 1 |

Methods


No methods listed for this paper.