Unsupervised Learning using Pretrained CNN and Associative Memory Bank

2 May 2018 · Qun Liu, Supratik Mukhopadhyay

Deep convolutional features extracted from a comprehensive labeled dataset contain substantial representations that can be used effectively in a new domain. Although these generic features achieve good results on many visual tasks, pretrained deep CNN models still require fine-tuning to be more effective and to provide state-of-the-art performance...
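The leaderboards below list the model as UL-Hopfield (ULH), i.e., a pretrained CNN feature extractor paired with a Hopfield-style associative memory bank. The sketch below illustrates that general idea only; it is not the authors' implementation. The choice of backbone (torchvision ResNet-18), the sign-binarization of features, and the classical Hebbian storage rule are all assumptions made for illustration.

```python
"""Illustrative sketch: frozen pretrained CNN features stored in and recalled
from a simple Hopfield associative memory (not the paper's exact method)."""
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image

# Frozen pretrained backbone used as a generic feature extractor (no fine-tuning).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head, keep 512-d features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract(img_path: str) -> np.ndarray:
    """Return a +1/-1 binarized feature vector for one image."""
    x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        f = backbone(x).squeeze(0).numpy()
    return np.where(f > f.mean(), 1.0, -1.0)  # sign-binarize for Hopfield storage

class HopfieldMemory:
    """Hebbian associative memory over +/-1 patterns (assumed storage rule)."""
    def __init__(self, dim: int):
        self.W = np.zeros((dim, dim))

    def store(self, pattern: np.ndarray) -> None:
        self.W += np.outer(pattern, pattern)  # Hebbian outer-product update
        np.fill_diagonal(self.W, 0.0)

    def recall(self, pattern: np.ndarray, steps: int = 10) -> np.ndarray:
        s = pattern.copy()
        for _ in range(steps):  # synchronous updates until a fixed point
            s_new = np.sign(self.W @ s)
            s_new[s_new == 0] = 1.0
            if np.array_equal(s_new, s):
                break
            s = s_new
        return s
```

In such a setup, one would store a binarized feature pattern per stored exemplar and classify a new image by recalling the nearest stored pattern from its extracted features; the actual memory-bank construction and labeling protocol used for the reported results are described in the paper itself.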

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Fine-Grained Image Classification | Caltech-101 | UL-Hopfield (ULH) | Top-1 Error Rate | 9.00% | #1 |
| Semi-Supervised Image Classification | Caltech-101 | UL-Hopfield (ULH) | Accuracy | 91.00% | #1 |
| Semi-Supervised Image Classification | Caltech-101, 202 Labels | UL-Hopfield (ULH) | Accuracy | 91.00% | #1 |
| Semi-Supervised Image Classification | Caltech-256 | UL-Hopfield (ULH) | Accuracy | 77.40% | #1 |
| Semi-Supervised Image Classification | Caltech-256, 1024 Labels | UL-Hopfield (ULH) | Accuracy | 77.40% | #1 |
| Semi-Supervised Image Classification | CIFAR-10, 40 Labels | UL-Hopfield (ULH) | Percentage error | 16.90 | #2 |
