We present a novel algorithm for anomaly detection on very large datasets and data streams.
During the offline training phase, an effective sampling strategy is introduced to control the distribution of training samples and make the model focus on semantic distractors.
Most proposed person re-identification algorithms conduct supervised training and testing on single, small labeled datasets. Directly deploying these trained models to a large-scale real-world camera network may therefore lead to poor performance due to underfitting.
Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production.
Detecting test samples drawn sufficiently far away from the training distribution, whether statistically or adversarially, is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
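As a minimal illustration of this requirement, the sketch below flags test samples that lie far from a Gaussian fit of the training features. It uses a per-feature standardized distance (a diagonal simplification of the Mahalanobis distance); the function names and the fixed threshold are illustrative assumptions, not the method of any paper cited here.

```python
import math

def fit_gaussian(train):
    # Per-feature mean and standard deviation over the training samples.
    n, d = len(train), len(train[0])
    mean = [sum(x[j] for x in train) / n for j in range(d)]
    std = [math.sqrt(sum((x[j] - mean[j]) ** 2 for x in train) / n) or 1.0
           for j in range(d)]
    return mean, std

def score(x, mean, std):
    # Distance of x from the training distribution (diagonal Mahalanobis).
    return math.sqrt(sum(((xi - m) / s) ** 2
                         for xi, m, s in zip(x, mean, std)))

def is_ood(x, mean, std, threshold=3.0):
    # Flag samples farther than `threshold` standardized units from the mean.
    return score(x, mean, std) > threshold
```

In practice the threshold would be calibrated on held-out data, and a full covariance (or a learned feature space) typically replaces the diagonal approximation.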
Class-Incremental Learning (CIL) aims to learn a classification model with the number of classes increasing phase-by-phase.
However, there is an inherent trade-off between effectively learning new concepts and avoiding catastrophic forgetting of previous ones.
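The phase-by-phase setting above can be sketched with a toy nearest-class-mean classifier whose set of stored class means grows as new classes arrive in each phase. This is an illustrative assumption for exposition, not the training procedure of any specific CIL method:

```python
class NearestMeanCIL:
    """Toy class-incremental learner: one stored mean per class seen so far."""

    def __init__(self):
        self.means = {}  # class label -> feature-mean vector

    def learn_phase(self, phase_data):
        # phase_data maps each NEW class in this phase to its feature vectors.
        for label, samples in phase_data.items():
            d = len(samples[0])
            self.means[label] = [
                sum(x[j] for x in samples) / len(samples) for j in range(d)
            ]

    def predict(self, x):
        # Classify against ALL classes seen in any phase so far.
        return min(
            self.means,
            key=lambda c: sum((xi - mi) ** 2
                              for xi, mi in zip(x, self.means[c])),
        )
```

Because old-phase samples are never revisited, any drift in the feature space between phases would degrade the stored means, which is a simple picture of why the stability-plasticity trade-off arises.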
A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data.
In this paper, we propose a general framework in continual learning for generative models: Feature-oriented Continual Learning (FoCL).