One-class classifier
24 papers with code • 0 benchmarks • 3 datasets
Latest papers
Generative Semi-supervised Graph Anomaly Detection
This work considers a practical semi-supervised graph anomaly detection (GAD) scenario in which some nodes in a graph are known to be normal, in contrast to the unsupervised setting of most GAD studies, which assume a fully unlabeled graph.
OCGEC: One-class Graph Embedding Classification for DNN Backdoor Detection
We then pre-train a generative self-supervised graph autoencoder (GAE) to better learn the features of benign models in order to detect backdoor models without knowing the attack strategy.
UNTAG: Learning Generic Features for Unsupervised Type-Agnostic Deepfake Detection
This paper introduces a novel framework for unsupervised type-agnostic deepfake detection called UNTAG.
Calibrated One-class Classification for Unsupervised Time Series Anomaly Detection
Our one-class classifier is calibrated in two ways: (1) by adaptively penalizing uncertain predictions, which helps eliminate the impact of anomaly contamination while accentuating the predictions that the one-class model is confident in, and (2) by discriminating the normal samples from native anomaly examples that are generated to simulate genuine time series abnormal behaviors on the basis of original data.
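The two calibration ideas can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the function names, the exponential weighting scheme, and the Gaussian perturbation used to generate pseudo-anomalies are all assumptions.

```python
import numpy as np

def calibrated_occ_loss(scores, tau=1.0):
    """Weighted one-class objective that adaptively penalizes
    uncertain predictions: training samples with high anomaly
    scores (likely contamination) receive small weights, so hidden
    anomalies in the 'normal' training data are suppressed."""
    weights = np.exp(-scores / tau)        # confident (low-score) samples ~ 1
    weights = weights / weights.sum()      # normalize to a distribution
    return float(np.sum(weights * scores))

def native_anomalies(x, scale=3.0, rng=None):
    """Generate pseudo-anomalies by perturbing the original series,
    giving the one-class model negative examples to discriminate
    against (stand-in for the paper's generation scheme)."""
    rng = rng or np.random.default_rng(0)
    return x + scale * x.std() * rng.standard_normal(x.shape)
```

Down-weighting by score is one simple way to realize "penalizing uncertain predictions"; the weighted loss of a contaminated batch ends up below its unweighted mean.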
Near out-of-distribution detection for low-resolution radar micro-Doppler signatures
We emphasize the relevance of OODD and its specific supervision requirements for the detection of a multimodal, diverse targets class among other similar radar targets and clutter in real-life critical systems.
SIFT- and SURF-based feature extraction for anomaly detection
In this paper, we show how the SIFT and SURF algorithms can be used to extract image features for anomaly detection.
Exemplar-free Class Incremental Learning via Discriminative and Comparable One-class Classifiers
DisCOIL follows the basic principle of POC but adopts variational autoencoders (VAEs) instead of other well-established one-class classifiers (e.g., Deep SVDD): a trained VAE can not only estimate the probability that an input sample belongs to a class but also generate pseudo-samples of that class to assist in learning new tasks.
Shell Theory: A Statistical Model of Reality
The foundational assumption of machine learning is that the data under consideration is separable into classes; while intuitively reasonable, separability constraints have proven remarkably difficult to formulate mathematically.
CutPaste: Self-Supervised Learning for Anomaly Detection and Localization
We aim at constructing a high performance model for defect detection that detects unknown anomalous patterns of an image without anomalous data.
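The core CutPaste augmentation can be sketched with numpy alone: copy a random patch of the image and paste it at another random location, creating the kind of local irregularity a defect detector should notice. The scar-shaped variant and the CNN trained to distinguish original from augmented images are omitted.

```python
import numpy as np

def cutpaste(image, patch=8, rng=None):
    """CutPaste-style self-supervised augmentation: cut a random
    patch and paste it at another random location. A classifier
    trained to separate original from augmented images learns
    features that transfer to real anomaly detection, without
    ever seeing anomalous data."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    out = image.copy()
    y1, x1 = rng.integers(0, h - patch), rng.integers(0, w - patch)  # source
    y2, x2 = rng.integers(0, h - patch), rng.integers(0, w - patch)  # target
    out[y2:y2 + patch, x2:x2 + patch] = image[y1:y1 + patch, x1:x1 + patch]
    return out
```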
Learning and Evaluating Representations for Deep One-class Classification
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
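The second stage, building a one-class classifier on learned representations, can be sketched with a simple Gaussian model scored by Mahalanobis distance. This is an illustrative stand-in for the detectors the two-stage framework can plug in (e.g., OC-SVM or KDE), not the paper's own method.

```python
import numpy as np

def fit_one_class(embeddings, eps=1e-6):
    """Fit a Gaussian one-class model on embeddings of normal data.
    Returns the mean and inverse covariance (regularized for
    numerical stability)."""
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False) + eps * np.eye(embeddings.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, cov_inv):
    """Squared Mahalanobis distance to the normal class: larger
    means more anomalous."""
    d = x - mu
    return float(d @ cov_inv @ d)
```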