OpenMix+: Revisiting Data Augmentation for Open Set Recognition

Open set recognition requires models to recognize samples of known classes learned in the training set while rejecting unknowns not seen during training. Compared with the structural risk minimization theory for closed-set problems, structural risk in open set tasks remains rarely explored. In this paper, we point out that balancing structural risk against open space risk is crucial for open set recognition, and re-formalize the objective as open set structural risk. This offers a new view of the general relationship between closed set recognition and open set recognition, countering the common intuition that a good closed set classifier always benefits open set recognition. Specifically, we show theoretically and experimentally that recent mix-based data augmentation methods are aggressive closed set regularization methods, which reduce structural risk at the cost of increasing open space risk. Moreover, we show that existing negative data augmentation methods designed to reduce open space risk also ignore the trade-off between structural risk and open space risk, which limits their performance. We propose an efficient negative data augmentation strategy named self-mix and a corresponding method named OpenMix. OpenMix generates high-quality negative samples by mixing samples with themselves, which accounts for both risks simultaneously. When OpenMix is combined with conservative closed set regularization methods to form OpenMix+, models achieve lower open set structural risk. Extensive experiments validate the superiority of OpenMix and OpenMix+ in terms of both effectiveness and universality.
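
To make the contrast concrete, below is a minimal sketch of the two kinds of augmentation the abstract discusses: standard mixup as a mix-based closed set regularizer, and a hypothetical "self-mix" negative augmentation that mixes a sample with itself. The self-mix details here (patch-level swapping within a single image, with the result treated as an unknown-class negative) are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: mix-based closed set regularization (mixup) vs. an assumed
# self-mix negative augmentation. Function names and parameters are illustrative.
import torch
import torch.nn.functional as F


def mixup(x, y, num_classes, alpha=1.0):
    """Standard mixup: convex combination of two samples and their one-hot labels.

    The abstract argues such mix-based methods reduce structural risk but can
    increase open space risk.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix


def self_mix(x, patch=8):
    """Hypothetical self-mix: swap two random patches *within* each image.

    Local statistics are preserved while global semantics are broken; the
    resulting images would be used as negative (unknown-class) samples.
    """
    x_neg = x.clone()                      # x: (N, C, H, W)
    _, _, h, w = x_neg.shape
    for i in range(x_neg.size(0)):
        y1 = torch.randint(0, h - patch, (1,)).item()
        x1 = torch.randint(0, w - patch, (1,)).item()
        y2 = torch.randint(0, h - patch, (1,)).item()
        x2 = torch.randint(0, w - patch, (1,)).item()
        p1 = x_neg[i, :, y1:y1 + patch, x1:x1 + patch].clone()
        p2 = x_neg[i, :, y2:y2 + patch, x2:x2 + patch].clone()
        x_neg[i, :, y1:y1 + patch, x1:x1 + patch] = p2
        x_neg[i, :, y2:y2 + patch, x2:x2 + patch] = p1
    return x_neg
```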
