This paves the way for future research on adversarial attacks and defenses, particularly for time-series data.
Convolutional Neural Networks have been shown to be vulnerable to adversarial examples, which are known to lie in subspaces close to those where natural data lies, but which are not naturally occurring and have low probability.
Although deep networks achieve strong accuracy on a range of computer vision benchmarks, they remain vulnerable to adversarial attacks, where imperceptible input perturbations fool the network.
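As a concrete illustration of such an attack, below is a minimal sketch of the standard one-step FGSM baseline in PyTorch; it is a generic textbook attack, not the method of any particular paper listed here, and `model`, `x`, `y`, and `epsilon` are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x in the direction that increases the loss.

    `epsilon` bounds the L-infinity size of the perturbation so that
    the change stays visually imperceptible (an illustrative value).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back to
    # the valid pixel range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```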
Medical imaging AI systems, such as those for disease classification and segmentation, are increasingly inspired by and adapted from computer-vision-based AI systems.
That is, PAT generalizes well to unforeseen perturbation types.
The intuition behind our approach is that the essential characteristics of a normal image are generally preserved under non-essential style transformations, e.g., slightly changing the facial expression of a human portrait.
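A rough sketch of how such a consistency check might look in PyTorch, assuming a classifier `model` and using small color jitter as a stand-in for a non-essential style transformation; the function name, the transformation choice, and the parameters are illustrative assumptions, not the paper's actual method.

```python
import torchvision.transforms as T

def prediction_consistency(model, x, n_trials=8):
    """Estimate how often the model's prediction survives mild,
    non-essential style transformations of the input batch `x`.

    Color jitter here is only a proxy for 'non-essential style';
    a normal image is expected to score close to 1.0.
    """
    jitter = T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1)
    base_pred = model(x).argmax(dim=1)
    agree = 0.0
    for _ in range(n_trials):
        pred = model(jitter(x)).argmax(dim=1)
        agree += (pred == base_pred).float().mean().item()
    return agree / n_trials  # 1.0 = fully consistent predictions
```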