We ask whether neural networks can learn to use secret keys to protect information from other neural networks.
Most existing adversarial attacks exploit the differentiability of the DNN cost function. Defence strategies are largely based on machine learning and signal-processing principles that try either to detect and reject, or to filter out, the adversarial perturbations, and they completely neglect the classical cryptographic component of the defence.
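To make the differentiability point concrete, here is a minimal sketch of a gradient-sign attack (in the spirit of FGSM) against a simple logistic model. The model, weights, input, and step size `eps` are all illustrative assumptions, not part of the original text; the point is only that the attack direction is computed from the gradient of the cost with respect to the input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # Binary cross-entropy for a single example under a linear model.
    p = sigmoid(np.dot(w, x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(w, x, y, eps):
    # Gradient of the loss w.r.t. the input x is (p - y) * w for this model;
    # the attack moves x by eps in the sign direction of that gradient.
    p = sigmoid(np.dot(w, x))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])   # illustrative weights
x = np.array([0.3, 0.1, -0.2])   # clean input
y = 1.0                          # true label
x_adv = fgsm_perturb(w, x, y, eps=0.1)
# The adversarial input incurs a higher loss than the clean one.
print(logistic_loss(w, x, y) < logistic_loss(w, x_adv, y))  # -> True
```

Because the perturbation follows the loss gradient, even a small `eps` step increases the cost; this is exactly the differentiability that such attacks rely on and that defences must contend with.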
In this paper, an unsupervised steganalysis method that combines artificial training sets and supervised classification is proposed.
Image steganography is a procedure for hiding messages inside pictures.
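A common introductory illustration of image steganography is least-significant-bit (LSB) embedding, where each message bit replaces the lowest bit of a pixel value. The sketch below is a toy, assumed example on a flat list of grayscale pixel values; real schemes are considerably more sophisticated.

```python
def embed_lsb(pixels, message):
    # Hide each bit of the message in the least significant bit of a pixel.
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "cover image too small for the message"
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, n_chars):
    # Reassemble n_chars bytes from the least significant bits.
    bits = [p & 1 for p in pixels[:n_chars * 8]]
    data = bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
    return data.decode()

cover = list(range(256)) * 2        # stand-in for grayscale pixel values
stego = embed_lsb(cover, "hi")
print(extract_lsb(stego, 2))        # -> hi
```

Note that each pixel changes by at most 1, which is why naive LSB embedding is visually imperceptible yet statistically detectable by steganalysis.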
Furthermore, in two out of three crossovers, the "left-to-right" version performs better than the "shuffled" version.
In this context, N denotes the period length of the generated sequence, and the policy is improved iteratively using the average score of an appropriate test suite evaluated over that period.
We apply our methodology to two major ML algorithms, namely non-negative matrix factorization (NMF) and singular value decomposition (SVD).
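For readers unfamiliar with the two factorizations, the following sketch shows a truncated SVD and a basic NMF via multiplicative updates (the Lee-Seung scheme) on a small illustrative non-negative matrix. The data, rank `k`, and iteration count are assumptions for demonstration, not details from the paper.

```python
import numpy as np

# A small non-negative "ratings"-style matrix (illustrative data).
X = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [1.0, 1.0, 5.0],
              [0.0, 1.0, 4.0]])

# --- Truncated SVD: keep the top-k singular triplets. ---
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
# By Eckart-Young, the Frobenius error of the rank-k reconstruction
# equals the norm of the discarded singular values.
svd_err = np.linalg.norm(X - X_k)

# --- NMF via multiplicative updates: X ~ W @ H with W, H >= 0. ---
rng = np.random.default_rng(0)
W = rng.random((X.shape[0], k)) + 0.1
H = rng.random((k, X.shape[1])) + 0.1
init_err = np.linalg.norm(X - W @ H)
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
nmf_err = np.linalg.norm(X - W @ H)
print(nmf_err < init_err)  # -> True: the updates reduce the error
```

The key contrast is that SVD factors may be negative, whereas NMF constrains both factors to stay non-negative, which often yields more interpretable parts-based components.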