We ask whether neural networks can learn to use secret keys to protect information from other neural networks.
Most existing adversarial attacks rely on the differentiability of the DNN cost function. Defence strategies are mostly based on machine-learning and signal-processing principles that try to detect and reject, or filter out, the adversarial perturbations, and they completely neglect the classical cryptographic component of the defence.
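The differentiability assumption behind such attacks can be illustrated with a minimal fast-gradient-sign-style perturbation on a toy logistic-regression classifier (a hypothetical example, not the construction of any specific paper): the attacker follows the sign of the loss gradient with respect to the input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy differentiable classifier: p(y=1 | x) = sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = 0.0

def loss_grad_x(x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x: (p - y) * w
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, eps):
    # Step in the direction that increases the loss (gradient-sign attack)
    return x + eps * np.sign(loss_grad_x(x, y))

x = np.array([1.0, 1.0])               # w.x = 1 > 0: classified as class 1
x_adv = fgsm(x, y=1.0, eps=0.6)
print(sigmoid(w @ x + b) > 0.5)        # → True  (original prediction)
print(sigmoid(w @ x_adv + b) > 0.5)    # → False (prediction flipped)
```

The attack only needs gradient access, which is exactly the property the quoted defence strategies try to work around.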
In this paper, an unsupervised steganalysis method that combines artificial training sets and supervised classification is proposed.
Image steganography is the practice of hiding messages inside images.
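A common textbook instance of this idea is least-significant-bit (LSB) embedding, sketched below on a raw byte array standing in for 8-bit pixel values (an illustrative scheme, not the method of any particular paper):

```python
# LSB steganography sketch: hide message bits in the lowest bit of each pixel.

def embed(pixels, message):
    # Flatten the message into bits, least-significant bit of each byte first
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "cover too small for message"
    stego = bytearray(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite the lowest bit only
    return bytes(stego)

def extract(pixels, n_bytes):
    out = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = bytes(range(64))        # 64 fake 8-bit pixel values
stego = embed(cover, b"hi")     # "hi" needs 16 cover pixels
print(extract(stego, 2))        # → b'hi'
```

Because each pixel changes by at most 1, the cover and stego images are visually indistinguishable, which is what steganalysis methods must detect statistically.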
Furthermore, in two out of three crossovers, the "left-to-right" version performs better than the "shuffled" version.
In this paper, we propose a novel privacy-preserving deep learning model and a secure training/inference scheme that protect the input, the output, and the model in neural-network applications.
Substitution Boxes (S-boxes) are nonlinear objects often used in the design of cryptographic algorithms.
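The nonlinearity that makes S-boxes useful can be seen in a few lines: applying the box does not commute with XOR, i.e. in general S(x ^ y) != S(x) ^ S(y). The 4-bit lookup table below is the published S-box of the PRESENT block cipher, used here purely for illustration:

```python
# 4-bit S-box of the PRESENT block cipher (a standard published table).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def S(x):
    # Substitute a 4-bit value via table lookup
    return SBOX[x & 0xF]

x, y = 0x1, 0x2
print(hex(S(x ^ y)))      # → 0xb
print(hex(S(x) ^ S(y)))   # → 0x3  (differs: the map is nonlinear)

# An S-box must be a bijection on its input space to be invertible
print(sorted(SBOX) == list(range(16)))   # → True
```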
We apply our methodology to two major ML algorithms, namely non-negative matrix factorization (NMF) and singular value decomposition (SVD).
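As a reminder of what the two factorizations compute, here is a minimal NumPy sketch: an exact SVD reconstruction, and a rank-2 NMF fitted with the classical multiplicative-update rule (a generic illustration with random data, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((6, 4))                  # nonnegative data matrix

# SVD: exact factorization X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)
print(np.allclose(U @ np.diag(s) @ Vt, X))   # → True

# NMF: approximate X ≈ W @ H with W, H >= 0,
# using Lee-Seung-style multiplicative updates
k = 2
W = rng.random((6, k))
H = rng.random((k, 4))
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

print(np.linalg.norm(X - W @ H) < np.linalg.norm(X))  # residual shrinks
```

SVD gives the optimal low-rank approximation in the least-squares sense, while NMF trades some accuracy for nonnegative, more interpretable factors.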