A number of research papers have been published using adversarial neural network architectures to show that two neural networks can communicate securely on the basis of a synchronized input (a shared secret key), and that without knowledge of this synchronized information the system cannot be breached. In this paper we evaluate these adversarial neural network architectures when a third party gains access to a partial secret key or a noisy secret key, or has knowledge of the loss function, the loss values themselves, or the activation functions used during training of the encryption layers. We focus on the cryptanalysis side: the vulnerabilities that a neural-network-based cryptography system can face. These findings can be used in the future to improve current neural-network-based cryptography architectures. We show that while the encryption key is necessary to decrypt messages in the neural network domain, an adversarial neural network can occasionally decrypt messages, or at least raise a concern that requires further training.
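The adversarial setup the abstract refers to is commonly formulated with three networks: a sender and receiver sharing a key, and an eavesdropper without it. As a minimal sketch (not the paper's implementation; the function names, the toy message length `N`, and the loss weighting are illustrative assumptions in the style of adversarial neural cryptography), the training objectives might look like:

```python
import numpy as np

# Toy message/key length; an assumed illustrative size.
N = 16

def bob_loss(plaintext, decrypted):
    # Per-bit reconstruction error for the legitimate receiver (Bob),
    # who decrypts using the shared secret key.
    return float(np.mean(np.abs(plaintext - decrypted)))

def eve_loss(plaintext, guess):
    # Same per-bit reconstruction error for the eavesdropper (Eve),
    # who sees only the ciphertext (or, in this paper's threat model,
    # a partial/noisy key or side information about training).
    return float(np.mean(np.abs(plaintext - guess)))

def alice_bob_loss(plaintext, decrypted, eve_guess):
    # Alice and Bob jointly minimize Bob's reconstruction error while
    # pushing Eve's accuracy toward random guessing: the second term is
    # zero when Eve gets exactly half the bits wrong (N/2 bit errors)
    # and grows as she does better or worse than chance.
    eve_bit_errors = N * eve_loss(plaintext, eve_guess)
    return bob_loss(plaintext, decrypted) + ((N / 2 - eve_bit_errors) ** 2) / (N / 2) ** 2
```

In training, Eve's network descends on `eve_loss` while Alice and Bob descend on `alice_bob_loss`, alternating updates; the attacks evaluated in the paper amount to giving Eve extra inputs or knowledge that shift this equilibrium in her favor.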