Understanding adversarial attack and defense towards deep compressed neural networks
3 May 2018
Qi Liu, Tao Liu, Wujie Wen
Abstract
Modern deep neural networks (DNNs) have demonstrated phenomenal success in many exciting applications such as computer vision, speech recognition, and natural language processing, thanks to recent machine learning model innovation and computing hardware advancement. However, recent studies show that state-of-the-art DNNs can be easily fooled by carefully crafted input perturbations that are even imperceptible to human eyes, namely “adversarial examples”, raising emerging security concerns for DNN-based intelligent systems. Moreover, to ease the intensive computation and memory resource requirements imposed by fast-growing DNN model sizes, aggressively pruning redundant model parameters through various hardware-favorable DNN techniques (e.g., hashing, deep compression, circulant projection) has become a necessity. This procedure further complicates the security issues of DNN systems. In this paper, we first study the vulnerabilities of hardware-oriented deep compressed DNNs under various adversarial attacks. Then we survey existing mitigation approaches such as defensive distillation, which was originally tailored to software-based DNN systems. Inspired by defensive distillation and weight reshaping, we further develop a near-zero-cost but effective gradient silence (GS) method to protect both software- and hardware-based DNN systems against adversarial attacks. Compared with defensive distillation, our gradient silence method achieves better resilience to adversarial attacks without additional training, while still maintaining very high accuracy across small and large DNN models on image classification benchmarks such as MNIST and CIFAR-10.
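The abstract gives no implementation details, so the following is a minimal, hypothetical sketch in PyTorch of the two ingredients it discusses: crafting an adversarial example with the standard fast gradient sign method (FGSM), and deep-compression-style magnitude pruning of model weights. The toy model, epsilon, and sparsity values are placeholders, and this is not the authors' gradient silence (GS) method, whose details appear in the full paper.

```python
# Illustrative sketch only (not the authors' code): an FGSM adversarial
# example against a toy classifier, plus magnitude-based weight pruning
# in the spirit of deep compression. All sizes and hyperparameters here
# are arbitrary placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy MNIST-scale classifier; the paper's actual models may differ.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

def fgsm_attack(model, x, y, epsilon=0.1):
    """Fast Gradient Sign Method: step the input along the sign of the
    input gradient of the loss, i.e. x_adv = x + eps * sign(grad_x J)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range

def magnitude_prune(model, sparsity=0.9):
    """Zero out the smallest-magnitude weights (deep-compression-style
    pruning); this is the kind of hardware-favorable compression whose
    adversarial robustness the paper studies."""
    for p in model.parameters():
        if p.dim() > 1:  # prune weight matrices, leave biases alone
            k = max(1, int(p.numel() * sparsity))
            threshold = p.abs().flatten().kthvalue(k).values
            p.data[p.abs() < threshold] = 0.0

# Usage: craft an adversarial batch on the dense model, prune it, and
# inspect the pruned model's predictions on the adversarial inputs.
x = torch.rand(8, 1, 28, 28)       # stand-in images in [0, 1]
y = torch.randint(0, 10, (8,))     # stand-in labels
x_adv = fgsm_attack(model, x, y)
magnitude_prune(model)
print(model(x_adv).argmax(dim=1))
```

Comparing the dense and pruned models' accuracy on such adversarial batches is the kind of vulnerability study the abstract describes; the paper's GS defense, per the abstract, suppresses the gradients attackers exploit without requiring additional training.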
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Qi Liu, Tao Liu, and Wujie Wen "Understanding adversarial attack and defense towards deep compressed neural networks", Proc. SPIE 10630, Cyber Sensing 2018, 106300Q (3 May 2018); https://doi.org/10.1117/12.2305226
KEYWORDS
Defense and security, Quantization, Systems modeling, Data modeling, Performance modeling, Neural networks, Mathematical modeling