Demystify squeeze networks and go beyond
21 August 2020
Abstract
Small neural networks (NNs) with compact model sizes find applications in mobile and wearable computing. One famous example is SqueezeNet, which matches the accuracy of AlexNet with 50x fewer parameters. It has inspired several follow-ups and architectural variants, but these designs rest on ad hoc arguments and are justified only experimentally; why SqueezeNet works so efficiently remains a mystery. In this work, we attempt to provide a scientific explanation for the superior performance of SqueezeNet. The function of the fire module, a key component of SqueezeNet, is analyzed in detail. We study the evolution of cross-entropy values across layers and use visualization tools to shed light on the network's behavior with several illustrative examples.
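
For reference, the fire module analyzed in the paper follows the two-stage design of the original SqueezeNet: a 1x1 "squeeze" convolution first reduces the channel count, then parallel 1x1 and 3x3 "expand" convolutions restore it, with their outputs concatenated along the channel axis. The sketch below is a minimal PyTorch rendering of that design; the class name Fire and the fire2 channel configuration are taken from the SqueezeNet v1.0 architecture, and this is an illustration of the module, not the authors' analysis code.

    import torch
    import torch.nn as nn

    class Fire(nn.Module):
        # Squeeze: a 1x1 conv cuts the channel count (this is where the
        # parameter savings come from). Expand: parallel 1x1 and 3x3 convs
        # restore channels; their outputs are concatenated channel-wise.
        def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
            super().__init__()
            self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
            self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
            self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch,
                                       kernel_size=3, padding=1)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            x = self.relu(self.squeeze(x))
            return torch.cat([self.relu(self.expand1x1(x)),
                              self.relu(self.expand3x3(x))], dim=1)

    # fire2 in SqueezeNet v1.0: 96 input channels -> 16 squeeze -> 64 + 64 expand
    fire2 = Fire(96, 16, 64, 64)
    out = fire2(torch.randn(1, 96, 55, 55))
    print(out.shape)  # torch.Size([1, 128, 55, 55])

Note how the squeeze stage forces the expand convolutions to operate on only 16 channels instead of 96, which is the main mechanism by which the fire module reduces parameter count relative to a plain 3x3 convolutional layer.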
Conference Presentation
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ruiyuan Lin, Yuhang Xu, Hamza Ghani, Muhan Li, and C.-C. Jay Kuo "Demystify squeeze networks and go beyond", Proc. SPIE 11510, Applications of Digital Image Processing XLIII, 115100O (21 August 2020); https://doi.org/10.1117/12.2567544
KEYWORDS
Visualization
Neural networks
Dimension reduction
Machine learning
Computer vision technology
Convolutional neural networks