The electromagnetic environment is becoming increasingly cluttered and complex, so the need to perform operations such as modulation recognition on wide-bandwidth signals in real time is becoming crucial. Deep learning solutions have shown state-of-the-art performance for modulation recognition, but they are computationally expensive. In this paper we address this demand on resources by applying Quantized Neural Networks (QNNs) to the problem of modulation recognition. We demonstrate that QNN implementations of state-of-the-art Neural Networks (NNs) with different bit widths achieve varying levels of performance, on both our own 10-class dataset (up to 80% classification accuracy) and the open-source 24-class DeepSig dataset (up to 75% classification accuracy at 10 dB signal-to-noise ratio (SNR)). We then explore the accuracy-hardware trade-off of scaling NN size and bit width when using QNNs. We find that scaling NN size yields accuracy-hardware trade-offs similar to those of scaling NN bit width. However, 1-bit NNs have an unfavorable trade-off, so we make suggestions on how to improve the 1-bit NN architecture and training procedure. To demonstrate the potential benefits of low-precision QNNs and to encourage further work, we propose novel optimizations for 1-bit and 2-bit QNNs on FPGAs and ASICs that eliminate the need for weight memory and simplify the hardware required to implement a QNN. This provides further hardware savings, allowing QNNs to offer favorable accuracy-hardware trade-offs, and supports the implementation of significantly larger NNs with throughputs of hundreds of millions of inferences per second.
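To give a sense of why 1-bit QNNs map so cheaply onto FPGA and ASIC hardware, the sketch below shows the standard XNOR-popcount trick for a binarized dot product: when weights and activations are constrained to {-1, +1} and packed into bitmasks, a multiply-accumulate reduces to bitwise logic and a population count. This is a minimal illustration of the general technique, not code from the paper; the function name and the bit-packing convention (bit set = +1, bit clear = -1) are our own assumptions for the example.

```python
def binarized_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two length-n vectors with entries in {-1, +1},
    each packed into an integer bitmask (bit i set means element i is +1).

    For +/-1 values, a_i * b_i is +1 when the bits match and -1 when
    they differ, so the dot product equals n - 2 * popcount(a XOR b).
    (Packing convention is illustrative, not taken from the paper.)
    """
    mask = (1 << n) - 1  # keep only the n valid lanes
    differing = (a_bits ^ b_bits) & mask
    return n - 2 * bin(differing).count("1")


# a = [+1, -1, +1, +1] -> 0b1101, b = [+1, +1, -1, +1] -> 0b1011
print(binarized_dot(0b1101, 0b1011, 4))  # -> 0, matching the float dot product
```

On hardware, the XOR and popcount become a row of gates and an adder tree, so no multipliers are needed; with the weights fixed after training, the weight bitmask can be baked into the logic itself, which is the sense in which weight memory can be eliminated.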