Deep learning-based Single Image Super-Resolution (SISR) has recently achieved great performance compared with state-of-the-art approaches. However, this performance usually comes at the cost of high computational complexity and memory consumption, even at the inference stage. In this paper, we aim to reduce the structural complexity of a state-of-the-art Deep Neural Network (DNN) approach in order to propose a cost-effective solution to the SISR problem. We investigate how the different components of the baseline model affect the overall complexity while minimizing the negative impact on its quality performance. This yields a solution with quality performance comparable to the baseline model, while reducing the number of parameters by more than one order of magnitude, the spatial complexity (GPU memory) by up to a factor of six, and the inference time by half.