Deep neural networks (DNNs) have been successfully applied to complex problems such as pattern recognition in big data. To achieve good performance, these networks are often designed with a large number of trainable parameters, which makes them energy-intensive and time-consuming to train. In this work, we propose to preprocess the input data with a photonic reservoir instead of injecting it directly into the DNN. A photonic reservoir consists of a network of many randomly connected nodes that do not need to be trained. It forms an additional layer in front of the deep neural network and transforms the input data into a state in a higher-dimensional state space. This allows us to reduce both the size of the DNN and the amount of training it requires. We test this approach using numerical simulations of the one-step-ahead prediction task on the Santa Fe time series: with the photonic reservoir as preprocessor, the DNN achieves a lower test error, whereas the stand-alone DNN performs poorly on this task and yields a high test error. As we also discuss in detail in [Bauwens et al., Frontiers in Physics 10, 1051941 (2022)], we conclude that photonic reservoirs are well suited as physical preprocessors for deep neural networks tackling time-dependent tasks, owing to their fast computation times and low energy consumption.
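To illustrate the idea, the sketch below emulates a reservoir preprocessor in software as a leaky echo state network: a fixed, untrained random network expands a scalar time series into a high-dimensional state, which a small trainable stage then maps to the next sample. This is a minimal sketch under stated assumptions, not the paper's setup: the synthetic signal stands in for the Santa Fe laser data, the reservoir size, spectral radius, and leak rate are arbitrary choices, and a linear ridge readout replaces the trained DNN.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the Santa Fe laser series (the real data is
# a chaotic laser-intensity recording; this placeholder is assumed).
t = np.arange(4000)
u = np.sin(0.3 * t) * np.cos(0.05 * t)

# Fixed, untrained reservoir: random input weights and a random
# recurrent matrix rescaled to a chosen spectral radius (assumed 0.9).
n_res = 200                                  # reservoir size (assumed)
w_in = rng.uniform(-0.5, 0.5, size=n_res)
w = rng.normal(size=(n_res, n_res))
w *= 0.9 / max(abs(np.linalg.eigvals(w)))

leak = 0.3                                   # leak rate (assumed)
x = np.zeros(n_res)
states = np.empty((len(u), n_res))
for k, u_k in enumerate(u):
    # Leaky-integrator update: the reservoir nonlinearly mixes the
    # current input with its own past, producing the high-dimensional
    # state that serves as the preprocessed feature vector.
    x = (1 - leak) * x + leak * np.tanh(w_in * u_k + w @ x)
    states[k] = x

# One-step-ahead prediction: map the state at time k to u[k+1].
# A linear ridge readout stands in here for the (much smaller)
# trainable network of the paper.
washout = 200                                # discard initial transient
X, y = states[washout:-1], u[washout + 1:]
n_train = 3000
ridge = 1e-6
A = X[:n_train].T @ X[:n_train] + ridge * np.eye(n_res)
w_out = np.linalg.solve(A, X[:n_train].T @ y[:n_train])

pred = X[n_train:] @ w_out
nmse = np.mean((pred - y[n_train:]) ** 2) / np.var(y[n_train:])
print(f"test NMSE: {nmse:.4f}")

Because the reservoir weights stay fixed, only the readout stage is trained, which is what allows the downstream network to be smaller and cheaper to train than a stand-alone DNN on the raw series.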