Paper, 31 January 2020
Transfer of a high-level knowledge in HoughNet neural network
Alexander V. Sheshkus, Dmitry Nikolaev
Proceedings Volume 11433, Twelfth International Conference on Machine Vision (ICMV 2019); 1143322 (2020) https://doi.org/10.1117/12.2559454
Event: Twelfth International Conference on Machine Vision, 2019, Amsterdam, Netherlands
Abstract
In this paper, we study the recently introduced HoughNet neural network architecture with respect to its ability to accumulate transferable high-level features. The main idea of this neural network is to use convolutional layers separated by Fast Hough Transform layers, which enables the analysis of complex non-linear statistics along different lines. We show that the convolutional blocks in this neural network serve essentially different purposes. While the initial feature extraction is task-specific, the main part of the neural network operates with high-level features and does not require re-training in order to be applied to data from a different domain. To support this claim, we use two sets of images with different origins and demonstrate the presence of transfer learning in the neural network, except for the first layers, which are highly task-specific.
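The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of the architecture described in the abstract, under stated assumptions. The names SimpleHoughLayer, HoughNetLike, low_level and high_level, as well as all layer sizes and the classification head, are hypothetical and introduced here for illustration; the line-summing layer is a naive O(n^3) stand-in for the Fast Hough Transform rather than the fast dyadic algorithm. The last lines show the transfer scenario implied by the abstract: the high-level part is kept frozen and only the first, task-specific block is re-trained on data from a new domain.

import torch
import torch.nn as nn


class SimpleHoughLayer(nn.Module):
    """Accumulates feature-map values along mostly-vertical lines.

    Output cell (shift, x) sums the input along the line that starts at column x
    in the top row and drifts by `shift` columns over the image height. This is
    a naive stand-in for the Fast Hough Transform layer used in HoughNet.
    """

    def forward(self, x):
        b, c, h, w = x.shape
        row_idx = torch.arange(h, device=x.device).unsqueeze(1)        # (h, 1)
        cols = torch.arange(w, device=x.device)                        # (w,)
        rows = row_idx.to(x.dtype)
        out = []
        for s in range(-(w // 2), w // 2 + 1):                         # total column drift
            offs = torch.round(rows * s / max(h - 1, 1)).long()        # drift per row, (h, 1)
            idx = (cols.unsqueeze(0) + offs).clamp(0, w - 1)           # column index, (h, w)
            line_sums = x[:, :, row_idx, idx].sum(dim=2)               # sum along the line, (b, c, w)
            out.append(line_sums)
        return torch.stack(out, dim=2)                                 # (b, c, n_shifts, w)


class HoughNetLike(nn.Module):
    """Conv block -> Hough layer -> conv block -> Hough layer -> classifier head."""

    def __init__(self, in_ch=1, width=8, n_classes=2):
        super().__init__()
        self.low_level = nn.Sequential(                 # task-specific first block
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.hough1 = SimpleHoughLayer()
        self.high_level = nn.Sequential(                # operates on per-line statistics
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.hough2 = SimpleHoughLayer()
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width, n_classes))

    def forward(self, x):
        x = self.low_level(x)
        x = self.hough1(x)
        x = self.high_level(x)
        x = self.hough2(x)
        return self.head(x)


# Transfer setup suggested by the abstract: freeze the high-level part and
# re-train only the first, task-specific convolutional block on the new domain.
model = HoughNetLike()
for p in model.parameters():
    p.requires_grad = False
for p in model.low_level.parameters():
    p.requires_grad = True
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)

logits = model(torch.rand(4, 1, 32, 32))                # e.g. 32x32 single-channel inputs
print(logits.shape)                                     # torch.Size([4, 2])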
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Alexander V. Sheshkus and Dmitry Nikolaev "Transfer of a high-level knowledge in HoughNet neural network", Proc. SPIE 11433, Twelfth International Conference on Machine Vision (ICMV 2019), 1143322 (31 January 2020); https://doi.org/10.1117/12.2559454
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Neural networks
Hough transforms
Image processing
Convolutional neural networks
Computer science