Presentation + Paper
20 September 2020
A multi-sensorial approach for the protection of operational vehicles by detection and classification of small flying objects
Marcus Hammer, Björn Borgmann, Marcus Hebel, Michael Arens
Proceedings Volume 11538, Electro-Optical Remote Sensing XIV; 1153807 (2020) https://doi.org/10.1117/12.2573620
Event: SPIE Security + Defence, 2020, Online Only
Abstract
Due to the wide availability and easy handling of small drones, the number of reported incidents caused by UAVs, both intentional and accidental, is increasing. To prevent such incidents in the future, it is essential to be able to detect UAVs. However, not every small flying object poses a potential threat, so an object must not only be detected but also classified or identified. Typical 360° scanning LiDAR systems can be deployed to detect and track small objects in the 3D sensor data at ranges of up to 50 m. Unfortunately, in most cases the verification and classification of the detected objects are not possible due to the low resolution of this type of sensor. In high-resolution 2D images, a differentiation of flying objects is more practical, and cameras in the visible spectrum, in particular, are well established and inexpensive. The major drawback of this type of sensor is its dependence on adequate illumination. Active illumination could be a solution to this problem, but it is usually impossible to illuminate the scene permanently. A more practical way is to select a sensor with a different spectral sensitivity, for example in the thermal infrared (IR). In this paper, we present an approach for a complete chain of detection, tracking, and classification of small flying objects such as micro UAVs or birds, using a mobile multi-sensor platform with two 360° LiDAR scanners and pan-and-tilt cameras in the visible and thermal IR spectrum. The flying objects are initially detected and tracked in the 3D LiDAR data. After detection, the cameras (a grayscale camera in the visible spectrum and a bolometer sensitive in the wavelength range of 7.5 µm to 14 µm) are automatically pointed at the object's position, and each sensor records a 2D image. A convolutional neural network (CNN) performs both the identification of the region of interest (ROI) and the object classification (we consider eight classes of different UAV types and birds). In particular, we compare the classification results of the CNN for the two camera types, i.e., for the different wavelengths. The large set of training data for the CNN, as well as the test data used for the experiments described in this paper, was recorded at a field trial of the NATO group SET-260 ("Assessment of EO/IR Technologies for Detection of Small UAVs in an Urban Environment") at CENZUB, Sissonne, France.
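The sensor hand-off described in the abstract, pointing the pan-and-tilt cameras at a 3D position delivered by the LiDAR tracker, reduces to a small geometric computation. The following is a minimal, hypothetical Python sketch (not the authors' code): it assumes the LiDAR track and the camera mount are given in a common platform-fixed frame with x forward, y left, and z up.

```python
import math

def pan_tilt_angles(obj_xyz, cam_xyz):
    """Hypothetical sketch: pan/tilt angles (in degrees) that aim a camera
    mounted at cam_xyz toward an object at obj_xyz. Both points are assumed
    to be in a common platform-fixed frame (x forward, y left, z up)."""
    dx = obj_xyz[0] - cam_xyz[0]
    dy = obj_xyz[1] - cam_xyz[1]
    dz = obj_xyz[2] - cam_xyz[2]
    pan = math.degrees(math.atan2(dy, dx))                   # azimuth about z
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # elevation
    return pan, tilt

# Example: object tracked 30 m ahead, 5 m left, 10 m up,
# with the camera head mounted 1.5 m above the platform origin.
print(pan_tilt_angles((30.0, 5.0, 10.0), (0.0, 0.0, 1.5)))
```

For the classification step, the sketch below shows a small CNN that maps a single-channel ROI chip to logits over eight classes; a single-channel input fits both the grayscale visible camera and the bolometer. The architecture is an assumption for illustration only; the paper does not specify its network here.

```python
import torch
import torch.nn as nn

class RoiClassifier(nn.Module):
    """Hypothetical ROI classifier (assumed architecture, not the paper's CNN)."""
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling to (B, 64, 1, 1)
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale or thermal ROI chip
        return self.head(self.features(x).flatten(1))

# One 64x64 single-channel ROI chip -> logits over eight classes
logits = RoiClassifier()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 8])
```

Training one such model per modality and evaluating it on the same tracked objects would support the visible-versus-thermal comparison the paper reports.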
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Marcus Hammer, Björn Borgmann, Marcus Hebel, and Michael Arens "A multi-sensorial approach for the protection of operational vehicles by detection and classification of small flying objects", Proc. SPIE 11538, Electro-Optical Remote Sensing XIV, 1153807 (20 September 2020); https://doi.org/10.1117/12.2573620
KEYWORDS
Sensors, Unmanned aerial vehicles, Visible radiation, Cameras, Thermography, LIDAR, Bolometers