This PDF file contains the front matter associated with SPIE Proceedings Volume 12275, including the Title Page, Copyright information, Table of Contents, and Conference Committee Page.
Ion mobility spectrometry (IMS) today figures prominently among analytical methods for the detection of explosives and is widely used in transport security. The method is valued for its ability to distinguish trace quantities of different explosives at atmospheric pressure. The main obstacle to sensing some kinds of explosives is their very low vapor pressure: the 10⁻¹³–10⁻¹⁴ g/cm³ limit of detection of state-of-the-art IMS instrumentation falls short by at least an order of magnitude. In this paper we combine UV laser radiation with organic compounds (dopants) to improve the ionization efficiency and vapor detection capabilities of IMS. Dopants with low ionization energy (toluene and 1-methylnaphthalene) were used for negative ion formation of nitro-group-based explosives: trinitrotoluene (TNT), cyclotrimethylenetrinitramine (RDX), and pentaerythritol tetranitrate (PETN). The presence of dopants in the sample results in a severalfold increase in ion yield at laser intensities below 2 × 10⁷ W/cm². Limits of detection with dopant-assisted laser ionization (4.7 × 10⁻¹⁶ g/cm³ for RDX and 9.8 × 10⁻¹⁵ g/cm³ for PETN) show up to a twofold improvement over the dopant-free case. These results suggest a way to further improve detector sensitivity and reduce manufacturing costs by relaxing the requirements on laser pulse energy and using cheaper lasers.
The paper presents the research and development of a laser ion mobility spectrometer for the detection and identification of explosive vapors, consisting of a stationary industrial laser and a light, portable analytical head (detector module). Laser radiation is delivered to the ion source of the analytical head by an optical fiber. Particular attention is paid to the propagation of ultraviolet radiation in a standard UV-grade quartz fiber at intensities sufficient for effective ionization of explosive molecules. It is shown that the threshold intensity of ultraviolet radiation at λ = 266 nm, below which there is still no degradation of the optical fiber, is q = 7 × 10⁷ W/cm². The laser desorption of explosive molecules from the output end of the optical fiber under ultraviolet irradiation is considered. It is found that the output end of the fiber acts as a concentrator of molecules, whose desorption and ionization by laser radiation at λ = 266 nm lead to an additional increase in the ion current of the target analyte. The main characteristics of the developed spectrometer are studied. The resolving power of the spectrometer is 53. The limits of detection for TNT and RDX were estimated at 1–2 ppt. The fluctuation of ion peak positions does not exceed ±45 μs. The device is intended primarily for stationary checkpoints, but it can also be transported to crime scenes.
Luminescent sensors for detecting vapors of nitroaromatic compounds and other explosives are currently being actively developed. In practice, the stability of the sensor's luminescence signal is of great importance. In this work we study the photostability of a sensitive luminophore embedded in porous silicon microcavities under excitation at a wavelength of 450 nm. As the sensitive luminophore we used poly[2-methoxy-5-(3′,7′-dimethyloctyloxy)-1,4-phenylenevinylene] (MDMO-PPV). It was found that the rate of photodegradation of MDMO-PPV luminescence depends nonlinearly on the intensity of the exciting radiation. Apparently, the observed effect is related to the limited rate of diffusion of oxygen molecules into the porous silicon. Additionally, similar studies were carried out for MDMO-PPV films on a glass substrate. It is shown that in this case the photostability of the luminophore decreases severalfold. Based on the data obtained, the operating modes of the sensor element are analyzed.
Millimeter- and sub-millimeter-wave (50 GHz–2,000 GHz) radiation has recently gained global attention and is becoming more popular in the field of imaging concealed objects. We demonstrate an inexpensive millimeter-wave (MMW) imaging system using a focal plane array (FPA) based on glow discharge detectors (GDDs) that can be used for these applications. The electrical detection method is used here, which refers to detection by measuring the change in current between the GDD electrodes due to incident radiation from an MMW source. A data acquisition (DAQ) platform, controlled by LabVIEW code, acquires the readings from the sensor element. The system measures the change in current passing through the GDD as a result of modulated radiation. We have implemented a DAQ platform with 8 channels that converts the analog signal to a digital one. We utilized a digital algorithm that performs strong filtering of the noise and allows a detection signal to be recovered even at extremely low radiation intensities. A quasi-optical setup was composed of an MMW source, an off-axis parabolic mirror (OPM), and an imaging mirror. Calibration and alignment were carried out to locate the FPA at the reflective focal length of the OPM. The salient advantages of the technology employed here are the low cost of the detectors and the absence of a receiving antenna, which exists in most detection systems. We have currently constructed a single row of detectors and propose to expand it to 64×64 pixels by using oversampling at sub-pixel resolution. Expansion and refinement of the concealed-object detection system can be achieved using image-processing methods. The simplified detection circuit implemented in this system is also capable of capturing images within a relatively short time with improved noise suppression.
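The modulation-and-filtering scheme described above can be illustrated as a digital lock-in measurement, a standard way to recover a weak modulated signal from a noisy detector current. This is a generic sketch, not the paper's LabVIEW algorithm; the sampling rate and modulation frequency below are assumed values for the example.

```python
import numpy as np

def lockin_amplitude(signal, fs, f_mod):
    """Digital lock-in: mix the detector current with quadrature
    references at the modulation frequency, then low-pass (here a
    simple mean over the record) to recover the modulated amplitude
    while rejecting DC offsets and off-frequency noise."""
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_mod * t))
    q = np.mean(signal * np.sin(2 * np.pi * f_mod * t))
    return 2.0 * np.hypot(i, q)
```

Averaging over an integer number of modulation periods makes the DC component of the GDD bias current drop out exactly, which is why modulated sources are used in the first place.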
Biometric recognition and surveillance systems based on visible-band imaging cameras encounter a number of limitations, such as the effects of lighting conditions and occlusions. As a result, there is interest in imaging beyond the visible spectrum in this field. Owing to characteristics such as transmission through common barrier materials, the use of millimeter- and submillimeter-wave systems in the field of biometrics has been proposed to overcome these difficulties. This paper discusses the possibility of biometric identification of individuals using face images acquired by an active terahertz imaging system operating at 340 GHz. The optical resolution at this frequency is sufficient to provide detailed face images containing the characteristic features of individual people. With the ability to penetrate clothing materials, these characteristics are expected to be preserved in concealed face images, which would allow recognition of individuals through clothing items worn around the face. Here, we examine the effect of concealment on THz images by comparing face images acquired with and without concealment, with a view to a future face recognition system operating at this frequency.
Criminal activity is increasingly entering the ocean subsurface, with acts such as illegal fishing and narco-submarining becoming points of contention. These and other illicit acts taking place in this domain imply a need for surveillance to render such activities apparent. However, subsurface underwater sensor networking, which is central to this surveillance, is still generations behind terrestrial networking, so monitoring subsurface activities remains challenging. The current transmission standard, acoustic communication, offers omnidirectional propagation and superior range, making it reliable, but its limited practical bandwidth and channel data rate leave it incapable of carrying video or other data-intensive sensor information. There is, however, an emerging technology based on optical (visible light) communication that can accommodate surveillance applications with superior data rates and energy savings. This investigation demonstrates that it is theoretically possible to achieve a network of underwater channels capable of sustaining a multimedia feed for monitoring subsurface activity using modern optical communication, in comparison to an acoustic network. In addition, a simple topology was investigated that shows how the range limitations of this signaling can be extended by adding floating relay nodes. Through simulations in Network Simulator 3 (NS-3)/Aquasim-NG, it is shown that visible-light wireless networks have a channel capacity high enough to carry out monitoring in strategic areas, referencing optical modems that are available on the market. This implies that data rates of 10 Mb/s are possible for real-time video surveillance.
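The bandwidth argument above can be made concrete with the Shannon-Hartley capacity bound: at equal SNR, capacity scales with channel bandwidth, which is what separates kHz-scale acoustic links from MHz-scale optical ones. The bandwidths and SNR below are illustrative assumptions for the sketch, not figures from the paper's simulations.

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley upper bound on channel data rate."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

# Illustrative: a ~10 kHz acoustic band vs a ~10 MHz optical band,
# both at an assumed 20 dB SNR.
acoustic_bps = shannon_capacity_bps(10e3, 20)
optical_bps = shannon_capacity_bps(10e6, 20)
```

Under these assumptions only the optical channel clears the ~10 Mb/s rate needed for real-time video, while the acoustic channel falls short by orders of magnitude.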
A speckle pattern is produced by the mutual interference of a set of coherent wavefronts. Speckle patterns typically occur in diffuse reflections of monochromatic light such as laser light. When a rough surface illuminated by coherent light is imaged, a speckle pattern is observed in the image plane. This study involves the quality assessment and authentication of security holograms and related foils by analyzing the speckle pattern generated from the specimen itself. Speckle patterns from various types of security holograms and foils are captured. By processing the image of the speckle pattern, the size of the speckles is analyzed using MATLAB software. By evaluating the size of the speckles generated, the feasibility of assessing the quality and authenticity of the security hologram is evaluated. The paper discusses the experimental setup, image capture, the processing method, and the results obtained in detail.
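One common way to estimate mean speckle size, which an analysis like the one above could follow, is from the width of the central peak of the intensity autocorrelation. This is a hedged sketch of that idea (in Python rather than the MATLAB the paper uses), assuming a single grayscale speckle image as input.

```python
import numpy as np

def mean_speckle_size(image):
    """Estimate mean speckle size (in pixels) as the width at half
    maximum of the central peak of the intensity autocorrelation."""
    img = np.asarray(image, dtype=float)
    img = img - img.mean()
    # Autocorrelation via the Wiener-Khinchin theorem
    power = np.abs(np.fft.fft2(img)) ** 2
    acf = np.fft.fftshift(np.real(np.fft.ifft2(power)))
    acf = acf / acf.max()
    # Count pixels above half maximum along the central row
    row = acf[acf.shape[0] // 2]
    return int(np.count_nonzero(row >= 0.5))
```

Larger speckles give a wider autocorrelation peak, so this single number can serve as the feature compared across genuine and counterfeit specimens.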
Fingerprints provide important clues for criminal investigations. Although there are various fingerprint detection methods, such as powders or liquids, optical methods are useful for non-contact and non-destructive detection. However, in the case of two or more overlapping fingerprints, the prints might be discarded because the features cannot be assigned to the individual fingerprints. It is well known that the composition of fingerprints is unique to each individual, so if this causes differences in the inherent emission spectra of fingerprints, it should be possible to separate overlapping fingerprints. Hyperspectral imaging is used in a variety of fields, including forensic science applications such as fingerprint detection. In this study, the separation of overlapping fingerprints using multivariate analysis was performed for the effective use of fingerprints. Fluorescence hyperspectral data of overlapping fingerprints excited by a 532 nm CW laser were acquired by hyperspectral imaging in the visible region. Fluorescence spectra from fingerprints were measured in the wavelength range from 560 to 700 nm with a wavelength resolution of 1.1 nm. Thus, the hyperspectral data cube consisted of 600 (image) × 960 (image) × 128 (wavelength) pixels. An image integrated over the wavelength range showed the two fingerprints overlapping each other. Separation of the overlapping fingerprints was attempted by applying principal component analysis, multivariate curve resolution – alternating least squares analysis, and partial least squares analysis to the fluorescence hyperspectral data. Among the three methods examined herein, partial least squares analysis was found to be the most effective for fingerprint separation.
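All three multivariate methods start by unfolding the (rows × columns × bands) data cube into a pixels × bands matrix. A minimal PCA-via-SVD sketch of that first step is shown below; the array shapes and synthetic spectra are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def pca_score_images(cube, n_components=2):
    """Unfold an (H, W, B) hyperspectral cube into a (pixels x bands)
    matrix, run PCA via SVD on the mean-centered data, and refold the
    component scores into (H, W, n_components) score images."""
    h, w, b = cube.shape
    x = cube.reshape(h * w, b).astype(float)
    x = x - x.mean(axis=0)          # center each band
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]
    return scores.reshape(h, w, n_components)
```

If the two fingerprints have distinct emission spectra, their spatial footprints separate into different score images, which is the premise the abstract relies on.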
It is important to combat fraud involving travel documents (e.g., passports), identity documents (e.g., ID cards), and breeder documents (e.g., birth certificates) to facilitate the travel of bona fide travelers and to prevent criminal cross-border activities such as terrorism, illegal migration, smuggling, and human trafficking. However, it is challenging and time-consuming to verify all document security features manually. New technologies can assist in automated fraud detection in these documents, which may result in faster and more consistent checks. This paper presents and evaluates four new technologies in automated document analysis. The first recognizes printing techniques. The second assists in the recognition of fraud in document details. The third extracts information from the document that can be used to detect anomalies at a tactical level. The fourth concerns the analysis of travel patterns, using information from the visa pages in passports. The performance of each element is assessed with quantitative performance metrics.
One of the most difficult challenges in counter-terrorism, crime fighting, and surveillance missions is to accurately identify people from image or video footage in order to catch shortlisted terrorists and criminals. For this purpose, the imaging devices used in video surveillance systems are being improved in many aspects, such as spatial resolution, frame rate, dynamic range, and spectral characteristics, to achieve better imaging performance for both monitoring and automatic detection/recognition tasks. These development efforts aim to improve the basic imaging characteristics of the device, such as the average brightness of the video footage, irrespective of high-level semantic knowledge such as the presence and locations of monitored objects or individuals in the scene. Nevertheless, strong local and global illumination variations in harsh environments result in high dynamic range in the scene and thus over- or underexposed regions in video frames. In this study, a cognitive imaging system prototype that allows higher face recognition performance by taking into account the locations of automatically detected human faces and the local illumination conditions and dynamic range is proposed, built on a smart camera developed and produced for this work. This is achieved with a smart auto-exposure (AE) algorithm that uses only the region of interest (ROI) to measure the brightness of the video frame. The current ROI is selected by intelligently and sequentially traversing the detected faces to effectively handle all faces in the scene. The experimental results show that the proposed "cognitive" camera achieves a dramatic increase in face recognition accuracy over a normal camera.
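The core of the ROI-based AE step can be sketched as computing an exposure correction from the detected face box alone, ignoring the rest of the frame. The frame layout, target level, and gain form below are assumptions for the sketch, not the camera's actual control law.

```python
import numpy as np

def roi_exposure_gain(frame, roi, target=0.5):
    """ROI-based auto-exposure sketch: measure mean brightness inside
    the detected face box only (frame values assumed in [0, 1]) and
    return the multiplicative correction that would bring that region
    to the target level, regardless of the background."""
    x, y, w, h = roi
    patch = frame[y:y + h, x:x + w]
    return target / max(float(patch.mean()), 1e-6)
```

Metering only the face region is what keeps a face correctly exposed even when the surrounding scene is much darker or brighter.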
Person re-identification (Re-ID) can be used to find the owner of lost luggage, to find suspects after a terrorist attack, or to fuse multiple sensors. Common state-of-the-art deep-learning technology performs well on large public datasets, but it does not generalize well to other environments, which makes it less suitable for practical applications. In this paper, we present and evaluate a new strategy for rapid Re-ID retraining to increase flexibility for deployment in new environments. In addition, we pay special attention to making our method work with anonymized data, due to the sensitive nature of the collected data. A training set with anonymized snippets is automatically collected using additional cameras and person tracking. The evaluation results show that this rapid training approach obtains high performance scores.
The increasing complexity of security challenges requires Law Enforcement Agencies (LEAs) to have improved analysis capabilities, e.g., with the use of Artificial Intelligence (AI). However, it is challenging to make sufficiently large, high-quality training and testing datasets available to the community that is developing AI tools to support LEAs in their daily work. Due to legal and ethical issues, it is often undesirable to share raw data containing personal information. These issues can lead to a chicken-and-egg problem, where annotation/anonymization and the development of an AI tool depend on each other. This paper presents a federated tool for semi-automatic anonymization and annotation that facilitates the sharing of AI models and anonymized data without sharing raw data containing personal information. The tool uses federated learning to jointly train object detection models, reaching higher performance by combining the annotation efforts of multiple organizations. These models are used to assist a person in anonymizing or annotating image data more efficiently, with human oversight. The results show that our privacy-enhancing federated approach – where only models are shared – is almost as good as a centralized approach with access to all data.
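The model-sharing step can be illustrated with the standard federated-averaging (FedAvg) rule, where each organization contributes parameters weighted by its local dataset size and raw images never leave the organization. This is a generic sketch of the aggregation rule, not the paper's training code.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: size-weighted average of per-client
    parameter lists, so only model weights (never raw data with
    personal information) are exchanged between organizations."""
    total = float(sum(client_sizes))
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]
```

In each round, clients train locally, send updated weights to the aggregator, and receive the averaged model back; the weighting by dataset size is what lets a small partner benefit from a large partner's annotation effort.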
Recently, adversarial patches have been used successfully to fool object detectors, either by hiding a specific object or by suppressing almost all relevant detections in an image. Although there are various ways to harden against or identify such attacks in the visual spectrum, only a small fraction of this work actually evaluates these mechanisms on thermal infrared input data. Thermal infrared object detectors and classifiers cannot be fooled with pixel-optimized adversarial patches, but they are still prone to Gaussian-function patches. This paper (I) investigates two methods for hardening real-time infrared object detectors against adversarial patches. One of these methods is our novel (II) APMD, an extension of an existing adversarial robustness mechanism that relies on (unsupervised) adversarial training to clear adversarial patches for deep-learning object detectors in the infrared spectrum. We therefore (III) generate adversarial patches that fool object detectors in the infrared spectrum in three different ways and evaluate them with real-world data recorded with the experimental platform MODISSA. Our results show that the hardened system is fast enough to be used in a real-time environment and successfully detects and inhibits adversarial attacks.
Tracking with a Pan-Tilt-Zoom (PTZ) camera has been a research topic in computer vision for many years. Compared to tracking with a fixed camera, the images captured with a PTZ camera are highly dynamic, and tracking becomes difficult under realistic conditions such as fast camera movements, occlusion, and objects similar to the tracked target. Compensating for these problems is even more complex on edge systems. With the increasing availability of small single-board computers with high parallel processing power, tracking objects in real time using an onboard computer has become feasible. Although these onboard computers can execute a wide variety of computer vision methods, there is still a need to optimize these methods for running time and power consumption. This paper proposes a low-CPU-consumption hybrid application for detecting and tracking surveillance targets at the edge. To detect the target initially, and whenever the track is lost, we use the deep-learning-based YOLOv3 model, which provides one of the best trade-offs between speed and accuracy in the literature. A kernelized correlation filter (KCF) is used to track the detected object in real time. Combining these two algorithms provides high accuracy and speed even on onboard computers. Under real-time streaming conditions, the proposed method yields better tracking accuracy than the original KCF and outperforms a deep-learning-based tracker when a target moves rapidly.
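The detect-then-track loop described above can be sketched as a small state machine: the expensive detector runs only at startup or after track loss, while the cheap correlation-filter tracker handles every other frame. The `detector` and `tracker_factory` callables below stand in for YOLOv3 and KCF; they are assumed interfaces for the sketch, not real library APIs.

```python
class HybridTracker:
    """Sketch of a hybrid detect-then-track loop. `detector(frame)`
    returns a bounding box or None; `tracker_factory(frame, box)`
    returns an object with an `update(frame)` method that returns the
    new box or None when the track is lost."""

    def __init__(self, detector, tracker_factory):
        self.detector = detector
        self.tracker_factory = tracker_factory
        self.tracker = None

    def process(self, frame):
        if self.tracker is None:
            box = self.detector(frame)           # expensive, run rarely
            if box is None:
                return None                      # nothing to track yet
            self.tracker = self.tracker_factory(frame, box)
            return box
        box = self.tracker.update(frame)         # cheap per-frame update
        if box is None:                          # track lost: re-detect
            self.tracker = None
            return self.process(frame)
        return box
```

Because the detector fires only on the first frame and on track loss, the steady-state per-frame cost is just the correlation-filter update, which is what keeps CPU consumption low on single-board computers.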
The contribution of this paper is an evaluation of the chain of calculations for a behaviour recognition method: from person detection and centralized person tracking to a bi-directional long short-term memory (BiLSTM) network. The centralized person tracking fuses detections from distributed and multimodal sensors. The BiLSTM learns long-time dependencies in the tracking data sequences. We use experimental sensor data from visual and thermal infrared sensors. The sensor data describe five scenarios with people performing normal and abnormal behaviours. The results indicate that the mean recognition accuracy is rather high. However, with position as the only input, the robustness of the method is rather low. The robustness increases when velocity is added to the dataset: velocity carries important information, even though it appears very noisy when visualized in diagrams. Furthermore, the BiLSTM is compared with the unidirectional long short-term memory (LSTM) and the gated recurrent unit (GRU).
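Adding velocity to the dataset amounts to augmenting each (T, 2) position track with per-step finite differences before feeding it to the recurrent network. A minimal sketch follows; the feature layout (positions first, velocities appended) is an assumption for illustration, not necessarily the paper's exact format.

```python
import numpy as np

def add_velocity_features(track):
    """Augment a (T, 2) position sequence with finite-difference
    velocities, yielding a (T, 4) feature sequence. The first step
    has no predecessor, so its velocity is set to zero."""
    track = np.asarray(track, dtype=float)
    vel = np.zeros_like(track)
    vel[1:] = np.diff(track, axis=0)   # per-step displacement
    return np.hstack([track, vel])
```

Even a noisy velocity channel gives the network a direct cue about motion magnitude and direction that it would otherwise have to infer across time steps, which is consistent with the robustness gain reported above.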
Image classification tasks leverage CNNs to yield accurate results that supersede their predecessor hand-crafted algorithms. Applicable use cases include autonomous systems, face recognition, medical imaging, and more. Along with the growing use of AI image classification applications, we see emerging research on the robustness of such models to adversarial attacks, which take advantage of the unique vulnerabilities of Artificial Intelligence (AI) models to skew their classification results. While not visible to the Human Visual System (HVS), these attacks mislead the algorithms and yield wrong classification results. To be incorporated securely enough in real-world applications, AI-based image classification algorithms require protection that increases their robustness to adversarial attacks. We propose replacing the commonly used Rectified Linear Unit (ReLU) activation function (AF), which is piecewise linear, with non-linear AFs to increase robustness to adversarial attacks. This approach has been considered in recent research and is motivated by the observation that non-linear AFs tend to diminish the effect of adversarial perturbations in the DNN layers. To validate the approach, we applied Fast Gradient Sign Method (FGSM) and HopSkipJump (HSJ) attacks to a classification model trained on the MNIST dataset. We then replaced the AF of the model with non-linear AFs (Sigmoid, GELU, ELU, SELU, and Tanh). We conclude that while attacks on the original model have a 100% success rate, the attack success rate drops by an average of 10% when non-linear AFs are used.
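The FGSM perturbation itself is a one-line rule: move each input pixel by a small step epsilon along the sign of the loss gradient with respect to the input. The sketch below shows only that step, assuming the gradient has already been computed elsewhere; the model and gradient computation for the MNIST classifier are omitted.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.1):
    """FGSM step: shift each input pixel by epsilon along the sign of
    the input gradient of the loss, clipped back to the valid [0, 1]
    pixel range. The perturbation is bounded by epsilon in L-infinity
    norm, which is what keeps it invisible to the human eye."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)
```

The robustness comparison above then amounts to measuring how often this bounded perturbation flips the prediction of the model with each choice of activation function.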
We exploit micro-nano structuring to achieve multifunctional windows offering outstanding optical and fluidic properties that enhance the operation of surveillance or detection devices under rainy conditions. These windows are based on the synthesis of an artificial index gradient for antireflection, together with improved water repellency, thanks to structuring at a subwavelength scale with controlled conical geometries. We demonstrate the realization of multifunctional germanium windows for LWIR cameras using two approaches: nanoimprint lithography, well known for its very high resolution enabling applications from the visible to the thermal infrared domain, followed by etching; and 3D direct laser writing based on two-photon polymerization (TPP), which is of interest for its ability to manufacture complex 3D structures directly. Optical characterization shows the ability of such windows to improve optical transmission within the 8–14 μm spectral range compared with a non-structured window. In terms of water repellency, the structured windows increase the contact angle up to 160° with very low hysteresis. To evaluate the advantage of the multifunctional windows for imaging devices, the windows were integrated in front of a thermal infrared camera; image analysis shows that, in the presence of water, camera sensitivity is increased for the nanoimprinted window thanks to its multifunctional structuring and high water repellency.
Images captured in surveillance systems suffer from low contrast and faint color. Recently, many dehazing algorithms have been proposed to enhance visibility and restore color. We present a new image enhancement algorithm based on multi-scale block-rooting processing. The basic idea is to apply a frequency-domain image enhancement approach at different image block scales. The transform-coefficient enhancement parameter for each block is obtained by optimizing a measure of enhancement, the premise being that enhancing the contrast of an image creates more high-frequency content in the enhanced image than in the original. To test the performance of the proposed algorithm, the public database O-HAZE is used.
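The block-rooting idea can be illustrated on a single block: raise the DFT magnitudes, normalized to the DC term, to a power alpha < 1 while keeping the phase, which boosts high-frequency content relative to the image mean. This is a hedged single-block sketch of alpha-rooting, not the paper's multi-scale pipeline with its optimized per-block parameters.

```python
import numpy as np

def alpha_root_enhance(block, alpha=0.9):
    """Alpha-rooting sketch: for each DFT coefficient, replace the
    magnitude m with dc * (m / dc) ** alpha (dc = zero-frequency
    magnitude) and keep the phase. With alpha < 1 every AC magnitude
    grows relative to the DC term, raising contrast."""
    f = np.fft.fft2(np.asarray(block, dtype=float))
    mag, phase = np.abs(f), np.angle(f)
    dc = mag[0, 0] if mag[0, 0] > 0 else 1.0
    new_mag = dc * (mag / dc) ** alpha
    return np.real(np.fft.ifft2(new_mag * np.exp(1j * phase)))
```

A measure of enhancement evaluated on the output, as described above, would then be used to pick the best alpha per block and scale.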
Image enhancement refers to processing images to make them more suitable for display or further image analysis. An enhancement procedure improves subsequent automated image-processing steps (detection, segmentation, and recognition) for efficient system decision-making. This paper presents a new method of visual surveillance image enhancement that improves the visual quality of digital images exhibiting dark shadows due to the limited dynamic range of imaging. The proposed method is based on a 3-D block-rooting multi-scale transform-domain technique, comprising: finding similar blocks in the image by block-matching; grouping blocks of different sizes; applying 3-D block-matching parametric image enhancement; calculating a quality measure of enhancement; optimizing the parameters of the enhancement method through this quality measure; and fusing the differently enhanced images. Experimental results on the test data set show that the proposed technique performs well and improves quality while sharpening image details.
Augmented Reality (AR) applications demand realistic rendering of virtual content in a variety of environments, so they require an accurate description of the 3-D scene. In most cases the AR system is equipped with Time-of-Flight (ToF) cameras to provide real-time scene depth maps, but these cameras have problems that affect the quality of the depth data, which ultimately makes them difficult to use for AR. Such defects appear because of poor lighting or the specular or fine-grained surfaces of objects. As a result, object boundaries appear enlarged, and overlapping objects become impossible to distinguish from one another. The article presents an approach based on a modified algorithm for searching for similar blocks using the concept of the anisotropic gradient. The proposed modified exemplar-based algorithm uses an autoencoder-learned local image descriptor for image inpainting: the encoder extracts image features, and a decoding network reconstructs the depth image. The encoder consists of a convolutional layer and a dense block, which itself consists of convolutional layers. We also show an application of the proposed vision system, using depth inpainting for virtual content reconstruction in augmented reality. Analysis of the results shows that the proposed method correctly restores the boundaries of objects in the depth map. Our system quantitatively outperforms state-of-the-art methods in terms of reconstruction accuracy on real and simulated benchmark datasets.