In recent years, intelligent driving navigation and security monitoring have made considerable progress with the help of deep Convolutional Neural Networks (CNNs). As one of the state-of-the-art perception approaches, semantic segmentation unifies distinct detection tasks that are widely desired in both autonomous driving and security monitoring. Semantic segmentation currently shows remarkable efficiency and reliability in standard scenarios such as daytime scenes with favorable illumination. However, when faced with adverse conditions such as nighttime, its accuracy declines significantly. One of the main causes of this problem is the lack of sufficient annotated segmentation datasets of nighttime scenes. In this paper, we propose a framework that uses Generative Adversarial Networks (GANs) to alleviate the accuracy decline when semantic segmentation is applied in adverse conditions. To bridge the daytime and nighttime image domains, we make the key observation that, compared with datasets captured in adverse conditions, there is a considerable amount of segmentation data captured in standard conditions, such as BDD and our collected ZJU datasets. Our GAN-based nighttime semantic segmentation framework includes two methods. In the first method, GANs translate nighttime images to the daytime domain, so that semantic segmentation can be performed with robust models already trained on daytime datasets. In the second method, GANs translate varying proportions of the daytime images in the dataset to the nighttime domain while their labels are retained; in this way, synthetic nighttime segmentation datasets can be generated to train models that operate robustly in nighttime conditions. In our experiments, the latter method significantly boosts nighttime performance, as evidenced by quantitative results in terms of Intersection over Union (IoU) and pixel accuracy (Acc). We show that performance varies with the proportion of synthetic nighttime images in the dataset, and that a sweet spot yields the most robust performance across day and night. The proposed framework not only contributes to the optimization of visual perception in intelligent vehicles, but can also be applied to diverse navigational assistance systems.
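The second method described above amounts to a label-preserving data augmentation: a chosen fraction of the labeled daytime images is passed through a day-to-night generator while the annotations are kept. The sketch below illustrates this idea under stated assumptions; the dataset wrapper, the generator `G_day2night`, and the ratio parameter are hypothetical placeholders, not the authors' implementation.

```python
# A minimal sketch of the second method: translate a fixed fraction of a
# labeled daytime dataset to the nighttime domain with a pre-trained
# day->night generator, keeping the original labels. All names here
# (MixedDayNightDataset, g_day2night, night_ratio) are illustrative.
import random
import torch
from torch.utils.data import Dataset

class MixedDayNightDataset(Dataset):
    """Wraps a labeled daytime dataset; a fixed fraction of its images
    are replaced by GAN-translated nighttime versions (labels unchanged)."""

    def __init__(self, day_dataset, g_day2night, night_ratio=0.5, seed=0):
        self.day = day_dataset        # yields (image_tensor, label_tensor)
        self.g = g_day2night.eval()   # frozen day->night generator
        rng = random.Random(seed)
        n = len(day_dataset)
        # Choose which indices become synthetic nighttime images.
        self.night_idx = set(rng.sample(range(n), int(night_ratio * n)))

    def __len__(self):
        return len(self.day)

    def __getitem__(self, i):
        image, label = self.day[i]
        if i in self.night_idx:
            with torch.no_grad():
                # Translate to the night domain; the semantic layout,
                # and hence the label, is assumed to be preserved.
                image = self.g(image.unsqueeze(0)).squeeze(0)
        return image, label
```

In this reading, finding the sweet spot reported in the abstract would correspond to sweeping `night_ratio` (e.g., 0.0, 0.25, 0.5, ...) and retraining the segmentation model on each mix, then comparing IoU and pixel accuracy on daytime and nighttime test sets.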
To study the properties and performance of LEDs as RGB primary light sources for color mixture in visual psychophysical experiments, and to identify the differences between LED light sources and traditional light sources, we built a visual color matching experiment system based on LED light sources as RGB primaries. By simulating the traditional metameric color matching experiment of the CIE 1931 RGB color system, the system can be used in visual color matching experiments to obtain a set of spectral tristimulus values, commonly called color-matching functions (CMFs). The system consists of three parts: a monochromatic light part using a blazed grating, a light mixing part in which the sum of the three LED illuminations is visually matched with a monochromatic illumination, and a visual observation part. The three narrow-band LEDs have dominant wavelengths of 640 nm (red), 522 nm (green), and 458 nm (blue), and their intensities can be controlled independently. After calibrating the wavelength and luminance of the LED sources with a spectrophotometer, a series of visual color matching experiments was carried out by five observers. The results are compared with those of the CIE 1931 RGB color system and have been used to compute an average locus of the spectral colors in the color triangle, with white at the center. The results show that the use of LEDs is feasible and offers the advantages of easy control, good stability, and low cost.
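Once each monochromatic test light has been matched by a mixture of the three LED primaries, the matched intensities act as tristimulus values, and the spectral locus in the color triangle follows from their chromaticity coordinates. The sketch below shows this standard computation; the numeric values are illustrative placeholders, not measured data from the experiment.

```python
# A minimal sketch of turning matched tristimulus values into
# chromaticity coordinates for the spectral locus. The (R, G, B)
# numbers below are hypothetical, not results from this paper.
def chromaticity(R, G, B):
    """CIE RGB chromaticity coordinates r, g (with b = 1 - r - g).

    R, G, B are tristimulus values: the calibrated amounts of the red
    (640 nm), green (522 nm), and blue (458 nm) primaries an observer
    needed to match a monochromatic test light. A negative value means
    that primary had to be added to the test side of the field, as in
    the classic CIE 1931 matching experiments.
    """
    s = R + G + B
    return R / s, G / s

# Each test wavelength yields one (r, g) point; sweeping the wavelength
# traces the spectral locus, with the equal-energy white point at
# r = g = 1/3 near the center of the color triangle.
matches = {  # wavelength (nm): hypothetical (R, G, B) tristimulus values
    500: (-0.072, 0.085, 0.048),
    580: (0.245, 0.118, 0.0003),
}
for wl, rgb in matches.items():
    r, g = chromaticity(*rgb)
    print(f"{wl} nm: r = {r:.3f}, g = {g:.3f}")
```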