When watching a stereoscopic three-dimensional (S3D) display, most people feel visual discomfort caused by color asymmetry between the left and right eyes. However, a stereo pair consisting of one gray image and one color image can be perceived by a human observer as a 3D color scene, with no degradation of depth perception and only limited color degradation. This novel presentation approach for stereoscopic display can reduce the redundancy of color information, improve video compression and greatly save transmission bandwidth. It may also alleviate the visual comfort problem of color-asymmetric stereoscopic content by reducing the load on the visual system. In this paper, visual comfort was evaluated for stereoscopic videos with different gray-color allocation schemes. Three allocation schemes were used to investigate changes in the visual comfort of stereoscopic video. The subjective evaluation results show that different binocular color allocation schemes affect the visual comfort assessment (VCA) scores. Among them, the scheme in which the left half of every frame is in color and the right half is gray may reduce the amount of color information processed by the visual brain and therefore yields a more comfortable viewing experience. We demonstrated that this color allocation scheme can also reduce video flicker, and that the visual comfort of stereoscopic content with only half the color information is within the acceptable range.
Accurate forecasting of solar irradiance is significant for related domains, because accurate forecasts help researchers plan the management and application of solar energy in power stations and power plants. In this paper, an approach to solar irradiance forecasting based on an artificial neural network (ANN) is adopted. The dataset, covering April 1st, 2017 to May 31st, 2018, was measured by the meteorological station at Yunnan Normal University. A multilayer perceptron (MLP) model and variables such as daily solar irradiance, air humidity and relevant time parameters are employed to forecast solar irradiance over the next 24 h. Moreover, cross-validation is used to guarantee the robustness of the experimental results. The results show that the normalized root mean square error (nRMSE) between the measured and forecasted data is about 1.8–20.07% (1.8–10.6% for sunny days, 11.6–20.07% for cloudy days). Compared with the ANN model, the nRMSE of the K-Nearest Neighbor (KNN), Linear Regression (LR), Ridge Regression (RR), Lasso Regression, Auto-Regressive Moving Average (ARMA) and Decision Tree Regression (DTR) models is 35%, 31%, 30%, 26%, 23% and 11% (unstable), respectively. This means that the performance of our model satisfies the related applications.
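The abstract's skill metric can be sketched as follows. The abstract does not state which nRMSE convention was used, so normalizing by the mean of the measured series is an assumption (other papers normalize by the range or the maximum), and the irradiance values are synthetic.

```python
import numpy as np

def nrmse(measured, forecast):
    """Normalized RMSE as a percentage. Normalization by the mean of the
    measured series is an assumption; conventions vary between papers."""
    measured = np.asarray(measured, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    rmse = np.sqrt(np.mean((forecast - measured) ** 2))
    return 100.0 * rmse / measured.mean()

# Toy hourly irradiance values (W/m^2) for a clear day, purely illustrative
measured = np.array([0, 120, 350, 600, 780, 820, 760, 560, 300, 90])
forecast = np.array([0, 130, 340, 620, 770, 830, 740, 570, 310, 100])
print(round(nrmse(measured, forecast), 2))
```

A forecast tracking the measured curve this closely lands in the low single digits, the regime the abstract reports for sunny days.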
Visible and near-infrared spectral reflectances of surface vegetation are basic data for applications in remote sensing classification, multispectral imaging and color reproduction. Leaves are the objects of this study. First, the 400–700 nm visible and 700–1000 nm near-infrared spectral reflectance data of 12 kinds of trees, such as camphor, ginkgo and peach, were measured with visible and near-infrared portable hyperspectral cameras. The spectral reflectance data were denoised using the Minimum Noise Fraction (MNF) transform. Second, Principal Component Analysis (PCA) was used to process the spectral reflectance in the visible and near-infrared bands. Finally, correlation analysis was applied to the spectral reflectance in the two bands. The data and results obtained provide a theoretical basis for the subsequent establishment of a spectral reflectance database of surface vegetation for spectroscopy and multispectral imaging.
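The PCA step can be sketched as below. The reflectance matrix here is synthetic (a crude red-edge shape plus noise stands in for the measured leaf spectra); the paper's actual MNF-denoised data is not available from the abstract.

```python
import numpy as np

# Hypothetical reflectance matrix: rows = 12 leaf samples, columns =
# wavelengths 400-1000 nm at 10 nm steps; values are synthetic.
rng = np.random.default_rng(0)
wavelengths = np.arange(400, 1001, 10)
base = 0.3 + 0.4 * (wavelengths > 700)           # crude green-leaf red edge
spectra = base + 0.02 * rng.standard_normal((12, wavelengths.size))

# PCA via SVD of the mean-centered data
centered = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)                  # variance fraction per PC
scores = centered @ Vt[:3].T                     # first 3 principal components
print(scores.shape)
```

The rows of `Vt` are the principal spectral directions; projecting each leaf spectrum onto the first few of them gives a compact representation suitable for the correlation analysis the abstract describes.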
Image registration has always been a hot topic in image research, and mutual information registration has become a commonly used method because of its high precision and good robustness. Unfortunately, it has a problem with infrared and visible image registration. The visible light band usually provides rich background detail, while the infrared image can locate an object (heat source) with a higher temperature but often cannot capture background information. The large difference in background information between the two images not only interferes with the accuracy of the registration algorithm but also adds considerable computation. In this paper, fuzzy c-means clustering is used to separate foreground from background, which reduces background interference in registration; this exploits the fact that infrared and visible images are highly consistent in the target area but differ greatly in the background. Then, the mutual information of the foreground regions marked by the clustering algorithm is calculated as the similarity measure to achieve registration. Finally, the algorithm is tested on infrared and visible images acquired in practice. The results show that the two images are registered accurately, verifying the effectiveness of the method.
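The similarity measure itself can be sketched as below: mutual information estimated from the joint histogram of two equally sized grayscale images. This is the standard textbook estimator, not the paper's code, and the test images are random.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally sized grayscale images,
    estimated from their joint intensity histogram. A standard similarity
    measure for multimodal registration (illustrative sketch)."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)          # marginal of image B
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = rng.random((64, 64))
# An image shares maximal information with itself, little with noise
print(mutual_information(a, a) > mutual_information(a, b))
```

In the paper's scheme this measure would be evaluated only over the foreground mask produced by fuzzy c-means, which is what suppresses the background interference.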
KEYWORDS: 3D displays, Visualization, Eye, Stereo holograms, Color vision, Lutetium, Autostereoscopic displays, Information science, Information technology, Brain
Stereoacuity, also called stereoscopic acuity, is the minimum disparity that can be perceived by someone with two eyes and normal brain function. It is highly relevant to human stereopsis and shows considerable individual variability. Because the contribution of color information to stereopsis is controversial, this study focuses on designing and conducting a stereoacuity test for different colors. In particular, the effect of color variation on stereoacuity was evaluated by using 3D displays to present color random-dot stereogram (RDS) stimuli. Seventeen color points sampled from the CIELAB color space were selected for the test. The sample color points are evenly distributed along the red-green and yellow-blue directions at isoluminance. The stimuli had the same dot density of 50% and a black background, with different colors and disparities. The minimum perceivable disparity was then taken as the subject's stereoacuity. The experimental results show that stereoacuities do not differ significantly along the red-green and yellow-blue directions. These results support the view that color does not contribute to stereoacuity.
Depth measurement is the most basic measurement in machine vision applications such as automatic driving, unmanned aerial vehicles (UAVs) and robots, and it has a wide range of uses. With the development of image processing technology and improvements in hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms for dual camera calibration, image matching and depth calculation have been studied and implemented on this platform, and the hardware design and the rationality of the system's algorithms are tested. The experimental results show that the system can simultaneously acquire binocular images, switch between left and right video sources, and display the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system reaches up to 25 fps. The optimal measurement range of the system is 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11 and DMCU hardware platforms, the embedded AM5728 meets real-time depth measurement requirements while preserving image resolution.
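The depth calculation step rests on the standard triangulation relation Z = f·B/d for a rectified stereo pair. The focal length and baseline values below are illustrative assumptions, not the parameters of the paper's camera rig.

```python
def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.06):
    """Depth Z = f * B / d for a calibrated, rectified stereo pair.
    focal_px (focal length in pixels) and baseline_m are hypothetical
    values chosen for illustration."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With f = 800 px and B = 6 cm, a 48 px disparity corresponds to 1 m
print(depth_from_disparity(48.0))
```

The inverse relationship also explains the reported 0.5–1.5 m optimal range: beyond it, disparities shrink toward the matching error of the low-cost cameras and the relative depth error grows.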
KEYWORDS: Image fusion, Near infrared, RGB color model, Denoising, Detection and tracking algorithms, Visible radiation, Image analysis, Image processing, Color imaging, Algorithm development
In a low-light scene, capturing color images requires a high-gain or long-exposure setting to avoid a visible flash. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. One recent method estimates the luminance and chroma components of the improved color image from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component is challenging: that method needs to generate learning data pairs, its processing and algorithm are complex, and it is difficult to apply in practice. To reduce the complexity of luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion quality in color fidelity and texture as the previous method, while the algorithm is simpler and more practical.
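The weighting idea can be sketched as below. The abstract says only that the coefficients come from the mean and standard deviation of the two images; the specific formula here (std-proportional weights, then a mean-level correction) is an assumption for illustration, not the paper's formula.

```python
import numpy as np

def fuse_luminance(nir, denoised_y):
    """Weighted luminance estimate from a NIR image and the luma of a
    denoised color image. Std-proportional weights (higher contrast ->
    larger weight) plus a mean-level shift are an assumed formula; the
    abstract does not give the paper's exact weighting."""
    w_nir = nir.std()
    w_y = denoised_y.std()
    fused = (w_nir * nir + w_y * denoised_y) / (w_nir + w_y)
    # Anchor brightness to the color image's mean level
    return fused + (denoised_y.mean() - fused.mean())

rng = np.random.default_rng(2)
nir = rng.random((32, 32))          # stand-in NIR flash luminance
y = 0.5 * rng.random((32, 32))      # stand-in denoised color luma
fused = fuse_luminance(nir, y)
print(fused.shape)
```

Whatever the exact coefficients, the appeal of this family of estimators is that it needs only two global statistics per image, avoiding the learned spectral-estimation pipeline of the earlier method.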
KEYWORDS: Visualization, 3D displays, Eye, Color difference, Color vision, Image fusion, CRTs, Stereoscopic displays, RGB color model, Information technology
Color asymmetry is a common phenomenon in stereoscopic display systems that can cause visual fatigue or visual discomfort. When the color difference between the left and right eyes exceeds a threshold value, named the binocular color fusion limit, color rivalry occurs. The most important information delivered by stereoscopic displays is the depth perception produced by disparity. Because the stereo pair stimuli are presented separately to the two eyes with disparities, and the two monocular stimuli differ in color but share an iso-luminance polarity, stereopsis and color rivalry can coexist. In this paper, we conducted an experiment to measure the color fusion limit at different disparity levels. In particular, it examines how the magnitude and sign of disparity affect the binocular color fusion limit that yields a fused, stable stereoscopic percept. The binocular color fusion limit was measured at five disparity levels (0, ±60 and ±120 arc minutes) for a sample color point selected from the 1976 CIE u'v' chromaticity diagram. The experimental results showed that the fusion limit for the sample point varied with the level and sign of disparity. Interestingly, the fusion limit increased as the disparity decreased in the crossed direction (sign −), but changed little in the uncrossed direction (sign +). We found that color fusion was more difficult to achieve at crossed disparities than at uncrossed disparities.
Color asymmetry is a common phenomenon in 3D displays that can cause serious visual discomfort. To ensure safe and comfortable stereo viewing, the color difference between the left and right eyes should not exceed a threshold value, named the comfortable color difference limit (CCDL). In this paper, we experimentally measured the CCDL for five sample color points selected from the 1976 CIE u'v' chromaticity diagram. A psychophysical experiment was conducted in which human observers viewed brief presentations of color-asymmetric image pairs. In these image pairs, left and right circular patches were horizontally shifted in image pixels at five disparity levels (0, ±60 and ±120 arc minutes) along six color directions. The experimental results showed that the CCDL for each sample point varied with disparity level and color direction. The minimum CCDL is 0.019 Δu'v' and the maximum is 0.133 Δu'v'. The database collected in this study may help 3D system design and 3D content creation.
To solve the problems that the information entropy of the image declines obviously and a visible boundary line appears when mosaicking aerial remote sensing images, an image mosaic approach is presented in this paper, which uses wavelet image fusion based on structural similarity and is capable of creating seamless mosaics in real time. The approach consists of three steps. First, the overlapping area of two aerial images is extracted. Then, the two overlapping area images are fused adaptively by multi-layer wavelet decomposition based on structural similarity and an appointed rule. Finally, weighted average fusion is applied again on both sides of the boundary of the fused image to avoid a visible boundary line. Experimental results show that the information entropy, sharpness and standard deviation are improved significantly, and the boundary line is eliminated.
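The final weighted-average step can be sketched as a linear cross-fade across the overlap region. The linear ramp is one common choice; the abstract does not specify the paper's exact weight profile, and the inputs here are constant test patches.

```python
import numpy as np

def blend_overlap(left, right):
    """Weighted-average blend across a horizontal overlap: the weight for
    the left image ramps from 1 to 0 left-to-right, suppressing a hard
    seam. Both inputs cover the same overlap area (same shape)."""
    h, w = left.shape
    alpha = np.linspace(1.0, 0.0, w)      # per-column weight for `left`
    return left * alpha + right * (1.0 - alpha)

left = np.full((4, 5), 10.0)
right = np.full((4, 5), 20.0)
out = blend_overlap(left, right)
print(out[0])   # ramps smoothly from the left value to the right value
```

In the paper this cross-fade is applied after the wavelet-domain fusion, so it only has to smooth the residual intensity step at the fusion boundary rather than reconcile the full image content.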
KEYWORDS: RGB color model, Cameras, Digital cameras, Image processing, Statistical modeling, Light sources and illumination, Digital imaging, Digital image processing, Environmental sensing, Matrices
The digital camera has become a requisite of daily life and is essential in imaging applications, so it is important to obtain accurate colors with a digital camera. The colorimetric characterization of a digital camera is the basis of image reproduction and the color management process. One traditional method of deriving a colorimetric mapping between camera RGB signals and CIEXYZ tristimulus values is polynomial modeling with a 3×11 polynomial transfer matrix. In this paper, an improved polynomial modeling is presented, in which normalized luminance replaces the camera's inherent RGB values in the traditional polynomial model. The improved modeling can be described as a two-stage model. In the first stage, the relationship between camera RGB values and normalized luminance, derived from the six gray patches of the X-rite ColorChecker 24-color card, is described as a "gamma", and camera RGB values are converted into normalized luminance using this gamma. In the second stage, the traditional polynomial modeling is adapted to the colorimetric mapping between normalized luminance and CIEXYZ. Moreover, the method can be used in a daylight lighting environment: even when users cannot measure the CIEXYZ values of the color target chart with professional instruments, they can still accomplish the colorimetric characterization of the digital camera. The experimental results show that: (1) the proposed method for the colorimetric characterization of a digital camera performs better than traditional polynomial modeling; (2) it is a feasible approach to handle the color characteristics under daylight without professional instruments, and the result can satisfy the requirements of simple applications.
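The two-stage pipeline can be sketched as below: a power-law "gamma" linearization followed by a least-squares fit of an 11-term polynomial per XYZ channel (giving a 3×11 matrix). The specific 11-term set and all data here are assumptions for illustration; the abstract does not list the paper's exact terms, so synthetic data generated from a known matrix is used to show that the fit recovers it.

```python
import numpy as np

def poly_terms(rgb):
    """11-term polynomial expansion of one pixel: a common choice for a
    3x11 camera characterization (the paper's exact term set is not
    given in the abstract)."""
    r, g, b = rgb
    return np.array([r, g, b, r*g, r*b, g*b, r*r, g*g, b*b, r*g*b, 1.0])

rng = np.random.default_rng(3)
gamma = 2.2                                       # assumed stage-1 gamma
rgb_raw = rng.random((50, 3))                     # hypothetical camera RGB
rgb_lin = rgb_raw ** gamma                        # stage 1: linearization
true_M = rng.standard_normal((3, 11))             # unknown mapping to recover
A = np.array([poly_terms(p) for p in rgb_lin])    # 50 x 11 design matrix
xyz = A @ true_M.T                                # synthetic CIEXYZ targets
M, *_ = np.linalg.lstsq(A, xyz, rcond=None)       # stage 2: least squares
print(np.allclose(M.T, true_M))
```

In the real procedure, stage 1 would be fitted from the six gray patches of the ColorChecker and stage 2 from the chart's measured (or published) CIEXYZ values rather than synthetic ones.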
Illumination estimation is the main step in color constancy processing and an important prerequisite for digital color image reproduction and many computer vision applications. In this paper, a method for estimating the illuminant spectrum is investigated using a digital color camera and a color chart whose spectral reflectance is known. The method is based on measuring the CIEXYZ of the chart with the camera. The first step is to obtain the camera's color correction matrix and gamma values by photographing the chart under a standard illuminant. The second step is to photograph the chart under the illuminant to be estimated; the camera's inherent RGB values are converted to standard sRGB values and further to the CIEXYZ of the chart. Based on the measured CIEXYZ and the known spectral reflectance of the chart, the spectral power distribution (SPD) of the illuminant is estimated using Wiener estimation and smoothing estimation. To evaluate the performance of the method quantitatively, the goodness-of-fitting coefficient (GFC) was used to measure the spectral match, and the CIELAB color difference metric was used to evaluate the color match between color patches under the estimated and actual SPDs. A simulated experiment estimated CIE standard illuminants D50 and C using the X-rite ColorChecker 24-color chart, and an actual experiment estimated daylight and illuminant A using two consumer-grade cameras and the chart; the experimental results verified the feasibility of the investigated method.
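The smoothing-estimation step can be sketched as a Tikhonov-regularized inverse with a second-difference smoothness penalty: each patch's CIEXYZ gives three linear equations in the unknown SPD. All spectra below are synthetic stand-ins (real work would use the CIE color matching functions and the chart's measured reflectances), and the regularization weight is an assumed value; the GFC computed at the end is the abstract's spectral-match metric.

```python
import numpy as np

rng = np.random.default_rng(4)
n_wl, n_patch = 31, 24                           # 400-700 nm @ 10 nm, 24 patches
cmf = np.abs(rng.standard_normal((n_wl, 3)))     # stand-in for CIE CMFs
refl = rng.random((n_patch, n_wl))               # known patch reflectances
spd_true = 1.0 + 0.5 * np.sin(np.linspace(0, np.pi, n_wl))

# Each patch i contributes 3 equations: xyz_i = (cmf * refl_i)^T @ spd
A = np.vstack([(cmf * refl[i][:, None]).T for i in range(n_patch)])
b = A @ spd_true                                 # "measured" CIEXYZ, stacked

# Second-difference penalty keeps the estimated SPD spectrally smooth
D = np.diff(np.eye(n_wl), n=2, axis=0)
lam = 1e-3                                       # assumed smoothing weight
spd_est = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)

# Goodness-of-fitting coefficient: cosine similarity of the two spectra
gfc = (spd_est @ spd_true) / (np.linalg.norm(spd_est) * np.linalg.norm(spd_true))
print(round(float(gfc), 4))
```

Wiener estimation differs in that the regularizer comes from an a priori correlation matrix of plausible illuminant spectra rather than a generic smoothness operator, but the linear-inverse structure is the same.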
KEYWORDS: Colorimetry, Color difference, Modulation, Spatial frequencies, Human vision and color perception, CRTs, Color vision, Calibration, Contrast sensitivity, Visualization
Purpose: In this paper the chromatic contrast was defined as the CIEDE2000 color difference, the sensitivity was defined as the reciprocal of the color difference threshold, and the resulting CSFs, called color difference sensitivity functions, were measured. Methods: The CSFs of 4 subjects were measured for gratings at nine spatial frequencies (0.28, 0.56, 1.00, 1.97, 2.95, 4.72, 6.74, 11.80 and 15.74 cpd) with a mean luminance of 40 cd/m2 on a CRT display. Measurements were made for gratings whose average color was a chromatically neutral point (a* = 0 and b* = 0) and for modulations around four chromatic points along the a* and b* directions in the CIELAB color space. Results: The color difference thresholds range from 0.74 to 6.67 over the experimental frequencies. The color difference sensitivity functions agree with known results in that the CSF curves for the two chromatic directions are consistently low-pass, irrespective of the average color of the stimulus. The sensitivity to gratings in the b* direction is identical to that in the a* direction below 4.72 cpd; however, above 4.72 cpd the sensitivity in the b* direction is smaller than in the a* direction. This indicates that the CIEDE2000 threshold for gratings at lower frequencies (i.e., small color differences) is unrelated to the chromatic direction and chromatic point of modulation, whereas for large color differences the threshold does depend on them.