Open Access
Calibration algorithms for polarization filter array camera: survey and evaluation
5 March 2020
Abstract

A polarization filter array (PFA) camera is an imaging device capable of analyzing the polarization state of light in a snapshot manner. These cameras exhibit spatial variations, i.e., nonuniformity, in their response due to optical imperfections introduced during the nanofabrication process. Calibration is done by computational imaging algorithms to correct the data for radiometric and polarimetric errors. We reviewed existing calibration methods and applied them using a practical optical acquisition setup and a commercially available PFA camera. The goal of the evaluation is first to compare which algorithm performs better with regard to polarization error and then to investigate the influence of both the dynamic range and the number of polarization angle stimuli of the training data. To our knowledge, this has not been done in previous work.

1.

Introduction

The electric field that describes electromagnetic radiation can be described as a vector whose direction of oscillation is perpendicular to the direction of the wave. This geometrical description of waves is known as polarization.1 Reptiles and birds, for example, are capable of perceiving polarized information, whereas humans, like other mammals, are only sensitive to two properties of light: intensity and color. The analysis of the polarization scattered by a scene is known as polarimetric imaging; it yields information complementary to classical intensity imaging.2 A mathematical tool, the Stokes vector, practically and efficiently describes how polarization states are modified as light travels and interacts with different materials. Polarization analysis using polarimeter instruments has become increasingly popular in imaging applications, such as classification of materials,3 3D inspection and reconstruction,4 image dehazing,5 etc.

In general, polarimeters can be categorized into scanning or snapshot devices, none of which is exempt from tradeoffs and optical imperfections. Scanning polarimeters include the division-of-time (DoT) polarimeter, whereas snapshot polarimeters include the division-of-amplitude, division-of-aperture, and division-of-focal-plane (DoFP).6,7 Although each approach is arguably suitable for different applications, DoFP is the preferred choice for real-time imaging: it analyzes light polarization within one sensor integration period, which avoids the motion artifacts introduced by DoT polarimeters, and it offers lower cost, a simpler design, and higher compactness than other snapshot devices.

The DoFP polarimeters rely on the cell-to-cell coupling of a polarization filter array (PFA) to the imaging system’s focal-plane array. PFA technology is a derivative of the filter array imaging principle8,9 and was first patented in 1995,10 but most of the practical implementations and technology advances have been made since 2009. Manufacturing processes differ,11 but commercial sensors (like the SONY IMX250 MZR sensor) tend to have a standardized spatial arrangement with a repeating pattern of four linear polarizers with orientation axes of 0, 45, 90, and 135 deg12 (see Fig. 1). In this arrangement, each polarizer is located diagonally to its orthogonal counterpart, although other arrangements exist.13 Filter array imaging suffers from sparsity, i.e., each pixel senses only one polarization channel, which introduces instantaneous field-of-view errors14 when reconstructing 2D polarization scene information from sparse data. Evolved interpolation methods, analogous to those developed for the color and spectral domains, have emerged to compensate for these drawbacks.11 The acquisition of intensities through the four linear polarizers makes it possible to estimate the first three Stokes vector components of the input light. To sense circular polarization, i.e., the fourth Stokes component, additional optical elements must be combined with the PFA. Practical implementation of full-Stokes PFA instruments is at a very early stage. In this paper, we will only consider the linear analysis of polarization.

Fig. 1

The camera design considered in this paper is a polarization filter assembly over a monochrome sensor. The sensor matrix is composed of photodiodes and each polarization filter covers one sensor pixel.


Intrinsically, a silicon sensor (CMOS or CCD) has several sources of error; dark current, readout noise, and salt-and-pepper noise are typical examples. These are corrected by characterizing the fixed pattern noise, i.e., the fact that some pixels are prone to giving brighter values than others when illuminated with homogeneous light. PFA cameras exhibit the noise inherent to silicon sensors, but additional noise is introduced by the manufacturing optical imperfections of the filters. Each polarizer thus has its own optical characteristics, i.e., transmission, diattenuation, and polarization analysis orientation.15 This results in a pattern noise that can lead to spatial variations of digital values of up to 20% over the whole sensing area.16 A spatial calibration procedure is thus necessary to compensate for these nonuniformities.17,18 We believe that this step is crucial for numerous applications. The uncalibrated camera values could lead to false contrasts or large errors when computer vision algorithms are applied, e.g., material inspection, shape from polarization, index of refraction retrieval, or illuminant direction estimation.

In this paper, we will introduce the global PFA imaging model in Sec. 2, before reviewing calibration procedures in Sec. 3. After defining an experimental setup, we characterize and apply several existing calibration methods on raw data in Sec. 4. Then, we investigate more deeply a calibration procedure by evaluating independently two acquisition criteria in Sec. 5. Conclusion is provided in Sec. 6.

2.

PFA Imaging Model

In this section, we enunciate the polarization measurement model that comes with PFA cameras. The Stokes formalism provides a suitable depiction of the polarization states of light: four components, arranged in a vector, fully describe them. Since a PFA polarimeter is intrinsically a linear polarization analyzer composed of linear diattenuators, the input Stokes vector of light Sin that reaches the camera can be simplified as Sin = [S0, S1, S2, 0]^t. The Mueller matrix M, a 4×4 matrix, describes the alteration of the polarization characteristics by a material element; it can be seen as a transfer function of Stokes vectors, which are linearly transformed such that Sout = M Sin.

A photodetector array transduces luminous intensity into a camera response Ii, where i is the spatial index over pixels. Thus, only the top row of the Mueller matrix M needs to be known; it is also called the pixel’s analysis vector Ai = [a0,i, a1,i, a2,i, 0]. In other words, if the first rows of the corresponding Mueller matrices are known over the whole sensing area, the errors due to nonideal characteristics can be mitigated through calibration; to carry out precise measurements, the filters do not need to be ideal.
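
The top-row analysis vector can be sketched numerically as follows. This is a minimal illustration assuming the standard Mueller model of a linear diattenuator; the parameter names tau (overall transmission scale) and D (diattenuation) are ours, not from the paper.

```python
import numpy as np

def analysis_vector(theta_deg, tau=0.5, D=1.0):
    # Top row of the Mueller matrix of a linear diattenuator with
    # orientation theta, overall scale tau, and diattenuation D.
    # tau = 0.5 and D = 1 correspond to an ideal linear polarizer.
    t = np.deg2rad(2.0 * theta_deg)
    return tau * np.array([1.0, D * np.cos(t), D * np.sin(t), 0.0])

A0 = analysis_vector(0.0)    # ideal 0-deg polarizer: 0.5 * [1, 1, 0, 0]
A45 = analysis_vector(45.0)  # ideal 45-deg polarizer: 0.5 * [1, 0, 1, 0]
```

Lowering D below 1 models a non-ideal filter that leaks the orthogonal polarization, which is exactly the kind of deviation calibration must absorb.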

We can now define the imaging model that transforms the incoming Stokes vector into a per-pixel sensed value as follows:

Eq. (1)

Ii = Ai Sin.

A PFA camera has four different polarization angles of analysis, arranged in a 2×2 pattern. We can then assume a unique spatial position for the four adjacent polarizers and group the four analysis vectors into an array of vectors (the measurement matrix) that we call Wj, where j indexes the superpixel spatial position. Equation (1) can be extended to this specific configuration:

Eq. (2)

Ij = [Ij,0, Ij,45, Ij,90, Ij,135]^t = Wj Sin = [Aj,0; Aj,45; Aj,90; Aj,135] Sin.

Considering Eq. (2), the input Stokes vector can be estimated from intensities using the pseudo-inverse of Wj, assuming that Wj is known. In most cases, Wj matrices are not given by PFA manufacturers and must be estimated during a calibration step. We will see in the next section how this model is investigated, or even extended, in the literature to perform calibration of PFA cameras.
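
This pseudo-inverse reconstruction can be sketched as follows, assuming ideal analysis vectors for illustration (the real Wj must be estimated by calibration):

```python
import numpy as np

def measurement_matrix(angles_deg=(0, 45, 90, 135)):
    # Stack the analysis vectors of ideal linear polarizers row-wise.
    rows = []
    for th in angles_deg:
        t = np.deg2rad(2.0 * th)
        rows.append(0.5 * np.array([1.0, np.cos(t), np.sin(t), 0.0]))
    return np.array(rows)                  # shape (4, 4), rank 3

W = measurement_matrix()
Sin = np.array([1.0, 0.3, -0.2, 0.0])      # linearly polarized input
I = W @ Sin                                # Eq. (2): four superpixel intensities
S_hat = np.linalg.pinv(W) @ I              # pseudo-inverse reconstruction
```

Because the PFA analyzes linear polarization only, W has rank 3 and the pseudo-inverse returns 0 for the circular component; the three linear components are recovered exactly in this noiseless sketch.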

3.

Calibration Techniques

3.1.

Assumptions and General Procedure

Most of the techniques presented in this paper make several important assumptions about the sensor, the calibration optical setup, and the statistical behavior of the signal. One assumes that:

  • The sensor operates in the linear regime. Most of the calibration techniques also assume that there is no deviation in the cross-talk effect when coupling the PFA with the focal-plane array, due either to its position relative to the micro-lens array (below or above) or to the sensor orientation with respect to the incident light. Moreover, the lens configuration, i.e., f-number and focal length, is assumed fixed, and a calibration is valid only for that configuration; any later change in these parameters will deteriorate the calibration result.

  • There is no spectral dependency on W, i.e., retardance is flat over the range of wavelengths considered, and the reference polarizer used to generate training Stokes vectors is perfect (no diattenuation, retardance, or transmission problems19).

  • The acquisition during calibration is not corrupted by temporal noise (a mix of Poisson and Gaussian noise), and no flat-field procedure is needed to correct the training data for residual spatial deviations of the illuminant.

All the calibration techniques for PFA that we will present share the same global procedure:

  • 1. Inputting a set of light stimuli (training data) with known Stokes vectors to the polarimeter;

  • 2. Capturing a series of images for each input Stokes stimulus and averaging/adding them;

  • 3. Estimating the A vectors or W matrices, i.e., the polarization properties at each pixel or superpixel, by solving an inverse problem;

  • 4. Computing the gains and offsets from step 3 and applying them to correct raw values.

3.2.

Single-Pixel Calibration

Single-pixel calibration calibrates each pixel independently, without considering the polarization properties of its neighborhood. Powell and Gruev16 added the offset noise di to the model in Eq. (1) to account for additive noise during calibration:

Eq. (3)

Ii = Ai Sin + di.
Then, a calibration function is applied to pixels as follows:

Eq. (4)

Ii′ = gi (Ii − di),
where Ii′ is the corrected value and gi = Aideal/Ai is the normalized gain, the Aideal and Ai vectors being assumed to be collinear. As stated in Powell and Gruev,16 this assumption implies that only transmission errors can be compensated; the single-pixel calibration does not correct for diattenuation and orientation variations (rotational offset) across the PFA structure. Thus, the single-pixel method yields errors when calculating the angle and degree of polarization.
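
A minimal sketch of the correction in Eq. (4), with hypothetical per-pixel gains and offsets (the scalar gain per pixel follows from the collinearity assumption):

```python
import numpy as np

def single_pixel_calibrate(I_raw, g, d):
    # Eq. (4): corrected value I' = g * (I - d), applied element-wise.
    return g * (I_raw - d)

rng = np.random.default_rng(0)
I_raw = rng.uniform(100.0, 200.0, size=(4, 4))  # raw digital values
d = np.full((4, 4), 2.0)                        # dark offsets (hypothetical)
g = rng.uniform(0.9, 1.1, size=(4, 4))          # normalized gains (hypothetical)
I_cor = single_pixel_calibrate(I_raw, g, d)
```

The correction is purely per-pixel; no information is shared between the four polarizer orientations, which is why orientation errors remain uncorrected.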

3.3.

Superpixel Calibration

Superpixel calibration is a more evolved technique. It uses a neighborhood of four pixels forming the superpixel used in the calibration framework, and it compensates for deviations in transmission, diattenuation, and orientation. The angle of linear polarization (AOLP) and degree of linear polarization (DOLP) are much more precise with this technique, since the four pixels are corrected jointly instead of individually.

Myhre et al.20 use Eq. (2) to calibrate their full-Stokes polarimeter. They assume no additive noise in their model. In contrast, Powell and Gruev16 include additive noise in their superpixel calibration procedure:

Eq. (5)

Ij = Wj Sin + [dj,0, dj,45, dj,90, dj,135]^t.
As for the single pixel, a calibration function is applied as follows:

Eq. (6)

Ij′ = gj (Ij − dj),
where gj = Wideal Wj^+ gathers the four normalized gains recovered by a pseudoinverse operation on Wj.

They found that the superpixel calibration method reduces the reconstruction error, in terms of RMSE for DOLP and AOLP, by a factor of around 10 compared to the single-pixel calibration method, while also correcting diattenuation and orientation.
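
The superpixel pipeline of Eqs. (5) and (6) can be sketched as follows; the non-ideal orientations, transmissions, and diattenuations below are hypothetical illustration values, not measured ones:

```python
import numpy as np

def W_from_params(angles_deg, taus, Ds):
    # Rows are analysis vectors tau * [1, D*cos(2a), D*sin(2a), 0].
    rows = []
    for a, tau, D in zip(angles_deg, taus, Ds):
        t = np.deg2rad(2.0 * a)
        rows.append(tau * np.array([1.0, D * np.cos(t), D * np.sin(t), 0.0]))
    return np.array(rows)

W_ideal = W_from_params([0, 45, 90, 135], [0.5] * 4, [1.0] * 4)
# Hypothetical non-ideal superpixel: shifted angles, lower diattenuation.
W_j = W_from_params([1.1, 48.6, 88.4, 137.5],
                    [0.42, 0.38, 0.41, 0.41],
                    [0.84, 0.76, 0.82, 0.81])
d_j = np.array([3.0, 2.5, 3.2, 2.8])       # offsets per polarizer

Sin = np.array([1.0, 0.5, 0.1, 0.0])
I_j = W_j @ Sin + d_j                       # Eq. (5): simulated raw values
g_j = W_ideal @ np.linalg.pinv(W_j)         # Eq. (6): 4x4 gain matrix
I_cor = g_j @ (I_j - d_j)                   # behaves like an ideal superpixel
```

In this noiseless sketch the corrected intensities match W_ideal @ Sin exactly, because the linearly polarized Sin lies in the row space of W_j.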

3.4.

Adjacent Superpixel Calibration

Chen et al.21 performed a strategy complementary to the superpixel calibration by adding a computational step at the end of the superpixel algorithm. Once Eq. (6) is applied, every pixel is recalibrated as the weighted average of the four overlapping superpixel corrections at that pixel position:

Eq. (7)

Ij″ = (1/4) [Ij′ + Ij,left′ + Ij,diag′ + Ij,above′],
where the averaged terms are the calibrated values of the superpixels located to the left, above, and diagonally (adjacent to the latter two), respectively. The authors give no explanation or justification for this average. They argue that the range of the error between the acquired DOLP and the ideal value is reduced by a factor of 10; similarly, the AOLP error range is reduced by a factor of 4. They also show visually more uniform reconstructed DOLP and AOLP parameters. Applied to a real image, the calibration improves polarization feature contrasts in both the intensity and DOLP images. No comparison with the state of the art is done.
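
The averaging step of Eq. (7) can be sketched on an image of superpixel-calibrated values as follows; handling the borders by edge replication is our assumption, since the paper does not specify it:

```python
import numpy as np

def adjacent_average(I_cal):
    # Eq. (7): replace each value by the mean of the calibrated values
    # from the four overlapping superpixels (current, left, above, diag).
    P = np.pad(I_cal, ((1, 0), (1, 0)), mode="edge")
    return 0.25 * (P[1:, 1:] + P[1:, :-1] + P[:-1, 1:] + P[:-1, :-1])
```

On a uniform calibrated field the output is unchanged; on non-uniform data it acts as a mild smoothing filter, which is consistent with the behavior discussed in the comparison below.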

3.5.

Average Analysis Matrix Calibration

Zhang et al.22 implement the same sequence as Chen et al.,21 with the same neighborhood, i.e., calculating the analysis matrix and the offsets from training data. But instead of using an ideal analysis matrix to build the multiplicative adjustment of the calibration function, an average matrix computed over the whole PFA is used as the common factor in Eq. (6), such that gj = Wmean Wj^+. No analytic criterion is stated for using the mean matrix, but its value is close to the ideal one. They apply the calibration to a real image and show qualitative results; the edges are smoother compared to the superpixel method. Quantitatively, they found an RMS error between the mean DOLP and the per-pixel DOLP on the order of one-thousandth. A quantitative comparison with the superpixel calibration16 is done using real images, but the authors do not specify how the mean DOLP of an object under study is sampled out of the image.

4.

Characterization and Method Comparison

4.1.

Acquisition Setup

We use an optical setup similar to that of Ref. 16 to acquire the training and test data. The adjustable shutter is replaced by the camera’s integration time to modulate the input signal. The light source is a tungsten–halogen lamp provided by an Intralux 4000 module. The light passes through a Thorlabs IS200 Ø2″ integrating sphere to generate nominally uniform and unpolarized light. A 10LP-VIS-B linear polarizer from Newport serves as the reference polarizer; it is rotated by a rotational stage to generate the reference input Stokes vectors. The studied PFA camera is assembled by 4D Technology and employs a Sony IMX174 CMOS sensor coupled with a PFA manufactured by the Moxtek company. The photographic objective’s focal length is 12.5 mm and the f-number was set to f/1.4.

With this setup, a group of acquisitions is done:

  • Six different intensities, namely 100%, 50%, 25%, 10%, 5%, and 2% (such that the 100% maximum intensity equals 75% of the saturation level of the camera), are considered;

  • At each intensity, 36 input polarization angles ranging from 0 deg to 175 deg with a step of 5 deg are generated;

  • Each intensity/angle acquisition is averaged over 100 images to generate the final data.

In order to maximize the uniformity of the images, a region of interest of 300×300 pixels is selected at the center of the sensor area.

We make our acquisition database available for further research as supplementary material.

4.2.

PFA Characterization

We first characterize the PFA by estimating the analysis vector parameters Ai = [a0,i, a1,i, a2,i, 0] and the offset noise di at each pixel i. A least-squares fit involving N=54 instances captured with the acquisition setup described previously (nine equally spaced input polarization angles at six intensities) is performed for each pixel using

Eq. (8)

[Ai di] = [Ii,1 … Ii,N] [Sin,1 … Sin,N; 1 … 1]^+,
where “+” means the pseudoinverse, and Sin is the generated reference input Stokes vector, assumed to be uniform across the region of interest.
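
The estimation in Eq. (8) can be sketched as follows, stacking the reference Stokes vectors with a row of ones to absorb the offset:

```python
import numpy as np

def characterize_pixel(intensities, stokes_refs):
    # Eq. (8): [Ai di] = [Ii,1 ... Ii,N] [Sin,1 ... Sin,N; 1 ... 1]^+
    # intensities: length-N sequence; stokes_refs: N reference 4-vectors.
    M = np.vstack([np.asarray(stokes_refs, dtype=float).T,   # 4 x N
                   np.ones((1, len(intensities)))])          # -> 5 x N
    x = np.asarray(intensities, dtype=float) @ np.linalg.pinv(M)
    return x[:4], x[4]                                       # Ai, di
```

Note that at least two intensity levels are needed to separate the offset di from a0,i, since with a single intensity the S0 row and the row of ones are proportional.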

Optical polarization properties, namely the filter’s orientation angle α, the diattenuation D, and the extinction ratio X, are then derived from the estimated analysis vector Ai using Eq. (9):

Eq. (9)

α = 0.5 arctan(a2,i / a1,i);  D = a1,i / [a0,i cos(2α)];  X = (1 + D) / (1 − D).
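
Eq. (9) translates directly into code; using arctan2 to resolve the quadrant of 2α is our implementation choice:

```python
import numpy as np

def polarizer_params(A):
    # Eq. (9): orientation alpha, diattenuation D, and extinction ratio X
    # from an analysis vector A = [a0, a1, a2, 0].
    a0, a1, a2 = A[0], A[1], A[2]
    alpha = 0.5 * np.arctan2(a2, a1)       # quadrant-safe two-argument form
    D = a1 / (a0 * np.cos(2.0 * alpha))
    X = (1.0 + D) / (1.0 - D)
    return np.rad2deg(alpha), D, X
```

For example, a filter oriented at 10 deg with diattenuation 0.8 yields an extinction ratio of 1.8/0.2 = 9.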

The nonidealities of the polarization parameters can be quantitatively depicted by their mean value and statistical dispersion for each orientation axis (0 deg, 45 deg, 90 deg, and 135 deg). This makes it possible to assess how far they are from their ideal values.

Table 1 summarizes the mean values and the standard deviation across the characterized sensor area. The results show that most of the polarization orientation axes fluctuate and are displaced from their theoretical values. The diattenuation/extinction ratio also exhibits relatively high spatial variations that could lead to imprecise polarization measurements if no correction is applied to the raw data.

Table 1

Mean values and standard deviation (square brackets) of the estimated polarimetric properties: orientation angle α, diattenuation D, and extinction ratio X. These values are estimated over a region of interest of 300×300  pixels at the center of the sensor area.

Parameter | Uncalibrated, Mean [Std]
α0 (deg) | 1.06 [0.53]
α45 (deg) | 48.60 [0.85]
α90 (deg) | 88.38 [0.60]
α135 (deg) | 137.49 [0.65]
D0 | 0.839 [0.015]
D45 | 0.756 [0.016]
D90 | 0.816 [0.016]
D135 | 0.815 [0.015]
X0 | 11.51 [1.172]
X45 | 7.23 [0.526]
X90 | 9.93 [0.955]
X135 | 9.91 [0.962]

4.3.

Calibration Method Comparison

We implemented the four calibration methods presented in Sec. 3. The test images are acquired by impinging the camera with fully linearly polarized light oriented at 20 deg, at 100%, 50%, and 5% intensity. Performance over the dynamic range (DR) is characterized by two metrics, namely the root-mean-square error (RMSE) and the peak signal-to-noise ratio (PSNR). Results are presented in Table 2.
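
The two metrics can be sketched as follows; normalizing the Stokes components so that the peak value equals 1 is our assumption, since the paper does not state its PSNR convention explicitly:

```python
import numpy as np

def rmse(x, ref):
    # Root-mean-square error between a measurement and its reference.
    return float(np.sqrt(np.mean((np.asarray(x) - np.asarray(ref)) ** 2)))

def psnr(x, ref, peak=1.0):
    # PSNR in dB relative to an assumed peak value of the Stokes component.
    return float(20.0 * np.log10(peak / rmse(x, ref)))
```

With this convention, an RMSE of 0.1 corresponds to a PSNR of 20 dB, which matches the order of magnitude of the uncalibrated entries in Table 2.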

Table 2

Mean RMSE/mean PSNR in terms of the Stokes vector components S0, S1, and S2 for the uncalibrated and calibrated responses of a 20 deg oriented fully polarized light capture in a region of 300×300  pixels.

Stokes | Uncalibrated | Single-pixel16 | Superpixel16 | Adjacent superpixel21 | Average matrix22

100% light intensity
S0 | 0.1203/18.39 | 0.094/20.53 | 0.0005/65.68 | 0.0004/67.93 | 0.1183/18.53
S1 | 0.1147/18.80 | 0.113/18.94 | 0.0025/52.02 | 0.0024/52.25 | 0.1102/19.15
S2 | 0.1132/18.92 | 0.1411/17.01 | 0.001/61.23 | 0.001/63.88 | 0.1082/19.31

50% light intensity
S0 | 0.1277/17.88 | 0.1011/19.91 | 0.0064/43.83 | 0.0064/43.86 | 0.1257/18.07
S1 | 0.1057/19.52 | 0.1036/19.70 | 0.0088/41.07 | 0.0088/41.13 | 0.1006/19.95
S2 | 0.1053/19.56 | 0.133/17.52 | 0.0102/39.81 | 0.0102/39.86 | 0.0996/20.03

5% light intensity
S0 | 0.1231/18.20 | 0.0965/20.31 | 0.0033/49.54 | 0.0026/51.67 | 0.1209/18.35
S1 | 0.1093/19.23 | 0.1073/19.39 | 0.006/44.40 | 0.0048/46.39 | 0.1043/19.63
S2 | 0.1118/19.03 | 0.1395/17.11 | 0.0048/46.33 | 0.0031/50.08 | 0.1064/19.45

The results show that the single-pixel calibration method16 reduces the error in the S0 component by an average of 22% but fails to correct S1 and S2, in accordance with the inability of the model to correct orientation errors α, as stated by the authors. Conversely, the superpixel calibration method16 reduces the RMSE by an average of 95% over the three Stokes components. The adjacent superpixel method21 yields a further RMSE reduction on the order of 10^-4 with respect to the superpixel calibration method. However, the adjacent superpixel method implies an interpolation across the superpixel neighborhood. Here, the tested image is uniform, so the interpolation step slightly improves the results by acting like a smoothing filter; with a real image containing edges in the polarimetric information, the quality of the correction should decrease. The average analysis matrix method22 reduces the RMSE by an average of only 4% over the three Stokes components. This might be due to the replacement of an ideal analysis matrix by an uncalibrated mean analysis matrix, which could carry the errors in the parameters.

The superpixel calibration method is well established; its basic concept is the starting point for the other methods, and the derived methods do not necessarily produce significantly better results.

To illustrate the superpixel method performance, Fig. 2 shows pseudocolor images representing the DOLP and AOLP before and after superpixel calibration, applied to the test image with input polarized light at 0 deg and 100% intensity. When the image is uncalibrated, the DOLP has a mean value significantly lower than 1. The color scale makes the heterogeneity of the values in the flat field intuitively visible. The mean AOLP is shifted near the ideal value and its standard deviation is improved by a factor of 25.

Fig. 2

Zero-oriented fully polarized light capture and parameter estimations. The region of interest is an area of 300×300  pixels in the center of the sensor.


5.

Calibration Setup Optimization

Several previous works investigated different criteria impacting the polarization measurement during characterization or calibration, such as the influence of f-number and focal length,20 the sensor orientation with respect to the incident light,23 or the effect of temporal noise.19

We believe that optimizing the number of acquisitions during calibration is crucial, as the camera has to be recalibrated whenever the f-number or the focal length changes. A wide variety of calibration methods employ calibration setups that seem oversized or poorly optimized with respect to these two parameters. This investigation could help make calibration setups lighter in terms of the number of acquisitions.

Here, we propose to investigate the effect of varying the training data configuration on the calibration pipeline. We individually test two criteria that, to our knowledge, have not been investigated previously: the DR of the input data and the number of polarization angles used to train the calibration algorithm.

We apply our evaluation to the superpixel calibration of Powell and Gruev,16 as it is the most generic and common starting point of all the previously reviewed methods.

5.1.

Impact of Data Dynamic Range

It is stated in the literature that analyzing the polarization of a scene with a camera can exhibit pixel regions that sense low irradiance or a high polarization degree.24 Thus, one can envisage using several images with different DRs of intensities during characterization and calibration.

To see how the DR of training images impacts the calibration result, we characterize and train the model under six scenarios regarding DR magnitude: all intensity images combined; only 100%; and the combinations 100%|50%, 100%|50%|25%, 100%|50%|25%|10%, and 100%|50%|25%|10%|5%. We use nine equally spaced input polarization angles for each scenario, so as not to be affected by the angle selection criterion, which is tested in the next subsection. The calibration is tested using a single image at one specific angle (10 deg), which is not in the training sets, at six different intensities.

Table 3 examines the PSNR of the reconstructed Stokes vector components after applying the calibration in each scenario. The error exhibits different trends over the three components, so it is not obvious that using a high DR increases the performance of the calibration function. Several observations can be made from the results:

  • The three Stokes parameters are not affected similarly by the DR enhancement of the training data; the S0 component is the most affected.

  • Using a greater variety of DRs increases the global performance, judging by the mean values over the tested images. Evaluating an equally weighted average over the three components, the 2-DR scenario gives the highest PSNR, whereas the standard deviation is inversely proportional to the number of realizations. Intermediate DR images in the set increase the calibration performance only insignificantly.

  • If dealing with polarization signatures that are mostly close to the noise level of the sensor, it is preferable to include low DRs in the training set.

Table 3

Results of the PSNR calculated over the S0, S1, and S2 parameters for each scenario. The DR effect of training (row) and testing (column) data is evaluated through these tables. A single angle of 10 deg is used as a test image. Best values are highlighted in bold and worst values in italics.

(a) S0

Training DR | 100% | 50% | 25% | 10% | 5% | 2% | Mean [Std]
100% | 54.75 | 40.76 | 51.46 | 46.62 | 45.20 | 28.08 | 44.48 [9.40]
100%|50% | 60.73 | 46.74 | 57.57 | 52.62 | 51.20 | 34.05 | 50.48 [9.42]
100%|50%|25% | 57.02 | 47.95 | 58.61 | 51.31 | 50.81 | 34.34 | 50.01 [8.66]
100%|50%|25%|10% | 55.08 | 48.82 | 58.23 | 50.45 | 50.39 | 34.52 | 49.58 [8.17]
100%|50%|25%|10%|5% | 54.37 | 49.21 | 57.85 | 50.10 | 50.20 | 34.60 | 49.39 [7.95]
All DR | 50.43 | 52.55 | 53.78 | 47.73 | 48.45 | 35.14 | 48.01 [6.71]

(b) S1

Training DR | 100% | 50% | 25% | 10% | 5% | 2% | Mean [Std]
100% | 55.36 | 41.46 | 44.11 | 48.51 | 46.48 | 33.53 | 44.91 [7.30]
100%|50% | 48.63 | 44.00 | 47.56 | 51.20 | 47.87 | 34.41 | 45.61 [5.96]
100%|50%|25% | 48.05 | 44.38 | 48.10 | 51.33 | 47.91 | 34.52 | 45.71 [5.91]
100%|50%|25%|10% | 48.03 | 44.39 | 48.13 | 51.36 | 47.93 | 34.53 | 45.73 [5.91]
100%|50%|25%|10%|5% | 48.01 | 44.40 | 48.15 | 51.37 | 47.93 | 34.53 | 45.73 [5.91]
All DR | 46.79 | 45.35 | 49.50 | 51.28 | 47.86 | 34.80 | 45.93 [5.83]

(c) S2

Training DR | 100% | 50% | 25% | 10% | 5% | 2% | Mean [Std]
100% | 55.90 | 47.71 | 52.42 | 50.79 | 47.13 | 39.16 | 48.85 [5.73]
100%|50% | 59.49 | 49.36 | 54.22 | 49.95 | 46.56 | 39.60 | 49.86 [6.76]
100%|50%|25% | 60.48 | 49.93 | 54.69 | 49.66 | 46.37 | 39.75 | 50.15 [7.07]
100%|50%|25%|10% | 60.79 | 50.11 | 54.85 | 49.59 | 46.32 | 39.79 | 50.24 [7.18]
100%|50%|25%|10%|5% | 60.91 | 50.17 | 54.92 | 49.57 | 46.31 | 39.80 | 50.28 [7.21]
All DR | 61.63 | 51.90 | 55.55 | 48.69 | 45.74 | 40.15 | 50.61 [7.54]

To summarize, adding more than one DR image to the training setup significantly enhances the overall calibration result, which seems intuitive. Using only two different DRs is enough to significantly improve the results compared to training with only 100% intensity images; adding more is not judicious.

5.2.

Impact of Training Angle Selection

To evaluate the importance of the number of input polarization angle acquisitions used for characterization, we apply the superpixel calibration to a set of images captured when the camera is illuminated with uniform polarized light at 100% intensity, varying both the number of training polarization angles and the test polarization angle. To this end, we first train and characterize the analysis vectors using six scenarios: 36, 18, 9, 5, 4, and 3 equally distributed angle images at six different intensities. The last setup leads to an under-determined Eq. (8), as the number of instances does not match the number of unknowns. This scenario is nevertheless possible by assuming that the dark offsets are small compared to the DR of the camera (12-bit intensity); the dark offset could also be determined experimentally.25

Once the scenarios are trained, we apply the correction on several single-test images corresponding to 36 different input polarization angles (from 0 deg to 175 deg with a step of 5 deg) at 100% of intensity. Then, we measure the errors relative to the Stokes reference input light for each scenario and tested angles.

Table 4 examines the PSNR of the Stokes vector parameters, compiling the mean values and standard deviations (square brackets) evaluated in the above scenarios. The means differ between the vector components and peak at the nine-angle setup for both the S0 and S2 components. However, the standard deviation is also higher at this point, which counteracts its ranking. Conversely, the four-angle setup yields only 2% less in S0 but with the lowest standard deviation, along with the best values in S1 and the lowest standard deviation in S2. Figure 3 breaks down this PSNR behavior when sweeping from 0 deg to 175 deg. The signal level is not uniform with respect to the input polarization angle; nevertheless, the fluctuations keep roughly the same shape among scenarios. In this respect, there is a tradeoff between this simplification and the output error when cutting down the number of training measurements.

Table 4

PSNR mean values and standard deviation (square brackets) in terms of S0, S1, and S2 parameters over testing data ranging from 0 deg to 175 deg with a step of 5 deg as a function of training data calculated out of different number of angles (36, 18, 9, 5, 4 and 3 angles) and six intensities levels. Best values are highlighted in bold and worst values in italics.

Training data | S0 Mean [Std] | S1 Mean [Std] | S2 Mean [Std]
36 angles | 57.46 [5.36] | 54.35 [5.59] | 55.85 [6.10]
18 angles | 57.91 [5.47] | 54.44 [5.04] | 55.67 [6.06]
9 angles | 58.69 [5.48] | 54.49 [5.04] | 56.28 [5.35]
5 angles | 57.83 [4.41] | 54.05 [4.79] | 54.78 [5.13]
4 angles | 57.66 [4.28] | 55.03 [3.97] | 56.12 [4.40]
3 angles | 55.72 [5.09] | 52.62 [5.56] | 53.55 [5.59]

Fig. 3

PSNR of Stokes parameters after calibration as a function of the number of angles used to calculate the training data. Subfigures (a), (b), and (c) depict the signal level in the parameters S0, S1, and S2, respectively. Each color represents a different setup and the radius corresponds to the PSNR. Test images are swept from 0 deg to 175 deg with a step of 5 deg.


Hagen et al.26 discussed a simpler setup that characterizes each pixel individually using measurements at four different angles, instead of the 54 measurements (nine angles and six intensity levels) used as training data in the above standalone evaluation. They put forward a single-pixel calibration approach with a simpler calibration setup than prior calibration methods.16,21,22,27 They use only four measurements to characterize each pixel individually and recover the incident intensity, the orientation axis of analysis, and the diattenuation parameters. Under this approach, a motorized rotational stage is no longer needed; an angle-graduated mount is a practical solution. This choice is in accordance with our results, as we found that it is not necessary to use more than four input angles for the calibration.

By applying the same criteria as in the calibration method comparison in Sec. 4.3, we verified that characterizing the analysis vectors with four angles and one intensity delivers a 93% RMSE reduction, averaged over the three Stokes components, with respect to the uncalibrated data. Comparatively, the nine-angle, six-intensity scenario yields only a relatively small additional improvement (0.67% in RMSE reduction). The difference is attributed to the fact that the latter option considers more than one DR.

6.

Discussion and Conclusion

In this paper, we reviewed existing calibration algorithms and evaluated them with a practical implementation applied to a commercially available monochrome PFA camera. The camera’s polarization optical parameters were characterized and the methods were applied to data acquired under uniform linearly polarized light. The results show that the primitive superpixel method performs well and that the methods derived from it bring no significant enhancement. We discussed calibration setup optimization considering the impact of the data DR and of the training polarization angle selection. This study considers separately the influence of these two parameters on the signal-to-noise ratio of the Stokes vector components; in this way, it can help arrange the optimal setup when calibrating a PFA camera for a specific application. Our results show that using more than four angle realizations does not significantly improve the PSNR of the Stokes vector components, but we also demonstrate that using two different DR images significantly improves the calibration compared to using one, as in Hagen et al.26 To summarize, four angles and two DR realizations are a good compromise to obtain a good PSNR while simplifying the calibration setup described in Powell and Gruev.16

Future work would be to define a complete PFA camera pipeline, including high-DR enhancement based on multiple exposure times.28 Other PFA sensors have come on the market, such as the IMX250 MYR from Sony, which captures both color and polarization information. In the presence of both low and high polarization signatures in the same scene, high DR could correct for saturation and/or nonuniformity of camera sensitivities across all spectral and polarization channels. Finally, the calibration of polarimetric parameters in multispectral polarimeters, which in turn requires spectral calibration, is missing from the literature.

Acknowledgments

This work was supported by the ANR JCJC SPIASI project, grant ANR-18-CE10-0005 of the French Agence Nationale de la Recherche.

References

1. R. P. Feynman, R. B. Leighton, and M. Sands, "The Feynman lectures on physics: Vol. I," Am. J. Phys. 33(9), 750–752 (1965). https://doi.org/10.1119/1.1972241

2. F. Goudail and J. S. Tyo, "When is polarimetric imaging preferable to intensity imaging for target detection?," J. Opt. Soc. Am. A 28, 46–53 (2011). https://doi.org/10.1364/JOSAA.28.000046

3. S. Tominaga and A. Kimachi, "Polarization imaging for material classification," Opt. Eng. 47(12), 123201 (2008). https://doi.org/10.1117/1.3041770

4. M. Ferraton, C. Stolz, and F. Mériaudeau, "Optimization of a polarization imaging system for 3D measurements of transparent objects," Opt. Express 17, 21077–21082 (2009). https://doi.org/10.1364/OE.17.021077

5. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Polarization-based vision through haze," Appl. Opt. 42, 511–525 (2003). https://doi.org/10.1364/AO.42.000511

6. P.-J. Lapray et al., "An FPGA-based pipeline for micropolarizer array imaging," Int. J. Circuit Theory Appl. 46(9), 1675–1689 (2018). https://doi.org/10.1002/cta.2477

7. J. S. Tyo et al., "Review of passive imaging polarimetry for remote sensing applications," Appl. Opt. 45, 5453–5469 (2006). https://doi.org/10.1364/AO.45.005453

8. P.-J. Lapray et al., "Multispectral filter arrays: recent advances and practical implementation," Sensors 14(11), 21626–21659 (2014). https://doi.org/10.3390/s141121626

9. P.-J. Lapray, J.-B. Thomas, and P. Gouton, "A multispectral acquisition system using MSFAs," in Color and Imaging Conf., 97–102 (2014).

10. D. M. Rust, "Integrated dual imaging detector," US Patent 5,438,414 (1995).

11. S. Mihoubi, P.-J. Lapray, and L. Bigué, "Survey of demosaicking methods for polarization filter array images," Sensors 18(11), 3688 (2018). https://doi.org/10.3390/s18113688

12. J. S. Tyo, "Optimum linear combination strategy for an N-channel polarization-sensitive imaging or vision system," J. Opt. Soc. Am. A 15, 359–366 (1998). https://doi.org/10.1364/JOSAA.15.000359

13. D. A. LeMaster and K. Hirakawa, "Improved microgrid arrangement for integrated imaging polarimeters," Opt. Lett. 39, 1811–1814 (2014). https://doi.org/10.1364/OL.39.001811

14. B. M. Ratliff, C. F. LaCasse, and J. S. Tyo, "Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery," Opt. Express 17, 9112–9125 (2009). https://doi.org/10.1364/OE.17.009112

15. M. Bass et al., Handbook of Optics, 2nd ed., McGraw-Hill, New York (1995).

16. S. B. Powell and V. Gruev, "Calibration methods for division-of-focal-plane polarimeters," Opt. Express 21, 21039–21055 (2013). https://doi.org/10.1364/OE.21.021039

17. Y. Giménez et al., "Calibration for polarization filter array cameras: recent advances," Proc. SPIE 11172, 1117216 (2019). https://doi.org/10.1117/12.2521752

18. E. P. Wibowo et al., "An improved calibration technique for polarization images," IEEE Access 7, 28651–28662 (2019). https://doi.org/10.1109/ACCESS.2019.2900538

19. S. Roussel, M. Boffety, and F. Goudail, "Polarimetric precision of micropolarizer grid-based camera in the presence of additive and Poisson shot noise," Opt. Express 26, 29968–29982 (2018). https://doi.org/10.1364/OE.26.029968

20. G. Myhre et al., "Liquid crystal polymer full-Stokes division of focal plane polarimeter," Opt. Express 20, 27393–27409 (2012). https://doi.org/10.1364/OE.20.027393

21. Z. Chen, X. Wang, and R. Liang, "Calibration method of microgrid polarimeters with image interpolation," Appl. Opt. 54, 995–1001 (2015). https://doi.org/10.1364/AO.54.000995

22. J. Zhang et al., "Non-uniformity correction for division of focal plane polarimeters with a calibration method," Appl. Opt. 55, 7236–7240 (2016). https://doi.org/10.1364/AO.55.007236

23. T. York and V. Gruev, "Characterization of a visible spectrum division-of-focal-plane polarimeter," Appl. Opt. 51, 5392–5400 (2012). https://doi.org/10.1364/AO.51.005392

24. J. S. Tyo, B. M. Ratliff, and A. S. Alenin, "Adapting the HSV polarization-color mapping for regions with low irradiance and high polarization," Opt. Lett. 41, 4759–4762 (2016). https://doi.org/10.1364/OL.41.004759

25. E. M. V. Association et al., "Standard for characterization of image sensors and cameras," (2010).

26. N. Hagen, S. Shibata, and Y. Otani, "Calibration and performance assessment of microgrid polarization cameras (Erratum)," Opt. Eng. 58(8), 089801 (2019). https://doi.org/10.1117/1.OE.58.8.089801

27. G. Han et al., "Design and calibration of a novel bio-inspired pixelated polarized light compass," Sensors 17(11), 2623 (2017). https://doi.org/10.3390/s17112623

28. P.-J. Lapray, J.-B. Thomas, and P. Gouton, "High dynamic range spectral imaging pipeline for multispectral filter array cameras," Sensors 17(6), 1281 (2017). https://doi.org/10.3390/s17061281

Biography

Yilbert Giménez received his BS degree in telecommunications engineering from the UNEFA and his MS degree in nanoscale engineering from the Ecole centrale de Lyon. He has been a PhD candidate at the University of Upper Alsace since October 2018 under the direction of Laurent Bigué, Pierre-Jean Lapray, and Alban Foulonneau. His doctoral research investigates polarimetric imaging in polarization filter array cameras.

Laurent Bigué received his engineering degree from Université de Strasbourg in 1992 and his PhD in optical and electrical engineering from Université de Haute Alsace (UHA) in 1996. He was appointed as an assistant professor at UHA in 1998. He has been a full professor since 2005 and has been serving as the dean of ENSISA (ECE Department of UHA) since 2012. His major interest includes polarimetry. He is a member of SFO, EOS, OSA, and SPIE.

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Yilbert Gimenez, Pierre-Jean Lapray, Alban Foulonneau, and Laurent Bigué "Calibration algorithms for polarization filter array camera: survey and evaluation," Journal of Electronic Imaging 29(4), 041011 (5 March 2020). https://doi.org/10.1117/1.JEI.29.4.041011
Received: 8 October 2019; Accepted: 18 February 2020; Published: 5 March 2020