1. Introduction

The Raspberry Pi Foundation provides low-cost, high-performance single-board Raspberry Pi computers to educate and to solve real-world problems. As of early 2016, over 8 million Raspberry Pis had been sold, making it one of the most popular single-board computers on the market.1 These small single-board computers are quickly moving from the do-it-yourself (DIY) community into mainstream technology development. Many are being used to acquire a wide range of measurements and are being incorporated into instruments for a multitude of applications, including medical support and e-health,2–7 robotics,8 surveillance monitoring,9 and optical sorting in food production.10 The advent of open-source software (and some hardware) has only quickened this trend. The Raspberry Pi credit-card-sized computer supports several accessories, including a camera module containing the Sony IMX219 sensor. This computer and camera configuration is of particular interest because it can provide raw-data format imagery suitable for a multitude of applications, including computer vision, biophotonics, medical testing, remote sensing, astronomy, improved image quality, high dynamic range (HDR) imaging, and security monitoring. This paper evaluates the characteristics of the Raspberry Pi V2.1 camera, based on the Sony IMX219 sensor, and the radiometric performance of its raw-data format imagery, so that the system can be used effectively for scientific imaging and engineering purposes. The Raspberry Pi 3 is the third-generation single-board Raspberry Pi computer and became available to consumers in February 2016. Some of the more significant Raspberry Pi attributes, including interfaces, are described in Table 1.
At the time of this writing, a Raspberry Pi 3 sold for about $35 USD and the V2.1 camera module sold for approximately $25 USD.1,11 The Raspberry Pi Foundation provides several operating systems for the Raspberry Pi 3, including Raspbian, a Debian-based Linux distribution, as well as third-party Ubuntu, Windows 10 IoT Core, RISC OS, and specialized distributions for download.

Table 1. Raspberry Pi 3 computer attributes.
To understand the scientific and engineering potential of these versatile imaging sensors, a comprehensive laboratory-based radiometric characterization was performed on a small number of Raspberry Pi V2.1 camera modules. The camera is based on the Sony IMX219 back-illuminated silicon CMOS sensor and produces 8-megapixel images that are 3280 × 2464 pixels in size. The IMX219 sensor operates in the visible spectral range (400 to 700 nm) and uses a Bayer array with a BGGR pattern. Sensor specifications are detailed in Table 2.12 The Raspberry Pi Foundation also provides a visible and near-infrared version of this camera, called the NoIR camera, which has no infrared (NoIR) blocking filter on the lens and therefore allows imaging beyond the visible range. The NoIR version was not considered in this paper.

Table 2. Sony IMX219 sensor chip specifications.
The V2 camera module operates at a fixed focal length (3.04 mm) and a single f-number (F2.0) and is typically focused from the near field to infinity. Images can be captured at ISO settings between 100 and 800 in manually set increments of 100 (although not verified above 600 in this investigation) and camera exposure times up to 6 s (although not verified above 1 s in this investigation) using a rolling shutter. Some of the more significant camera specifications are shown in Table 3. In addition to still photos, the Raspberry Pi Sony IMX219 sensor supports cropped 1080p video at 30 frames per second (fps) and full-frame video at up to 15 fps, but not in raw-data format. The entire camera board is small and weighs about 3 g. It connects directly to the Raspberry Pi 3 through a 15-pin mobile industry processor interface (MIPI) camera serial interface and is shown alongside a Raspberry Pi 3 in Fig. 1.

Table 3. Raspberry Pi camera specifications.
2. Radiometric Characterization Overview

Several scientific and engineering applications require raw-data format imagery with known and calibrated radiometric properties. A camera's radiometric characterization typically includes dark frame assessments, linearity, image noise assessments, exposure or electronic shutter stability assessments, flat fielding, spectral response measurements, and an absolute radiometric calibration. Dark frame knowledge and flat fielding improve image quality by correcting for fixed pattern noise (FPN) and other spatial effects such as vignetting. Linearity characterization is essential for scientific and engineering applications. Understanding noise as a function of signal level is important for properly exposing imagery, for determining the number of samples required for a particular application, and for optimizing denoising algorithms. Spectral response information is used in traditional photographic color balancing13 and for spectroscopy,14,15 remote sensing,16 astronomy,17,18 and many other science and engineering applications.19–21 Absolute calibration relates image acquisition conditions (including illumination and viewing geometry), exposure time, ISO, and pixel digital number (DN) value to spectral radiance. To perform the radiometric characterizations described in this paper, the camera was accessed and controlled from within the Python programming language using the PiCamera application programming interface (API). While finer-grained control of the camera can be achieved through low-level C libraries, such as OpenMAX IL, all of the functionality necessary for the activities in this paper is available from the PiCamera API. Raw-data format images were preprocessed on the Raspberry Pi with a Python script utilizing the NumPy library and saved in the NumPy file format. The preprocessed raw images were transferred to a separate computer and read into MATLAB with a NumPy data format reader.
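The on-Pi preprocessing step can be illustrated with a short sketch. This assumes the V2.1 raw capture uses the camera's 10-bit packed Bayer layout (four pixels in five bytes, with the fifth byte carrying the low two bits of each pixel); the function name and exact bit ordering are illustrative and should be verified against the capture in use.

```python
import numpy as np

def unpack_raw10(packed):
    """Unpack 10-bit packed Bayer data into 16-bit pixel values.

    Assumed layout: every group of 5 bytes holds 4 pixels; bytes 0-3
    carry the upper 8 bits of each pixel and byte 4 carries the four
    2-bit remainders (MSB-first ordering assumed here).
    """
    packed = np.asarray(packed, dtype=np.uint8).reshape(-1, 5)
    high = packed[:, :4].astype(np.uint16) << 2        # upper 8 bits
    low4 = packed[:, 4].astype(np.uint16)              # packed low bits
    shifts = np.array([6, 4, 2, 0], dtype=np.uint16)   # bit positions
    low = (low4[:, None] >> shifts) & 0x3
    return (high | low).reshape(-1)                    # 10-bit values
```

The unpacked vector can then be reshaped to the sensor dimensions and saved with `np.save` for later analysis, matching the NumPy-file workflow described above.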
All further processing was accomplished using MATLAB. The radiometric characterizations described in this investigation include dark frame assessments at multiple ISO and exposure settings, camera linearity assessments as a function of ISO setting and exposure time, sensor image noise as a function of ISO setting, exposure stability assessments, spectral-band-specific flat fielding function measurements, camera spectral response measurements, and an absolute radiometric calibration to tie measured camera DN values to NIST-traceable SI radiance units. Further information on the techniques that were utilized is described in Ref. 22.

3. Dark Frame Assessment

A camera dark frame assessment was performed to quantify and correct for the camera's fixed pattern noise bias. Camera ISO settings were varied from 100 to 600 in steps of 100 at two different exposure times, 5 and 50 ms. After camera warm-up, defined in this investigation as 200 exposures, groups of 250 images were acquired in a dark room, with black cloth covering the camera aperture, for each camera setting. The dark frame statistical properties were analyzed and are shown in Tables 4 and 5. The entire image frame was used in this assessment. As expected, the dark images became noisier with increasing ISO setting. Data taken at the higher exposure setting are also slightly noisier. In all cases, the mean and median values were essentially identical.

Table 4. Raspberry Pi camera V2 dark frame statistics at 5 ms (250-frame mean).
Table 5. Raspberry Pi camera V2 dark frame statistics at 50 ms (250-frame mean).
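The per-setting statistics in Tables 4 and 5 reduce to a few NumPy operations on a stack of dark frames. A minimal sketch, with a simulated stack standing in for the 250 acquired raw frames:

```python
import numpy as np

# stack: (n_frames, rows, cols) array of dark frames at one ISO and
# exposure setting; simulated here with Gaussian noise about a bias
rng = np.random.default_rng(0)
stack = rng.normal(64.0, 2.0, size=(250, 32, 32))

mean_dark = stack.mean(axis=0)   # per-pixel 250-frame mean dark image
stats = {
    "mean": float(stack.mean()),       # whole-stack mean DN
    "median": float(np.median(stack)), # whole-stack median DN
    "std": float(stack.std()),         # whole-stack standard deviation
}
```

The per-pixel `mean_dark` image is what gets subtracted from bright frames later in the processing chain; the scalar statistics correspond to the table entries.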
A 250-frame mean dark image was generated at each ISO setting. Since dark frames are temperature dependent, they were acquired under the same experimental conditions as the bright frames. Histogram plots generated for the mean dark images at the lowest and highest ISO settings further describe the noise variation with ISO and are shown in Fig. 2 for the 50-ms dataset. The ISO 600 histogram is slightly broader and, while not shown, its tails are significantly longer. A subset of the corresponding 250-frame mean dark images is presented in Fig. 3 to show the fine-scale spatial structure. As indicated in the tables, histogram comparisons at each ISO setting are nearly identical between imagery acquired at 5 and 50 ms.

4. Camera Linearity

Raspberry Pi camera linearity was evaluated as a function of both exposure time and ISO setting. These measurements were obtained by imaging an in-house-developed 1.5-m-diameter integrating sphere lamped with Luxeon Rebel 4000 K white-light LED sources mounted on relatively large 40-mm-diameter heat sinks to maintain temperature stability.23 In an integrating sphere, light rays from a source (input) are uniformly scattered by highly reflective diffuse inner walls, as shown in Fig. 4, to produce uniform illumination across the camera field of view (placed at the output). The sphere's spectral radiance was monitored with a NIST-traceable spectrometer using a bare fiber. The LED sources were powered using a stable power supply. LED current was set so that the measured DN value at the center of the image in the green band was a large fraction of the maximum DN value at the longest exposure time or highest ISO setting, depending on the test sequence. Since the product of the light source spectral shape and sensor response peaks in the green spectral region,23 green band pixels have larger DN values than red or blue band pixels.
Reducing the exposure time or ISO setting (depending on the test sequence) from this set point enabled the camera to be tested over an extended portion of its dynamic range. For this test, the camera was positioned in front of the sphere, as shown in Fig. 5. Since the focal length of the lens is 3.04 mm, the camera was effectively focused at infinity in this position. In this assessment, the data were normalized using the mean of a region in the center of the image.

4.1. Linearity with Exposure Time

Camera linearity with exposure time was determined at an ISO setting of 100. In this set of measurements, five images were taken at each exposure time setting. The bright images were temporally and spatially averaged to establish a mean DN value within the center region. The raw data were found to be linear with respect to exposure time for the green band, as shown in Fig. 6. Table 6 summarizes the linear fit through the data. In this table and subsequent tables, the root mean square error (RMSE) is defined as

RMSE = [ (1/n) Σ_i r_i² ]^(1/2),   (1)

where n is the number of data points and r_i is the residual, or difference between the model (a straight-line fit in this case) and the measured data at each point.

Table 6. Linearity with exposure setting linear fit parameters.
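The straight-line fits and RMSE values reported in Table 6 (and in the later tables) can be reproduced with an ordinary least-squares fit. A sketch with synthetic exposure-time data; the numbers are illustrative, not the measured values:

```python
import numpy as np

def fit_line_rmse(x, y):
    """Least-squares line y = m*x + b and the RMSE of the residuals,
    RMSE = sqrt(sum(r_i^2) / n)."""
    m, b = np.polyfit(x, y, 1)
    r = y - (m * x + b)                 # residuals against the fit
    rmse = np.sqrt(np.mean(r ** 2))
    return m, b, rmse

# exposure times (ms) vs. mean green-band DN in the center region
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
dn = 50.0 * t + 3.0                     # perfectly linear test data
m, b, rmse = fit_line_rmse(t, dn)
```

For perfectly linear input the RMSE is numerically zero; for measured data it quantifies the departure from linearity reported in the tables.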
4.2. Linearity with ISO

Camera linearity with ISO setting was determined at an exposure time setting of 10 ms. In this set of measurements, five images were taken at each ISO setting. As with the previous linearity assessment, the bright images were temporally and spatially averaged within a region in the center of each image to establish a mean DN value. The raw data were found to be linear with respect to ISO setting for the green band, as shown in Fig. 7. Table 7 summarizes the linear fit through the data.

Table 7. Linearity with ISO setting linear fit parameters.
5. Sensor Image Noise

Total sensor image noise (σ_T) can be expressed in terms of photon shot noise (σ_S), read noise (σ_R), and FPN (σ_F), as described in Eq. (2):24

σ_T² = σ_S² + σ_R² + σ_F².   (2)

In this investigation, the team used a mean-variance method to characterize noise as a function of signal. This method, which plots pixel variance against the mean signal on a linear plot, yields results that are relatively simple to interpret. A more detailed description of various methods, including the photon transfer method, is given in Ref. 25. Sensor noise characterization is usually performed on single frames or pairs of frames of data acquired within an integrating sphere without a lens or optic in place. The near-perfectly uniform illumination field produces a near-uniform mean signal (DN) across the FPA that is independent of position, with the exception of FPN. Using this technique, a mean signal and variance are calculated for each frame of data acquired. Sphere illumination (radiance level) is varied to generate means and variances across the dynamic range of the sensor. While some third-party camera boards give users the ability to change lenses, the camera module provided by Raspberry Pi has a fixed (glued) lens, not easily removable, in front of the IMX219 sensor, which introduces signal roll-off with field angle (see Sec. 7). This spatially varying roll-off prevents one from obtaining a near-uniform mean signal within a single frame. In this investigation, temporal mean signal and pixel variance values were instead determined by analyzing frames of data (all pixels) acquired at a fixed set of conditions (ISO, exposure time, and sphere illumination), as described by Ref. 24. While a large amount of data is needed, this technique removes FPN from the assessment and derives the lowest possible sensor image noise value, comprised solely of shot and read noise.
After warm-up, 250 frames of data were acquired at five different illumination levels (including dark frames) spanning the dynamic range of the sensor. Pixel locations were then sampled across the FPA at every 1000 pixels, so that each 8-megapixel image produced approximately 8000 data points of mixed RGB. Data were acquired at ISO values of 100, 200, and 400 at 5-ms exposure times. The resulting mean-variance plots are shown in Fig. 8. Linear fits were made through the data, as shown in Table 8. As expected, the slope scales with ISO setting.

Table 8. Mean-variance linear fit parameters.
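The temporal mean-variance approach described above can be sketched as follows. Poisson-distributed synthetic frames stand in for the sphere imagery here, so the fitted slope should land near the shot-noise expectation (variance equal to the mean for unit gain); with real frames the slope encodes the camera gain and scales with ISO.

```python
import numpy as np

def mean_variance_point(frames):
    """Temporal mean and mean temporal variance for a stack of frames
    acquired at fixed ISO, exposure, and illumination; computing the
    variance along the time axis removes FPN from the estimate."""
    temporal_var = frames.var(axis=0, ddof=1)   # per-pixel variance
    return float(frames.mean()), float(temporal_var.mean())

# simulate 250-frame stacks at several illumination levels
rng = np.random.default_rng(1)
levels = [10, 50, 100, 200]
pts = [mean_variance_point(rng.poisson(lv, size=(250, 64, 64)).astype(float))
       for lv in levels]
means, variances = zip(*pts)
slope = np.polyfit(means, variances, 1)[0]      # ~ inverse gain (DN/e-)
```

Plotting `variances` against `means` reproduces the linear mean-variance plots of Fig. 8, and the fitted slope corresponds to the Table 8 entries.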
6. Camera Exposure Stability

Although one would expect that, after turning the camera on, the electronic shutter should be very stable, the team saw some unexpected variation in camera output and decided to measure camera stability. Raspberry Pi camera exposure stability was tested at an exposure setting of 5 ms and an ISO setting of 100. Frames were acquired every 2.0 s. Illumination to the sphere was set to a fixed level for the center green pixels. Four hundred frames were acquired, spatially averaged, and normalized to the steady-state temporal mean (mean of the last 150 data points). These values, shown as a percentage of the steady-state temporal mean, are plotted as a function of time in Fig. 9 (dark frames) and Fig. 10 (bright frames). Turning the camera on and taking images can cause changes in output due to sensor warming. The plots show that after approximately a 200-frame warm-up period, data values reach steady state. The data were modeled as the solution to a thermal lumped circuit with a step function due to the initiation of acquiring data, as shown below:26

V(n) = a e^(−n/n_c) + c,   (3)

where a is a scale factor, n is the frame number, n_c is a frame constant (analogous to a time constant), and c is an offset constant. A time constant can be calculated by multiplying n_c by the frame period (2.0 s here). The team expects that results will change slightly if the rate at which data are taken is changed. For the dark frame data, the fitted values for n_c and c are 53.1 and 100.0, respectively, with a small scale factor a. The data show that dark frames change on the order of 0.01%, which is negligible for almost any potential application. For the bright frame data, the fitted values for a, n_c, and c are 0.309, 58.9, and 99.697, respectively.
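The lumped thermal warm-up model described above can be fit with a standard nonlinear least-squares routine. A sketch using SciPy's `curve_fit` (assumed available) on a noise-free synthetic series built from the bright-frame values quoted in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

def warmup_model(n, a, n_c, c):
    """Lumped thermal step response: signal (% of steady state) as a
    function of frame number n, with scale factor a, frame constant
    n_c, and offset constant c."""
    return a * np.exp(-n / n_c) + c

# synthetic 400-frame bright series using the quoted fitted values
# (a = 0.309, n_c = 58.9, c = 99.697)
n = np.arange(400, dtype=float)
data = warmup_model(n, 0.309, 58.9, 99.697)

# recover the parameters from the series with a rough initial guess
(a, n_c, c), _ = curve_fit(warmup_model, n, data, p0=(1.0, 50.0, 100.0))
```

With measured data, the same call yields the frame constant directly; multiplying it by the 2.0-s frame period gives the time constant.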
Although the bright frame transient behavior is small compared to photon noise, the data do show that one should allow the camera to come to equilibrium for some applications.

7. Flat Fielding

As part of this investigation, flat fielding surfaces were developed for the Raspberry Pi camera for each demosaicked RGB band. These measurements were acquired at an ISO setting of 100 and an exposure time of 20 ms. To reduce the influence of any local integrating sphere surface defects, the sphere was imaged at four different azimuthal positions and three different view angles. To reduce image noise in the flat fielding surface, three images were acquired at each azimuthal/view angle position. Median images were then generated from these 36 images (4 azimuthal positions × 3 view angles × 3 images per position). To eliminate the influence of one band on another, a simple bilinear demosaicking algorithm was used.27 Since lens roll-off is the dominant feature in the flat fielding surface, a new flat fielding surface will need to be acquired each time the lens is changed. The resulting RGB demosaicked flat fielding surfaces are shown as images and three-dimensional surfaces in Figs. 11–13. All surfaces were peak normalized to one. For visualization purposes, the mesh plot (three-dimensional surface) sampling was reduced by displaying the mean value of each block of pixels. Diagonal transects were taken across each of the three flat fielding surfaces, from top right to bottom left and from bottom right to top left. These diagonal transects overlay each other, showing optical symmetry, as seen in Figs. 14–16. Each figure contains transects through a single image alongside transects through the 36-image median image. The red band transect, while similar, is not identical to the blue and green transects, as shown in Fig. 17. This may be caused by the red band filter attenuating the signal as a function of field angle and warrants additional study.
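A simple bilinear demosaicking of the kind used here can be expressed as normalized convolution over each Bayer channel. A sketch for a BGGR mosaic; the kernel and mask layout are the standard textbook construction, not code from this study:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaicking of a BGGR Bayer mosaic: each channel is
    masked out of the mosaic, then missing samples are filled by a
    normalized 3x3 neighborhood average."""
    h, w = raw.shape
    rows, cols = np.mgrid[0:h, 0:w]
    masks = {                                   # BGGR sample positions
        "b": (rows % 2 == 0) & (cols % 2 == 0),
        "g": (rows + cols) % 2 == 1,
        "r": (rows % 2 == 1) & (cols % 2 == 1),
    }
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    out = {}
    for band, mask in masks.items():
        num = convolve(raw * mask, kernel, mode="mirror")
        den = convolve(mask.astype(float), kernel, mode="mirror")
        out[band] = num / den                   # weighted neighbor mean
    return out["r"], out["g"], out["b"]
```

Dividing by the convolved mask keeps interpolated values correctly weighted at band boundaries and image edges, which matters when each band is subsequently flat fielded on its own.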
The RGB flat fielding surfaces shown in Figs. 11–13 were fit to a surface using the functional form shown in Eq. (4). While higher-order terms were considered within this Fourier series expansion, surface noise began to be fit in addition to the general shape of the surface, and the overall fit did not improve. The coefficients obtained when fitting these functions are shown in Table 9, along with parameters that measure the goodness of fit. Note that the coefficients for the green and blue bands are nearly identical, consistent with the curves shown in Fig. 17.

Table 9. Flat fielding surface functional fit parameters.
8. Spectral Response

A camera's spectral response is a measure of how each detector responds to a given input illumination as a function of wavelength. The Raspberry Pi camera's spectral response was determined by imaging a quartz tungsten halogen lamp filtered using a monochromator, as shown in Fig. 18, and then comparing those measurements to those obtained with a calibrated power meter. The illumination wavelength was varied from 350 to 800 nm. At each wavelength step, the monochromator provided illumination 1.5 to 2.0 nm in spectral width. Illumination from the monochromator exit slit was centered on the FPA to remove lens roll-off (vignetting) variability from the assessment. The light beam exiting the monochromator was also diffused using a few small sheets of lens paper. The acquired spectral response was peak normalized and is shown in Fig. 19 in arbitrary units. The measured spectral responses are broad and significantly overlap each other. Spectral response measurements of two different Raspberry Pi cameras were taken; these measurements showed very similar results, as displayed in Figs. 20–22.

9. Absolute Radiometric Calibration

An absolute radiometric calibration was performed on a single Raspberry Pi V2.1 camera, which enables one to convert camera-acquired DN values into engineering units of radiance. An absolute radiometric calibration can be used to quantify the brightness of objects in a scene and enables a user to preset and optimize camera parameters, such as exposure time, ISO, and f-number, before image acquisition.
The absolute radiometric calibration is based on a general radiometric equation for a well-behaved (or correctable) pixel at a fixed ISO or gain setting within a linearly behaved (or correctable) sensor.22,28 Since a pixel's DN (count) is proportional to the number of signal electrons within a pixel,28 the generalized radiometric equation for a dark-frame-subtracted image, where the bias has been removed and the electronic gain is unity, can be written as

DN = [t A_d / (4 F² QSE)] ∫ [λ/(hc)] L(λ) τ_o(λ) QE(λ) dλ,   (5)

where QSE is the quantum scale equivalence,28 which relates counts to electrons, t is the exposure time, A_d is the detector area, F is the camera's f-number, h is Planck's constant, c is the speed of light, λ is the wavelength of light, L(λ) is the spectral radiance, τ_o(λ) is the optical transmission, and QE(λ) is the quantum efficiency. In this equation, the solid angle is approximated by π/(4F²), which yields a small error from the exact expression at F2.0. This error is corrected as part of the calibration process. To simplify the above equation, one can define the camera's spectral response R(λ), which is related to amps per watt, as

R(λ) = [λ/(hc)] τ_o(λ) QE(λ).   (6)

In many cases, one does not know the exact quantum efficiency or optical transmission of a camera, and what is measured (in DN) is actually a signal that is proportional to R(λ). If one peak normalizes R(λ) to unity, the integral of the normalized response R̂(λ) over wavelength is the effective spectral width of the spectral response.29 This allows one to define the average spectral radiance as

L̄ = ∫ L(λ) R̂(λ) dλ / ∫ R̂(λ) dλ.   (7)

Using a parameter like the QSE,28 which relates the number of electrons to counts, one can rewrite Eq. (5) as

DN = [t A_d R_p / (4 F² QSE)] L̄ ∫ R̂(λ) dλ,   (8)

where R_p is the peak value of R(λ). The QSE can be defined as

QSE = N_w / N_DR,   (9)

where N_w is a pixel's well capacity in electrons and N_DR is the digital count range (1024 for a 10-bit system, minus the dark frame offset). Usually, QSE is defined for cases where the electronic gain is unity.
When ISO is used, this assumption is not always kept but, for simplicity, we have used the ratio of ISO to QSE as a generalization that includes electronic gain. To perform an absolute camera calibration, the I2R 1.5-m-diameter integrating sphere was illuminated with white-light Luxeon Rebel 4000 K LEDs (as before) and imaged by the Raspberry Pi camera nearly simultaneously with a NIST-traceable spectrometer, calibrated to better than 5% absolute accuracy, which measured the sphere's spectral radiance. When acquiring imagery for the calibration, the camera exposure was set to 10 ms and the ISO was incrementally set to 300, 400, and 500. As mentioned earlier, the current to the LEDs illuminating the 1.5-m sphere was set to maximize camera DN in the green band without causing saturation. Five dark images and 60 bright images (4 azimuthal positions × 3 view angles × 5 images per position) of the sphere were acquired. As with the linearity measurements, these multiple bright images were acquired to reduce the effect of local integrating sphere surface defects and image noise. The entire image was used in this assessment. The bright images were dark frame subtracted, flat field corrected, and then temporally and spatially averaged to establish a mean DN value. If we define a calibration coefficient K that gathers the camera constants in Eq. (8) together with the ISO-dependent gain, we can rewrite Eq. (8) as

DN = K (ISO t / F²) L̄,   (10)

and the calibration coefficient can then be determined for each RGB band such that

K = DN F² / (ISO t L̄).   (11)

Using F2.0, the resulting three-point mean calibration coefficients, determined at three different ISO values, are shown in Table 10.

Table 10. Raspberry Pi camera V2 absolute radiometric calibration coefficients.
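With a calibration coefficient in hand, converting between DN and radiance is a one-line rearrangement. A sketch with wholly illustrative numbers (the DN, radiance value, and units are placeholders, not the Table 10 results), assuming a relation of the form DN = K · ISO · t · L̄ / F²:

```python
# Illustrative inputs: mean dark-subtracted, flat-fielded DN for one
# band; average spectral radiance L̄ from a calibrated spectrometer;
# exposure time (s); ISO setting; f-number. All values hypothetical.
mean_dn = 512.0
l_avg = 20.0            # average spectral radiance, assumed units
t_exp = 0.010           # exposure time in seconds
iso = 400
f_num = 2.0

# determine the band's calibration coefficient from a sphere view:
# DN = K * ISO * t * L / F^2  =>  K = DN * F^2 / (ISO * t * L)
k = mean_dn * f_num**2 / (iso * t_exp * l_avg)

# inverting the same relation converts any measured DN to radiance
l_recovered = mean_dn * f_num**2 / (iso * t_exp * k)
```

Repeating the coefficient determination at several ISO settings and averaging, as done for Table 10, guards against any residual ISO-dependent nonlinearity.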
To keep the Raspberry Pi cameras radiometrically calibrated, this type of assessment would have to be performed periodically. The frequency of this calibration would depend on the radiometric accuracy required, camera operation, and operating conditions.

10. Results

A comprehensive radiometric characterization was performed on the Raspberry Pi V2.1 camera module. The camera was found to be stable over short periods, measured in days, and performance was repeatable between multiple cameras. Camera exposure output was extremely stable after warm-up, with variation well under 1%. Raw-data format DN values were linear with ISO and exposure time over the regions investigated. Flat fielding surfaces were symmetric, indicating that the optical center of the camera was well aligned to the geometric center of the FPA. Without flat fielding corrections, raw-data format image brightness decreased when transecting from the center to the edge of the image. To qualitatively evaluate the overall effect of applying dark frame subtraction, flat fielding, and absolute radiometric calibration, a "typical" raw image was acquired at an ISO setting of 100 and an exposure time of 20 ms. The raw-data format image was demosaicked using a simple bilinear algorithm and displayed in RGB, as shown in Fig. 23. The image was then dark frame subtracted and flat fielded using the functions described above (Fig. 24) and finally radiometrically calibrated using the calibration coefficients provided in this paper (Fig. 25). The final image is a radiometrically correct image that can be converted to units of radiance. Note the improvement in color quality and brightness uniformity when all the corrections are applied.

11. Conclusion

The Raspberry Pi V2.1 camera module, operated using the Raspberry Pi 3 single-board computer, has been radiometrically calibrated to produce high-quality imagery appropriate for scientific and engineering use.
The radiometric calibration coefficients determined in this investigation were applied to imagery acquired with the V2.1 camera module to recover information in SI units of radiance. This finding opens up a wide range of scientific applications associated with computer vision, biophotonics, remote sensing, HDR imaging, and astronomy, to name a few. While the camera modules appeared stable after warm-up over the few-month investigation, the camera's value to the scientific community will be determined in part by longer-term stability. The small number of camera modules that were investigated produced consistent, repeatable results. A larger-scale investigation involving many more cameras will need to be performed before the community can feel confident that the results of this investigation can be applied to other Raspberry Pi V2.1 camera modules. It should be noted that each camera module will be slightly different, and, for some applications, each individual camera module will have to be characterized.

References

E. Upton,
“Raspberry Pi 3 on sale now at $35,” https://www.raspberrypi.org/blog/raspberry-pi-3-on-sale (accessed January 2017).
F. R. P. Bravo, Support System to Help Parkinson’s Patients Read Books, Universidad Complutense de Madrid, Madrid (2015).
E. Almeida, M. Ferruzca, and M. Tlapanco, “Design of a system for early detection and treatment of depression in elderly case study,” in Int. Symp. on Pervasive Computing Paradigms for Mental Health, 115–124 (2014).
W. Yoon et al., “6Lo bluetooth low energy for patient-centric healthcare service on the internet of things,” in Proc. of the Int. Conf. on the Internet of Things (2014).
N. M. Zainee, M. Norhayati, and K. Chellappan, “Emergency clinic multi-sensor continuous monitoring prototype using e-health platform,” in IEEE Conf. on Biomedical Engineering and Sciences (IECBES ’14), 32–37 (2014). http://dx.doi.org/10.1109/IECBES.2014.7047512
S. Fuicu et al., “Real time e-health system for continuous care,” in Proc. of the 8th Int. Conf. on Pervasive Computing Technologies for Healthcare, 436–439 (2014).
C. Hacks, “e-health sensor platform V2.0 for Arduino and Raspberry Pi,” http://www.cooking-hacks.com/documentation/tutorials/ehealth-biometric-sensor-platform-arduino-raspberry-pi-medical (accessed January 2017).
L. Chapman, C. Gray, and C. Headleand, “A sense-think-act architecture for low-cost mobile robotics,” in Research and Development in Intelligent Systems XXXII, 405–410, Springer International Publishing, Switzerland (2015).
S. Prasad et al., “Smart surveillance monitoring system using Raspberry Pi and PIR sensor,” Int. J. Comput. Sci. Inf. Technol. 5, 7107–7109 (2014).
V. Desai and A. Bavarva, “Image processing method for embedded optical peanut sorting,” Int. J. Image Graphics Signal Process. 8(2), 20–27 (2016). http://dx.doi.org/10.5815/ijigsp
E. Upton, “New 8-megapixel camera board on sale at $25,” https://www.raspberrypi.org/blog/new-8-megapixel-camera-board-sale-25/ (accessed January 2017).
Sony, “IMX219 product brief version 1.0,” https://www.electronicsdatasheets.com/manufacturers/raspberry-pi/parts/imx219 (accessed January 2017).
Colorimetry: Understanding the CIE System, John Wiley & Sons, New York (2007).
H. Yu, Y. Tang, and B. T. Cunningham, “Smartphone fluorescence spectroscopy,” Anal. Chem. 86(17), 8805–8813 (2014). http://dx.doi.org/10.1021/ac502080t
J. Spigulis, I. Oshina, and Z. Rupenheits, “Smartphone single-snapshot mapping of skin chromophores,” in Optical Tomography and Spectroscopy (2016).
J. Jensen, Remote Sensing of the Environment: An Earth Resource Perspective, Pearson Prentice Hall, Upper Saddle River, New Jersey (2007).
J. Hoot, “Photometry with DSLR cameras,” in Society for Astronomical Sciences Annual Symp., 67 (2007).
M. Mobberley, Lunar and Planetary Webcam User’s Guide, Springer Science & Business Media, London (2006).
T. Mizoguchi, “Evaluation of image sensors,” in Image Sensors and Signal Processing for Digital Still Cameras, 179–203, CRC Press, Boca Raton, Florida (2006).
J. Jiang et al., “What is the space of spectral sensitivity functions for digital color cameras?,” in IEEE Workshop on Applications of Computer Vision (WACV), 168–179 (2013). http://dx.doi.org/10.1109/WACV.2013.6475015
G. J. Verhoeven et al., “Spectral characterization of a digital still camera’s NIR modification to enhance archaeological observation,” IEEE Trans. Geosci. Remote Sens. 47(10), 3456–3468 (2009). http://dx.doi.org/10.1109/TGRS.2009.2021431
R. Ryan and M. Pagnutti, “Enhanced absolute and relative radiometric calibration for digital aerial cameras,” 81–90 (2009).
Lumileds, “DS107 LUXEON Rebel PLUS product datasheet 20140930,” http://www.lumileds.com/uploads/380/DS107-pdf (accessed January 2017).
B. C. Jacquot, B. M. Bolla, and S. Maguire, “Hybrid approach to mean-variance and photon transfer measurement,” Proc. SPIE 9481, 94810D (2015). http://dx.doi.org/10.1117/12.2176115
J. R. Janesick, Photon Transfer, SPIE Press, Bellingham, Washington (2007).
J. Holman, Heat Transfer, 9th ed., McGraw-Hill, New York (2002).
R. A. Maschal Jr. et al., “Review of Bayer pattern color filter array (CFA) demosaicing with new quality assessment algorithms,” (2010).
R. D. Fiete, Modeling the Imaging Chain of Digital Cameras, SPIE Press, Bellingham, Washington (2010).
J. R. Schott, Remote Sensing: The Image Chain Approach, Oxford University Press, New York (2007).
Biography

Mary Pagnutti is president and cofounder of Innovative Imaging and Research. She received her BE and ME degrees in mechanical engineering from Stony Brook University and has worked in the field of remote sensing and sensor calibration for over 18 years.

Robert E. Ryan is vice president and cofounder of Innovative Imaging and Research. He received his BS in physics from Hofstra University, his MS in electrophysics from Polytechnic Institute of New York, and his PhD in physics from Stony Brook University. He has worked in the field of remote sensing and sensors for over 30 years.

George Cazenavette was a 2016 summer intern at Innovative Imaging and Research. He is currently attending Louisiana Tech University in Ruston, Louisiana, studying cyber engineering and computer science.

Maxwell Gold was a 2016 summer intern at Innovative Imaging and Research. He is currently attending Washington and Lee University in Lexington, Virginia, studying mathematics.

Ryan Harlan is an imaging intern at Innovative Imaging and Research. He received a BS in chemical engineering from Washington University in Saint Louis, Missouri.