1. INTRODUCTION

The Norwegian mainland has over 25,000 km of coastline. The economy relies heavily on this resource for exports (petroleum, fisheries), food (fish farms), transport, and tourism. It is also a particularly dynamic region, given its proximity to the harsh conditions of the Arctic and its vulnerability to climate change, especially at higher latitudes. Substantial efforts and investments are made nationwide to better understand, protect, and utilize this valuable resource. Considering its expansive size, it is difficult to monitor this coastline in practice. Traditional means of measurement and data collection include the use of ships, stationary buoys, local ground stations, aircraft, and more recently, autonomous vehicles - observing under, on, and over the sea surface. These can be costly to operate and maintain. To capture the full scale of the region more quickly, satellites provide new opportunities. Traditional satellites can observe the entire coastline in a matter of several passes, and then revisit those same sites day after day (orbit dependent). This is advantageous when looking for larger-scale phenomena or patterns such as algal blooms. At the Centre for Autonomous Marine Operations and Systems (AMOS) at the Norwegian University of Science and Technology (NTNU), the goal is to observe targets such as these to provide timely and detailed information to interested stakeholders. This information can in turn aid fish farmers that rely on coastal waters for a productive stock, for example. Standard imagers can often identify algal blooms using just three wavelength channels, the colors also being visible to the human eye.
However, for monitoring ocean color phenomena, the International Ocean-Colour Coordinating Group (IOCCG) recommends that sensors observe within a spectral range of roughly 400 – 800 nm (plus possible NIR bands for atmospheric correction) and with a spectral resolution, or bandwidth, of at most 10 – 50 nm depending on wavelength.1 This additional spectral signature can give insight into the composition, and potential severity, of phenomena in the upper water column. While many past sensors have been multispectral, huge advances have been made in processing, data storage, and downlinking capability, which now enable us to look more realistically at the possibilities of hyperspectral data.2–4 Full-scale hyperspectral missions have proven this spectral advantage for various applications.5,6 These projects are multidisciplinary and rely on decades of collaborative work. As a university, we typically do not have decades to spend on satellite development, with quick personnel turnover and budgets coming from shorter-term grants. Cubesats, or small satellites based on a standard structure, have proved to be an excellent platform for technology development, rapid testing, and short-term engineering solutions.7 These small-scale technologies can push industry to better meet emerging needs of the scientific community and beyond. However, with significantly smaller platforms come challenges such as power budgets, downlinking, and even physical space. Standard satellites can weigh thousands of kilograms, while cubesats are generally less than ten.7 In recent years there has been a push to develop miniature hyperspectral imagers for cubesat and unmanned aerial vehicle (UAV) applications. Conceptual designs vary and development is a mix of academic and industry-based, see Table 1.
Our proposed imager began as a handheld hyperspectral instrument, was then modified for use on a UAV, and has now evolved to meet algal bloom monitoring requirements for the HYPerspectral Smallsat for Ocean observation (HYPSO-1) mission at NTNU. Table 1. A comparison of selected miniature hyperspectral imagers. *Note: only three wavelengths are recorded per image, but the imager can be programmed to wavelengths within the range stated.
This paper presents the optical design of our proposed imager. We have selected commercial off-the-shelf (COTS) components when possible to reduce instrument cost and to shorten the development schedule. Special design configurations are explained in order to prepare the COTS components for space, as many are not explicitly designed for that use. Finally, we show some brief trade-offs with available COTS components to illustrate possible mission customization using our design concept.

2. OPTICAL DESIGN

The hyperspectral imager (HSI) presented in this paper is optimized for performance within the given constraints of a specific mission, but leaves room for customization through component selection. It is based on a legacy of hyperspectral imager designs primarily developed for handheld and UAV operations, detailed in Sigernes et al.15–17 The optical concept remains the same, but component choice and design have been adapted for space-based observation following recommendations from the cubesat community.18,19 The design intentionally prioritizes the use of readily available COTS components, mostly from Thorlabs, Edmund Optics, and IDS Imaging Development Systems GmbH. The HSI presented is a transmissive grating imager, meaning that wavelength bands are acquired by dispersion of light through a grating.20 Light enters a series of lenses (an objective) L0 that focuses the light onto a very thin slit S. The slit permits only a thin line of the light to pass through to the next objective L1. This objective collimates the slit of light that reaches the grating G. The grating separates out the light, much like a prism, and the separated wavelengths pass through a final objective L2 at a diffraction angle β. This objective focuses the light onto a complementary metal oxide semiconductor (CMOS) image sensor detector D, where photons are converted to digital units. Internal software then handles the processing of these raw measurements in preparation for downlink.
An optical diagram of the concept is shown in Figure 1. In this system, the grating efficiency and quantum efficiency of the detector are key for exploiting the spectrum of observed targets. The first prototype of the HSI, Figure 2 (left), is made almost entirely from COTS components (the detector mount and grating holder are 3D printed). Each of these components can be selected with trade-offs. For example, a greater slit width allows more light to enter the system, positively affecting the signal-to-noise ratio (SNR), but consequently reduces spatial resolution. Many factors are also wavelength dependent. While this preliminary design meets imaging requirements, it is not designed to withstand operation in space. It has been used extensively for testing and algorithm development in the laboratory. The setup shown in Figure 2 (right) demonstrates one method of collecting images directly from a desktop. The result of one image from this transmissive grating instrument is called a spectrogram. A spectrogram is unique in that the image you see is not a spatial image; it has a spatial dimension (vertical) and a spectral dimension (horizontal), as shown in Figure 3 (left). Spatially, it is just one strip of an image. Only when combined with adjacent strips do these spectrograms form an actual image (with an additional spectral dimension) of the target region. This mapping then becomes useful for imaging large spatial features, such as algal blooms. One way of sequentially collecting spatial strips is using what is known as the pushbroom technique: the imaging slit is oriented perpendicular to the flight direction and collects spectrograms as the platform moves forward over the target.20 This is the intended use of the HSI presented. Additionally, the spectrogram must be calibrated. As shown in Figure 3 (left), pixel values only show the photon intensity response. Intensity needs to be converted to standard parameters for comparison.
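The assembly of pushbroom spectrograms into a hyperspectral data cube can be sketched as follows (a minimal illustration, not the mission software; the array dimensions match the sensor described later in this paper):

```python
import numpy as np

def assemble_cube(spectrograms):
    """Stack pushbroom spectrograms into a hyperspectral cube.

    Each spectrogram is a 2-D array with rows = spatial pixels
    (across-track) and columns = spectral bands; consecutive frames
    are adjacent along-track strips of the scene. The result has
    shape (along_track, across_track, bands)."""
    return np.stack(spectrograms, axis=0)

# Five frames of 1216 spatial x 1936 spectral pixels (the sensor's
# full resolution); zeros stand in for real measurements.
frames = [np.zeros((1216, 1936), dtype=np.uint16) for _ in range(5)]
cube = assemble_cube(frames)

# A single band across all strips forms a monochromatic image of the
# scene, e.g. one of three bands chosen for an RGB representation:
band_image = cube[:, :, 700]
```

Selecting three such band images and merging them yields the RGB visualizations described below.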
This is generally done by imaging known targets or light sources to match the peaks and dips in the pixel spectra. The intensity counts can also be mapped to radiance or reflectance values upon radiometric calibration with a known emitting light source. A calibrated spectrum of the center pixel of the spectrogram is shown in Figure 3 (right). In this way, hyperspectral data can be used to identify targets that emit, absorb, or reflect characteristic optical signatures across a broad range of wavelengths. One way to demonstrate this is by scanning across a target region with the HSI, aligning the individual spatial strips, and selecting three wavelength bands to represent the red-green-blue (RGB) channels visible to the human eye. Figure 4 is an example scan taken with the HSI setup shown in Figure 2 (right). Instead of the typical pushbroom technique, the imager is mounted on a motion control platform and rotated through the scene at a speed synchronized with the imager frame rate. Merged together, the spatial strips form the scene, and the spectral dimension is visualized with the three bands noted. In Figure 4 it is already clear to see the different information that can be obtained when more wavelength bands are utilized. And these images are only RGB representations of the full visible to near-infrared (VIS-NIR) hyperspectral signal recorded in the scan.

3. HYPSO-1 6U OCEAN COLOR MISSION

The HYPerspectral Smallsat for Ocean observation (HYPSO-1) mission is the first cubesat mission of the SmallSat Laboratory at NTNU. The goal of the mission is to study ocean color along the Norwegian coast in an effort to better understand the effects of climate change and human impact. This is especially important for the fragile ecosystems that fish farm stakeholders rely on. The satellite is a collaboration between NanoAvionics Corp.
and NTNU — the former supplying the satellite bus, electronic power supply, flight computer, and attitude determination and control system; the latter providing the hyperspectral imaging payload. Together, the systems fill a 6U cubesat that is scheduled for launch in December 2021. Furthermore, the vision of a constellation of remote-sensing-focused cubesats will constitute a space-asset platform added to the multi-agent architecture of UAVs, USVs, AUVs, and buoys that have similar ocean characterization objectives within the NTNU realm.21

3.1 Component Selection

Theoretical calculations of the optical design provide benchmark numbers, but physical constraints inside a satellite and the availability and/or machinability of integrated components also contribute to the final component selection. This section outlines candidate components selected within these bounds. Figure 5 shows an exploded view of the imager, highlighting individual components and their placement within the design. The design is flexible and allows the possibility of fine-tuning based on mission requirements. The total volume of the instrument is roughly 220 × 65 × 65 mm³. The total mass is approximately 1.3 kg (excluding the additional electronics stack, not shown).

3.1.1 Lens Objectives

As illustrated in the optical diagram, Figure 1, the design consists of three lens objectives. There are several options available from COTS manufacturers, but the Edmund Optics C-series VIS-NIR objectives were selected for this imager. These objectives are optimized for imaging at target wavelengths in the visible to near-infrared range, as advised for ocean color missions. They can be mounted to the optical train via standard C-mount threading, enabling simple replacement when testing. The objectives have a maximum focal length of 50 mm to set the focus at infinity and an adjustable aperture. The objectives in the optical train are identical except for the aperture settings: L0 and L1 are set at F/2.8 and L2 at F/2.
Fine focus can be adjusted with the use of spacer rings, but a custom slit tube design will help with careful dimensioning. For the calculations that follow, the total transmission of the three lens objectives is conservatively assumed to be T0 · T1 · T2 ≈ 0.5. Here it is assumed that each of the three lens systems has a transmission of 80% throughout the VIS-NIR wavelength range.27 In addition, it is recommended to conduct a sensitivity calibration28 to verify these numbers for each individual imager.

3.1.2 Slit and Slit Tube

The slit is a round disk with a precision cut down the center. It is available from COTS manufacturers in a limited variety of widths and lengths. The selected slit for this imager was ordered from Thorlabs with a width of w = 50 μm and a custom height of h = 7 mm. This slit width will result in a spectral bandpass (FWHM) of approximately 3.33 nm. The slit is located between the front lens, L0, and the collimator lens, L1. It is secured in a machined lens tube that is dimensioned to ensure the precise focal lengths between objectives. The slit tube also has standard C-mount threading for ease of assembly and is anodized to limit stray light within the optical train. More details on the slit tube can be found in Section 3.2.1.

3.1.3 Grating and Cassette

After passing through the front optics, the slit of light meets the transmission grating. Again, many options are commercially available when selecting a grating. The chosen grating, from Edmund Optics, is 25 mm² with 300 grooves/mm and a blaze angle of 17.5°. Its efficiency, shown in Figure 6, remains close to or above 50% for all wavelengths in the range 400 – 800 nm. As the grating diffracts the light through, the light exits at an angle of 17.5° (the blaze angle). Thus, it is critical that the remaining optical components in the light path are aligned. This is accomplished through a custom machined grating cassette.
The cassette not only allows the objectives to be mounted at correct angles relative to the grating surface, but also blocks stray light from entering the system and provides structural stability for holding the grating in place during launch and operation.

3.1.4 Detector and Housing

The detector is an industrial camera with a complementary metal oxide semiconductor (CMOS) image sensor (Sony IMX249) from IDS Imaging Development Systems GmbH.26 It was selected for its high sensitivity in the 400 – 600 nm range; see the quantum efficiency plotted in Figure 6. The sensor is 1/1.2” and has a resolution of 1936 × 1216 pixels. This specific detector is a board-level design that requires custom housing, although it is mounted to the objective L2 via the provided standard C-mount. A two-piece housing was machined from aluminum and anodized to enclose the detector boards.

3.2 Special Considerations for Space Missions

The proposed optical components are COTS and were intended for commercial use. The components are designed and manufactured to perform within the environments and constraints experienced here on Earth. This means that images are expected to be taken from a fixed (or stabilized, e.g. with a gimbal) platform, in reasonable temperatures, and it is assumed that components can be cleaned and adjusted based on user needs. When these same COTS components are sent to space, everything must be re-evaluated, since the environment and transportation counter those basic Earth-based assumptions. All modifications noted in the following sections have one main goal: to achieve the best image quality within constraints.

3.2.1 Shock, Resonance, and Vibrations

Image quality strongly depends on a fixed optical train and knowledge of the sensor response through calibration.29,30 It is difficult to account for any unknown effects of component shift or rotation after the imager has been calibrated.
Since launch can induce extreme vibrations and shock, it is very important to limit the number of parts and to securely fix any components that could rattle loose. The imager design is based on a primary structural platform (Payload Platform in Figure 7) with the optical COTS components integrated onto it. This platform is manufactured from one piece of aluminum to increase rigidity. Objectives are seated in a central groove and are secured with bracket clamps. The grating is inserted in a custom cassette and secured to the platform with screws and space-grade epoxy. Fixing the optical train is one challenge, but individual components also need consideration — especially the lens objectives and the slit assembly. The three COTS objectives have two adjustable settings each: the aperture (f/#) and the focal length. From the optical design in Section 2, it follows that both of these variables must be fixed for set operation in low Earth orbit (LEO). The adjustable aperture contains eight small aperture leaves that rotate diagonally into each other to form the desired aperture opening size, Figure 8 (center). The camera objective, L2, needs an aperture opening of 12.5 mm in diameter, which is easily obtained by simply removing the aperture leaves (no aperture = 12.5 mm opening). The front two objectives, L0 and L1, are designed for an aperture opening of 8.9 mm in diameter. This is achieved with an equally thin donut-shaped part that replaces the aperture leaves, Figure 8 (right). The donut-shaped aperture was cut with a water jet cutter and has an inner diameter of 8.9 mm and an outer diameter of 21.5 mm to fit securely inside the objective. In this way the aperture opening cannot be rattled into a different size or shape by expected vibrations. The focal length of each objective is set by twisting the front end of the outer lens tube. This adjusts the internal distance between individual lenses, in effect setting the focus.
This system normally relies on heavily greased threading for motion. To secure this system, the grease must first be removed through cleaning processes. Then the objective is set to focus on a distant target simulating infinite focus, > 50 m away, such as out the window. When the focus is verified, space-grade epoxy is applied to the threading. The original set screws are used, also with added epoxy. The COTS slit is shown in Figure 9 (center). It is mounted in a disk with a freely rotating friction ring. Space-grade epoxy is added to the friction ring to keep it fixed within the disk. The alignment of the slit to the horizontal of the field of view (FoV) is critical to spectrogram quality. Any slight rotation will affect spectral resolution and cause distortions in the image. Additionally, the distance from the slit to the objective on either side also determines, to some extent, the focus of the image. A custom tube, Figure 9 (left), was designed to set these distances and to secure the disk from rotation. The distances are built into the outer flanges of the tube length, but the anti-rotation solution is a combination of machine-precision alignment and set screws. The slit is placed on the inner flange of the slit tube and rotated until it aligns vertically with the outer top and bottom planes of the tube shown in the image. This is done with a microscope mounted to a digital drill press. Holes are then drilled on either side of the slit through both the slit disk and the inner flange of the tube. Set screws are added and a retaining ring is twisted over the screws for extra security, Figure 9 (right). When placed in the groove of the imager platform, the slit aligns vertically to the platform base. Additionally, knowing exactly the time and orientation at which an image was acquired is key for processing those images. To ensure this, a global navigation satellite system (GNSS) receiver has been incorporated with image time stamping in software.
An inertial measurement unit (IMU) and star tracker (see Figure 10) are rigidly attached to the imager platform. In this way, the imager and its positioning sensors do not move independently of one another. The platform is then attached to the satellite bus frame via space-grade dampers to reduce high-frequency vibrations and resonance and to stabilize the imager. The combination of the positioning and timing systems with the imager itself enables image processing to be linked to real targets on Earth. It is also worth noting that all electronics boards are conformal coated. This thin, transparent film primarily protects the printed circuit board (PCB) components from particle contamination, but it also provides additional structure for each of the soldered joints on the boards.

3.2.2 Thermal Environment

The intended position of the imager is inside the cubesat, with only the front lens exposed directly to the environment. The absolute temperature inside the cubesat is expected to range from –30 to 60 °C based on previous missions. The internal temperature will also fluctuate throughout the orbit based on the processing load of the on-board electronics. Both the controlling electronics and the imager itself are sensitive to thermal extremes and gradients. With a limited power budget, only passive thermal solutions were considered to mitigate potential thermal issues. First, optical train components machined from aluminum were anodized black, see Figure 11 (left). This helps to give consistency in thermal expansion along the train and enables passive thermal conductance.31 The platform is mounted to the satellite bus via dampers, Figure 11 (center). This not only helps in mechanical damping, as mentioned above, but also thermally isolates the imager from the extremes experienced by the outer frame. Finally, thermally conductive straps, Figure 11 (right), were added to link specific heat sources, such as processing chips, to heat sinks such as the platform.
In this way, excessive heat can be transported away from the source and more evenly distributed along the imager components.

3.2.3 Vacuum

Vacuum conditions in space present a challenging constraint, particularly when using COTS components. Often raw materials are not listed in product datasheets, nor are they always available upon request due to the proprietary concerns of the companies selling them. Understanding and selecting materials is critical when sending optical components to space. If poorly chosen materials, such as 3D-printed plastics, are used in the design, they will outgas particles that can condense onto surfaces such as the imager lens or detector. This can quickly degrade image quality. Extensive testing has been done on common manufactured materials, and lists of space-approved or recommended materials are readily available to the public. Obtaining quality materials and controlling manufacturing processes are important, but so is protecting components from contamination during assembly and testing. Any work with the final flight model should be done in a cleanroom or flowbench to limit exposure to contaminating particles in the environment. All components should be cleaned and immediately bagged. Ultrasonic baths, Figure 12, with varying solutions of grease-removing soap and ethanol work very well for cleaning sensitive parts. Some components, the lens objectives in particular, are too difficult to clean without disassembly. They are heavily loaded with grease for adjustable settings — an advantage when used here on Earth. It is also quite difficult to identify every material and protect the hundreds of components on the PCBs, for example. In this case, conformal coating (Figure 12) can be applied in order to reduce degradation and the chance of particles causing short circuits. Conformal coating was added to the COTS detector boards and the on-board computer that controls the HSI.
A final consideration is the abrupt transition at launch from ambient Earth conditions to vacuum at LEO. In this case, it is necessary for all enclosed volumes to vent at acceptable rates. Venting holes were drilled in both the objectives and the slit tube in accordance with European Cooperation for Space Standardization (ECSS) recommendations32 - see Figure 13. These holes are aligned with the center groove of the imager platform to limit any stray light from entering the optics through the venting holes.

4. CALCULATING EXPECTED IMAGER PERFORMANCE

The performance of a hyperspectral imager can be characterized by its spectral, spatial, and temporal resolution, its spectral range, and its field of view. In this section we demonstrate calculations of key imager parameters, such as the bandpass and signal-to-noise ratio (SNR), and summarize component trade-offs easily made with available COTS components.

4.1 Spectral Bandpass

The theoretical spectral bandpass, or Full-Width-Half-Maximum (FWHM), can be calculated as20

BP = (a · w · cos α) / (k · f1),

where a = 3333.33 nm is the groove spacing, the incident angle is α = 0, k = 1 refers to the first spectral order, f1 = 50 mm is the focal length of the front lens, and the slit width is w = 50 µm. This gives a theoretical bandpass of BP = 3.33 nm, which occupies nx = w’/Δp ≈ 9 spectral pixels, where w’ is the width of the magnified slit image and Δp is the pixel pitch. The actual instrument bandpass is more precisely determined during spectral calibration.

4.2 Signal-to-Noise Ratio

The hyperspectral imager is a passive instrument and requires illumination of its target from outside sources such as the sun. One critical performance factor is the signal-to-noise ratio (SNR), which is highly dependent on the light source, imager size, optical elements, and chosen image sensor. As an example for the design and characterization of the SNR performance as relevant to the HYPSO-1 mission, the radiance of an open water scene, i.e. a dark target, is used as the light source.
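The Section 4.1 bandpass calculation can be reproduced numerically. The sketch below uses the slit, grating, and focal-length values from the text; the 5.86 µm pixel pitch assumed for the IMX249 and the unit magnification (f1 = f2) are our assumptions:

```python
def spectral_bandpass_nm(a_nm, w_um, f1_mm, k=1, cos_alpha=1.0):
    """Theoretical FWHM bandpass: BP = a * w * cos(alpha) / (k * f1).
    Groove spacing a in nm, slit width w in micrometres, focal length
    f1 in millimetres; the w/f1 ratio is dimensionless, so the result
    is in nm."""
    return a_nm * (w_um * 1e-3) * cos_alpha / (k * f1_mm)

a_nm = 1e6 / 300                # 300 grooves/mm -> a = 3333.33 nm
bp = spectral_bandpass_nm(a_nm, w_um=50, f1_mm=50)   # ~3.33 nm

# Spectral pixels occupied by the slit image, assuming unit
# magnification and a 5.86 um pixel pitch (our assumption):
n_x = round(50 / 5.86)          # ~9 pixels, matching n_x in the text
```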
When looking from space at a dark target, such as the ocean, the signal from the surface that reaches the instrument is limited and is greatly affected by the atmosphere in the optical path. The total radiance at ToA reaching the sensor for this type of scene can typically be in the range of 0.004 – 0.06 Wm–2sr–1nm–1 in the spectrum of 400 – 800 nm, decreasing towards the red wavelengths.33 The water-leaving radiance usually constitutes a mere 10% of the total ToA radiance, as the signal sensed at ToA is significantly augmented in the optical path by atmospheric effects.34 It is recommended that a high SNR at ToA within the VIS-NIR wavelengths be obtained for ocean color applications.1,35,36 The SNR is wavelength dependent and can be calculated for a hyperspectral image pixel as

SNR = (Φ · s · Qe · Δt · bx · by) / √(Φ · s · Qe · Δt · bx · by + bx · by · (idark · Δt + e²read)),

where Φ is the total photon flux per magnified slit image corresponding to a spectral band, s is the scale factor, Qe is the quantum efficiency, Δt is the exposure time, bx and by are the number of binning operations in the spectral and spatial domains, idark is the dark current, and eread is the read-out noise. The explanation and assumptions of this equation are documented in.37 It is clear that the SNR increases approximately in proportion to the square root of exposure time (√Δt), photon flux (√Φ), and binning operations (√bx and √by). The variables needed to calculate the SNR are based on the components selected in Section 3.1 and are summarized in Table 2. Since the spectral bandpass already occupies nx pixels, we can increase the SNR by binning up to bx = nx pixels without losing bandpass. Table 2. Variables needed to calculate the SNR of a hyperspectral image pixel based on the presented optical design and the IMX249 imaging sensor. Several variables are based on assumptions and are denoted with a *. Square pixels are assumed.
Table 3 shows calculated values of the SNR applying the source radiance based on33 and the variables in Table 2. This shows that binning may be necessary at higher wavelengths in order to achieve a higher SNR. Binning can be adjusted in software on-board to help with downlinking data, or it can be accomplished in post-processing once on the ground. Table 3. Signal-to-noise ratio (SNR) calculations of the HSI using a Sony IMX249 CMOS. Exposure time is set at Δt = 50 ms, i.e. a frame rate of about 20 frames per second. Slit width is w = 50 μm at f/2.8. bx = by = 1 corresponds to a spectrogram binning window of 1 × 1 pixels, which does not utilize the full image bandpass BP ≈ 3.33 nm. bx = 9 and by = 1 has a window of 9 × 1 pixels and matches the bandpass BP ≈ 3.33 nm. Correspondingly, bx = 9 and by = 9 has a window of 9 × 9 pixels, where BP ≈ 3.33 nm and 9 spatial pixels are merged.
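The SNR model of Section 4.2 and its square-root binning behaviour can be explored with a minimal sketch. The photon-flux, scale-factor, and quantum-efficiency inputs below are placeholders, not the Table 2 values:

```python
import math

def snr(phi, s, qe, dt, bx=1, by=1, i_dark=0.0, e_read=0.0):
    """SNR of one hyperspectral pixel: photo-electrons are
    phi * s * qe * dt per pixel, summed over a bx x by binning window;
    the noise is shot noise plus per-pixel dark-current and read-out
    contributions."""
    signal = phi * s * qe * dt * bx * by
    noise = math.sqrt(signal + bx * by * (i_dark * dt + e_read ** 2))
    return signal / noise

# Placeholder inputs: 1e5 photons/s flux, scale factor 1, 60% quantum
# efficiency, 50 ms exposure, dark and read noise set to zero so the
# shot-noise limit is visible.
base = snr(1e5, 1.0, 0.6, 0.05)
binned = snr(1e5, 1.0, 0.6, 0.05, bx=9)   # 9-pixel spectral binning
# In the shot-noise limit the binning gain is sqrt(bx) = 3.
```

With realistic dark-current and read-noise values the gain falls slightly below the square-root scaling, which is why Table 3 must be computed with the full set of Table 2 variables.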
4.3 Spatial Resolution

Using an f0 = 50 mm focal length lens for the front optics at an altitude of H = 500 km, the instantaneous optical resolution in the flight direction is δx = (H · w)/f0 = 500 m at nadir. The optical resolution normal to the flight direction is δy = (H · h)/(f0 · 1200) = 58.6 m. The total swath width is δy · 1200 = 70.32 km. If by = 9 pixels are binned along the sensor height, then an area of 500 × 527 m² is obtained at ground level. Binning bx = 9 pixels in the spectral direction will result in an image bandpass of 3.33 nm. Note that more binning operations will increase the image throughput and do not affect the spatial resolution in the image. For a hyperspectral imager on a platform moving at speed υ, the spatial resolution of a frame is Δx = δx + υ · Δt in the flight direction. For Δt = 50 ms and a speed of υ = 7.6 km/s, the along-track spatial resolution is Δx = 500 m + 7.6 km/s · 50 · 10–3 s = 880 m per frame. Instead of remaining fixed at nadir, the camera can be smoothly rotated backwards (by a satellite slew maneuver) relative to the flight direction. With the camera rotating at an angular velocity of ω = –0.7025 deg/s with respect to the flight direction, the along-track ground distance between pixels is (υ + ω · H) · Δt = 49 m, while Δy = δy = 58.6 m stays the same. This effect can be used in post-processing of images for higher spatial resolution, such that an image pixel could theoretically have a spatial resolution of up to 49 m × 58.6 m.

4.4 Trade-offs

The physical parameters of the design allow components to be changed out depending on the specific requirements of the mission. Next, we discuss hardware design trade-offs for the slit and image sensor, and the effects of binning pixels on performance.

4.4.1 Slit Size

A slit width of w = 50 μm was chosen for HYPSO-1 because it provides a good compromise between SNR, spectral resolution, and along-track spatial resolution, as indicated in Table 4.
This allows a higher frame rate, i.e. a shorter exposure time, to obtain good along-track spatial resolution and sufficient SNR. For a fixed focal length and f-number, a smaller slit width w provides better instantaneous optical resolution, along-track spatial resolution, and spectral resolution, but lower SNR. In practice, achieving sufficient SNR then requires a longer exposure time, so the actual spatial resolution will be worse than it may have been designed for. Table 4. Comparison of performance for different slit widths and number of pixels occupied per BP at fixed f-number f/2.8, slit height h = 7 mm, and exposure time Δt = 30 ms.
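The Section 4.3 resolution arithmetic can be reproduced directly. This sketch uses the altitude, slit, focal-length, and speed values quoted in the text, with 1200 spatial pixels assumed across the slit image:

```python
def footprint_m(H_m, size_m, f_m):
    """Instantaneous ground footprint at nadir for an aperture of the
    given size: H * size / f (pinhole projection)."""
    return H_m * size_m / f_m

H = 500e3                               # orbit altitude, m
dx = footprint_m(H, 50e-6, 50e-3)       # slit width -> 500 m along-track
swath = footprint_m(H, 7e-3, 50e-3)     # slit height -> ~70 km across-track
dy = swath / 1200                       # per spatial pixel, ~58.3 m

# Along-track smear accumulated during one 50 ms frame at an orbital
# speed of 7.6 km/s: frame resolution = optical footprint + v * dt.
dt, v = 0.05, 7.6e3
frame_dx = dx + v * dt                  # 500 m + 380 m = 880 m per frame
```

The per-pixel value of ~58.3 m is marginally smaller than the 58.6 m quoted in the text, which presumably reflects slightly different rounding of the slit or sensor dimensions.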
A larger slit height, h, will provide a larger swath width, but is limited by the image sensor dimensions, i.e. the magnified slit image h’ cannot be taller than the image sensor plane. For example, to match the sensor height exactly, the slit height should be h = 7.1 mm, providing a swath width of 71 km.

4.4.2 Image Sensor

The choice of image sensor impacts the SNR performance of the imager across the available spectral range. It is also important to have high saturation capacity, low noise, and high quantum efficiency such that the SNR is sufficient. For a given target, the saturation capacity sets a limit on exposure time (i.e. a higher frame rate must be set), and larger aperture and slit dimensions could overfill the image sensor with light if the target source is too bright. A comparison between the chosen Sony IMX249 sensor and the CMV2000 CMOS sensor (UI-3360CP-NIR) is shown in Table 5. The latter gives better performance in the NIR region but is worse in the blue-green part of the spectrum, with overall lower quantum efficiency and higher noise. If the SNR in the camera system is already high, the capability of a high frame rate is desirable for obtaining high spatial resolution, bearing in mind that this may also increase the data size considerably. Table 5. Comparison of performance for different sensors at exposure time Δt = 30 ms, bx = 9, by = 1, fixed slit width w = 50 µm, slit height h = 7 mm, and f-number f/2.8.
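The saturation constraint described above can be sketched as a quick feasibility check. All numbers here are illustrative assumptions, not datasheet values for either sensor:

```python
def max_exposure_s(electron_rate, full_well):
    """Longest exposure before a pixel saturates, given the
    photo-electron generation rate (e-/s) and the sensor's full-well
    capacity (e-)."""
    return full_well / electron_rate

# Illustrative assumptions: a bright target generating 5e5 e-/s in a
# pixel, and a ~32 ke- full-well capacity.
dt_max = max_exposure_s(5e5, 32_000)    # 0.064 s
frame_rate_min = 1.0 / dt_max           # the imager must run at >= ~15.6 fps
```

A sensor with a larger full-well capacity relaxes this limit, which is one reason saturation capacity appears alongside noise and quantum efficiency in the sensor trade-off.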
4.4.3 Binning

If binning more than bx = nx pixels, the spectral bandpass becomes worse, e.g. bx = 2 · nx results in approximately 2 · BP. If a smaller slit width is desired, it is possible to artificially increase SNR by binning more than nx pixels at the cost of spectral bandpass. For example, a slit width of w = 10 µm gives a bandpass of BP = 0.67 nm that occupies nx = 2 pixels. Binning with bx = 18 then gives an effective bandpass of 9 · 0.67 nm = 6.03 nm while artificially increasing the SNR at 500 nm from 62.8 to 266.4 at exposure time Δt = 30 ms, while keeping the along-track optical resolution at δx = 100 m. Binning along the sensor height, by, also increases SNR proportionally with √by, at the cost of spatial resolution, but is not recommended in missions such as HYPSO-1 unless a reduction in data size is necessary.

5. CONCLUSIONS

In conclusion, we have demonstrated how a transmission-grating hyperspectral imager can be miniaturized and adapted for use on CubeSat missions. Our design prioritizes COTS components where possible, and we include recommendations for modifying those components for operation in space. The imager is designed to observe 124 spectral bands in the visible to near-infrared wavelength range of 400 – 800 nm with a bandpass of 3.33 nm. We plan to achieve a 49 m × 60 m ground resolution with a swath of 70 km from a sun-synchronous orbit at 500 km. The hyperspectral imager payload weighs a total of 1.6 kg (excluding electronics) and is designed to fit in about 4U of a 6U CubeSat. This particular design allows for customization through component selection. We present several trade-off designs and compare their expected SNR; the trade-offs illustrated are changes in slit dimensions and image sensor. Further work will address trade-off analysis, calibration, and correction of the spectrograms.
ACKNOWLEDGMENTS

The work is partly sponsored by the Research Council of Norway through the Centre of Excellence funding scheme, project number 223254 (NTNU-AMOS), and the IKTPLUSS project MASSIVE, project number 270959 (NFR), as well as the Norwegian Space Agency and the European Space Agency (PRODEX - 4000132515). The authors would especially like to thank the HYPSO mechanics team, including Martine Hjertenæs, Glenn Angell, Tord Hansen Kaasa, Tuan Tran, Henrik Galtung, Amund Gjersvik, João Fortuna, Marie Henriksen, and Evelyn Honoré-Livermore, for their help with design work, countless prototypes, and documentation of the process. Additionally, special thanks to NanoAvionics Corp. and SMAC Groupe MontBlanc Technologies for their design help and open communication throughout the process.

REFERENCES
[1] IOCCG, "Mission Requirements for Future Ocean-Colour Sensors," Reports of the International Ocean-Colour Coordinating Group, No. 13, Dartmouth, Canada (2012).
[2] Lubac, B., Loisel, H., Guiselin, N., Astoreca, R., Felipe Artigas, L., and Mériaux, X., "Hyperspectral and multispectral ocean color inversions to detect Phaeocystis globosa blooms in coastal waters," Journal of Geophysical Research 113(C6) (2008). https://doi.org/10.1029/2007JC004451
[3] Clark, M. L., "Comparison of simulated hyperspectral HyspIRI and multispectral Landsat 8 and Sentinel-2 imagery for multi-seasonal, regional land-cover mapping," Remote Sensing of Environment 200, 311–325 (2017). https://doi.org/10.1016/j.rse.2017.08.028
[4] Marshall, M. and Thenkabail, P., "Advantage of hyperspectral EO-1 Hyperion over multispectral IKONOS, GeoEye-1, WorldView-2, Landsat ETM+, and MODIS vegetation indices in crop biomass estimation," ISPRS Journal of Photogrammetry and Remote Sensing 108, 205–218 (2015). https://doi.org/10.1016/j.isprsjprs.2015.08.001
[5] Strese, H. and Maresi, L., "Technology developments and status of hyperspectral instruments at the European Space Agency," (2019). https://doi.org/10.1117/12.2533129
[6] Transon, J., d'Andrimont, R., Maugnard, A., and Defourny, P., "Survey of hyperspectral Earth observation applications from space in the Sentinel-2 context," Remote Sensing 10(3) (2018). https://doi.org/10.3390/rs10020157
[7] Sweeting, M. N., "Modern small satellites – changing the economics of space," Proceedings of the IEEE, 343–361 (2018).
[8] Esposito, M. and Zuccaro Marchi, A., "In-orbit demonstration of the first hyperspectral imager for nanosatellites," Proc. SPIE 11180, International Conference on Space Optics — ICSO 2018 (2019). https://doi.org/10.1117/12.2535991
[9] Zuccaro Marchi, A., Maresi, L., and Taccola, M., "Technologies and designs for small optical missions," Proc. SPIE 11180, International Conference on Space Optics — ICSO 2018 (2019). https://doi.org/10.1117/12.2535990
[10] Hill, S. L. and Clemens, P., "Miniaturization of high spectral spatial resolution hyperspectral imagers on unmanned aerial systems," Proc. SPIE 9482, Next-Generation Spectroscopic Technologies VIII (2015).
[11] Headwall Photonics, "Hyperspectral sensors: Micro-Hyperspec® datasheet," (2020). https://cdn2.hubspot.net/hubfs/145999/June%202018%20Collateral/MicroHyperspec0418.pdf
[12] Praks, J., Niemelä, P., Näsilä, A., Kestilä, A., Jovanovic, N., Riwanto, B., Tikka, T., Leppinen, H., Vainio, R., and Janhunen, P., "Miniature spectral imager in-orbit demonstration results from Aalto-1 nanosatellite mission," IGARSS 2018 – 2018 IEEE International Geoscience and Remote Sensing Symposium, 1986–1989 (2018).
[13] Malan, D. F., Palum, A., Dean, B., Rotteveel, J., and Stanton, D., "Scalable CubeSat Earth observation payloads, born from international collaboration," 69th International Astronautical Congress (IAC) (2018).
[14] Bender, H. A., Mouroulis, P., Dierssen, H. M., Painter, T. H., Thompson, D. R., Smith, C. D., Gross, J., Green, R. O., Haag, J. M., Van Gorp, B. E., and Diaz, E., "Snow and water imaging spectrometer: mission and instrument concepts for Earth-orbiting CubeSats," Journal of Applied Remote Sensing 12(4) (2018). https://doi.org/10.1117/1.JRS.12.044001
[15] Sigernes, F., Syrjasuo, M., Storvold, R., Fortuna, J., Grotte, M. E., and Johansen, T. A., "Do it yourself hyperspectral imager for handheld to airborne operations," Optics Express 26(5), 6021–6035 (2018). https://doi.org/10.1364/OE.26.006021
[16] Sigernes, F., Lorentzen, D. A., Heia, K. H., and Svenøe, T., "Multipurpose spectral imager," Applied Optics 39(18), 3143–3153 (2000). https://doi.org/10.1364/AO.39.003143
[17] Sigernes, F., "Hyper spectral imaging," AGF-331: Remote Sensing and Spectroscopy – lecture notes 1, University Centre in Svalbard – UNIS (2007).
[18] Johnstone, A., "CP-CDS-R14 CubeSat Design Specification (1U – 12U)," The CubeSat Program, Cal Poly SLO, Rev. 14 (2020).
[19] European Space Agency – TEB, "TEC-SY/128/2013/SPD/RW Tailored ECSS Engineering Standards for In-Orbit Demonstration CubeSat Projects," Issue 3 (2016).
[20] Eismann, M. T., (2012). https://doi.org/10.1117/3.899758
[21] Faria, M., Pinto, J., Py, F., Fortuna, J., Dias, H., Martins, R., Leira, F., Johansen, T. A., Sousa, J., and Rajan, K., "Coordinating UAVs and AUVs for oceanographic field experiments: challenges and lessons learned," 2014 IEEE International Conference on Robotics and Automation (ICRA), 6606–6611 (2014).
[22] Edmund Optics, "50mm C Series VIS-NIR Fixed Focal Length Lens: STEP download," (2020). https://www.edmundoptics.com/p/50mm-c-series-vis-nir-fixed-focal-length-lens/22385/
[23] Thorlabs, "S50RD – Ø1in Mounted Slit, 50 µm wide, 3 mm long: STEP download," (2020). https://www.thorlabs.com/thorproduct.cfm?partnumber=S50RD
[24] Thorlabs, "SM1A10 – Adapter with external SM1 threads and internal C-mount threads, 4.1 mm spacer: STEP download," (2020). https://www.thorlabs.com/thorproduct.cfm?partnumber=SM1A10
[25] Edmund Optics, "300 Grooves, 25mm Sq, 17.5° Blaze Angle Grating: technical images," (2020). https://www.edmundoptics.com/p/300-grooves-25mm-sq-175deg-blaze-angle-grating/10092/
[26] IDS Imaging Development Systems GmbH, "UI-5261SE Rev. 4: UI-5261SE-M-GL_Rev_4 data sheet," (2021). https://en.ids-imaging.com/store/ui-5261se-rev-4.html
[27] Sigernes, F., Dyrland, M., Peters, N., Lorentzen, D. A., Svenøe, T., Heia, K., Chernouss, S., Deehr, C. S., and Kosch, M., "The absolute sensitivity of digital colour cameras," Optics Express 17(22) (2009). https://doi.org/10.1364/OE.17.020211
[28] Sigernes, F., Holmes, J., Dyrland, M., Lorentzen, D., Chemous, S., Svinyu, T., Moen, J., and Deehr, C., "Absolute calibration of optical devices with a small field of view," Journal of Optical Technology 74(10) (2007). https://doi.org/10.1364/JOT.74.000669
[29] Høye, G., Løke, T., and Fridman, A., "Method for quantifying image quality in push-broom hyperspectral cameras," Optical Engineering 54(5) (2015). https://doi.org/10.1117/1.OE.54.5.053102
[30] Skauli, T., "Feasibility of a standard for full specification of spectral imager performance," Proc. SPIE 10213, Hyperspectral Imaging Sensors: Innovative Applications and Sensor Standards 2017, 33–44 (2017).
[31] Montemurro, L., Zanetti, F., Simone, A., Schillaci, T., Castronuovo, M., Katsir, D., and Shabtai, K., "Black coatings for combined stray light and thermal passive management for the challenging environmental conditions of Solar Orbiter," Proc. SPIE 11180, International Conference on Space Optics — ICSO 2018 (2019). https://doi.org/10.1117/12.2536097
[32] European Cooperation for Space Standardization, "ECSS-E-ST-33-01C Rev. 2: Mechanisms," (2019).
[33] Gao, B.-C., Montes, M. J., Ahmad, Z., and Davis, C. O., "Atmospheric correction algorithm for hyperspectral remote sensing of ocean color from space," Applied Optics 39, 887–896 (2000). https://doi.org/10.1364/AO.39.000887
[34] Franz, B. A., Bailey, S. W., Werdell, P. J., and McClain, C. R., "Sensor-independent approach to the vicarious calibration of satellite ocean color radiometry," Applied Optics 46, 5068–5082 (2007). https://doi.org/10.1364/AO.46.005068
[35] Qi, L., Lee, Z., Hu, C., and Wang, M., "Requirement of minimal signal-to-noise ratios of ocean color sensors and uncertainties of ocean color products," Journal of Geophysical Research: Oceans 122, 2595–2611 (2017).
[36] Hu, C., Feng, L., Lee, Z., Davis, C. O., Mannino, A. G., McClain, C. R., and Franz, B. A., "Dynamic range and sensitivity requirements of satellite ocean color sensors: learning from the past," Applied Optics 51(25), 6045–6062 (2012). https://doi.org/10.1364/AO.51.006045
[37] Skauli, T., "Sensor noise informed representation of hyperspectral data, with benefits for image storage and processing," Optics Express 19, 13031–13046 (2011). https://doi.org/10.1364/OE.19.013031