Falcon Neuro: an event-based sensor on the International Space Station
Open Access | Published 31 August 2022
Matthew G. McHarg, Richard L. Balthazor, Brian J. McReynolds, David H. Howe, Colin J. Maloney, Daniel O’Keefe, Rayomand Bam, Gabriel Wilson, Paras Karki, Alexandre Marcireau, Gregory Cohen
Abstract

We report on the Falcon neuro event-based sensor (EBS) instrument that is designed to acquire data from lightning and sprite phenomena and is currently operating on the International Space Station. The instrument consists of two independent, identical EBS cameras pointing in two fixed directions, toward the nominal forward direction of flight and toward the nominal Nadir direction. The payload employs stock DAVIS 240C focal plane arrays along with custom-built control and readout electronics to remotely interface with the cameras. To predict the sensor’s ability to effectively record sprites and lightning, we explore temporal response characteristics of the DAVIS 240C and use lab measurements along with reported limitations to model the expected response to a characteristic sprite illumination time-series. These simulations indicate that with appropriate camera settings the instrument will be capable of capturing these transient luminous events when they occur. Finally, we include initial results from the instrument, representing the first reported EBS recordings successfully collected aboard a space-based platform and demonstrating proof of concept that a neuromorphic camera is capable of operating in the space environment.

1.

Introduction

Event-based sensors (EBS) differ from traditional imaging systems in that each pixel contains electronics that allow for asynchronous operation. They offer several advantages over their traditional frame-based counterparts, including lower power requirements, lower data volumes, wider dynamic range, and shorter latency.1 These features make EBS very attractive for space-based observations of high-speed phenomena. To date, the fastest space-based imagers are those used for lightning observations by the GOES Geostationary Lightning Mapper.2

The potential to record high-frequency events at a suitably low data bandwidth has applications in observing lightning at the tops of thunderstorms and sprites, a related electrical discharge phenomenon that occurs in the mesosphere between altitudes of 50 and 90 km. Sprites are a type of transient luminous event (TLE) and are typically associated with the changing electric field above particularly powerful positive cloud-to-ground (CG) lightning strikes.3,4,5,6

The Falcon Neuro instrument was designed and built by faculty and cadets at the United States Air Force Academy (USAFA) and researchers at the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University (WSU). It contains two EBS, together with science acquisition and data-handling controllers. Falcon Neuro was launched on December 21, 2021, as part of the Department of Defense Space Test Program Houston-7 (STP-H7) mission to the International Space Station. The installed Falcon Neuro experiment is shown in Fig. 1.

Fig. 1

The Falcon Neuro experiment on the STP-H7 platform is outlined in red. The two cameras (Nadir and RAM) are angled away from the viewer. Picture courtesy of NASA.


The layout of this paper is as follows: a description of the instrument and its functionality is given in Sec. 2. Section 3 continues with an in-depth look at the laboratory testing and the expected temporal response of the pixels to different temporal stimuli. Section 4 combines these test results with a simple finite-response model of each pixel to simulate the expected EBS output for different temporal changes within a pixel's field of view. Finally, in Sec. 5, we present the first on-orbit results from Falcon Neuro.

2.

Description of Instrument

Within its focal array, an EBS has independent, asynchronous photoreceptors that operate without a fixed exposure time.7,1 Each pixel contains analog circuitry that responds to changes in photocurrent as shown in Fig. 2 and only logs data (“registers an event”) when it detects a change in illumination that exceeds a user selectable threshold value.

Fig. 2

A schematic detailing the circuitry of a single EBS pixel. A logarithmic transimpedance amplifier converts photocurrent to voltage (Vp), increasing the dynamic range of the camera. Vp is amplified by the switched-capacitor amplifier, and the output, Vdiff, is compared to the pixel’s memorized reset level. An event is registered only if Vdiff increases or decreases from the reset level by a predefined threshold. If the change is an increase in illumination, the sensor registers an ON event; if the change is a decrease in illumination, it registers an OFF event. With many such pixels combined in a focal array, an EBS is capable of operating without fixed exposure times, with relatively short refractory periods, and with low data rates. Picture from Lichtsteiner et al.8


An EBS outputs data in a format commonly referred to as address-event representation (AER).1 This format generally consists of an N×4 matrix of data whose rows populate every time an event is registered. The columns consist of a location stamp indicating where in the focal array the event was registered (the “x” and “y” position), the time at which the event was registered, and a binary value for the sign of the illumination change (e.g., a “1” for a positive illumination change and a “0” for a negative illumination change).
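As an illustration, a short hypothetical AER stream can be represented as an array with one row per event; the values below are made up for illustration and are not a Falcon Neuro data product.

import numpy as np

# Hypothetical AER stream: one row per event, columns = (x, y, timestamp in us, polarity).
# Polarity 1 = ON (illumination increase), 0 = OFF (illumination decrease).
events = np.array([
    [ 12,  57, 1000203, 1],   # pixel (12, 57) brightened at t = 1.000203 s
    [ 13,  57, 1000241, 1],
    [ 12,  58, 1000250, 0],   # a neighboring pixel dimmed 47 us later
    [118,  90, 1003812, 1],
])

x, y, t_us, polarity = events.T
on_events = int((polarity == 1).sum())   # 3 ON events in this toy stream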

In contrast, traditional frame-based cameras integrate the irradiance at every pixel in their focal arrays over a specified time interval set by the camera’s exposure time. In this way, they output a whole image of the scene: an array of values giving the total irradiance incident upon each pixel (often separated into different “channels” that correspond to sensitivity at different wavelengths, e.g., red, green, and blue).

Figure 3 illustrates a simple example of a single EBS pixel’s response to a time-varying illumination signal. The pixel responds to changes in the log of the photocurrent (illumination) by reporting an event when this value increases or decreases by a user-defined threshold since the pixel’s last reset. After registering an event, each pixel waits for a finite refractory period in order to prevent highly dynamic regions of a scene from dominating the camera’s readout bus. The pixel is again able to respond to subsequent signal changes after this refractory period. As shown in the plot, the refractory period can result in significant signal loss between consecutive events if the input changes very quickly. This value is theoretically adjustable from tens of μs to tens of ms. In Sec. 4, we explore this limitation (among others) that may influence Falcon Neuro output when recording sprite events.
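The per-pixel behavior sketched in Fig. 3 can be summarized in a few lines of Python. This is only a conceptual sketch of the threshold-and-refractory logic described above; the threshold and refractory values are placeholders, not flight settings.

import math

def pixel_events(times_s, intensities, threshold=0.33, refractory_s=100e-6):
    """Generate (time, polarity) events from an intensity time series.

    An event fires when the log intensity moves +/- `threshold` away from the
    memorized reset level; the pixel then ignores changes for `refractory_s`.
    """
    events = []
    reset_level = math.log(intensities[0])
    ready_time = times_s[0]
    for t, i in zip(times_s, intensities):
        if t < ready_time:                              # still inside the refractory dead time
            continue
        delta = math.log(i) - reset_level
        if abs(delta) >= threshold:
            events.append((t, 1 if delta > 0 else 0))   # 1 = ON, 0 = OFF
            reset_level = math.log(i)                   # memorize the new reset level
            ready_time = t + refractory_s
    return events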

Fig. 3

Simple pixel description: Each pixel reports an event when the change in the log of its incident illumination level increases or decreases by a predetermined threshold value since its last reset level. Green dots represent ON (increasing illumination) events, and red dots represent OFF (decreasing illumination). After each event, a finite refractory period is applied before the pixel reset, preventing highly dynamic regions of the scene from dominating readout bandwidth. As evidenced by the fast rising edge, signal loss during the refractory period may be significant when the input signal changes quickly.


2.1.

Falcon Neuro Elements

Falcon Neuro comprises two independent, heavily modified commercial-off-the-shelf (COTS) DAVIS 240C7 EBS focal plane arrays with custom optics and electronics (see Fig. 4). The DAVIS 240C is an EBS system developed and sold by iniVation. The two sensors are on fixed mounts: the Ram camera points forward toward the limb of the Earth in the direction of the ISS’s travel, and the Nadir camera points down toward the Earth, offset 20 deg to starboard to look past part of the ISS.

Fig. 4

Cutaway showing the main Falcon Neuro components. A, power board; B, data manager board; C, FPGA camera board; D, RAM camera assembly; and E, Nadir camera assembly.


Each camera assembly contains a DAVIS 240C7 focal plane array and a COTS Fujinon HF2518-12M-F1.8 25-mm focal length lens with a fixed focus at infinity. The cameras are controlled by an Intel Cyclone V SoC field-programmable gate array (FPGA) on a camera board developed by members of the Falcon Neuro team at WSU (see Fig. 5). Science data are stored in dedicated static random access memory (SRAM), while bias control and preprocessing algorithms reside on an embedded nonvolatile multimedia card (eMMC). Functionally, the board is divided into two parts: an FPGA fabric and an ARM processor. The FPGA is responsible for interfacing with the event-based camera focal planes: fetching the events, time-stamping them as they arrive from the cameras, and storing them in SRAM. The ARM processor is responsible for the command and data handling interface with the instrument manager unit.

Fig. 5

Block diagram showing the major components of the WSU-provided camera board.


The pixels in each EBS operate asynchronously and are governed by 20 independent bias currents that set various pixel parameters (e.g., contrast thresholds and refractory period). Bias and readout commands are sent from the ground to the manager unit and are then passed to the camera unit FPGA. The FPGA then sets the biases and pixel parameters through low-level calls to the EBS. The asynchronous event stream contains the row coordinate, column coordinate, and polarity of the detected illumination change from a single pixel. The time-stamping component in the FPGA appends a timestamp with microsecond resolution to each event. This timestamped data is then passed through a hardware-implemented noise filter, which discards background and spurious noise events using an on-orbit configurable neighbor-support algorithm. The filtered event data is then stored in a hardware buffer.
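The flight filter implementation itself is not detailed here; the following is only a generic sketch of a neighbor-support (background-activity) filter of the kind described above, in which an event is kept only if a nearby pixel fired recently. The function name, parameter names, and time window are our own placeholders.

import numpy as np

def neighbor_support_filter(events, dt_us=10_000, width=240, height=180):
    """Keep an event only if a pixel in its 3x3 neighborhood fired within dt_us.

    `events` is an (N, 4) array of (x, y, t_us, polarity) rows sorted by time.
    This is a generic background-activity filter, not the flight algorithm.
    """
    last_t = np.full((height, width), -np.inf)
    keep = np.zeros(len(events), dtype=bool)
    for k, (x, y, t, _) in enumerate(events):
        x, y = int(x), int(y)
        y0, y1 = max(0, y - 1), min(height, y + 2)
        x0, x1 = max(0, x - 1), min(width, x + 2)
        keep[k] = (t - last_t[y0:y1, x0:x1]).min() <= dt_us   # any recent neighbor?
        last_t[y, x] = t                                      # update the timestamp map
    return events[keep]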

External command and data handling (see Fig. 6) is performed independently of science acquisition activities by a 32-bit RISC-based microcontroller communicating with the host (STP-H7) data interface computer for experiments (DICE) via an RS-422 differential serial link. This allows uninterrupted data acquisition on the camera(s) for up to ~180 s at a time (limited by the on-board SRAM size and the download bandwidth), independently of state-of-health and housekeeping activities. Commands are issued in real time from the payload operations control center (POCC) at USAFA and propagate via bent-pipe through NASA’s Huntsville Operations Support Center using Telescience Resource Kit (TReK) applications. Science data are streamed back to the POCC via bent-pipe from the FPGA-controlled SRAM after each data acquisition, as there is insufficient bandwidth for a real-time downlink of science data. Housekeeping telemetry is received and displayed at the POCC in real time (except during science data streaming).

Fig. 6

Block diagram of the Falcon Neuro command and data handling path.


2.2.

Optical Field of View

The DAVIS 240C focal planes contain 240×180 pixels with an 18.5-μm pitch.7 The 25-mm Fujinon lens thus gives an instantaneous field of view (IFOV) of 7.4×10^−4 rad/pixel. The overall field of view of the Nadir camera is 10.17×7.63 deg; at a range of 420 km (the nominal altitude of the ISS), the spatial IFOV is 310 m/pixel and the total ground footprint is approximately 73×55 km. The Nadir field of view has the long axis (10.17 deg) oriented along the nominal direction of flight of the ISS, while the Ram camera has the long axis oriented vertically.
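For reference, the quoted IFOV and ground sampling distance follow directly from the pixel pitch, focal length, and orbital altitude:

\mathrm{IFOV} = \frac{\text{pixel pitch}}{\text{focal length}} = \frac{18.5\,\mu\mathrm{m}}{25\,\mathrm{mm}} = 7.4\times10^{-4}\ \mathrm{rad/pixel},
\qquad
7.4\times10^{-4}\ \mathrm{rad} \times 420\,\mathrm{km} \approx 310\ \mathrm{m/pixel}.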

Figure 7 shows the field of view of the two Falcon Neuro cameras (yellow) as well as the field of view of the NASA HD camera (blue) aboard the ISS.9 The Falcon Neuro Nadir camera is offset 20 deg to starboard of the direction of flight to avoid the ISS structure. The Falcon Neuro Ram camera is oriented to have the vertical field of view approximately centered on the limb of the earth to facilitate observations of sprites. For visual reference of the FOV size, the U.S. Great Lakes are pictured near the lower right edge of the rendering, and the distinctive shape of Lake Michigan is prominent. Despite small perturbations in the attitude of the ISS, the model used to generate Fig. 7 has proved invaluable for planning daily operations of the two Falcon Neuro cameras.

Fig. 7

Field of view of Falcon Neuro (yellow) and the NASA HD camera (blue) on the ISS.


3.

Testing Temporal Response

Prior to launch, testing was conducted to ensure that the Falcon Neuro payload could perform its mission. Because the goal of the mission is to observe sprites and lightning, testing focused on confirming the DAVIS 240C sensor’s ability to detect fast, large illumination changes; its performance is discussed in this section. Based on the pixel bandwidth tests reported in Ref. 1, we attempted to determine the frequency response limitations of the sensor. Effectively, the DVS photoreceptor acts as a low-pass filter, so the pixel does not respond instantaneously, and extremely fast illumination changes are suppressed in the sensor’s output. We conducted frequency response testing across multiple pixel bandwidth-bias (BW) settings, which are adjusted by changing the photoreceptor bias current.10

To determine the pixel bandwidth, we used the camera to record an LED driven by a function generator with a sinusoidal stimulus waveform. We used Java tools for AER (jAER),11 a publicly available Java-based software package designed to interface with DVS cameras, to visualize and record the data. At low frequencies, pixels exposed to the LED stimulus generated multiple events of each type per stimulus cycle. To determine an effective corner frequency, we recorded the sensor output for 30 s at discrete frequencies ranging from 100 to 1000 Hz, increasing the stimulus frequency in 10-Hz increments between recordings. We repeated the measurement for slow (BW = 3, an arbitrary quantification of the bandwidth-bias current setting), fast (BW = 7), and extremely fast (BW = 8) photoreceptor bandwidth-biases. The default bias setting for the Falcon Neuro instrument is BW = 5, midway between the results shown for BW = 3 and BW = 7.

Figure 8 shows the normalized event rate (NER) of the DAVIS 240C as a function of stimulus frequency. The NER is a measure of the total number of events generated over a set period of time and is calculated with Eq. (1). Effectively, this is the total number of measured events, N_e, normalized by the number of stimulus cycles, N_c. The second line of the equation shows how N_c is calculated: f is the stimulus frequency of a particular recording (Hz), ΔT is the duration of the recording (s), and N_p is the number of pixels exposed to the stimulus. As a result, the NER has units of events per pixel per stimulus cycle

Eq. (1)

\mathrm{NER} = \frac{N_e}{N_c}, \qquad N_c = f\,\Delta T\,N_p.
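As an illustration of Eq. (1), a single recording reduces to an NER value along these lines; all numbers and variable names here are hypothetical.

# Hypothetical numbers for one 30-s recording at a 200-Hz stimulus.
n_events = 180_000      # total ON + OFF events from the pixels under the LED spot
f_hz     = 200.0        # stimulus frequency (Hz)
delta_t  = 30.0         # recording duration (s)
n_pixels = 25           # pixels exposed to the stimulus

n_cycles = f_hz * delta_t * n_pixels   # Nc = f * dT * Np = 150,000 pixel-cycles
ner = n_events / n_cycles              # = 1.2 events per pixel per stimulus cycle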

Fig. 8

Effect of BW on NERs.


Figure 8 shows that as the stimulus frequency increases past a corner frequency, the NER decreases. Two horizontal bars show the locations at which the NER=100% and NER=50%. As shown, a lower BW setting results in a lower corner frequency.

4.

Modeling Expected Results

In addition to verifying that the performance of the flight unit is comparable to that of the stock DAVIS 240C camera, the tests described in Sec. 3 also have practical importance for the goal of detecting sprites and lightning. As seen in Fig. 8, the maximum stimulus frequency to which a pixel can respond varies significantly with the user-defined photoreceptor bias. Although it is not possible to extract a precise corner frequency from these measurements, the value ranges from 100 Hz to nearly 400 Hz for the photoreceptor biases that were tested.

Lightning and sprites can occur on extremely fast timescales, as low as hundreds of μs for sprites12 and 300 ms for lightning.2 We therefore need a reasonable estimate of how the sensor will respond to these events. This is crucial not only for predicting whether the sensor will be able to detect these phenomena at all, but also for understanding and interpreting the output when these events do occur. Reference 13 discusses many of the practical limitations of event cameras and describes a pixel model as part of a simulation tool, v2e, which incorporates the most critical of these limitations, including the finite, intensity-dependent photoreceptor bandwidth and the refractory period. We included both of these parameters in our predictive model because they are the two sensor biases that most directly influence the temporal response of the sensor.

Using the corner frequency results of Fig. 8 and the intensity-dependent bandwidth model described by Hu et al.,13 we modeled the expected temporal response of the DAVIS 240C to predict sensor output in response to a recorded time-varying signal from a sprite. The sprite illumination time-series was collected by a Phantom V2011 high-speed camera with a low-persistence image intensifier, recorded at a 100-kHz frame rate. To generate synthetic events, v2e first performs a logarithmic compression of the input frames (consistent with the DAVIS camera’s logarithmic photoreceptor) and then passes the signal through an intensity-dependent, first-order low-pass filter. The filter is intensity-dependent because the speed of the photoreceptor is proportional to the photocurrent itself under typical illumination levels and bias settings.

To determine an appropriate baseline parameter for the low-pass filter, we examined the curves shown in Fig. 8. We compare the responses of two different photoreceptor bandwidth-biases, BW = 3 (slower response) and BW = 7 (faster response), which yield corner frequencies of ~100 and ~300 Hz, respectively. These estimates were obtained by observing the frequency at which the NER drops below one event per pixel per stimulus cycle and are displayed as the solid vertical lines on the plot. While these frequencies may seem quite low for our application, given that sprites occur on much faster time scales, they do not correspond directly to the reciprocal of a maximum effective frame rate because event cameras are not governed by a frame-based architecture. However, we can relate these estimated corner frequencies to a model parameter τ, which represents the time constant of a first-order low-pass filter. Strictly speaking, this is not truly a time constant, because its value decreases with increasing signal intensity. Nonetheless, we can obtain a baseline value using the relationship

Eq. (2)

\tau = \frac{1}{2\pi f},
where f is the observed corner frequency.
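For reference, substituting the two estimated corner frequencies into Eq. (2) gives

\frac{1}{2\pi \cdot 100\,\mathrm{Hz}} \approx 1.6\ \mathrm{ms}
\qquad\text{and}\qquad
\frac{1}{2\pi \cdot 300\,\mathrm{Hz}} \approx 530\ \mu\mathrm{s}.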

These baseline time constant parameters of 1.6 ms and 530 μs correspond, respectively, to the slow (BW = 3) and fast (BW = 7) bandwidth-biases. Because this bias can be controlled over a wide range of values, we evaluated both a fast and a slow setting for comparison. Although the faster bias setting gives a better signal response, it can also adversely affect noise rates throughout the entire array, so the slower setting may be preferable in some cases. The time constant corresponds to the approximate time it takes the photoreceptor output to rise to 63% (1 − 1/e) of the final value of a step input. Because the speed of the photoreceptor also increases with photocurrent,13 to appropriately leverage the pixel model we must relate this parameter to the expected illumination level on orbit. The corner frequency measurements were made under indoor (laboratory) lighting conditions, roughly midway between a dark sky and a typical sprite maximum illumination. We therefore assume that the baseline value of τ corresponds roughly to the midpoint of the sprite time-series intensity dynamic range. Since the maximum digital number (DN) readout of the time-series is only about a factor of 34 above the noise floor, we multiply the baseline τ value by a factor of 17 to obtain a dark-level value, τ_dark. Then, to model the response, the instantaneous time constant is obtained by scaling τ_dark by the ratio of the noise-floor DN (DN_dark) to the instantaneous readout (DN_inst) according to the relationship

Eq. (3)

\tau_{\mathrm{inst}} = \tau_{\mathrm{dark}}\,\frac{DN_{\mathrm{dark}}}{DN_{\mathrm{inst}}}.

The bandwidth of the low-pass filter increases monotonically with the DN readout, consistent with the model described in Ref. 13. The low-passed version of the signal is then obtained by taking the log of the DN readout at each time step, inserting the τ_inst values into a first-order low-pass ordinary differential equation, and time-stepping the response

Eq. (4)

Y(t + dt) = \frac{dt}{\tau_{\mathrm{inst}}}\big[\log\big(DN(t)\big) - Y(t)\big] + Y(t),
where Y(t) is the response at time t, dt is the time step between samples, and the initial condition Y(0) is the mean value of log(DN_dark).
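The intensity-dependent low-pass step of Eqs. (3) and (4) can be sketched compactly in Python. This is our own minimal re-implementation of the model described above, not the v2e code; the function and argument names are ours, and the values passed in would come from the measurements described earlier.

import numpy as np

def lowpass_response(dn, dt, tau_dark, dn_dark):
    """First-order, intensity-dependent low-pass of log(DN), per Eqs. (3) and (4).

    dn       : array of digital-number readouts, one per time step
    dt       : time step between samples (s)
    tau_dark : time constant at the dark (noise-floor) level (s)
    dn_dark  : noise-floor DN used to scale the instantaneous time constant
    """
    y = np.empty(len(dn), dtype=float)
    y[0] = np.log(dn_dark)                        # initial condition: dark level
    for k in range(len(dn) - 1):
        tau_inst = tau_dark * dn_dark / dn[k]     # Eq. (3): faster at higher DN
        y[k + 1] = (dt / tau_inst) * (np.log(dn[k]) - y[k]) + y[k]   # Eq. (4)
    return y

Events could then be extracted from the returned log-domain response with the same threshold-and-refractory logic sketched in Sec. 2.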

Figure 9 depicts simulation results obtained for both fast and slow photoreceptor bandwidth-bias settings in response to a recorded sprite time-series. Horizontal grid lines are included for reference and represent a nominal contrast threshold criterion of 0.33 log change units. The single-pixel illumination corresponding to the sprite event rises and falls in under 10 ms, with nearly all of the signal change on the rising edge occurring in under 1 ms. Even considering the low-pass filtering effect of the photoreceptor, the simulation predicts that the faster bandwidth-bias setting still captures 97% of the true signal change, whereas the slower bandwidth-bias setting results in a peak signal amplitude that reaches only 76% of the input stimulus.

Fig. 9

Simulation of DVS events for a single pixel in response to a sprite observation. The solid black trace is the light intensity time series at the pixel. The blue and red traces are the pixel response for two photoreceptor bandwidth-bias settings. The green dots represent individual events. The refractory period (here, 100 μs) creates a dead time, indicated by the red boxes, during which the contrast threshold criterion (for an event trigger) is not applied.


Another parameter that should be considered when predicting sensor output to fast stimuli is the refractory period. After an event is triggered, the pixel’s change amplifier is held in reset for a finite, adjustable refractory period before it can respond to subsequent changes. This results in a “dead” time after each event during which contrast changes are not measured and thus no further events can be triggered.1 The refractory period is adjusted globally on the sensor by setting the refractory bias current. Actual refractory period measurements have not been reported, but the parameter is theoretically adjustable over a range of tens of μs to tens of ms. To illustrate the potential impact, we consider a 100-μs refractory period in Fig. 9. During the fast rising edge of the sprite event, the 100-μs refractory dead time reduces the number of events by a factor of two to three in this simulation. This suggests that the refractory period should always be set to the minimum (fastest) value; however, a longer refractory period has the benefit of reducing noise event rates that might otherwise saturate the sensor. Further refinement will be needed on orbit to determine optimal settings under different lighting conditions.

These predictions are important for refining bias settings during on-board collection periods. To maximize the amount of information encoded into the event stream, the results indicate that it may be advantageous to pair a fast refractory period with a slightly slower bandwidth, so that the low-passed version of the input signal continues changing over a longer time period, thus reducing the effect of the refractory period. As seen in the two response curves of Fig. 9, however, the value of the bandwidth-bias must be carefully selected because the slower response also reduces the peak amplitude of the signal reaching the pixel’s change detection logic. Additionally, because the photoreceptor speed depends strongly on illumination, and photometrically calibrated bandwidth measurements have not been reported for event cameras, the appropriate balance of these settings will need to be continuously refined across multiple collections from the flight instrument.

Fig. 10

Falcon Neuro data example from January 24, 2022, 20:10:28 (UT). An HD image from the ISS (full color) and a Falcon Neuro image (false color) are overlaid on a Google Earth view. The eastern Honduras coastline and clouds can be seen in both images. Note that the Falcon Neuro image is displaced 20 deg to starboard in order to avoid imaging structure on the ISS.


5.

Initial Results

Falcon Neuro, on the STP-H7 platform, was installed on the ISS Columbus module and, after functional checkout, commenced operations on January 11, 2022. Daily data collection concentrated on determining the pointing of the Nadir and Ram cameras with respect to the model shown in Fig. 7 (because the ISS does not fly with its coordinate system, and hence Falcon Neuro’s boresights, precisely aligned with the velocity vector and local vertical). Falcon Neuro passed over Central America on January 24, 2022, at 20:10:28 (UT), recording data that were reconstituted into the image shown in Fig. 10. The town of Limón, Honduras, located on the eastern coast at latitude 15.89 deg north and longitude 85.59 deg west, is marked in Fig. 10.

The motion of the ISS relative to the Earth’s surface can be locally considered a translation. Hence, high-contrast ground features such as clouds, coasts, or lakes are detected sequentially by each row of pixels. We use this high degree of redundancy to reduce the impact of noise and to generate a panorama from the camera events.

We change the pixel coordinates of individual events to cancel the visual ground speed, as described in Ref. 14. This effectively shifts and coadds data corresponding to a feature on the surface of the Earth, similar in concept to frame stacking in a video file. Counting the number of events per pixel after the transformation yields a matrix (or gray-level image). Sensor-wide noise flashes, easily visible in time-window renders of the events, contribute comparatively little to such images. For example, a feature that triggers an event on every row should theoretically contribute 240 events to the count, whereas three flashes contribute only three events.
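A minimal sketch of this shift-and-count procedure is given below. It is our own illustration, assuming the ground motion is a pure translation at a known speed in pixels per second along one sensor axis; the function and variable names are placeholders.

import numpy as np

def event_panorama(events, v_px_per_s, width=240, height=180):
    """Accumulate motion-compensated events into a panorama of per-pixel counts.

    `events` is an (N, 4) array of (x, y, t_us, polarity) rows; the visual
    ground motion is approximated as a translation of v_px_per_s along x.
    """
    x, y, t_us, _ = events.T.astype(float)
    t_s = (t_us - t_us[0]) * 1e-6
    x_comp = x + v_px_per_s * t_s                 # shift events to cancel ground motion
    cols = np.round(x_comp - x_comp.min()).astype(int)
    rows = y.astype(int)
    pano = np.zeros((height, int(cols.max()) + 1), dtype=np.int32)
    np.add.at(pano, (rows, cols), 1)              # count events per panorama pixel
    return pano

Here v_px_per_s is the visual ground speed in pixels per second, estimated as described in the next paragraph.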

The gradient descent described in Ref. 14 diverges in our case. We instead estimate speed by evaluating the position of an easily recognizable object at two distinct times, avoiding the optimization process altogether.

The generated images have a high dynamic range. We render multiple versions of each image with different gamma corrections to reveal different levels of detail. Our image postprocessing pipeline consists of the following steps (a minimal code sketch follows the list):

  • 1. Linearly normalize the image

    Eq. (5)

    p'(x,y) = \frac{p(x,y) - p_{\min}}{p_{\max} - p_{\min}},
    where p(x,y) is the pixel value at coordinates (x,y) and p_min (respectively, p_max) is the minimum (respectively, maximum) pixel value.

  • 2. Apply a gamma correction

    Eq. (6)

    p''(x,y) = p'(x,y)^{1/n},
    where n is 1 (no correction), 2, or 4.

  • 3. Apply a colormap. We map individual gray level values to colors to increase perceptual differences.
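The sketch below strings these three steps together; it is illustrative only, and the colormap choice is arbitrary rather than the one used for the published figures.

import numpy as np
from matplotlib import cm

def postprocess(pano, n=2):
    """Normalize, gamma-correct, and colormap an event-count image (Eqs. 5 and 6)."""
    p = pano.astype(float)
    p = (p - p.min()) / (p.max() - p.min())   # Eq. (5): linear normalization to [0, 1]
    p = p ** (1.0 / n)                        # Eq. (6): gamma correction, n = 1, 2, or 4
    return cm.viridis(p)                      # step 3: map gray levels to RGBA colors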

Figure 10 demonstrates the successful initial checkout of the Nadir camera. The data volume required by Falcon Neuro for the 30-s acquisition shown in Fig. 10 is 22 times smaller than that required by the ISS HD camera. Additionally, Fig. 8 shows that Falcon Neuro has a temporal response of ~5 ms, while the ISS HD camera has a temporal response of 33 ms (30 fps). Similar checkout operations are currently underway for the Ram camera. Once both cameras have been checked out, operations will commence to observe both lightning and sprites. We expect the Nadir camera to be most useful for observing the propagation of lightning across cloud tops. Lightning data can be compared with the Geostationary Lightning Mapper (GLM) on the NOAA GOES satellites2 and the Lightning Imaging Sensor aboard the ISS.15 Sprite observations from the Ram camera can be compared with TLE measurements made by the Atmosphere-Space Interactions Monitor (ASIM) aboard the ISS.16 Both lightning and TLE observations can be greatly aided by remote detection of lightning by systems such as the World Wide Lightning Location Network.17

6.

Summary

EBSs are attractive for space-based high-speed optical observations of events such as lightning and sprites. The Falcon Neuro instrument currently operating on the ISS is designed to observe lightning and mesospheric sprite events with characteristic timescales as short as 100 μs. Temporal response dynamics were tested in a laboratory setting to predict expected camera results. Initial on-orbit results are extremely promising, and future work will compare them with results from both ground-based and other space-based experiments.

Acknowledgments

STP-H7 Falcon Neuro was integrated and flown by the Department of Defense Space Test Program. We acknowledge AGI, an Ansys company, and its Educational Alliance Program for donating its Systems Tool Kit (STK) software which was used to generate the field of view schematic used in Fig. 7. Figure 1 is used courtesy of NASA.

References

1. P. Lichtsteiner, C. Posch, and T. Delbruck, “A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor,” IEEE J. Solid-State Circuits 43(2), 566–576 (2008). https://doi.org/10.1109/JSSC.2007.914337

2. S. D. Rudlosky et al., “Initial geostationary lightning mapper observations,” Geophys. Res. Lett. 46(2), 1097–1104 (2019). https://doi.org/10.1029/2018GL081052

3. W. A. Lyons, “Sprite observations above the US High Plains in relation to their parent thunderstorm systems,” J. Geophys. Res.: Atmos. 101(D23), 29641–29652 (1996). https://doi.org/10.1029/96JD01866

4. V. P. Pasko, “Red sprite discharges in the atmosphere at high altitude: the molecular physics and the similarity with laboratory discharges,” Plasma Sources Sci. Technol. 16, 13–29 (2007). https://doi.org/10.1088/0963-0252/16/1/S02

5. V. P. Pasko, “Recent advances in theory of transient luminous events,” J. Geophys. Res.: Space Phys. 115(6), 1–24 (2010). https://doi.org/10.1029/2009JA014860

6. D. D. Sentman et al., “Preliminary results from the Sprites94 aircraft campaign: 1. Red sprites,” Geophys. Res. Lett. 22(10), 1205–1208 (1995). https://doi.org/10.1029/95GL00583

7. C. Brandli et al., “A 240 × 180 130 dB 3 μs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49(10), 2333–2341 (2014). https://doi.org/10.1109/JSSC.2014.2342715

8. O. Chanrion et al., “The modular multispectral imaging array (MMIA) of the ASIM payload on the International Space Station,” Space Sci. Rev. 215, 28 (2019). https://doi.org/10.1007/s11214-019-0593-y

9. C. F. S. Runco and C. Getteau, “High Definition Earth Viewing (HDEV) final report,” https://eol.jsc.nasa.gov/ESRS/HDEV/files/HDEV-Final-Report_20200715.pdf

10. B. McReynolds et al., A Test Methodology and Physics-Based Pixel Model for Neuromorphic Imagers, AFIT Center for Technical Intelligence (2019).

11. T. Delbruck, “Java tools for Address-Event Representation (AER) neuromorphic vision and audio sensor processing,” https://github.com/SensorsINI/jaer

12. H. C. Stenbaek-Nielsen et al., “High-speed observations of sprite streamers,” Surv. Geophys. 34(6), 769–795 (2013). https://doi.org/10.1007/s10712-013-9224-4

13. Y. Hu et al., “v2e: from video frames to realistic DVS events,” in Conf. Comput. Vision and Pattern Recognit. (2021).

14. G. Gallego, H. Rebecq, and D. Scaramuzza, “A unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation,” in IEEE/CVF Conf. Comput. Vision and Pattern Recognit. (2018).

15. H. Christian et al., “The lightning imaging sensor,” in NASA Conf. Publ., 746–749 (1999).

16. S. S. Kristensen et al., “Atmosphere-space interactions monitor, instrument and first results,” in IGARSS 2019 - 2019 IEEE Int. Geosci. and Remote Sens. Symp., 8811–8814 (2019). https://doi.org/10.1109/IGARSS.2019.8900301

17. E. H. Lay et al., “Introduction to the world wide lightning location network (WWLLN),” Geophys. Res. Abstr. 7, 02875 (2005).

Biography

Matthew G. McHarg received his PhD from the University of Alaska Fairbanks in 1993. He is currently a director of the Space Physics and Atmospheric Research Center at the United States Air Force Academy. His research interests include the study of electrical discharges and the use of commercial-off-the-shelf technology for space.

Brian J. McReynolds is a PhD student at ETH Zurich under the Institute of Neuroinformatics, Sensors Group. He received his BS degree in electrical engineering from the University of Virginia in 2007, his MS degree in engineering management from Oklahoma State University in 2014, and his MS degree in engineering physics from Air Force Institute of Technology in 2019. His research interests are centered around exploring DVS for use in scientific and space-based applications.

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Matthew G. McHarg, Richard L. Balthazor, Brian J. McReynolds, David H. Howe, Colin J. Maloney, Daniel O’Keefe, Rayomand Bam, Gabriel Wilson, Paras Karki, Alexandre Marcireau, and Gregory Cohen "Falcon Neuro: an event-based sensor on the International Space Station," Optical Engineering 61(8), 085105 (31 August 2022). https://doi.org/10.1117/1.OE.61.8.085105
Received: 10 March 2022; Accepted: 2 August 2022; Published: 31 August 2022