This article describes a novel approach to the real-time visualization of 3D imagery obtained from a 3D millimeter-wave (MMW) scanning radar. The MMW radar system employs a spinning antenna to generate a fan-shaped scanning pattern of the entire scene. The beams formed this way provide all-weather 3D distance measurements (range/azimuth display) of objects as they appear on the ground. The beam width of the antenna and its side lobes are optimized to produce the best possible resolution even at distances of up to 15 km. To create a full 3D data set, the fan pattern is tilted up and down with the help of a controlled stepper motor. For our experiments we collected data at 0.1-degree increments while using both bi-static and mono-static antenna arrangements. The data collected formed a stack of range-azimuth images in the shape of a cone. This information is displayed using our high-end 3D visualization engine, which is capable of displaying high-resolution volumetric models at 30 frames per second. The resulting 3D scenes can then be viewed from any angle and subsequently processed to integrate, fuse, or match them against real-life sensor imagery or 3D model data stored in a synthetic database.
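As a rough illustration of how such a cone-shaped stack of range-azimuth scans can be turned into a 3D point set for a visualization engine, the Python sketch below converts polar radar returns into Cartesian coordinates. The array layout, angles and threshold are assumptions for illustration only, not the system's actual processing chain.

import numpy as np

def polar_stack_to_points(stack, ranges_m, azimuths_deg, tilts_deg, threshold):
    """Convert a stack of range-azimuth radar images (one per antenna tilt)
    into Cartesian 3D points; stack[k, i, j] is the return intensity at
    tilt k, azimuth i, range bin j."""
    pts = []
    for k, el in enumerate(np.radians(tilts_deg)):
        for i, az in enumerate(np.radians(azimuths_deg)):
            for j, r in enumerate(ranges_m):
                if stack[k, i, j] < threshold:
                    continue                          # keep returns above the noise floor
                x = r * np.cos(el) * np.sin(az)       # east
                y = r * np.cos(el) * np.cos(az)       # north
                z = r * np.sin(el)                    # up
                pts.append((x, y, z, stack[k, i, j]))
    return np.array(pts)

stack = np.random.rand(4, 8, 16)                      # 4 tilts x 8 azimuths x 16 range bins
pts = polar_stack_to_points(stack, np.linspace(100, 1600, 16),
                            np.linspace(-30, 30, 8), [0.0, 0.1, 0.2, 0.3], 0.9)
print(pts.shape)                                      # (N, 4): x, y, z, intensity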
Helicopters are widely used for operations close to terrain, such as rescue missions; therefore, all-weather capabilities are highly desirable. To minimize or even avoid the risk of collision with terrain and obstacles, Synthetic Vision Systems (SVS) could be used to increase situational awareness. In order to demonstrate this, helicopter flights have been performed in the area of Zurich, Switzerland.
A major component of an SVS is the three-dimensional (3D) depiction of terrain data, usually presented on the primary flight display (PFD). The degree of usability in low-level flight applications is a function of the terrain data quality. Today's most precise, large-scale terrain data are derived from airborne laser scanning technologies such as LIDAR (light detection and ranging). A LIDAR dataset with a resolution of 1 m, provided by Swissphoto AG, Zurich, was used.
The depiction of high-resolution terrain data consisting of 1 million elevation posts per square kilometer on a laptop in an appropriate area around the helicopter is challenging. To facilitate the depiction of the high-resolution terrain data, it was triangulated applying a 1.5 m error margin, making it possible to depict an area of 5 x 5 km around the helicopter.
To position the camera correctly in the virtual scene, the SVS had to be supplied with accurate navigation data. Highly flexible and portable measurement equipment that can easily be used in most aircraft was designed.
Demonstration flights were successfully executed in September and October 2005 in the Swiss Alps, departing from Zurich.
The integrity monitor for synthetic vision systems provides pilots with a consistency check between stored Digital Elevation Models (DEM) and real-time sensor data. This paper discusses the implementation of the Shadow Detection and Extraction (SHADE) algorithm in reconfigurable hardware to increase the efficiency of the design. The SHADE algorithm correlates data from a weather radar and a DEM to determine occluded regions of the flight-path terrain. This process of correlating the weather radar and DEM data occurs in two parallel threads, which are then fed into a disparity checker. The DEM thread is broken up into four main sub-functions: 1) synchronization and translation of the aircraft's GPS coordinates to the weather radar, 2) mapping range bins to coordinates and computing depression angles, 3) mapping state assignments to range bins, and 4) shadow-region edge detection. This correlation must be done in real time; therefore, a hardware implementation is ideal due to the amount of data to be processed. The hardware of choice is the field programmable gate array (FPGA) because of its programmability, reusability, and computational ability. Assigning states to each range bin is the most computationally intensive process, and it is implemented as a finite state machine (FSM). The results of this work focus on the implementation of the FSM.
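To make the state-assignment step concrete, the Python sketch below marks each range bin along a single radial as visible or shadowed using a standard radial-viewshed test on DEM elevations. It is a software stand-in for the idea behind the state machine, with made-up geometry; it is not the SHADE hardware design.

import math

def shadow_states(aircraft_alt_m, bin_spacing_m, terrain_elev_m):
    """Mark each range bin along one radial as VISIBLE or SHADOW: a bin is
    visible when its line-of-sight angle from the aircraft exceeds the steepest
    angle seen at any nearer bin (a standard radial-viewshed test)."""
    states, max_tan = [], -math.inf
    for i, elev in enumerate(terrain_elev_m):
        dist = (i + 1) * bin_spacing_m
        tan_angle = (elev - aircraft_alt_m) / dist   # negative means below the horizon
        if tan_angle > max_tan:
            states.append("VISIBLE")
            max_tan = tan_angle
        else:
            states.append("SHADOW")                  # occluded by nearer terrain
    return states

# A ridge at the fourth bin shadows the lower terrain behind it
print(shadow_states(1000.0, 100.0, [200, 250, 300, 900, 400, 350, 300]))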
In the past, Jeppesen has built and distributed worldwide terrain models for several Terrain Awareness and Warning System (TAWS) avionics clients. The basis for this model is the 30 arc-second NOAA GLOBE dataset, with higher-resolution data used where available (primarily in the US). On a large scale, however, these terrain models have a 900 m (3000 ft) resolution, with errors that can add up to 650 m (1800 ft) vertically. This limits the use of these databases to current TAWS systems; they are deemed unusable for other aviation applications, such as SVS displays, that require a higher-resolution and more accurate terrain model.
To overcome this deficiency, the goal of this project was to develop a new worldwide terrain database providing a consistent terrain model that can be used by current (TAWS) and future applications (e.g., 2D moving maps, vertical situation displays, SVS).
The basis for this project is the recently released SRTM data from NGA, which provides a higher-resolution, more accurate and consistent worldwide terrain model. The dataset, however, has holes in peak and valley regions, deserts, and very flat areas due to unrecoverable data-capture issues. These voids have been filled using new topography algorithms developed in this project.
The error distribution of this dataset has been analyzed in relation to topography, acquisition method and other factors. Based on this analysis, it is now possible to raise the terrain by a certain amount such that it can be guaranteed that only a certain fraction of real terrain points is higher than the data stored in the terrain database. Using this method, databases for designated confidence levels of 10^-3, 10^-5 and 10^-8, called TerrainScape levels 1-3, have been generated.
The final result of the project is a worldwide terrain database with quality factors sufficient for use in a broader range of civil aviation applications.
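A minimal Python sketch of that idea: given an empirical sample of vertical errors, the stored elevations are raised by the quantile offset that leaves only the target exceedance probability of true terrain above the database. The error model and numbers are invented for illustration; this is not Jeppesen's actual procedure.

import numpy as np

def raise_terrain(dem, vertical_errors, exceedance_prob):
    """Raise a DEM by a constant offset taken from the empirical error
    distribution so that roughly exceedance_prob of true terrain points are
    expected to lie above the stored elevations."""
    offset = np.quantile(vertical_errors, 1.0 - exceedance_prob)
    return dem + max(offset, 0.0), offset

errors = np.random.normal(0.0, 8.0, 100_000)       # toy vertical error samples (m)
for p in (1e-3, 1e-5):                             # cf. the TerrainScape levels
    _, off = raise_terrain(np.zeros((10, 10)), errors, p)
    print(f"exceedance {p:g}: raise terrain by {off:.1f} m")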
Successful integration and the ultimate adoption of 3D Synthetic Vision (SV) systems into the flight environment as a cockpit aid to pilot situational awareness (SA) depend highly on overcoming two primary engineering obstacles: 1) storing on-board terrain databases with sufficient accuracy, resolution and coverage area; and 2) achieving real-time, deterministic, accurate and artifact-free 3D terrain rendering. These two requirements work against each other and create a significant challenge for deployable SV systems that has not been adequately addressed by the many visual-simulation (VisSim) terrain-rendering approaches. Safety-critical SV systems for flight-deployed use, ground control of flight systems such as UAVs, and accurate mission-rehearsal systems require a solution to these challenges.
This paper describes the TerraMetrics TerraBlocks method of storing wavelet-encoded terrain datasets and a tightly-coupled 3D terrain-block rendering approach. Large-area terrain datasets are encoded using a wavelet transform, producing a hierarchical quadtree, powers-of-2 structure of the original terrain data at numerous levels of detail (LODs). The entire original raster terrain mesh (e.g., DTED) is transformed using either lossless or lossy wavelet transformation and is maintained in an equirectangular projection. The lossless form retains all original terrain mesh data integrity in the flight dataset. A side-effect benefit of terrain data compression is also achieved.
The TerraBlocks run-time 3D terrain-block renderer accesses arbitrary, uniform-sized blocks of terrain data at varying LODs, depending on scene composition, from the wavelet-transformed terrain dataset. Terrain data blocks retain a spatially-filtered depiction of the original mesh data at the retrieved LOD. Terrain data blocks are processed as discrete objects and placed into spherical world space, relative to the viewpoint. Rendering determinacy is achieved through terrain-block LOD management and spherical rendering geometry.
This research was pursued in part under contract to the NASA Langley Research Center, Aviation Safety Program (AvSP). A successful working proof-of-principle demonstration of the TerraBlocks 3D terrain-rendering method has been produced.
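As a minimal illustration of the wavelet/LOD idea, the Python sketch below performs one level of a 2D Haar transform on a terrain tile: the LL band is the next-coarser level of detail, and the detail bands allow exact (lossless) reconstruction of the finer level. It shows the principle only and is not the TerraBlocks encoder.

import numpy as np

def haar_level(z):
    """One level of a 2D Haar transform on a square terrain tile whose side is
    a power of 2: LL is the next-coarser LOD, LH/HL/HH hold the detail needed
    to restore the finer level exactly."""
    a, b = z[0::2, 0::2], z[0::2, 1::2]
    c, d = z[1::2, 0::2], z[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, (lh, hl, hh)

def haar_inverse(ll, details):
    """Rebuild the finer level from the coarse LOD plus its detail bands."""
    lh, hl, hh = details
    z = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    z[0::2, 0::2] = ll + lh + hl + hh
    z[0::2, 1::2] = ll + lh - hl - hh
    z[1::2, 0::2] = ll - lh + hl - hh
    z[1::2, 1::2] = ll - lh - hl + hh
    return z

tile = np.random.rand(8, 8) * 100.0                # toy 8 x 8 elevation tile
ll, det = haar_level(tile)                         # coarser LOD for distant blocks
assert np.allclose(haar_inverse(ll, det), tile)    # lossless round trip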
While vast numbers of image-enhancing algorithms have already been developed, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to evaluate six algorithms that were specifically designed to enhance the contrast of digital images. The image-enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato [1], and the multiscale Retinex algorithm described in Rahman, Jobson and Woodell [2]. The methodology used in the assessment has been developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. Objective performance metrics, response time and error rate, were used to compare algorithm-enhanced images against two baseline conditions: original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm. Observers searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of the study and future directions are discussed.
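For readers unfamiliar with the simplest of these families, the Python sketch below implements global histogram equalization of an 8-bit grayscale image: grey levels are remapped through the normalized cumulative histogram. It is a generic textbook version, not one of the specific implementations evaluated in the study.

import numpy as np

def histogram_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image: grey levels
    are remapped through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)          # per-level lookup table
    return lut[img]

low_contrast = np.clip(np.random.normal(120, 10, (64, 64)), 0, 255).astype(np.uint8)
print(low_contrast.std(), histogram_equalize(low_contrast).std())   # spread increases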
Many airborne imaging systems contain two or more sensors, but they typically only allow the operator to view the output of one sensor at a time. Often the sensors contain complementary information which could be of benefit to the operator, and hence there is a need for image fusion. Previous papers by these authors have described the techniques available for image alignment and image fusion. This paper discusses the implementation of a real-time image alignment and fusion system in a police helicopter. The need for image fusion and the requirement of fusion systems to pre-align images is reviewed. The techniques implemented for image alignment and fusion are then discussed. The hardware installed in the helicopter and the system architecture are described, as well as the particular difficulties of installing a 'black box' image fusion system with existing sensors. The methods necessary for field-of-view matching and image alignment are described. The paper concludes with an illustration of the performance of the image fusion system as well as some feedback from the police operators who use the equipment.
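To show the kind of operations such a unit performs, the Python sketch below warps one sensor image with a pre-calibrated affine transform (standing in for field-of-view matching and alignment) and then blends it with the second image using a weighted sum. The transform values are invented, and a real system would run this on dedicated hardware rather than in NumPy.

import numpy as np

def warp_affine(img, A, t, out_shape):
    """Resample img through the inverse of the affine map p -> A @ p + t using
    nearest-neighbour sampling, so it lines up with the reference sensor."""
    ys, xs = np.indices(out_shape)
    src = np.linalg.inv(A) @ np.stack([xs.ravel() - t[0], ys.ravel() - t[1]])
    sx, sy = np.round(src).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros(xs.size)
    out[valid] = img[sy[valid], sx[valid]]
    return out.reshape(out_shape)

def fuse(reference, aligned, w=0.5):
    """Weighted-sum fusion of the reference image and the warped second sensor."""
    return w * reference + (1.0 - w) * aligned

ir = np.random.rand(240, 320)                       # toy thermal frame
A = np.array([[1.02, 0.0], [0.0, 1.02]])            # slight scale difference (FOV match)
t = np.array([4.0, -2.0])                           # boresight offset in pixels
fused = fuse(np.random.rand(240, 320), warp_affine(ir, A, t, (240, 320)), 0.6)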
Helicopters sometimes strike power lines even under good weather conditions. Helicopter pilots can have difficulty spotting such long, thin obstacles. We are developing an obstacle detection and collision warning system for civil helicopters in order to solve this problem. A color camera, an infrared (IR) camera and a millimeter-wave (MMW) radar are employed as sensors. This paper describes the results of several flight tests that show good enhancement of radar detection of power lines beyond an 800 m range. Additionally, we present the processed fusion images that can assist pilots in recognizing the danger posed by power lines.
Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted, forward-looking, underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, achieving this goal requires several different stages of processing, including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
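The core of the enhancement and fusion steps can be sketched in a few lines of Python: single-scale Retinex subtracts the log of a Gaussian-blurred surround from the log of the image, and registered frames are combined with a weighted sum. The flight system runs a multiscale variant on a fixed-point DSP; this floating-point version with arbitrary parameters only illustrates the operations.

import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=80.0):
    """Single-scale Retinex: log of the image minus log of a Gaussian-smoothed
    surround estimate, then stretched back to 8 bits."""
    img = img.astype(np.float64) + 1.0                  # avoid log(0)
    surround = gaussian_filter(img, sigma)
    r = np.log(img) - np.log(surround)
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)     # stretch to [0, 1]
    return (r * 255).astype(np.uint8)

def weighted_fusion(frame_a, frame_b, w=0.5):
    """Weighted-sum fusion of two registered, enhanced frames."""
    return (w * frame_a.astype(np.float64) + (1.0 - w) * frame_b.astype(np.float64)).astype(np.uint8)

frame = (np.random.rand(240, 320) * 60 + 40).astype(np.uint8)   # dim test frame
fused = weighted_fusion(single_scale_retinex(frame), frame, 0.7)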
In the future, modern airliners will use enhanced and synthetic vision systems (ESVS) to improve aeronautical operations in bad weather conditions. Before ESVS are effectively found aboard airliners, one must develop a multisensor flight simulator capable of synthesizing, in real time, images corresponding to a variety of imaging modalities. We present a real-time simulator called ARIS (Airborne Radar and Infrared Simulator) which is capable of generating two such imaging modalities: a forward-looking infrared (FLIR) and a millimeter-wave radar (MMWR) imaging system. The proposed simulator is modular so that additional imaging modalities can be added. Examples of images generated by the simulator are shown.
Although advancing levels of technology allow UAV operators to give increasingly complex commands with expanding temporal scope, it is unlikely that the need for immediate situation awareness and local, short-term flight adjustment will ever be completely superseded. Local awareness and control are particularly important when the operator uses the UAV to perform a search or inspection task. There are many different tasks which would be facilitated by search and inspection capabilities of a camera-equipped UAV. These tasks range from bridge inspection and news reporting to wilderness search and rescue. The system should be simple, inexpensive, and intuitive for non-pilots. An appropriately designed interface should (a) provide a context for interpreting video and (b) support UAV tasking and control, all within a single display screen. In this paper, we present and analyze an interface that attempts to accomplish this goal. The interface utilizes a georeferenced terrain map rendered from publicly available altitude data and terrain imagery to create a context in which the location of the UAV and the source of the video are communicated to the operator. Rotated and transformed imagery from the UAV provides a stable frame of reference for the operator and integrates cleanly into the terrain model. Simple icons overlaid onto the main display provide intuitive control and feedback when necessary but fade to a semi-transparent state when not in use to avoid distracting the operator's attention from the video signal. With various interface elements integrated into a single display, the interface runs nicely on a small, portable, inexpensive system with a single display screen and simple input device, but is powerful enough to allow a single operator to deploy, control, and recover a small UAV when coupled with appropriate autonomy. As we present elements of the interface design, we will identify concepts that can be leveraged into a large class of UAV applications.
Distributed Aperture Sensor (DAS) systems employ multiple sensors to obtain high-resolution, wide-angle video coverage of their local environment in order to enhance the situational awareness of manned and unmanned platforms. The images from multiple sensors must be presented to an operator in an intuitive manner and with minimal latency if they are to be rapidly interpreted and acted upon. This paper describes a display processor that generates a real-time panoramic video mosaic from multiple image streams, together with the algorithms for calibrating the image alignments. The architecture leverages the power of commercial graphics processing units (GPUs) to accelerate the image warping and display rendering, providing the operator with a real-time virtual environment viewed through a virtual camera. It is also possible to integrate high-resolution imagery from a zoom sensor on a pan-tilt mount directly into the mosaic, introducing a 'foveal' region of high fidelity into the panoramic image.
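The Python sketch below shows the kind of per-pixel mapping such a mosaic requires in its simplest form: each camera frame is pasted into a shared cylindrical panorama strip using only its yaw and horizontal field of view. The camera parameters are invented, and a real display processor would perform this per pixel on the GPU with proper calibration and blending.

import numpy as np

def paste_into_panorama(pano, frame, yaw_deg, hfov_deg):
    """Paste one camera frame into a shared cylindrical panorama strip using
    only the camera's yaw and horizontal field of view (vertical distortion
    and blending ignored)."""
    h, w = frame.shape
    cols = np.arange(w)
    bearing = yaw_deg + (cols / (w - 1) - 0.5) * hfov_deg        # world bearing per column
    pano_cols = (np.mod(bearing, 360.0) / 360.0 * pano.shape[1]).astype(int)
    pano_cols = np.clip(pano_cols, 0, pano.shape[1] - 1)
    pano[:h, pano_cols] = frame                                  # nearest-column paste
    return pano

pano = np.zeros((240, 1440))                                     # 0.25 degrees per column
for yaw, cam in zip((0.0, 60.0, 120.0), np.random.rand(3, 240, 320)):
    paste_into_panorama(pano, cam, yaw, 62.0)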
In the past fifteen years, several research programs have demonstrated potential advantages of synthetic vision
technology for manned aviation. More recently, some research programs have focused on integrating synthetic vision
technology into control stations for remotely controlled aircraft. The contribution of synthetic vision can be divided into
two categories. The depiction of the environment and all relevant constraints contributes to the pilot's situation
awareness, while the depiction of the planned path and its constraints allows the pilot to control or monitor the aircraft
with high precision. This paper starts with an overview of the potential opportunities provided by synthetic vision
technology. A distinction is made between the presentation domain and the function domain. In the presentation
domain, the benefits are obtained from making the invisible visible. In the function domain, benefits are obtained from
the possibility to integrate data from the synthetic vision system into other functions. The paper continues with a number
of examples of situation awareness support concepts which have been explored in the current research. After this, the
potential contribution of synthetic vision technology to the manual control task is discussed and it is indicated how these
potential advantages will be explored in the next research phase.
The Air Force Research Laboratory's Human Effectiveness Directorate supports research addressing human factors associated with Unmanned Aerial Vehicle (UAV) operator control stations. One research thrust explores the value of combining synthetic vision data with live camera video presented on a UAV control station display. Information is constructed from databases (e.g., terrain), as well as from numerous information updates via networked communication with other sources. This information is overlaid conformally, in real time, onto the dynamic camera video image presented to operators. Synthetic vision overlay technology is expected to improve operator situation awareness by highlighting elements of interest within the video image. Secondly, it can assist the operator in maintaining situation awareness of an environment if the video datalink is temporarily degraded. Synthetic vision overlays can also serve to facilitate intuitive communication of spatial information between geographically separated users. This paper discusses results from a high-fidelity UAV simulation evaluation of synthetic symbology overlaid on a (simulated) live camera display. Specifically, the effects of different telemetry data update rates for synthetic visual data were examined for a representative sensor operator task. Participants controlled the zoom and orientation of the camera to find and designate targets. The results from both performance and subjective data demonstrated the potential benefit of an overlay of synthetic symbology for improving situation awareness, reducing workload, and decreasing the time required to designate points of interest. Implications of symbology update rate are discussed, as well as other human factors issues.
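Conformal overlay ultimately comes down to projecting geo-referenced points into the camera image for the current telemetry estimate of camera pose. The pinhole-camera Python sketch below (yaw and pitch only, roll and lens distortion omitted, all values illustrative) shows that projection; it is not the rendering pipeline used in the evaluation.

import numpy as np

def project_point(p_world, cam_pos, yaw, pitch, f_px, cx, cy):
    """Project a world point (north, east, down) into pixel coordinates for a
    camera at cam_pos with the given yaw/pitch in radians and focal length in
    pixels; roll and lens distortion are omitted for brevity."""
    d = np.asarray(p_world, float) - np.asarray(cam_pos, float)
    cpsi, spsi = np.cos(yaw), np.sin(yaw)
    cth, sth = np.cos(pitch), np.sin(pitch)
    fwd = np.array([cth * cpsi, cth * spsi, -sth])   # camera boresight in NED
    right = np.array([-spsi, cpsi, 0.0])             # image x axis
    down = np.cross(fwd, right)                      # image y axis
    x, y, z = d @ right, d @ down, d @ fwd
    if z <= 0:
        return None                                  # point is behind the camera
    return cx + f_px * x / z, cy + f_px * y / z

# A point 1 km north, 50 m east, 20 m up, seen from 100 m altitude looking north
print(project_point([1000.0, 50.0, -20.0], [0.0, 0.0, -100.0], 0.0, 0.0, 800.0, 320.0, 240.0))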
Changes in military operations in recent years underscore changes in the requirements of military units. One of the largest underlying changes is the transformation from large-scale battles to quick-reaction mobile forces. There is also pressure to reduce the number of warfighters at risk in operations. One resultant need of these two factors is the increased need for situation awareness (SA); another is the use of unmanned vehicles, which increases the difficulty for the dismounted warfighter to maintain SA. An augmented reality (AR) system is a type of synthetic vision system that mixes computer-generated graphics (or annotations) with the real world. Annotations provide information aimed at establishing SA and aiding decision making. The AR system must decide what annotations to show and how to show them to ensure that the display is intuitive and unambiguous. We analyze the problem domain of military operations in urban terrain. Our goal is to determine the utility a synthetic vision system like AR can provide to a dismounted warfighter. In particular, we study the types of information that a warfighter is likely to find useful when working with teams of other warfighters. The problem domain is challenging because teammates may be occluded by urban infrastructure and may include unmanned vehicles operating in the environment. We consider the tasks of dynamic planning and deconfliction, navigation, target identification, and identification of friend or foe. We discuss the issues involved in developing a synthetic vision system, the usability goals that will measure how successful a system will be, and the use cases driving our development of a prototype system.
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor in civil aircraft accidents and to replicate the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and on crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in SA without concomitant increases in workload and display clutter could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.
Through their ability to safely collect video and imagery from remote and potentially dangerous locations, UAVs have already transformed the battlespace. The effectiveness of this information can be greatly enhanced through synthetic vision. Given knowledge of the extrinsic and intrinsic parameters of the camera, synthetic vision superimposes spatially registered computer graphics over the video feed from the UAV. This technique can be used to show many types of data such as landmarks, air corridors, and the locations of friendly and enemy forces. However, the effectiveness of a synthetic vision system strongly depends on the accuracy of the registration: if the graphics are poorly aligned with the real world, they can be confusing, annoying, and even misleading.
In this paper, we describe an adaptive approach to synthetic vision that modifies the way in which information is displayed depending upon the registration error. We describe an integrated software architecture that has two main components. The first component automatically calculates registration error based on information about the uncertainty in the camera parameters. The second component uses this information to modify, aggregate, and label annotations to make their interpretation as clear as possible. We demonstrate the use of this approach on some sample datasets.
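A minimal Python sketch of the two components follows: an error budget that converts attitude and position uncertainty into an expected screen-space registration error, and a rule that changes how an annotation is drawn as that error grows. The thresholds and error model are invented for illustration and are not the paper's algorithms.

import math

def pixel_registration_error(sigma_att_rad, sigma_pos_m, target_range_m, f_px):
    """Rough 1-sigma screen-space registration error in pixels: attitude error
    plus position error expressed as an angle at the target range, scaled by
    the focal length in pixels."""
    angular_error = sigma_att_rad + sigma_pos_m / target_range_m
    return f_px * angular_error

def annotation_style(err_px, icon_radius_px=12):
    """Choose how to draw an annotation for the expected registration error."""
    if err_px < icon_radius_px:
        return "point icon"                 # graphics can sit directly on the object
    if err_px < 4 * icon_radius_px:
        return "circle of uncertainty"      # show the region the object could be in
    return "screen-edge label"              # too uncertain for a conformal overlay

err = pixel_registration_error(math.radians(1.0), 5.0, 500.0, 900.0)
print(round(err, 1), "px ->", annotation_style(err))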
Limited visibility has been cited as a predominant causal factor for both Controlled-Flight-Into-Terrain (CFIT) and runway incursion accidents. NASA is conducting research and development of Synthetic Vision Systems (SVS) technologies which may potentially mitigate low visibility conditions as a causal factor in these accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Two experimental evaluation studies were performed to determine the efficacy of two concepts: 1) a head-worn display application of SVS technology to enhance transport aircraft surface operations, and 2) a three-dimensional SVS electronic flight bag display concept for flight plan preview, mission rehearsal and a controller-pilot data link communications interface for flight procedures. In the surface operation study, pilots evaluated two display devices and four display modes during taxi under unlimited and CAT II visibility conditions. In the mission rehearsal study, pilots flew approaches and departures in an operationally challenged airport environment, including CFIT scenarios. Performance using the SVS concepts was compared to traditional baseline displays with paper charts only or EFB information. In general, the studies evince the significant situation awareness and enhanced operational capabilities afforded by these advanced SVS display concepts. The experimental results and conclusions from these studies are discussed along with future directions.
Increasing traffic density on the aerodrome surface due to the continuous worldwide growth in the number of flight operations not only causes capacity and efficiency problems, but also increases the risk of serious incidents and accidents on the airport movement area. Of these, Runway Incursions are by far the most safety-critical. In fact, the worst-ever accident in civil aviation, the collision of two Boeing 747s on Tenerife in 1977 with 583 fatalities, was caused by a Runway Incursion. Therefore, various Runway Safety programs have recently been initiated around the globe, often focusing on ground-based measures such as improved surveillance. However, as a lack of flight crew situational awareness is a key causal factor in many Runway Incursion incidents and accidents, there is a strong need for an onboard solution, which should be capable of interacting cooperatively with ground-based ATM systems, such as A-SMGCS where available. This paper defines the concept of preventive and reactive Runway Incursion avoidance and describes a Surface Movement Awareness & Alerting System (SMAAS) designed to alert the flight crew if they are at risk of infringing a runway. Both the SVS flight deck displays and the corresponding alerting algorithms utilize an ED-99A/RTCA DO-272A compliant aerodrome database, as well as airport operational, traffic and clearance data received via ADS-B or other data links. The displays provide the crew with enhanced positional, operational, clearance and traffic awareness, and they are used to visualize alerts. A future enhancement of the system will provide intelligent alerting for conflicts caused by surrounding traffic.
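As a toy illustration of the reactive side of such alerting, the Python sketch below checks whether the own-ship position lies inside a runway's protected strip (measured as distance from the centreline segment) and raises an alert when no runway clearance is held. The geometry and margin are invented; the SMAAS algorithms and the ED-99A database structures are considerably richer.

from dataclasses import dataclass

@dataclass
class Runway:
    x1: float            # threshold A centre, local grid (m)
    y1: float
    x2: float            # threshold B centre
    y2: float
    half_width: float    # runway half-width plus safety margin (m)

def on_protected_area(rwy, px, py):
    """True if (px, py) lies within the runway's protected strip, measured as
    distance from the runway centreline segment."""
    dx, dy = rwy.x2 - rwy.x1, rwy.y2 - rwy.y1
    t = ((px - rwy.x1) * dx + (py - rwy.y1) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                      # clamp to the segment
    cx, cy = rwy.x1 + t * dx, rwy.y1 + t * dy      # closest centreline point
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 <= rwy.half_width

def runway_incursion_alert(rwy, px, py, cleared_onto_runway):
    """Alert when the aircraft enters the protected area without a clearance."""
    return on_protected_area(rwy, px, py) and not cleared_onto_runway

rwy = Runway(0.0, 0.0, 3000.0, 0.0, 75.0)
print(runway_incursion_alert(rwy, 1200.0, 40.0, cleared_onto_runway=False))   # True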
This paper describes flight tests of a Honeywell Synthetic Vision System (SVS) prototype operating in a hybrid-centered mode on a Primus Epic™ large format display. This novel hybrid mode effectively resolves some cognitive and perceptual human factors issues associated with traditional heading-up or track-up display modes. By integrating a synthetic 3D perspective view with advanced Head-Up Display (HUD) symbology in this mode, the test results demonstrate that the hybrid display mode provides clear indications of current track and crab conditions, and is effective in overcoming flight guidance symbology collision and the resultant ambiguity. The hybrid-centered SVS display concept is shown to be effective in all phases of flight and is particularly valuable during landing operations with a strong cross-wind. The recorded flight test data from Honeywell's prototype SVS concept, collected at Reno, Nevada on board a Honeywell Citation V aircraft, will be discussed.
When flying an airplane, landing is arguably the most difficult task a pilot can perform. This applies to pilots of all skill levels, particularly as the level of complexity of both the aircraft and the environment increases. Current navigational aids, such as an instrument landing system (ILS), do a good job of providing safe guidance for an approach to an airfield. These aids provide data to primary flight reference (PFR) displays on board the aircraft, depicting through symbology what the pilot's eyes should be seeing. Piloting an approach under visual meteorological conditions (VMC) is relatively easy compared to the various complex instrument approaches under instrument meteorological conditions (IMC), which may include flying in zero-zero weather. Perhaps the most critical point in the approach is the transition to landing, where the rate of closure between the wheels and the runway is critical to a smooth, accurate landing. Very few PFRs provide this flare cue information. In this study we evaluate examples of flare cueing symbology for use in landing an aircraft in the most difficult conditions. This research is part of a larger demonstration effort using sensor technology to land in zero-zero weather at airfields that offer no or unreliable approach guidance. Several problems exist when landing without visual reference to the outside world. One is landing with a force greater than desired at touchdown; another is landing on a point of the runway other than desired. We compare different flare cueing systems to one another and against a baseline for completing this complex approach task.
An automatic target recognition system has been assembled and tested at the Research Institute for Optronics and Pattern Recognition in Germany over the past several years. Its multisensor design comprises off-the-shelf components: an FPA infrared camera, a scanning laser radar and an inertial measurement unit. In this paper we describe several possibilities for the use of this multisensor equipment during helicopter missions. We discuss suitable data processing methods, for instance the automatic time synchronization of the different imaging sensors, pixel-based data fusion and the incorporation of collateral information. The results are visualized in an appropriate way for presentation on a cockpit display. We also show how our system can act as a landing aid for pilots in brownout conditions (dust clouds caused by the landing helicopter).
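Automatic time synchronization between free-running sensors often starts with nearest-timestamp frame matching. The Python sketch below pairs IR frames with the laser-radar sweep closest in time and rejects pairs whose skew exceeds a tolerance; the tolerance and timestamps are invented, and a fielded system would likely also use the IMU pose data between measurements.

import bisect

def match_frames(ir_times, laser_times, max_skew_s=0.02):
    """Pair each IR frame with the laser-radar sweep closest in time and drop
    pairs whose timestamps differ by more than max_skew_s seconds."""
    pairs = []
    for i, t in enumerate(ir_times):
        j = bisect.bisect_left(laser_times, t)
        candidates = [k for k in (j - 1, j) if 0 <= k < len(laser_times)]
        k = min(candidates, key=lambda k: abs(laser_times[k] - t))
        if abs(laser_times[k] - t) <= max_skew_s:
            pairs.append((i, k))
    return pairs

print(match_frames([0.00, 0.04, 0.08, 0.12], [0.01, 0.05, 0.11]))
# -> [(0, 0), (1, 1), (3, 2)]: the frame at 0.08 s has no sweep within tolerance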
Two topics are discussed in this paper. The first is the Integrated Multi-sensor Synthetic Imagery System (IMSIS), being developed under an Army SBIR contract. The system updates on-board, pre-stored terrain elevation data with 3D terrain elevation sensor data (such as radar). The system also merges 2D image contrast sensor data (such as infrared imagery) with the updated 3D terrain elevation data to render a synthetic image of the terrain on the rotorcraft pilot's display. The second topic is the testing of a new flight path marker that shows the pilot the predicted location of the aircraft with respect to the synthetic terrain (at a 100 m distance), as well as the predicted height above the terrain, the desired height above the terrain, and the point on the terrain the aircraft is expected to fly over. The Altitude and ground Track Predicting Flight Path Marker (ATP-FPM) symbol takes advantage of knowledge of terrain elevations ahead of the aircraft from a synthetic vision system such as IMSIS. In simulation, the maximum low-altitude error and maximum ground track error were both reduced by a factor of 2 with the ATP-FPM compared to the traditional instantaneous flight path marker. Pilot-to-pilot variations in performance were reduced and workload decreased with the ATP-FPM.
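The quantities such a predictive marker needs can be sketched in a few lines of Python: project the current velocity vector 100 m ahead to find the ground point the aircraft is expected to pass over, and subtract the terrain elevation there from the predicted altitude. The NED conventions and the dem_lookup helper are assumptions for illustration, not the IMSIS interfaces.

import numpy as np

def atp_fpm_cues(pos_ned, vel_ned, dem_lookup, look_ahead_m=100.0):
    """Ground point the aircraft is predicted to pass over look_ahead_m ahead
    along its current velocity vector, and the predicted height above the
    terrain at that point; positions and velocities are in NED (m, m/s) and
    dem_lookup(north, east) returns terrain elevation in metres."""
    horiz = np.array([vel_ned[0], vel_ned[1]])
    heading = horiz / np.linalg.norm(horiz)
    ahead_ne = np.array(pos_ned[:2]) + heading * look_ahead_m
    dt = look_ahead_m / np.linalg.norm(horiz)            # time to cover the look-ahead
    predicted_alt = -(pos_ned[2] + vel_ned[2] * dt)      # NED: altitude = -down
    height_above_terrain = predicted_alt - dem_lookup(*ahead_ne)
    return ahead_ne, height_above_terrain

dem = lambda north, east: 50.0 + 0.02 * north            # toy rising terrain
print(atp_fpm_cues([0.0, 0.0, -120.0], [40.0, 0.0, 1.0], dem))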
PC-based flight simulators (PC-FS) have emerged as alternative training devices to enhanced flight simulators for pilot training, due to their low cost and ready availability. The visuals presented in a PC-FS are adequate for airplane pilot training; this is not true for helicopter pilot training because of the very low altitudes and speeds involved. For hovering in particular, increased visual quality is required because of the extremely low altitude (3-15 feet) and small movements. In this project, two experiments were conducted using a simple PC-FS as a test platform and professional helicopter pilots as subjects in order to evaluate the effect of hyper-texturing on hovering performance. The results revealed that the level of texture resolution has no direct effect on hovering performance. The optimum texture resolution depends on the noticeability, recognizability and size of the 2D objects presented by the textured image.
Future military imaging devices will have computational capabilities that allow agile, real-time image enhancement. In preparation for such devices, numerous image enhancement algorithms should be studied. However, these algorithms need to be evaluated in terms of human visual performance using militarily relevant imagery. Evaluating these algorithms through objective performance measures requires extensive time and resources. We investigated several subjective methodologies for down-selecting algorithms to be studied in future research. Degraded imagery was processed using six algorithms and then ranked, along with the original non-degraded and degraded imagery, through the method of paired comparisons and the method of magnitude estimation, in terms of subjective attitude. These rankings were then compared to objective performance measures: reaction times and errors in finding targets in the processed imagery. In general, we found associations between subjective and objective measures. This leads us to believe that subjective assessment may provide an easy and fast way of down-selecting algorithms but, at the same time, should not be used in place of objective performance-based measures.
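For concreteness, the Python sketch below ranks a set of processed image conditions from exhaustive pairwise judgements by counting how often each condition is preferred, which is the simplest way to turn paired-comparison data into an ordering. The toy "judge" and condition names are invented and stand in for real observer responses; the study's actual scaling may differ.

from collections import Counter
from itertools import combinations

def rank_by_paired_comparisons(conditions, prefer):
    """Rank image conditions from exhaustive pairwise judgements by counting
    how often each one is preferred; prefer(a, b) returns the preferred item."""
    wins = Counter({c: 0 for c in conditions})
    for a, b in combinations(conditions, 2):
        wins[prefer(a, b)] += 1
    return [c for c, _ in wins.most_common()]

# Toy judge standing in for observer responses
quality = {"original": 2, "degraded": 0, "histeq": 4, "retinex": 5, "autolevels": 3}
print(rank_by_paired_comparisons(list(quality),
                                 lambda a, b: a if quality[a] > quality[b] else b))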
Pathway-in-the-sky displays enable pilots to accurately fly difficult trajectories. However, these displays may draw pilots' attention to the aircraft guidance task at the expense of other tasks, particularly when the pathway display is located head-down. A pathway HUD may be a viable solution to overcome this disadvantage. Moreover, the pathway may mitigate the perceptual segregation between the static near domain and the dynamic far domain and hence may improve attention switching between both sources. In order to more comprehensively overcome the perceptual near-to-far domain disconnect, alphanumeric symbols could be attached to the pathway, leading to a HUD design concept called 'scene-linking'. Two studies are presented that investigated this concept. The first study used a simplified laboratory flight experiment. Pilots (N=14) flew a curved trajectory through mountainous terrain and had to detect display events (discrete changes in a command speed indicator to be matched with current speed) and outside scene events (a hostile SAM station on the ground). The speed indicators were presented in superposition to the scenery, either in a fixed position or scene-linked to the pathway. Outside scene event detection was found to improve with scene linking; however, flight-path tracking was markedly deteriorated. In the second study, a scene-linked pathway concept was implemented on a monocular retinal-scanning HMD and tested in real flights on a Do228 involving 5 test pilots. The flight test mainly focused on usability issues of the display in combination with an optical head tracker. Visual and instrument departure and approach tasks were evaluated, comparing HMD navigation with standard instrument or terrestrial navigation. The study revealed limitations of the HMD regarding its see-through capability, field of view, weight and wearing comfort, which had a strong influence on pilot acceptance rather than rebutting the display concept as such.