KEYWORDS: Databases, Sensors, Visualization, 3D modeling, Data modeling, Computer simulations, Environmental sensing, Visual process modeling, Motion models, Systems modeling
One of the key aspects for the design of a next generation weapon system is the need to operate in cluttered and complex
urban environments. Simulation systems rely on accurate representation of these environments and require automated
software tools to construct the underlying 3D geometry and associated spectral and material properties that are then
formatted for various objective seeker simulation systems. Under an Air Force Small Business Innovation Research
(SBIR) contract, we have developed an automated process to generate 3D urban environments with user-defined
properties. These environments can be composed from a wide variety of source materials, including vector source data,
pre-existing 3D models, and digital elevation models, and rapidly organized into a geo-specific visual simulation
database. This intermediate representation can be easily inspected in the visible spectrum for content and organization
and interactively queried for accuracy. Once the database contains the required contents, it can then be exported into
specific synthetic scene generation runtime formats, preserving the relationship between geometry and material
properties. To date, an exporter has been created for the Irma simulation system, which is developed and maintained by
AFRL/Eglin; a second exporter, targeting the Real-Time Composite Hardbody and Missile Plume (CHAMP) simulation
system for real-time use, is currently under development. This process supports significantly more complex target
environments than previous approaches to database generation. In this paper we describe the capabilities for content
creation for the simulation of advanced seeker processing algorithms and for sensor stimulation, including the overall
database compilation process and sample databases produced and exported for the Irma runtime system. We also discuss
the addition of object and viewer dynamics to the visual simulation within the Irma runtime environment.
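The intermediate representation described above, in which geometry remains linked to its material properties through export, can be sketched as a minimal data structure. This is a hypothetical illustration of the general idea, not the actual tool's schema; all class and function names below are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Material:
    name: str
    # Simplified spectral reflectance samples keyed by wavelength (microns).
    reflectance: dict[float, float] = field(default_factory=dict)

@dataclass
class Polygon:
    vertices: list[tuple[float, float, float]]
    material: Material

def export_runtime(polygons: list[Polygon]) -> list[dict]:
    """Flatten the scene for a runtime format while preserving the
    geometry-to-material association via a shared material index."""
    materials: list[Material] = []
    records = []
    for poly in polygons:
        if poly.material not in materials:
            materials.append(poly.material)
        records.append({
            "vertices": poly.vertices,
            "material_id": materials.index(poly.material),
        })
    return records
```

The key property of such a scheme is that every exporter (e.g., one per runtime format) consumes the same intermediate records, so geometry never loses its spectral/material attribution on the way out.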
This paper discusses ongoing research in the analysis of airborne hyperspectral imagery with application to cartographic feature extraction and surface material attribution. Preliminary results, based upon the processing and analysis of hyperspectral data acquired by the Naval Research Laboratory's (NRL) Hyperspectral Digital Imagery Collection Experiment (HYDICE) over Fort Hood, Texas in late 1995, are shown. Significant research issues in geopositioning, multisensor registration, spectral analysis, and surface material classification are discussed. The research goal is to measure the utility of hyperspectral imagery acquired with high spatial resolution (2-meter GSD) for supporting automated cartographic feature extraction. Our hypothesis is that the addition of a hyperspectral dataset, with spatial resolution comparable to panchromatic mapping imagery, provides opportunities to exploit the inherent spectral information of the imagery to aid urban scene analysis for cartographic feature extraction and spatial database population. Test areas selected from the Fort Hood dataset illustrate the process flow and current research results.
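Surface material classification of hyperspectral pixels is commonly illustrated with a spectral angle measure, which compares each pixel spectrum against a library of reference material spectra. The sketch below is a generic spectral-angle-mapper classifier for illustration only, not the specific method used in the study; all function names are hypothetical.

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def classify(cube: np.ndarray, library: np.ndarray) -> np.ndarray:
    """Assign each pixel of an (H, W, bands) cube to the nearest library
    spectrum; `library` is (n_materials, bands). Returns an (H, W) map of
    material indices. Smallest angle == largest cosine, so we normalize
    rows and take the argmax of the dot products."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(float)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    cos = flat @ lib.T                      # (pixels, materials)
    return np.argmax(cos, axis=1).reshape(h, w)
```

Because the angle is invariant to overall spectrum magnitude, this family of classifiers is relatively insensitive to illumination differences, which is one reason it is a common baseline in hyperspectral analysis.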
In previous work we presented a system, MULTIVIEW, used to generate 3D building hypotheses starting from sparse features (hypothesized building corners) extracted from multiple views of the scene. This technique relies on knowledge of the imaging geometry and acquisition parameters to provide rigorous geometric constraints for the matching process. The effectiveness of this approach was demonstrated using complex aerial imagery taken from highly oblique views containing buildings with flat or peaked roofs. The MULTIVIEW system exploits multiple views through the successive incorporation of new image data into an existing partial solution. This approach allows the generation of hypotheses for structures not seen in the initial images and increases the 3D positioning accuracy of derived object models through simultaneous solution of the collinearity equations. In this paper we provide a detailed performance analysis of the 3D buildings constructed by MULTIVIEW. We evaluate these results with respect to several issues, including: (1) the metric accuracy of building recovery; (2) the ability to improve detection and delineation as the number of views is increased; and (3) the effect of image processing order on building detection and delineation. The implications of incremental construction of detailed 3D structures are examined with respect to manually derived ground truth data.
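The simultaneous multi-view solution mentioned above can be illustrated with standard linear (DLT) triangulation: each additional view of a hypothesized corner contributes two more equations to a homogeneous least-squares system, which is why incorporating more images tightens the recovered 3D position. This is a generic textbook sketch under idealized pinhole cameras, not the MULTIVIEW implementation.

```python
import numpy as np

def triangulate(points_2d, cameras):
    """Linear (DLT) triangulation of one 3D point from two or more views.

    points_2d : list of (x, y) image observations of the same feature.
    cameras   : list of 3x4 projection matrices, one per view.
    Each view adds two rows to the system A X = 0; the solution is the
    right singular vector of A with the smallest singular value.
    """
    rows = []
    for (x, y), P in zip(points_2d, cameras):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenize
```

With only two views the system is minimally overdetermined; each new view adds constraints without adding unknowns, so measurement noise averages out and the estimate improves, mirroring the incremental-accuracy behavior evaluated in the paper.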
Most systems for cartographic feature extraction developed within the computer vision and image understanding community make little use of detailed camera information during object detection and delineation. For the most part, the scale, size, and orientation of specific features are expressed in terms of image pixel size. Given the use of nadir and near-nadir mapping photography, this has not severely impacted the development, at a variety of institutions, of techniques for building detection, road network extraction, and the extraction of other man-made objects. It is not unfair to say that the inherent difficulties involved in achieving robust automated object detection and delineation have overshadowed any errors due to a lack of rigor in modeling the image acquisition. In this paper we develop several of these issues and discuss how the use of photogrammetric cues will play a major role in future systems for automated cartographic feature extraction.