In recent years, 3D reconstruction technology, especially city-scale reconstruction, has made great strides and is crucial for detailed mapping and observation of urban environments. However, accurately capturing small-scale structures such as individual buildings from aerial images remains challenging: traditional methods struggle to balance coverage of the entire city with fine building detail. Neural Radiance Fields (NeRF) offer a way to synthesize detailed scene views from posed camera images, but they are not efficient for areas as large as cities. To address this, we developed PatchNeRF, which improves NeRF by focusing computation on specific regions of interest, yielding more detailed results more quickly. PatchNeRF can iteratively refine selected parts of a city model, such as individual buildings, making it a significant step toward detailed and efficient 3D city mapping.
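To ground the NeRF terminology above, the following is a minimal sketch of the standard NeRF volume-rendering quadrature that any NeRF variant (including a patch-focused one) evaluates per ray. This is the textbook compositing rule, not PatchNeRF's actual code; the sample densities and colors here are toy values.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite per-sample densities and colors along one ray
    (standard NeRF quadrature; illustrative, not the paper's code)."""
    alphas = 1.0 - np.exp(-sigmas * deltas)            # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # transmittance T_i
    weights = trans * alphas                           # contribution of each sample
    return weights @ colors                            # final RGB for the ray

# toy ray: 4 samples at uniform spacing
sigmas = np.array([0.0, 2.0, 5.0, 0.5])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
deltas = np.full(4, 0.25)
rgb = composite_ray(sigmas, colors, deltas)
```

A patch-based refinement scheme would re-run this integration only for rays that intersect the region of interest, which is where the efficiency gain comes from.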
We explore an approach to vision-based navigation of drones in GPS-denied settings. We find SuperPoint/SuperGlue feature correspondences between two coplanar images: the drone image projected onto the ground and a satellite view of the flight area. The drone image is projected onto the ground plane using non-GPS data available to the drone, namely the compass and the barometer. Features found on the projected drone image are then mapped back to the drone's camera plane, while features on the satellite image are lifted into 3D using a digital elevation map. The resulting 2D-3D correspondences are used to estimate the drone's position, and the coordinate estimates are evaluated against the drone's GPS metadata.
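The final step above, recovering position from 2D-3D correspondences, can be sketched with a deliberately simplified camera model: a nadir-looking pinhole camera whose altitude comes from the barometer and whose heading comes from the compass, so only the horizontal position (x, y) remains unknown and the problem becomes linear. The focal length, altitude, and heading values below are assumptions for the toy example, not values from the paper, which handles the general geometry via SuperPoint/SuperGlue matching and a DEM.

```python
import numpy as np

def estimate_xy(ground_pts, pixels, f, h, yaw):
    """Least-squares drone (x, y) from 2D-3D correspondences, assuming a
    nadir camera, barometric altitude h (m), focal length f (px), and
    compass heading yaw (rad). Illustrative model only."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, s], [-s, c]])              # world -> camera (horizontal part)
    # pixel model: uv = (f/h) * R @ (P - t)  =>  R @ t = R @ P - (h/f) * uv
    rhs = (ground_pts @ R.T) - (h / f) * pixels  # each row equals R @ t (plus noise)
    return R.T @ rhs.mean(axis=0)                # average, then rotate back to world

# synthetic check: drone truly at (50, -20) m, h = 100 m, f = 800 px, yaw = 0.3 rad
rng = np.random.default_rng(0)
t_true = np.array([50.0, -20.0])
f, h, yaw = 800.0, 100.0, 0.3
c, s = np.cos(yaw), np.sin(yaw)
R = np.array([[c, s], [-s, c]])
ground = rng.uniform(-200, 200, size=(20, 2))    # "satellite" feature positions
pixels = (f / h) * (ground - t_true) @ R.T       # where the drone camera sees them
t_est = estimate_xy(ground, pixels, f, h, yaw)
```

With real data the correspondences are noisy and the camera is tilted, so a full PnP solver with outlier rejection would replace this closed-form average.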
Shadows in aerial images can hinder the performance of various vision tasks, including object detection and tracking. Shadow detection networks perform poorly on mid-altitude wide area motion imagery (WAMI) because they lack comparable data for training. Aerial WAMI collection is challenging, and the variety of weather conditions that can be captured is limited. Moreover, obtaining accurate ground-truth shadow masks for these images is difficult: manual annotation is infeasible at scale, and automatic techniques suffer from inaccuracies. We leverage the advanced rendering capabilities of Unreal Engine to produce city-scale synthetic aerial images with precise ground-truth shadow masks under diverse weather and lighting conditions. We then train and evaluate an existing shadow detection network on our synthetic data to improve its performance on real WAMI datasets.
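Evaluating a shadow detector against rendered ground truth reduces to comparing binary masks. The sketch below computes intersection-over-union (IoU) and the balanced error rate (BER), two metrics commonly used for shadow detection; it is a generic evaluation snippet, not tied to the specific network or dataset in this work.

```python
import numpy as np

def shadow_metrics(pred, gt):
    """IoU and balanced error rate (BER, %) for binary shadow masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)        # shadow pixels correctly detected
    tn = np.sum(~pred & ~gt)      # non-shadow pixels correctly rejected
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    iou = tp / max(tp + fp + fn, 1)
    ber = 100.0 * (1.0 - 0.5 * (tp / max(tp + fn, 1) + tn / max(tn + fp, 1)))
    return iou, ber

# toy 2x4 masks: one missed shadow pixel, one false alarm
gt = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])
pred = np.array([[1, 0, 0, 0], [1, 1, 0, 1]])
iou, ber = shadow_metrics(pred, gt)
```

BER weights shadow and non-shadow errors equally, which matters in aerial imagery where shadow pixels are usually a small minority of the frame.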
KEYWORDS: Visualization, Sensors, 3D visualizations, LIDAR, Surveillance, Situational awareness sensors, Network architectures, Communication engineering, Environmental sensing, 3D modeling, Clouds, 3D image processing, Data modeling, Image compression, Visual process modeling, 3D image reconstruction, Reconstruction algorithms
We report progress toward the development of a compression schema suitable for use in the Army's Common Operating Environment (COE) tactical network. The COE facilitates the dissemination of information across all Warfighter echelons through the establishment of data standards and networking methods that coordinate the readout and control of a multitude of sensors in a common operating environment. When integrated with a robust geospatial mapping functionality, the COE enables force tracking, remote surveillance, and heightened situational awareness for Soldiers at the tactical level. Our work establishes a point cloud compression algorithm through image-based deconstruction and photogrammetric reconstruction of three-dimensional (3D) data that is suitable for dissemination within the COE. An open-source visualization toolkit was used to deconstruct 3D point cloud models, derived from ground mobile light detection and ranging (LiDAR), into a series of images and associated metadata that can be easily transmitted on a tactical network. Stereo photogrammetric reconstruction is then conducted on the received image stream to recover the transmitted 3D model. The reported method achieves nominal compression ratios typically on the order of 250 while retaining tactical information and accurate georegistration. Our work advances the scope of persistent intelligence, surveillance, and reconnaissance through the development of 3D visualization and data compression techniques relevant to the tactical operations environment.
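The core idea of the pipeline above, trading a heavy point cloud for a compact image representation plus georegistration metadata, can be sketched with a toy 2.5D depth-image round trip. This stand-in quantizes points onto a grid rather than performing photogrammetry; the grid resolution and the keep-highest-point rule are assumptions made for the illustration.

```python
import numpy as np

def deconstruct(points, res=1.0):
    """Quantize an (N, 3) point cloud into a 2.5D depth image plus the
    georegistration metadata (origin, resolution) needed to rebuild it.
    Toy stand-in for image-based deconstruction; not the reported pipeline."""
    origin = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - origin) / res).astype(int)
    depth = np.full(ij.max(axis=0) + 1, np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        depth[i, j] = z if np.isnan(depth[i, j]) else max(z, depth[i, j])
    return depth, origin, res

def reconstruct(depth, origin, res):
    """Re-inflate the depth image into a (decimated) point cloud."""
    i, j = np.nonzero(~np.isnan(depth))
    xy = origin + (np.stack([i, j], axis=1) + 0.5) * res
    return np.column_stack([xy, depth[i, j]])

pts = np.array([[0.2, 0.3, 5.0], [1.4, 0.1, 7.0], [0.3, 1.6, 2.0]])
depth, origin, res = deconstruct(pts)
cloud = reconstruct(depth, origin, res)
```

Compression comes from the image stream being far smaller (and further image-compressible) than raw per-point coordinates, at the cost of quantization to the grid resolution.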
Our research focuses on the Army's need for improved detection and characterization of targets beneath the forest canopy. By integrating canopy characteristics with emerging remote data collection methods, foliage penetration-based target detection can be greatly improved. The objective of our research was to empirically model the effect of pulse repetition frequency (PRF) and flight heading/orientation on the success of foliage penetration (FOPEN) by airborne LiDAR sensors. By quantifying canopy structure and understory light, we were able to improve our predictions of the best possible airborne observation parameters (required sensing modalities and geometries) for foliage penetration. Variations in canopy openness profoundly influenced light patterns at the forest floor. Sunfleck patterns (brief periods of direct light) are analogous to potential "LiDAR flecks" that reach the forest floor, creating a heterogeneous environment in the understory. This research builds on knowledge of canopy-specific characteristics to inform flight geometries for predicting the most efficient foliage-penetrating orientation and heading of an airborne sensor.
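One way to see why flight geometry matters is a Beer-Lambert gap-probability model, in which the expected fraction of pulses reaching the forest floor is P = exp(-k · LAI / cos θ): a larger off-nadir angle θ lengthens the path through the canopy and reduces penetration. The extinction coefficient k and leaf area index (LAI) values below are assumptions for illustration, not parameters reported in this study.

```python
import numpy as np

def pulse_penetration(lai, k, off_nadir_deg):
    """Expected fraction of LiDAR pulses reaching the forest floor under a
    Beer-Lambert gap-probability model. k and lai are illustrative inputs."""
    theta = np.radians(off_nadir_deg)
    return np.exp(-k * lai / np.cos(theta))

# steeper off-nadir angles lengthen the canopy path, so penetration drops
p_nadir = pulse_penetration(lai=3.0, k=0.5, off_nadir_deg=0.0)
p_oblique = pulse_penetration(lai=3.0, k=0.5, off_nadir_deg=30.0)
```

Heterogeneous canopy openness means LAI varies along the flight line, which is why an empirical model of measured canopy structure, rather than a single fixed LAI, is needed to choose heading and orientation.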
Herein we propose to utilize upconverting phosphors to detect explosives. To detect TNT, antibodies specific to TNT are conjugated to the phosphor surface. The role of the antibodies is twofold: to bind a quencher and to accept TNT. The quencher is a bifunctional molecule, with one end containing a TNT analog (DNT) and the other end a dark fluorescence-quenching dye. The dye is chosen so that luminescence from the phosphor is absorbed, preventing emission and reducing the measured luminescence. In the presence of TNT, however, the DNT-bound quencher is displaced; with the quencher displaced, the phosphor is able to emit light, indicating that TNT is present in the sampled area.