We have developed a system that applies deep-learning-based super-resolution (SR) to multispectral and hyperspectral geospatial satellite imagery, inferring higher-resolution images from lower-resolution images while preserving the original color of the lower-resolution pixels. The super-resolution model, built on Deep Convolutional Neural Networks (DCNNs), is trained on individual image bands with a large crop (tile) size of 512 × 512 pixels and a de-noising algorithm. Preserving the original color of the image bands improves the quality of the super-resolution output as measured by peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). One of the most important applications of satellite imagery is the automatic detection of small objects such as vehicles and small boats. With super-resolution images generated by our system, object detection accuracy (recall and precision) improves by 20% on Planet® multispectral satellite images.
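As a minimal illustration of the PSNR metric cited above, the sketch below computes PSNR between a ground-truth band and a super-resolved estimate. This is our own toy helper with assumed 8-bit data, not the paper's implementation; SSIM is omitted for brevity.

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth band and an SR estimate."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy example: an 8-bit band and a slightly perturbed "super-resolved" version.
band = np.full((512, 512), 120, dtype=np.uint8)
sr = band.copy()
sr[0, 0] = 130  # a single 10-level pixel error
print(round(psnr(band, sr), 2))  # → 82.32
```

Higher PSNR means the SR output deviates less from the reference band, which is why color-preserving post-processing raises the metric.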
We explore the application of a single-image super-resolution technique to satellite imagery and its effect on object detection performance. The technique uses a deep convolutional neural network to learn transformations between zoom levels of an image pyramid, also referred to as a Resolution Set (RSet). The network learns the transformation from the 2:1 RSet at a Ground Sample Distance (GSD) of 60 cm to the full-resolution image at a GSD of 30 cm by minimizing the difference between the ground-truth full-resolution image and the derived 2x zoom. After training, the learned transformation is applied to the 1:1 full-resolution image, transforming its pixels to 2x resolution. The learned transformation thus encodes enough structure to infer higher-resolution imagery. We find that super-resolution images significantly improve object detection accuracy, improve manual feature extraction accuracy, and also benefit imagery analysis workflows and derived products that use satellite images.
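A common way to build the (low-resolution, full-resolution) training pairs described above is to derive the coarser RSet level from the full-resolution tile by downsampling. The sketch below, a simplified assumption on our part (the paper does not specify the pooling scheme), uses 2x2 average pooling to derive a 60 cm chip from a 30 cm chip:

```python
import numpy as np

def build_rset_pair(full_res: np.ndarray):
    """Derive a 2:1 RSet tile (e.g. 60 cm GSD) from a full-resolution tile
    (e.g. 30 cm GSD) by 2x2 average pooling; the (low, high) pair is one
    training example for the low-to-high transformation."""
    h, w = full_res.shape
    low = full_res.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return low, full_res

# Toy 4x4 tile; real tiles would be satellite image chips.
tile = np.arange(16, dtype=np.float64).reshape(4, 4)
low, high = build_rset_pair(tile)
print(low.shape, high.shape)  # → (2, 2) (4, 4)
```

The network is then trained so that its output on `low` minimizes a pixel loss against `high`; at inference time the same mapping is applied to the 1:1 image to produce the 2x product.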
Digital Elevation Model (DEM) production is one of the most time-consuming tasks in digital photogrammetry. By applying machine learning to digital photogrammetry, our Intelligent Photogrammetry can significantly reduce the cost of DEM production from Digital Surface Models (DSMs), which are generated from satellite images, aerial images, Unmanned Aerial Vehicle (UAV) images, and sparse or dense LiDAR 3-D point clouds. DSMs come in various types, each with different post spacing and accuracy. Five 3-D models have been trained on the different DSM types: (1) 3DLargeBuildingModel, (2) 3DBuildingModel, (3) 3DHouseModel, (4) 3DTreeModel, and (5) 3DGroundPointModel. The first four models detect above-ground 3-D objects and remove them from the DSM to generate the DEM. The last model classifies 3-D points into thirteen categories, which are then used to generate the DEM in difficult terrain such as dense forest, where the ground is mostly unseen. The main cost of DEM production from satellite-derived DSMs in difficult terrain is the DSM-to-DEM transformation. Traditional handcrafted bare-earth algorithms cannot cope with the many cases that arise in general-purpose, big-data applications; Intelligent Photogrammetry, based on machine learning, can handle new cases by adding training samples. In a case study of the city of San Diego, Intelligent Photogrammetry generated a DEM from stereo satellite images with a Root Mean Square Error (RMSE) of 0.95 meters, indicating that it can reduce DEM production cost by more than 50%. The most time-consuming component of DEM production is dense forest; in this case study, forest heights reach 19 meters, leaving the ground nearly invisible.
This issue was resolved with the machine-learning-based 3DGroundPointModel, which achieved an RMSE of 2.40 meters and met the desired DEM accuracy requirement of 2 to 3 meters using stereo satellite images. DEM production from UAV images using our Intelligent Photogrammetry achieves state-of-the-art accuracy: it can identify and correct errors in DSMs generated from UAV images, providing a very competitive DEM generation capability for UAV imagery.
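The core DSM-to-DEM step, removing above-ground objects flagged by the detection models and filling the resulting holes from surrounding ground, can be sketched as below. This is a deliberately simple diffusion-style fill of our own devising (the `dsm_to_dem` and `rmse` helpers are illustrative, not the authors' algorithm):

```python
import numpy as np

def dsm_to_dem(dsm: np.ndarray, above_ground: np.ndarray, iterations: int = 50) -> np.ndarray:
    """Remove cells flagged as above-ground objects and fill the holes
    iteratively from valid 4-neighbour ground cells (simple diffusion fill)."""
    dem = dsm.astype(np.float64).copy()
    hole = above_ground.copy()
    for _ in range(iterations):
        if not hole.any():
            break
        padded = np.pad(dem, 1, mode="edge")
        valid = np.pad(~hole, 1, mode="constant", constant_values=False)
        nsum = np.zeros_like(dem)           # sum of valid neighbour heights
        ncnt = np.zeros_like(dem)           # count of valid neighbours
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            v = valid[1 + dy : 1 + dy + dem.shape[0], 1 + dx : 1 + dx + dem.shape[1]]
            p = padded[1 + dy : 1 + dy + dem.shape[0], 1 + dx : 1 + dx + dem.shape[1]]
            nsum += np.where(v, p, 0.0)
            ncnt += v
        fill = hole & (ncnt > 0)            # holes touching known ground
        dem[fill] = nsum[fill] / ncnt[fill]
        hole = hole & ~fill
    return dem

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    """Root Mean Square Error, the accuracy metric quoted in the case studies."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Toy scene: flat 10 m ground with a 3x3 building footprint raised to 29 m.
ground = np.full((8, 8), 10.0)
dsm = ground.copy()
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True
dsm[mask] = 29.0
dem = dsm_to_dem(dsm, mask)
print(rmse(dem, ground))  # → 0.0 on this flat toy scene
```

Real terrain is not flat, which is why handcrafted fills break down in dense forest and a learned ground-point classifier is needed instead.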
We propose a Double Convolutional Neural Network (D2CNN) framework for automatic target detection. D2CNN achieves high speed and high positional accuracy on our high-altitude imagery dataset. Translation invariance in a convolutional neural network (CNN) is a double-edged sword: a CNN with large translation invariance is fast but suffers in positional accuracy, which is critical for automatic target detection, whereas a CNN with small translation invariance can achieve high positional accuracy at the expense of speed. In a typical target detection scenario, targets are very sparse. Our D2CNN framework therefore employs two separate CNNs. The first, with large translation invariance, generates region proposals; the second, with small translation invariance, detects targets with high positional accuracy within the regions proposed by the first. The two CNNs are trained separately using different training strategies. Training examples are shared between them, but the data augmentation differs: for the first CNN, an object is placed at various locations in the chip, such as the center, the lower-left portion, or the upper-right portion; for the second CNN, the object/target is always centered. We fine-tune the hyper-parameters of the pooling and convolution layers to increase translation invariance in the first CNN and decrease it in the second.
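The two augmentation strategies can be sketched as chip-extraction routines: stage 1 sees the target at varied offsets, stage 2 always sees it centered. The helper names and chip sizes below are our own illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_around(image, cy, cx, size, offset=(0, 0)):
    """Extract a size x size chip whose centre is the object position shifted by `offset`."""
    half = size // 2
    y0 = cy + offset[0] - half
    x0 = cx + offset[1] - half
    return image[y0 : y0 + size, x0 : x0 + size]

def augment_proposal_cnn(image, cy, cx, size=32):
    """Stage-1 augmentation: the target may sit anywhere in the chip (centre,
    lower-left, upper-right, ...), matching the large translation invariance."""
    offsets = [(0, 0), (size // 4, -size // 4), (-size // 4, size // 4)]
    return [crop_around(image, cy, cx, size, o) for o in offsets]

def augment_localizer_cnn(image, cy, cx, size=32):
    """Stage-2 augmentation: the target is always centred, so the network
    learns precise position at the cost of translation invariance."""
    return [crop_around(image, cy, cx, size)]

scene = rng.random((128, 128))          # stand-in for a high-altitude image
chips1 = augment_proposal_cnn(scene, 64, 64)
chips2 = augment_localizer_cnn(scene, 64, 64)
print(len(chips1), chips1[0].shape, len(chips2))  # → 3 (32, 32) 1
```

Because targets are sparse, the cheap stage-1 network discards most of the scene and the expensive, precisely localizing stage-2 network runs only on the few proposed regions.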