An automatic approach for detecting bridges over water from light detection and ranging (LiDAR) data, based on an adaptive morphological filter and skeleton extraction, is presented. It is inspired by data-driven and inference-based methods in machine learning. First, the three-dimensional characteristics of the LiDAR data are considered: an adaptive morphological filter classifies the points into two classes, ground and nonground. Second, the elevation feature is used to extract the river, greatly reducing the search space. Third, the river is represented as a skeleton line by a morphological thinning algorithm; this concise representation makes bridge detection more efficient. Finally, a shortest-distance rule based on the skeleton line is proposed, and the fusion of the classification map with this rule is used to detect bridges. Experiments on several different scenes demonstrate the flexibility of the proposed method, and the results show that it performs well in detecting bridges over water.
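The adaptive morphological filtering step can be illustrated with a progressive-window grey-scale opening on a rasterized elevation grid: each pass removes objects smaller than the current window, and cells whose elevation drops by more than the pass threshold are labeled nonground. This is only a minimal sketch of the idea; the `classify_ground` helper, window sizes, and elevation thresholds are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import grey_opening

def classify_ground(dsm, windows=(3, 5, 9), dh_thresholds=(0.5, 1.0, 2.0)):
    """Label each raster cell as ground (True) or nonground (False).

    dsm: 2-D array of elevations (a rasterized LiDAR surface).
    Each pass applies a grey-scale morphological opening with a growing
    window; objects narrower than the window are flattened, so cells
    whose elevation exceeds the opened surface by more than the pass
    threshold are marked nonground.
    """
    ground = np.ones(dsm.shape, dtype=bool)
    surface = dsm.astype(float)
    for w, dh in zip(windows, dh_thresholds):
        opened = grey_opening(surface, size=(w, w))
        ground &= (surface - opened) <= dh
        surface = opened
    return ground
```

The later skeletonization step could then be applied to the extracted river mask, e.g. with scikit-image's `skeletonize`, to obtain the concise skeleton-line representation the abstract describes.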
In this paper, we introduce a new image fusion method based on the contourlet transform. First, the inability of the wavelet transform to efficiently represent line and curve singularities in image processing is analyzed. Second, the principle of the contourlet transform and its strength in representing singularities in two or more dimensions are studied. Finally, the feasibility of image fusion using the contourlet transform is discussed in detail, and a new contourlet-based fusion method and fusion framework are proposed. The structure of the transform coefficients and the fusion procedure are given in detail. Experiments show that the proposed algorithm preserves edge and texture information better than the wavelet transform and Laplacian pyramid methods do in image fusion.
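The contourlet transform itself is not part of standard scientific Python, but the transform-domain fusion rule such methods typically use (average the lowpass band, keep the larger-magnitude detail coefficients) can be sketched with a one-level Haar wavelet as a stand-in. The transform choice and the fusion rules below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform of an even-sized image.
    Returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :] = a + d
    x[1::2, :] = a - d
    return x

def fuse(img1, img2):
    """Transform-domain fusion: average the lowpass band, keep the
    larger-magnitude detail coefficient in each highpass band."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

A directional transform such as the contourlet would apply the same coefficient-selection idea to many oriented subbands, which is what lets it preserve curve singularities better than a separable wavelet.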
KEYWORDS: Image fusion, Neurons, Medical imaging, Human vision and color perception, Image processing, Image segmentation, Image enhancement, Wavelet transforms, Neural networks, Digital image processing
The proposed new fusion algorithm is based on an improved pulse coupled neural network (PCNN) model, the fundamental characteristics of images, and the properties of the human visual system. In the traditional algorithm, the linking strength of every neuron is the same and its value is chosen through experimentation; here, the contrast of each pixel is used as its linking strength, so the linking strength of each pixel is chosen adaptively. After PCNN processing with the adaptive linking strength, a new fire mapping image is obtained for each image taking part in the fusion. The clear objects of each original image are selected by applying the compare-selection operator to the fire mapping images pixel by pixel, and all of them are then merged into a new clear image. Furthermore, under this algorithm other parameters, for example the threshold adjusting constant Δ, have only a slight effect on the fused image, which overcomes the difficulty of adjusting parameters in the PCNN. Experiments show that the proposed algorithm preserves edge and texture information better than the wavelet transform and Laplacian pyramid methods do in image fusion.
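The adaptive-linking-strength idea can be sketched with a heavily simplified PCNN: local contrast serves as the linking strength β, each neuron fires once when its internal activity crosses a decaying threshold, and fusion picks the pixel whose neuron fires earlier. All model details below (the uniform linking kernel, decay rate, iteration count, and the assumption that inputs are already scaled to [0, 1]) are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, size=3):
    """Per-pixel contrast |I - local mean| / (local mean + eps),
    used here as the adaptive linking strength beta."""
    mean = uniform_filter(img.astype(float), size=size)
    return np.abs(img - mean) / (mean + 1e-6)

def pcnn_fire_map(img, beta, iterations=20, decay=0.8, theta0=1.0):
    """Simplified PCNN: record the iteration at which each neuron
    first fires. img is assumed scaled to [0, 1]."""
    img = img.astype(float)
    theta = np.full(img.shape, theta0)           # decaying threshold
    fired = np.zeros(img.shape, dtype=bool)
    fire_time = np.full(img.shape, float(iterations))
    link = np.zeros(img.shape)                   # neighbours' pulses
    for t in range(iterations):
        u = img * (1.0 + beta * link)            # internal activity
        new_fire = (u > theta) & ~fired
        fire_time[new_fire] = t
        fired |= new_fire
        link = uniform_filter(fired.astype(float), size=3)
        theta = np.where(fired, np.inf, theta * decay)  # fire only once
    return fire_time

def fuse(img1, img2):
    """Pixel-wise compare-select: keep the pixel whose neuron fired
    earlier (brighter / higher-contrast regions fire sooner)."""
    t1 = pcnn_fire_map(img1, local_contrast(img1))
    t2 = pcnn_fire_map(img2, local_contrast(img2))
    return np.where(t1 <= t2, img1, img2)
```

Because β is set per pixel from image content rather than tuned globally, the remaining global parameters (decay rate, iteration count) matter much less, which mirrors the robustness claim in the abstract.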
In this paper, a novel image fusion method based on the finite ridgelet transform (FRIT) is presented. First, the inability of the wavelet transform to efficiently represent line and curve singularities in image processing is analyzed. Second, the principle of the FRIT and its strength in representing singularities in two or more dimensions are studied. Finally, the feasibility of image fusion using the FRIT is discussed in detail, and a new FRIT-based fusion method and fusion framework are proposed. The structure of the transform coefficients and the fusion procedure are given in detail. Experiments show that the proposed algorithm preserves edge and texture information better than the wavelet transform and Laplacian pyramid methods do in image fusion.
KEYWORDS: Image fusion, Imaging systems, Image processing, Point spread functions, Clocks, Wavelet transforms, Cameras, Digital cameras, Digital imaging, Information fusion
This paper deals with a new multi-focus image fusion algorithm based on the ratio of blurred and original image intensities. A sharpness measure based on the sum of the squared gray-level gradient vector magnitude is chosen. By analyzing the imaging model of a geometric optical system and the effect of the point spread function (PSF), a simulated second imaging model of the optical system is proposed. After this second imaging, a clear object in an image becomes blurred, while an already blurry object becomes even more blurred. The clear object is therefore identified by comparing, pixel by pixel, the difference in sharpness between each of the two differently focused images and its second-imaging counterpart. In this way, the clear objects of each original image are selected automatically and then merged into a new clear image. Experiments show that the proposed algorithm preserves edge and texture information better than the other multi-focus image fusion methods considered.
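The second-imaging rule can be sketched as follows: re-blur both inputs with a Gaussian standing in for the PSF, and at each pixel keep the input whose local sharpness drops more, since an in-focus region loses the most sharpness when blurred again. The Gaussian PSF, window size, and σ below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def sharpness(img, size=7):
    """Local sharpness: windowed sum of the squared gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return uniform_filter(gx ** 2 + gy ** 2, size=size)

def fuse(img1, img2, sigma=2.0):
    """Second-imaging rule: re-blur both inputs; the image whose local
    sharpness drops more was the in-focus one at that pixel."""
    drop1 = sharpness(img1) - sharpness(gaussian_filter(img1.astype(float), sigma))
    drop2 = sharpness(img2) - sharpness(gaussian_filter(img2.astype(float), sigma))
    return np.where(drop1 >= drop2, img1, img2)
```

An already-defocused region sits near the flat part of the PSF response, so a second blur changes it little; a sharp region sits on the steep part and loses sharpness rapidly, which is what the comparison of sharpness differences exploits.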
A novel adaptive multi-focus image fusion algorithm is given in this paper, based on an improved pulse coupled neural network (PCNN) model, the fundamental characteristics of multi-focus images, and the properties of visual imaging. In the traditional algorithm, the linking strength βij of every neuron in the PCNN model is the same and its value is chosen through experimentation; here, the clarity of each pixel is used as its linking strength, so the linking strength of each pixel is chosen adaptively. A fused image is produced by applying the compare-select operator to the firing mapping images of the images taking part in the fusion, deciding in which image each part is clearer and choosing the clear parts during fusion. Under this algorithm, other parameters, for example the threshold adjusting constant Δ, have only a slight effect on the fused image, which overcomes the difficulty of adjusting parameters in the PCNN. Experiments show that the proposed algorithm preserves edge and texture information better than the wavelet transform and Laplacian pyramid methods do in multi-focus image fusion.