In this paper, we present a new method for multitemporal SAR image filtering using 3D adaptive neighborhoods. The method takes into account both spatial and temporal information to derive the speckle-free value of a pixel. For each pixel individually, a 3D adaptive neighborhood is determined so that it contains only pixels coming from the same distribution as the current one. Statistics computed inside the established neighborhood are then used to derive the filter output. It is shown that the method provides good results, drastically reducing speckle over homogeneous areas while retaining edges and thin structures. The performance of the proposed method is compared, in terms of subjective and objective measures, with that of several classical speckle filtering methods.
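As an illustration of the general idea only (not the paper's exact statistical test), the following sketch grows a 3D spatio-temporal neighborhood around each pixel by accepting neighbours whose intensity lies within a relative tolerance of the centre value, then averages over the accepted pixels; the tolerance test is a hypothetical stand-in for the same-distribution criterion.

```python
import numpy as np

def adaptive_3d_speckle_filter(stack, radius=2, tol=0.5):
    """Crude sketch of 3D adaptive-neighborhood speckle filtering.

    `stack` is a (T, H, W) multitemporal intensity image. For every pixel,
    spatio-temporal neighbours within `radius` are kept only if their
    intensity is within a relative tolerance of the centre value (a rough
    stand-in for a same-distribution test); the filtered value is the mean
    over the accepted neighbours.
    """
    T, H, W = stack.shape
    out = np.empty_like(stack, dtype=float)
    for t in range(T):
        for i in range(H):
            for j in range(W):
                centre = stack[t, i, j]
                block = stack[max(t - radius, 0):t + radius + 1,
                              max(i - radius, 0):i + radius + 1,
                              max(j - radius, 0):j + radius + 1]
                # accept neighbours assumed to come from the same distribution
                mask = np.abs(block - centre) <= tol * max(abs(centre), 1e-6)
                out[t, i, j] = block[mask].mean()
    return out
```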
An algorithm for image noise removal based on local adaptive filtering is proposed in this paper. Three features of the local transform-domain filtering are suggested. First, filtering is performed on images corrupted not only by additive white noise, but also by image-dependent (e.g., film-grain) or multiplicative noise. Second, a number of transforms are used instead of a single one, and the resulting estimate is a linear combination of the estimates from each of the transforms weighted using local statistics. Third, these transforms are equipped with a varying adaptive window size selected by the so-called intersection of confidence intervals (ICI) rule. Finally, we combine all the estimates for a pixel from neighboring windows by weighted averaging. Comparison of the algorithm with known techniques for noise removal from images shows the advantage of the new approach, both quantitatively and visually.
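A minimal sketch of the ICI rule mentioned above, shown for a 1-D signal and a local-mean estimator; the set of window widths, the noise level and the threshold `gamma` are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ici_window_size(samples, centre, widths=(1, 2, 4, 8, 16), sigma=1.0, gamma=2.0):
    """Intersection-of-confidence-intervals (ICI) rule for choosing an
    adaptive window size around index `centre` of a 1-D signal corrupted by
    additive white noise of standard deviation `sigma`.

    For growing window widths, the local mean and its confidence interval
    [mean - gamma*std, mean + gamma*std] are computed; the largest width for
    which all intervals so far still intersect is returned.
    """
    lower, upper = -np.inf, np.inf
    best = widths[0]
    for h in widths:
        window = samples[max(centre - h, 0):centre + h + 1]
        est = window.mean()
        std = sigma / np.sqrt(len(window))
        lower = max(lower, est - gamma * std)
        upper = min(upper, est + gamma * std)
        if lower > upper:          # intervals no longer intersect: stop growing
            break
        best = h
    return best
```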
A technique for the detection of the shoreline in remotely sensed images is presented. The proposed technique is based on local contextual information, which is always present in remotely sensed data. In a previous version of the method, a connectivity map was computed by exploiting the gray levels and the physical distance between pixels in the image. The results obtained by processing different kinds of remotely sensed data clearly show that it is possible to correctly detect the shoreline position only if the sea portion is really homogeneous. Problems arise when sources of non-homogeneity are present in the image. An improved version of the method has been implemented, which exploits texture features, instead of the simple gray level, to compute the connectivity map. By operating in this way it is possible to better take into account the spatial variability of the data as an information source. It is important to underline that the described technique is intrinsically independent of the specific spatial resolution of the data, and therefore applies to any kind of image. This means that the approach, though already useful for upgrading coastal databases or for cartographic applications, could be usefully employed for erosion or accretion monitoring and geomorphologic analysis when images of higher resolution become available.
The article discusses cost reduction in the quality assessment of digital cartographic information by means of satellite images. It provides some generic comments on aspects involved in the quality of digital vector data following a schema in use by the USGS, and gives an example of how Landsat 7 data can be useful to assess the spatial accuracy of road networks.
We present in this paper a method for vectorization, matching and simplification of image contours in aerial stereovision. The goal is to compute a 3D reconstruction of the scene. The advantage of our method is that it only requires the extracted bitmap contours in one image of the pair. This is quite interesting since bitmap contour extraction often requires large computation times. Moreover, contours of the same objects seen from different locations may be quite different, making direct matching very difficult. Hence, our matching is done over image points rather than over contours. The matching of contour points is done with a correlation technique using the pair of images. Once this is done, the linearized contours are simplified by keeping only corresponding points which are geometrically significant. Finally, a set of stereo-vectors is obtained which can be used in a stereo-viewer or to compute a three-dimensional reconstruction. The efficiency of this process is tested on a difficult example of a stereo pair of an urban area with a wide angle between the two views. We show that the results are very satisfying in terms of relevance of the reconstructed vectors, speed of the process and direct extensibility to parallel computing for very large images.
The aim of the work is to propose a methodology for spatial/spectral analysis of urban patterns using neural networks. To address the problem of spectral ambiguity and spatial complexity related to built-up patterns, a two-stage classification procedure based on the Multi-Layer Perceptron is proposed. The first stage is devoted to generating discriminating features for problematic patterns by a supervised soft classification. It uses a moving window to evaluate neighbouring influences during the classification. The spatial relationships among the window pixels to be classified are not explicitly formalised; instead, the corresponding window is directly presented as input to the neural network classifier. The generated features are used in the second stage for complete land cover mapping. For an experimental evaluation, the strategy has been applied to the classification of natural colour aerial photographs acquired over a heterogeneous landscape, including urban patterns, and characterised by high spatial resolution and low spectral information. The proposed methodology for the extraction of urban patterns proved to be accurate, robust and transferable.
Neural networks have been successfully used to classify pixels in remotely sensed images; in particular, backpropagation neural networks have been used for this purpose. As is the case with all classification methods, the obtained classification accuracy depends on the amount of spectral overlap between classes. In this paper we study the new idea of using hierarchical neural networks to improve the classification accuracy. The basic idea is to use a first-level network to classify the easy pixels and then use one or more second-level networks for the more difficult pixels. First a rather standard backpropagation neural network is trained using the training pixels of a ground truth set. Two ideas for selecting the difficult pixels are tested. The first is to take those pixels for which the value of the winning neuron is below a threshold value. The second is to select pixels from output classes which get a high contribution from wrong input classes. Both ideas improve the percentage of correctly classified pixels and the average percentage of correctly classified pixels per class.
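A minimal sketch of the two-level scheme using the first difficulty criterion (winning output below a threshold); scikit-learn's MLPClassifier stands in for the backpropagation networks, and the threshold and network sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def two_level_classify(X_train, y_train, X_test, threshold=0.7):
    """Two-level classification sketch: a first network classifies all pixels,
    and pixels whose winning output falls below `threshold` are re-classified
    by a second network trained only on the "difficult" training pixels
    (assumes the difficult pixels span at least two classes)."""
    level1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_train, y_train)

    # identify difficult training pixels and train the second-level network on them
    p_train = level1.predict_proba(X_train)
    hard = p_train.max(axis=1) < threshold
    level2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    level2.fit(X_train[hard], y_train[hard])

    # classify: easy pixels by level 1, difficult ones by level 2
    p_test = level1.predict_proba(X_test)
    labels = level1.classes_[p_test.argmax(axis=1)]
    difficult = p_test.max(axis=1) < threshold
    labels[difficult] = level2.predict(X_test[difficult])
    return labels
```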
We study the application of Competitive Neural Networks (CNN) to the unsupervised analysis of remote sensing hyperspectral images. CNN are applied as clustering algorithms at the pixel level. We propose their use for the extraction of endmembers and evaluate them through the error induced by compression/decompression with the CNN in the supervised classification of the images. We show results with the Self Organizing Map and Neural Gas applied to a well-known case study.
Scaling issues play a critical role in most studies based on remote sensing data. The process of extracting quantitative scaling information from raw multi-resolution images is not trivial, and many aspects must be considered very carefully. To get a better picture of the role of spatial resolution, we conducted a series of flights in summer 1997 over several test sites in Spain and Portugal. In order to minimize the acquisition time (to keep changes in atmospheric status and solar illumination minimal) we used three flight altitude levels, which produced images with 1.25 m, 3 m and 12 m resolutions. The main steps in our methodology are: a) geometrical registration of the multi-resolution dataset; b) compensation of atmospheric effects; c) compensation of angular view changes; d) multi-resolution analysis. This work evaluates the importance of applying all steps thoroughly in order to achieve a fully comparable multi-resolution data set. In particular, BRDF effects have commonly been disregarded despite their large influence on apparent reflectance. Results obtained from two test sites with very different spatial characteristics (La Mancha, in Spain, and Evora Natural Park, Portugal) indicate the robustness of the approach, but also point out the importance of perturbing effects in obtaining actual multi-resolution information.
Low-level, or pixel-based, fusion aims to use efficiently the data acquired by sensors having different spatial and radiometric resolutions. Several methods have been proposed for merging panchromatic and multispectral data. Recently, multiresolution analysis has become one of the most promising approaches for the analysis of remote sensing images. The wavelet “à trous” algorithm makes it possible to apply a dyadic wavelet to merge non-dyadic data, by using a stationary or redundant transform for which decimation is not carried out. The high-resolution coefficients of the image having high spatial resolution may be added to the luminance component of the multispectral data. This procedure, namely AWL, which starts from a redundant bandpass representation of the image data, is very appealing because the spectral quality is highly preserved. However, it may suffer from a non-uniform spatial enhancement of the multispectral bands. In this paper, a method is proposed which uses spatial local information on the wavelet planes obtained from the “à trous” wavelet decomposition. The paper aims to improve the additive method by incorporating contextual spatial information to reduce possible over-enhancement of the multispectral data in image regions having higher spectral content, e.g., in highly textured regions.
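The following sketch shows the additive (AWL-style) idea the paper starts from: detail planes of the panchromatic image, obtained with an “à trous” decomposition using the common B3-spline kernel, are added to the luminance of the multispectral bands. The paper's context-adaptive weighting of the planes is not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

def atrous_planes(image, levels=2):
    """Undecimated 'a trous' decomposition with the usual B3-spline kernel:
    returns the wavelet (detail) planes of `image`."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    planes, current = [], image.astype(float)
    for j in range(levels):
        kernel = np.zeros(len(h) + (len(h) - 1) * (2**j - 1))
        kernel[::2**j] = h                      # insert 2^j - 1 holes between taps
        k2d = np.outer(kernel, kernel)
        smooth = convolve(current, k2d, mode='nearest')
        planes.append(current - smooth)
        current = smooth
    return planes

def awl_fusion(ms, pan, levels=2):
    """Additive wavelet-luminance fusion sketch: detail planes of the
    panchromatic image `pan` are added to the luminance of the multispectral
    bands `ms` (bands, H, W), and the bands are rescaled accordingly."""
    lum = ms.mean(axis=0)
    detail = sum(atrous_planes(pan, levels))
    lum_new = lum + detail
    return ms * (lum_new / np.maximum(lum, 1e-6))
```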
The continuous growth of data has created a demand for better data fusion algorithms. In this study we use a method called Bayesian networks to meet this demand. One reason why Bayesian networks are used in a wide range of applications is that modelling with Bayesian networks offers an easy and straightforward representation for combining a priori knowledge with observations. Another reason for the growing use of Bayesian networks is that they can combine attributes having different dimensions. In addition to the quite well-known theory of discrete and continuous Bayesian networks, we introduce a reasoning scheme for hybrid Bayesian networks. The reasoning method used is based on the polytree algorithm. Our aim is to show how to apply hybrid Bayesian networks to identification. One method to achieve dynamic features is also discussed. We have simulated dynamic hybrid Bayesian networks in order to identify aircraft in a noisy environment.
Classifier fusion approaches are receiving increasing attention for their capability of improving classification performance. At present, the usual operation mechanism for classifier fusion is the “combination” of classifier outputs. Improvements in performance are related to the degree of “error diversity” among the combined classifiers. Unfortunately, in remote-sensing image recognition applications, it may be difficult to design an ensemble that exhibits a high degree of error diversity. Recently, some researchers have pointed out the potential of “dynamic classifier selection” (DCS) as an alternative operation mechanism. DCS techniques are based on a function that selects the most appropriate classifier for each input pattern. The assumption of uncorrelated errors is not necessary for DCS because an “optimal” classifier selector always selects the most appropriate classifier for each test pattern. The potential of DCS has so far been demonstrated by experimental results on ensembles of classifiers trained using the same feature set. In this paper, we present an approach to multisensor remote-sensing image classification based on DCS. A selection function is presented aimed at choosing among classifiers created using different feature sets. The experimental results obtained in the classification of remote-sensing images and comparisons with different combination methods are reported.
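A minimal sketch of one common DCS selection function, local accuracy over the k nearest training patterns, with one pre-fitted classifier per feature set; the paper's own selection function may differ, and the function and parameter names here are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dcs_local_accuracy(classifiers, feature_sets_train, y_train,
                       feature_sets_test, k=10):
    """Dynamic classifier selection by local accuracy: for each test pattern,
    the classifier that is most accurate on its k nearest training patterns
    (in that classifier's own feature space) is selected.  `classifiers` are
    already-fitted models, one per feature set."""
    nn = [NearestNeighbors(n_neighbors=k).fit(X) for X in feature_sets_train]
    train_preds = [clf.predict(X) for clf, X in zip(classifiers, feature_sets_train)]
    n_test = feature_sets_test[0].shape[0]
    out = np.empty(n_test, dtype=y_train.dtype)
    for i in range(n_test):
        scores = []
        for c in range(len(classifiers)):
            _, idx = nn[c].kneighbors(feature_sets_test[c][i:i + 1])
            scores.append(np.mean(train_preds[c][idx[0]] == y_train[idx[0]]))
        best = int(np.argmax(scores))   # most locally accurate classifier wins
        out[i] = classifiers[best].predict(feature_sets_test[best][i:i + 1])[0]
    return out
```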
A methodology based on mathematical morphology operators to classify vegetation cover types in remotely sensed images (ortho and satellite) is proposed in this paper. It consists of the automatic creation of the training sets by integrating the data extracted at higher spatial resolution with the corresponding data at higher spectral resolution, the geometrical modelling of these sets to create a decision region for each class, and the automatic definition of the elementary units to be classified. The proposed approach is tested and illustrated with remotely sensed images from a region in central Portugal.
A method has recently been proposed that provides a hierarchical solution to the clustering problem under very general assumptions, relying on the cooperative behavior of an inhomogeneous lattice of chaotic coupled maps. The physical system can be seen as a chaotic neural network where neuron updates are performed by logistic maps. The mutual information between pairs of maps acts as a similarity index to obtain partitions of a data set corresponding to different resolution levels. As a result, a full hierarchy of clusters is generated. Experiments on artificial and real-life problems show the effectiveness of the proposed algorithm. Here we report the results of an application to landmine detection by dynamic thermography. Dynamic thermography makes it possible to discriminate among objects with different thermal properties by sequential IR imaging. Detection is then obtained through segmentation of temporal sequences of infrared images. An approach is proposed that gives the correct classification by analysing a very short image sequence, thus allowing a fast acquisition time. The algorithm has been successfully tested on image sequences of plastic anti-personnel mines taken from realistic minefields.
A novel automatic approach to the unsupervised detection of changes in a pair of remote-sensing images acquired over the same geographical area at different times is presented. The proposed approach, unlike classical ones, is based on the formulation of the unsupervised change-detection problem in terms of Bayesian decision theory. In this context, we propose an iterative non-parametric technique for the unsupervised estimation of the statistical terms associated with the gray levels of changed and unchanged pixels in the difference image generated by the comparison of the two images. This technique exploits the effectiveness of two theoretically well-founded estimation procedures: the reduced Parzen estimate (RPE) procedure and the expectation-maximization (EM) algorithm. Then, on the basis of the resulting non-parametric estimates, a Markov random field (MRF) approach is used for modeling the spatial-contextual information contained in the multitemporal images considered. The non-parametric nature of the proposed method allows its application to different kinds of remote-sensing images (e.g., SAR and optical images). Experimental results, obtained on a set of multitemporal remote-sensing images, confirm the effectiveness of the proposed technique.
We propose a new method of kernel density estimation with a varying adaptive window width. This method is different from traditional ones in two aspects. First, we use symmetric as well as nonsymmetric left and right kernels with discontinuities and show that the fusion of these estimates results in accuracy improvement. Second, we develop estimates with adaptive varying window widths based on the so-called intersection of confidence intervals (ICI) rule. Several examples of the proposed method are given for different types of densities and the quality of the adaptive density estimate is assessed by means of numerical simulations.
In order to apply the statistical approach to the classification of multisensor remote sensing data, one of the main problems lies in the estimation of the joint probability density functions (pdfs) f(X|ω_k) of the data vector X given each class ω_k, due to the difficulty of defining a common statistical model for such heterogeneous data. A possible solution is to adopt non-parametric approaches, which rely on the availability of training samples without any assumption about the statistical distributions involved. However, as the multisensor aspect generally involves numerous channels, small training sets make a direct implementation of non-parametric pdf estimation difficult. In this paper, the suitability of the concept of the dependence tree for the integration of multisensor information through pdf estimation is investigated. First, this concept, introduced by Chow and Liu, is used to provide an approximation of a pdf defined in an N-dimensional space by a product of N-1 pdfs defined in two-dimensional spaces, representing, in terms of graph-theoretical interpretation, a tree of dependencies. For each land cover class, a dependence tree is generated by minimizing an appropriate closeness measure. Then, a non-parametric estimation of the second-order pdfs f(x_i, x_j|ω_k) is carried out through the Parzen approach, based on the implementation of two-dimensional Gaussian kernels. In this way, it is possible to reduce the complexity of the estimation, while capturing a significant part of the interdependence among variables. A comparative study with two other non-parametric multisensor data fusion methods, namely the Multilayer Perceptron (MLP) and K-nearest neighbors (K-nn) methods, is reported. Experimental results carried out on a multisensor (ATM and SAR) data set show the interesting performance of the fusion method based on dependence trees, with the advantage of a reduced computational cost with respect to the two other methods.
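A minimal sketch of the Chow-Liu construction underlying the dependence tree: pairwise mutual information is estimated from histograms and the tree is the maximum spanning tree of the resulting graph. The subsequent Parzen estimation of the second-order pdfs is not shown, and the bin count is an assumption.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def pairwise_mutual_information(X, bins=16):
    """Histogram-based estimate of the mutual information between every pair
    of the d features (columns) of X."""
    n, d = X.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            pxy, _, _ = np.histogram2d(X[:, i], X[:, j], bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            mi[i, j] = mi[j, i] = np.sum(
                pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    return mi

def chow_liu_tree(X, bins=16):
    """Chow-Liu sketch: the dependence tree is the maximum spanning tree of
    the pairwise mutual-information graph, so an N-dimensional pdf is
    approximated by a product of N-1 second-order pdfs.  Returns tree edges."""
    mi = pairwise_mutual_information(X, bins)
    mst = minimum_spanning_tree(-mi)   # maximizing MI = minimizing its negative
    return [tuple(e) for e in np.transpose(mst.nonzero())]
```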
In the last decade, the application of statistical and neural network classifiers to remote-sensing images has been deeply investigated. Therefore, the performance, characteristics, and pros and cons of such classifiers are quite well known, even to remote-sensing practitioners. In this paper, we present the application to remote-sensing image classification of a new pattern recognition technique recently introduced within the framework of the Statistical Learning Theory developed by V. Vapnik and his co-workers, namely, Support Vector Machines (SVMs). In section 1, the main theoretical foundations of SVMs are presented. In section 2, experiments carried out on a data set of multisensor remote-sensing images are described, with particular emphasis on the design and training phase of an SVM. In section 3, the experimental results are reported, together with a comparison between the performances of SVMs, neural network, and k-NN classifiers.
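A minimal sketch, on synthetic stand-in data, of the kind of SVM-versus-k-NN comparison described; the kernel, parameter values and the toy labelling rule are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X: (n_pixels, n_bands) feature vectors; y: ground-truth labels.
# Synthetic data stand in here for the multisensor pixel features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

print("SVM accuracy:", cross_val_score(svm, X, y, cv=5).mean())
print("k-NN accuracy:", cross_val_score(knn, X, y, cv=5).mean())
```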
The Intensity Hue Saturation (IHS) transform is a widely used method to enhance the spatial resolution of multispectral images by substituting the intensity component with the high-resolution panchromatic image. However, such a direct substitution introduces important modifications of the spectral properties. A more rigorous approach consists in enhancing the spatial resolution of the intensity component through an appropriate combination with the panchromatic image. Such a combination is performed in the redundant wavelet domain by using a fusion model. SPOT images are used to illustrate the superiority of our approach over the IHS method for preserving spectral properties.
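For reference, a sketch of the baseline IHS-substitution pan-sharpening that the paper improves upon (HSV is used here as a stand-in for the IHS transform); the paper instead enhances the intensity component through a wavelet-domain combination with the panchromatic image rather than a direct substitution.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb
from skimage.transform import resize

def ihs_pansharpen(ms_rgb, pan):
    """Baseline substitution pan-sharpening: the multispectral image
    (H, W, 3, values in [0, 1]) is upsampled to the panchromatic grid,
    converted to HSV, its intensity (value) channel replaced by the
    mean/std-matched panchromatic image, and converted back."""
    up = resize(ms_rgb, pan.shape + (3,), anti_aliasing=True)
    hsv = rgb2hsv(up)
    intensity = hsv[..., 2]
    # match the panchromatic mean/std to the intensity channel before substitution
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-9) * intensity.std() + intensity.mean()
    hsv[..., 2] = np.clip(pan_matched, 0.0, 1.0)
    return hsv2rgb(hsv)
```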
As a rigorous mathematical formulation of the correspondence technique given in [1], inspired by data fusion methods, a new approach to detect a robust estimate of the number of clusters is proposed in this paper. The idea is to establish a correspondence between clusters of different classification results obtained with different numbers of clusters, which are greater than or equal to the number of land-cover classes. To formulate this idea in a rigorous mathematical framework, we consider the classification results as classifiers we want to combine to obtain a more precise classification result. The combination procedure used is inspired by recent developments in artificial-intelligence methods for classifier combination. Since the Bayesian method uses more information on the classifiers in the combination of their results, we have adopted this method in the elaboration of our classifier combination approach. We demonstrate our methodology by classifying real SAR data provided by the SIR-C sensors.
A description of an approach to primary local image recognition is given. The motivation for its application and its characteristics are discussed. A method is then proposed for the correction of misclassifications that occur in primary local image recognition. This method employs a graph-based estimation technique that uses information contained in supplementary classes in order to remove misclassifications and/or confirm the correct recognition of pixel hypotheses. In addition, the method is able to remove the supplementary classes after they are no longer needed. The particular features of the considered approach are that it is iterative and uses structures similar to those of center weighted median filters. Numerical simulation results are presented to illustrate the efficiency of the proposed technique.
In the domain of multispectral image processing, two approaches can be considered: the scalar processing scheme and the vectorial processing scheme. This paper presents a method that belongs to the first approach, with a specific selection of the relevant bands of the image. The method proceeds in four steps. The first step is devoted to the elimination of redundant observations by maximizing an entropy criterion. The selected bands are then filtered according to the degradation affecting them. In the third step, each of the filtered bands is segmented using a histogram multi-thresholding technique. In the last step, a fusion combining the results of the selected bands yields the final classification. This scheme is illustrated in the framework of an application to multispectral imagery acquired by the Compact Airborne Spectrographic Imager (CASI).
A grayscale morphological filter has been developed for small target detection in TV tracking applications. By applying morphological opening and closing, exploiting the properties of morphological filters, and subtracting the opening result from the closing result pixel by pixel, the algorithm produces excellent results in detecting small targets in TV images. The performance of the algorithm can be seen intuitively in Section 4 through a stereo (surface) representation of the images. Real TV images with sky and trees as background are tested, and the detection result is satisfactory. Furthermore, this algorithm has the potential for parallel processing and can solve the problem of polarity reversion in TV tracking applications, which is of great importance for real-time implementation.
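A minimal sketch of the detection map described above, using SciPy's grayscale morphology; the structuring-element size and the final detection threshold are left as assumptions.

```python
from scipy.ndimage import grey_opening, grey_closing

def small_target_map(frame, size=5):
    """Opening suppresses small bright targets and closing suppresses small
    dark targets, so the pixel-wise difference (closing - opening) highlights
    small targets of either polarity against a slowly varying background,
    which is why polarity reversion is not a problem.  Threshold the returned
    map to obtain candidate detections."""
    opened = grey_opening(frame, size=(size, size))
    closed = grey_closing(frame, size=(size, size))
    return closed - opened
```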
This paper systematically studies data fusion technology based on D-S evidence theory and analyzes the method for constructing the basic probability assignment (BPA) of a sensor. Detection fusion has been carried out for several situations based on a dual-sensor system, and the improvement of data fusion in terms of ROC curves has been simulated. We have processed small-target sequential images created by an infrared scene generator and achieved the expected results.
Geographic Information Systems (GIS) are often out of date, so some geographic elements are not represented. From satellite and/or aerial images, we can detect cartographic elements, integrate them into the GIS and thereby upgrade it. Doing this manually is very long and tedious work, so computer-based methods are needed. This paper presents several specific, automatic or semi-automatic methods to detect and identify several types of cartographic elements. These methods are fast and very efficient. A result evaluation is given to permit manual correction of the non-confident elements.
A new land cover classification methodology is proposed in this report. The idea is based on the following assumption: a land cover category is composed of several land cover elements and is identified by the texture of these elements. Land cover elements can be extracted by clustering the target image data, and the texture can be measured by a co-occurrence matrix of the extracted land cover elements. A three-layered feed-forward neural network driven by the co-occurrence matrix is used as the classifier in the proposed method. In this study, seven clustering methods and several numbers of land cover elements (16, 32, 64, 128, 256) were evaluated. The non-hierarchical disperse cluster-splitting method with 128 land cover elements showed the best classification accuracy. The proposed method showed 3%, 14%, 22%, 24% and 39% higher classification accuracy than neural network classifiers driven by the co-occurrence matrix of pixel values in a local area, texture features (vectors) extracted from the co-occurrence matrix of pixel values, the pixel values (spectral vector) of a single pixel, the pixel values of 3*3 pixels, and a conventional maximum likelihood pixel-wise classifier, respectively.
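A sketch of the feature-extraction step under stated assumptions (k-means standing in for the clustering stage, non-overlapping windows): pixels are clustered into land cover elements and a co-occurrence matrix of element labels is computed per window; these flattened matrices would then drive the feed-forward neural network classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.feature import graycomatrix

def element_cooccurrence_features(image_bands, n_elements=32, window=15):
    """Cluster pixels of an (H, W, bands) image into `n_elements` land-cover
    elements, then compute a normalized co-occurrence matrix of element labels
    for each non-overlapping window and return the flattened matrices."""
    h, w, b = image_bands.shape
    labels = KMeans(n_clusters=n_elements, n_init=5).fit_predict(
        image_bands.reshape(-1, b)).reshape(h, w).astype(np.uint8)
    features = []
    for i in range(0, h - window + 1, window):
        for j in range(0, w - window + 1, window):
            patch = labels[i:i + window, j:j + window]
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=n_elements, normed=True)
            # average over distances/angles, then flatten to a feature vector
            features.append(glcm.mean(axis=(2, 3)).ravel())
    return np.array(features)
```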
We propose in this communication an intrinsic validation method in order to improve the global segmentation results. The originality of this method lies in the possibility to automatically validate segmentation results at two levels: regions and pixels. This method is based on three different and complementary intrinsic quality measures: topology, homogeneity and stability. These three measures are computed from the image and the segmented image without any a priori knowledge of the ground truth. For each measure, a criterion is applied in order to validate the segmentation results. The developed method has been applied to the high-resolution multi-component images obtained by the CASI (Compact Airborne Spectrographic Imager). The application of this method to CASI images shows its efficiency.
The integration of automatic object extraction from imagery and GIS for the update of geodata is becoming more and more important. The most obvious reasons for this are that (1) in many countries large geodata bases were built up in the last decade and have to be kept up to date, and (2) the progress in automatic object extraction will lead to practical applications. In this paper we describe a method for the automatic extraction of forest, settlement, roads, and trees. The individual approaches are combined such that the underlying new model contains global and local context knowledge and is in compliance with the GIS data catalogue. The results of the automatic object extraction process are compared with the existing GIS objects. For this task we apply a method which allows the positional and abstraction uncertainty of the object borders to be modelled stochastically and leads to a probability-based decision on the topological relations between area objects. By means of examples we demonstrate that the proposed method is useful for database update and quality description.
Our aim is to build a digital elevation model (DEM) for the basin of the Rega River, a tributary of the Baltic Sea, on a 0.5 x 0.5 m grid. It is based on hand-drawn topographical maps at 1:10,000 scale scanned at 508 dpi. A digital terrain model (DTM) then results from the integration of the DEM with remotely sensed data (space and airborne images) and detailed geodata. In this paper, we describe the algorithms for noise removal, thinning and continuation of contour lines, and interpolation of elevation data used to process the topographical maps.
The problem of identifying terrains in Landsat-TM images on the basis of non-uniformly distributed labeled data is discussed in this paper. Our approach is based on the use of neural network classifiers that learn to predict posterior class probabilities. Principal Component Analysis (PCA) is used to extract features from spectral and contextual information. The proposed scheme obtains lower error rates than other model-based approaches.
In the present paper a methodology is presented to calculate the surface temperature (ST) from the combination of the radiometric temperatures in two different DAIS (Digital Airborne Imaging Spectrometer) thermal bands using the split-window (sw) method. To this end, the MODTRAN 3.5 radiative transfer code was used to predict radiances for DAIS channels 74 (8.75 µm), 75 (9.65 µm), 76 (10.48 µm), 77 (11.27 µm), 78 (12.00 µm) and 79 (12.67 µm) at different aircraft altitudes with the appropriate channel filter functions. In order to analyse atmospheric effects, a set of radiosoundings covering the variability of surface temperature and water vapour concentration on a world-wide scale was used. Once the algorithm had been obtained, it was applied to DAIS images acquired over Colmar (France) and Barrax (Spain) within the framework of the DAISEX'98 and '99 (Data Airborne Imaging Spectrometer Experiment) campaigns. Finally, a comparison between the surface temperature obtained from DAIS data, previously corrected for atmospheric and emissivity effects, and the simultaneous in-situ measurements is included. The results show that the proposed theoretical sw algorithms are able to produce land surface temperature with a standard deviation lower than 1 K, in good agreement with the validation results obtained from the DAISEX campaigns.
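For orientation, a commonly used generic quadratic split-window form for land-surface temperature is sketched below; the coefficients are left as parameters because the DAIS-specific values derived in the paper are not reproduced here.

```python
def split_window_lst(t_i, t_j, emissivity, a0, a1, a2, b1):
    """Generic split-window structure (illustrative only, not the paper's
    fitted algorithm):

        Ts = Ti + a1*(Ti - Tj) + a2*(Ti - Tj)**2 + a0 + b1*(1 - emissivity)

    t_i, t_j are brightness temperatures (K) in two thermal channels and
    a0, a1, a2, b1 are coefficients fitted from radiative-transfer simulations."""
    dt = t_i - t_j
    return t_i + a1 * dt + a2 * dt**2 + a0 + b1 * (1.0 - emissivity)
```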
Vector quantization (VQ) is an attractive image compression technique. VQ exploits the high correlations between neighboring pixels in a block, but disregards the high correlations between adjacent blocks. Unlike VQ, side match VQ (SMVQ) exploits codeword information from two encoded adjacent blocks, the upper and left blocks, to encode the current input vector. However, SMVQ does not consider the edge characteristics of the current input vector and its neighboring vectors at all. A variable-rate SMVQ has been proposed in the literature that exploits a block classifier to decide which class the input vector belongs to, using the variances of the upper and left codewords. However, this block classifier did not take the variance of the current input vector itself into account. Based on this observation, a variable-rate SMVQ with a new block classifier, called new CSMVQ, is proposed. This classifier uses the variance of the input vector together with the variances of its neighboring encoded blocks to encode the input vector. Experimental results show that new CSMVQ obtains a lower bit rate than VQ and the old CSMVQ. Moreover, new CSMVQ obtains higher image quality than SMVQ, the old CSMVQ and VQ. In addition, new CSMVQ needs a shorter encoding time than the old CSMVQ.
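A minimal sketch of the classification idea attributed to the new classifier: the decision uses the variance of the current block together with the variances of the upper and left codewords. The class names and threshold values here are illustrative assumptions, not the paper's.

```python
import numpy as np

def classify_block(block, upper_codeword, left_codeword,
                   t_low=50.0, t_high=400.0):
    """Decide the class of the current input block from its own variance and
    the variances of the already-encoded upper and left codewords; smoother
    classes would be coded with smaller codebooks (lower bit rate)."""
    v = max(np.var(block), np.var(upper_codeword), np.var(left_codeword))
    if v < t_low:
        return "smooth"      # small codebook, low bit rate
    elif v < t_high:
        return "textured"
    return "edge"            # large codebook, higher bit rate
```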
This paper investigates the problem of detecting airborne targets in a sequence of images recorded by a long range InfraRed (IR) sensor. The target appears in the IR images as a small, weak signal embedded in a strong background clutter. It is assumed that the target’s amplitude, velocity and position are unknown parameters. To accommodate the unknown parameters the Generalized Likelihood Ratio Test (GLRT) detector is derived. The detector structure and its actual implementation are discussed in detail. To test the detection algorithm an experiment involving a cooperating aircraft has been performed. The preliminary results obtained on this set of experimental data are presented and discussed.
In order to move towards a more adequate classification methodology, one issue that has received particular attention within the remote sensing community is the development of soft classification models as an alternative to conventional hard classification techniques. In soft classification, pattern indeterminacy must be connected with different forms of uncertainty, such as vagueness and ambiguity, resulting in graded strengths of membership to classes. The work is focused on the use of soft classification techniques for the production of soft maps in which grades of membership to classes are the final, meaningful outputs. When soft land cover maps are generated, grades of membership correspond to percentages of coverage; when maps specifying more abstract themes are generated, grades have to represent the natural human approximation with which patterns match cognitive categories. Despite the availability of several soft classification techniques, soft thematic mapping has not been employed very often, and the majority of classifications are still based on hard paradigms, with maps presented in discrete form. Significant problems in the use of these techniques limit their diffusion. The aim of this paper is to analyze the above limitations in an attempt to contribute to overcoming them.
The paper deals with the compression of multi-spectral satellite image data (high-resolution data consisting of radiances and top-of-the-atmosphere (TOA) fluxes) investigated within the framework of the EUMETSAT Satellite Application Facility (SAF) on Climate Monitoring project. Full multifunctionality support (quality scalability, resolution scalability, region-of-interest access) is required, and image calibration characteristics (luminance, radiance) must be preserved within certain limits for lossy image compression, together with an excellent image quality. We analyze state-of-the-art coding techniques with respect to these requirements. Our objective is to answer two questions, namely the capability of existing state-of-the-art compression techniques to comply with our image calibration characteristics requirements, and the support they offer regarding multifunctionality (in terms of quality scalability, resolution scalability and region-of-interest access). We propose basic modifications of these techniques so as to meet our multifunctionality requirements. We conclude from the experimental assessment of the analyzed techniques that none of the top coding algorithms available to date fully satisfies our imposed rate-distortion constraints, and we propose a path for future research in this field.
The problem of selecting an appropriate wavelet filter is always present in signal compression based on the wavelet transform. In this report, we give a method to select a wavelet filter for multispectral image compression. The wavelet filter selection is based on Learning Vector Quantization (LVQ). In the training phase, the best wavelet filter for each test image is found by a careful compression-decompression evaluation. Certain spectral features are used to characterize the pixel spectra. The LVQ is used to form the best wavelet filter class for different types of spectral images. When a new image is to be compressed, a set of spectra from that image is selected, the spectra are classified by the trained LVQ, and the filter associated with the largest class is selected for the compression of the whole multispectral image. The results show that our method finds the most suitable wavelet filter for the compression of multispectral images.
Lifting has been recognised as an effective numerical technique for realising linear transformations of digital data in integer-to-integer form, which guarantees perfect reversibility. When applied to decorrelate digital hyperspectral images in the spectral and spatial domains, lifting can be used to accomplish lossless data compression. Spectral pairwise principal component analysis (PPCA) and spatial wavelet transforms have been combined to demonstrate data compression of digital hyperspectral images acquired by the AVIRIS instrument, and in both transforms lifting has been applied to realise an efficient algorithm, suitable for on-board implementation in a spaceborne imaging spectrometer. The cascaded spectral PPCA algorithm produces a large number of noisy images, which are subsequently compressed using a general-purpose Lempel-Ziv coder. The resulting signal images are spatially decorrelated using a wavelet transform, and an embedded zerotree encoder (EZT) is applied to achieve data compression for these. Uniform linear quantisation of the spectrally and spatially decorrelated data is applied to allow for quasi-lossless compression, in which case a higher compression ratio is obtained. The overall compression factors obtained for 16-bit AVIRIS data from two scenes vary from about two for lossless compression to four for quasi-lossless compression with an rms error of 2% of the input standard deviation.
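A minimal example of the integer-to-integer lifting principle, using the simplest case (the Haar/S-transform); it is not the paper's PPCA or wavelet filters, but it shows why lifting guarantees perfect reversibility: each step is undone exactly by reversing the same integer operations.

```python
import numpy as np

def haar_lifting_forward(x):
    """Integer-to-integer Haar (S-transform) lifting on an even-length array:
    predict step d = odd - even, update step s = even + floor(d/2)."""
    even, odd = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = odd - even
    s = even + np.floor_divide(d, 2)
    return s, d

def haar_lifting_inverse(s, d):
    """Exact inverse: undo the update, then the predict step."""
    even = s - np.floor_divide(d, 2)
    odd = d + even
    out = np.empty(len(s) + len(d), dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([12, 15, 7, 9, 30, 31, 2, 5])
assert np.array_equal(haar_lifting_inverse(*haar_lifting_forward(x)), x)
```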
In this paper several methods for lossy image compression are compared in order to find adaptive schemes that may improve compression performance for hyperspectral images under a classification accuracy constraint. Our goal is to achieve high compression ratios without degrading classification accuracy too much for a given classifier. Lossy compression methods such as JPEG, three-dimensional JPEG, a tree-structured vector quantizer, a zero-tree wavelet encoder, and a lattice vector quantizer have been used to compress the image before the classification stage. Classification is carried out through classification trees. Two kinds of classification trees are compared: one-stage trees, which classify the input image using only a single classification stage; and multi-stage trees, which use a mixed class that delays the classification of problematic pixels for which the accuracy achieved in the current stage is not sufficient. Our experiments indicate that it is possible to achieve high compression ratios while maintaining the classification accuracy. It is also shown that compression methods that take advantage of the high band correlation of hyperspectral images provide better results and become more flexible for a real-case scenario. Compared to one-stage trees, the employment of multi-stage trees increases the classification accuracy and reduces the classification cost.
In this work, near-lossless compression, i.e., compression yielding strictly bounded reconstruction error, is proposed for high-quality compression of remote sensing images. First, a classified causal DPCM scheme is presented for optical data, either multi/hyperspectral (3D) or panchromatic (2D) observations. It is based on classified linear-regression prediction, followed by context-based arithmetic coding of the outcome prediction errors, and provides excellent performance, both for reversible and for irreversible, i.e., near-lossless, compression. Coding times are affordable thanks to the fast convergence of training, and decoding is always performed in real time. Then, an original approach to near-lossless compression of SAR images, based on the Rational Laplacian Pyramid (RLP), is presented. The baseband icon of the RLP is DPCM encoded, the intermediate layers are uniformly quantized, and the bottom layer is logarithmically quantized. As a consequence, the relative error, i.e., the pixel ratio of the original to the decoded image, can be strictly bounded by the quantization step size of the bottom layer of the RLP. The step sizes of the other layers are chosen to minimize the bit rate for a given distortion, by exploiting the quantization noise feedback loops at the encoder. In both cases, if the reconstruction errors fall within the boundaries of the noise distributions, either digitization noise or speckle, the decoded images will be virtually lossless, even though their encoding is not strictly reversible.
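A minimal sketch of how a near-lossless bound arises in general: uniformly quantizing integer prediction errors with step 2*delta + 1 guarantees a reconstruction error no larger than delta. This illustrates the bounding principle only, not the paper's classified DPCM or RLP schemes.

```python
import numpy as np

def near_lossless_encode(errors, delta):
    """Quantize integer prediction errors with step 2*delta + 1, so that the
    reconstruction error is strictly bounded by +/- delta."""
    return np.round(errors / (2 * delta + 1)).astype(np.int64)

def near_lossless_decode(q, delta):
    return q * (2 * delta + 1)

e = np.array([-7, -3, 0, 2, 5, 11])
q = near_lossless_encode(e, delta=2)
assert np.all(np.abs(near_lossless_decode(q, 2) - e) <= 2)   # bound holds
```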
There have been many approaches to the extraction of roads. Even though the complete automatic interpretation of aerial or satellite images is still remote, it is possible to obtain sound results from some images under some conditions. In this work we show the importance of texture and second-order statistics in the recognition of roads from satellite and aerial images. Since these images are in general registered, they can be combined with other information from a GIS. In this work, vector layers for road networks are used in combination with raster aerial or satellite images. Several results with high-resolution satellite and aerial images are presented. Shadows and other obstacles caused some mistakes and present a problem that remains to be tackled. Despite this, the importance of texture for the extraction of roads is demonstrated. Future work towards complete automation, introducing new information layers from a GIS, is also discussed.