An advanced continuous wavelet transform algorithm for digital interferogram analysis and processing is proposed. The algorithm is an extension of the traditional wavelet transform; the mother wavelet and normalization parameter are selected based on the characteristics of optical interferograms. To reduce the processing time, a fast Fourier transform scheme is employed to implement the wavelet transform calculation. The algorithm is simple and robust for interferogram filtering and for whole-field fringe and phase information detection. The concept is verified by computer simulation and by analysis of actual experimental interferograms.
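As a rough illustration of the FFT-based wavelet scheme described above, the sketch below computes a 1-D continuous wavelet transform by multiplying the signal spectrum with the wavelet spectrum at each scale. The Morlet mother wavelet and the L2 normalization here are assumptions for illustration, not necessarily the paper's choices; the ridge of the coefficient magnitude gives the local fringe frequency, and the argument gives the phase.

```python
import numpy as np

def cwt_fft(signal, scales, omega0=6.0):
    """Continuous wavelet transform via FFT (illustrative sketch).

    Uses an analytic Morlet mother wavelet with center frequency
    omega0, evaluated directly in the frequency domain, so each
    scale costs one inverse FFT instead of a full convolution.
    """
    n = len(signal)
    fhat = np.fft.fft(signal)
    omega = 2.0 * np.pi * np.fft.fftfreq(n)   # angular frequency per bin
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, a in enumerate(scales):
        # Fourier transform of the scaled Morlet wavelet,
        # L2-normalized so magnitudes are comparable across scales.
        psi_hat = (np.pi ** -0.25) * np.sqrt(a) * \
                  np.exp(-0.5 * (a * omega - omega0) ** 2)
        psi_hat = psi_hat * (omega > 0)       # analytic: positive freqs only
        coeffs[i] = np.fft.ifft(fhat * np.conj(psi_hat))
    return coeffs
```

For a fringe-like cosine signal, the scale maximizing the coefficient power tracks the fringe frequency (roughly a = omega0 / (2*pi*f)), and the unwrapped angle of the coefficients at that scale recovers the phase ramp.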
In this paper, a new feature extraction operator, the grating cell operator, is applied to analyze texture features and classify different fonts in scanned document images. This operator is compared with the isotropic Gabor filter feature extractor, which has also been employed for font classification of documents. To improve performance, a back-propagation neural network (BPNN) classifier is applied to the extracted features and compared with a simple weighted Euclidean distance (WED) classifier. Experimental results show that the grating cell operator performs better than the isotropic Gabor filter, and that the BPNN classifier provides more accurate classification results than the WED classifier.
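The weighted Euclidean distance baseline mentioned above can be sketched as follows: each font class is summarized by the per-feature mean and standard deviation of its training vectors, and a test vector is assigned to the class with the smallest standard-deviation-weighted distance. The class interface below is an assumption of this sketch, not the paper's implementation.

```python
import numpy as np

class WEDClassifier:
    """Weighted Euclidean distance classifier (illustrative sketch)."""

    def fit(self, X, y):
        # Per-class feature means and standard deviations as prototypes.
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.sigma_ = np.array([X[y == c].std(axis=0) + 1e-9
                                for c in self.classes_])
        return self

    def predict(self, X):
        # Distance from each sample to each class prototype,
        # with each feature weighted by its within-class spread.
        d = np.sqrt((((X[:, None, :] - self.mu_) / self.sigma_) ** 2).sum(axis=2))
        return self.classes_[np.argmin(d, axis=1)]
```

Unlike a trained BPNN, this classifier has no learned decision boundary beyond the class prototypes, which is consistent with the paper's finding that the neural classifier is more accurate.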
In this paper, we compare the performance of three classifiers used to identify the script of words in scanned document images. In both training and testing, a Gabor filter is applied and 16 channels of features are extracted. Three classifiers (Support Vector Machine (SVM), Gaussian Mixture Model (GMM), and k-Nearest-Neighbor (k-NN)) are used to identify different scripts at the word level (glyphs separated by white space). These three classifiers are applied to a variety of bilingual dictionaries and their performance is compared. Experimental results show the capability of the Gabor filter to capture script features and the effectiveness of all three classifiers for script identification at the word level.
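A 16-channel Gabor feature vector for a word image can be sketched as below, here as four spatial frequencies times four orientations; the particular frequencies, orientations, and kernel parameters are assumptions of this sketch, since the abstract does not specify the channel layout. Each channel's feature is the mean magnitude of the filtered image.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real 2-D Gabor kernel at a given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * \
        np.cos(2 * np.pi * freq * xr)

def word_features(img,
                  freqs=(0.1, 0.2, 0.3, 0.4),
                  thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """16-channel Gabor energy features for one word image (sketch)."""
    img = np.asarray(img, dtype=float)
    F = np.fft.fft2(img)
    feats = []
    for f in freqs:
        for t in thetas:
            k = gabor_kernel(f, t)
            kpad = np.zeros_like(img)            # zero-pad kernel to image size
            kpad[:k.shape[0], :k.shape[1]] = k
            resp = np.fft.ifft2(F * np.fft.fft2(kpad)).real
            feats.append(np.abs(resp).mean())    # channel energy
    return np.array(feats)
```

The resulting 16-dimensional vector per word would then be fed to the SVM, GMM, or k-NN classifier; the channel whose frequency and orientation match the dominant stroke pattern responds most strongly.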
In this paper, we present an approach to the bootstrap learning of a page segmentation model. The idea evolved from attempts to segment dictionaries, which often have a consistent page structure, and is extended to the segmentation of more general structured documents. In cases of highly regular structure, the layout can be learned from examples of only a few pages. The system is first trained using a small number of samples, and a larger test set is processed based on the training result. After corrections are made to a selected subset of the test set, these corrected samples are combined with the original training samples to generate bootstrap samples. The newly created samples are used to retrain the system, refine the learned features, and resegment the test samples. This procedure is applied iteratively until the learned parameters are stable. Using this approach, we do not need to provide a large set of training samples initially. We have applied this segmentation approach to many structured documents, such as dictionaries, phone books, and spoken language transcripts, and have obtained satisfactory segmentation performance.
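The iterative train-segment-correct-retrain loop described above can be sketched as follows. The `learn`, `segment`, and `correct_subset` callables are placeholders standing in for the paper's model estimation, page segmentation, and manual-correction steps; they are assumptions of this sketch, not the authors' API.

```python
def bootstrap_segment(train_samples, test_pages, learn, segment,
                      correct_subset, max_rounds=10):
    """Bootstrap learning loop for page segmentation (illustrative sketch).

    1. Learn parameters from a small training set.
    2. Segment the test pages with the current parameters.
    3. Manually correct a selected subset of the results.
    4. Add the corrected samples to the training pool and retrain.
    Repeat until the learned parameters stop changing.
    """
    samples = list(train_samples)
    params = learn(samples)
    for _ in range(max_rounds):
        results = [segment(page, params) for page in test_pages]
        corrected = correct_subset(results)   # operator fixes a few pages
        samples = samples + corrected         # bootstrap sample set grows
        new_params = learn(samples)
        if new_params == params:              # parameters stable -> stop
            break
        params = new_params
    return params
```

The point of the loop is that the initial training set can stay small: correction effort is spent only on a subset of automatically segmented pages, and those corrections are recycled as additional training data.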