This work reports a comparative study of five manual and automated methods for intra-subject pairwise registration of images from different modalities. The study covers a variety of inter-modal image registrations (MR-CT, PET-CT, PET-MR) using two manual point-based techniques with rigid and similarity transformations, one automated point-based approach based on the Iterative Closest Point (ICP) algorithm, and two automated intensity-based methods using mutual information (MI) and normalized mutual information (NMI). These techniques were applied to inter-modal registration of brain images of nine subjects from a publicly available dataset, and the results were evaluated qualitatively via checkerboard images and quantitatively using root-mean-square error and MI criteria. In addition, for each inter-modal registration, a paired t-test was performed on the quantitative results to detect any significant differences among the studied registration techniques.
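The mutual-information criterion used by the intensity-based methods can be estimated from a joint intensity histogram of the two images. The sketch below is a minimal illustration of that metric, not the paper's implementation; the bin count and the histogram-based density estimate are simplifying assumptions (production registration frameworks add interpolation, Parzen windowing, and an optimizer over transform parameters):

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Estimate MI between two equally sized images via a joint histogram.

    Minimal sketch: MI is the KL divergence between the joint intensity
    distribution and the product of its marginals.
    """
    hist_2d, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()       # joint probability estimate
    px = pxy.sum(axis=1)                # marginal of the fixed image
    py = pxy.sum(axis=0)                # marginal of the moving image
    px_py = np.outer(px, py)
    nz = pxy > 0                        # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / px_py[nz])))
```

During registration this value is maximized over the transform parameters: a well-aligned pair yields a sharper joint histogram and hence a higher MI than a misaligned one.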
The six-stage Frisén scale is a qualitative and subjective method for assessing papilledema (optic disc swelling due to raised intracranial pressure) using fundus photographs. The recent introduction of spectral-domain optical coherence tomography (SD-OCT) presents a promising alternative that enables 3-D quantitative estimation of papilledema. In this work, we propose an automated region-based volumetric estimation of the degree of papilledema from SD-OCT. After using a custom graph-based approach to segment the surfaces of the swollen optic nerve head, the volumes of the nasal, superior, temporal, and inferior regions are computed. Using a dataset of 70 optic-nerve-head (ONH) SD-OCT scans, the Spearman rank correlation coefficients between expert-defined Frisén scale grades and the total retinal (TR) volume and the nasal, superior, temporal, and inferior regional volumes were 0.737, 0.752, 0.747, 0.770, and 0.758, respectively. In addition, a fuzzy k-nearest-neighbor (k-NN) algorithm was used to predict Frisén scale grades in a leave-one-subject-out fashion. Using multiple features rather than the TR volume alone reduced the mean Frisén grade difference (MGD) from the expert-defined grades from 0.629 to 0.386 and increased the prediction accuracy from 41.43% to 64.29%.
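A fuzzy k-NN classifier differs from hard-voting k-NN in that neighbors contribute distance-weighted class memberships rather than equal votes. The sketch below illustrates that idea under simplifying assumptions not taken from the paper (crisp training memberships, Euclidean distance, and an illustrative grade range); the actual feature set and fuzzification are the authors' own:

```python
import numpy as np

def fuzzy_knn_predict(train_X, train_y, x, k=5, m=2.0, n_classes=7):
    """Predict a class for x with a fuzzy k-NN rule.

    Each of the k nearest neighbors votes with weight 1/d^(2/(m-1)),
    so closer neighbors dominate the membership estimate.
    """
    d = np.linalg.norm(train_X - x, axis=1)
    idx = np.argsort(d)[:k]                     # k nearest neighbors
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))
    memberships = np.zeros(n_classes)
    for i, wi in zip(idx, w):
        memberships[train_y[i]] += wi           # accumulate fuzzy votes
    return int(np.argmax(memberships / memberships.sum()))
```

With m = 2 the weighting reduces to inverse squared distance; larger m flattens the weights toward an ordinary majority vote.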
Glaucoma is one of the major causes of blindness worldwide. One important structural parameter for the
diagnosis and management of glaucoma is the cup-to-disc ratio (CDR), which tends to become larger as glaucoma
progresses. While approaches exist for segmenting the optic disc and cup within fundus photographs, and more
recently, within spectral-domain optical coherence tomography (SD-OCT) volumes, no approaches have been
reported for the simultaneous segmentation of these structures within both modalities combined. In this work, a
multimodal pixel-classification approach for the segmentation of the optic disc and cup within fundus photographs
and SD-OCT volumes is presented. In particular, after segmentation of other important structures (such as the
retinal layers and retinal blood vessels) and fundus-to-SD-OCT image registration, features are extracted from
both modalities and a k-nearest-neighbor classification approach is used to classify each pixel as cup, rim, or
background. The approach is evaluated on 70 multimodal image pairs from 35 subjects in a leave-10%-out fashion
(by subject). A significant improvement in classification accuracy is obtained using the multimodal approach
over that obtained from the corresponding unimodal approach (97.8% versus 95.2%; p < 0.05; paired t-test).
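The leave-10%-out evaluation is grouped by subject, so that the two image pairs from the same subject never straddle a train/test split. A minimal sketch of such a subject-wise split is shown below; the fold count and random seeding are illustrative assumptions, not the authors' exact protocol:

```python
import numpy as np

def subject_folds(subject_ids, n_folds=10, seed=0):
    """Split sample indices into folds grouped by subject.

    subject_ids has one entry per sample (e.g. per multimodal image pair);
    all samples from a subject land in the same test fold, giving a
    leave-10%-out-by-subject cross-validation.
    """
    rng = np.random.default_rng(seed)
    subjects = np.unique(subject_ids)
    rng.shuffle(subjects)                       # randomize subject order
    groups = np.array_split(subjects, n_folds)  # ~10% of subjects per fold
    folds = []
    for g in groups:
        test = np.isin(subject_ids, g)
        folds.append((np.where(~test)[0], np.where(test)[0]))
    return folds
```

Grouping by subject avoids the optimistic bias that would arise if, say, a subject's left eye were in the training set while the right eye were in the test set.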