In ophthalmology, various modalities and tests are used to obtain vital information on the eye's structure and function. For example, optical coherence tomography (OCT) is used to diagnose, screen, and aid treatment of eye diseases like macular degeneration or glaucoma. Such data are complemented by photographic retinal fundus images and functional tests of the visual field. DICOM, however, is not yet widely used in ophthalmology, and images are frequently encoded in proprietary formats. The eXtensible Neuroimaging Archive Toolkit (XNAT) is an open-source, NIH-funded framework for research PACS and is in use at the University of Iowa for neurological research applications. Extending its use to ophthalmology was therefore desirable but posed new challenges due to data types not previously considered and the lack of standardized formats. We developed custom tools for data types not natively recognized by XNAT using XNAT's low-level REST API. Vendor-provided tools can be included as necessary to convert proprietary data sets into valid DICOM. Clients can access the data in a standardized format while still retaining the original format if needed by specific analysis tools. Given project-specific permissions, results such as segmentations or quantitative evaluations can be stored as additional resources attached to previously uploaded datasets. Applications can use our abstract-level Python or C/C++ API to communicate with the XNAT instance. This paper describes the concepts and details of the designed upload script templates, which can be customized to the needs of specific projects, and the novel client-side communication API, which allows integration into new or existing research applications.
Glaucoma is one of the major causes of blindness worldwide. One important structural parameter for the
diagnosis and management of glaucoma is the cup-to-disc ratio (CDR), which tends to become larger as glaucoma
progresses. While approaches exist for segmenting the optic disc and cup within fundus photographs, and more
recently, within spectral-domain optical coherence tomography (SD-OCT) volumes, no approaches have been
reported for the simultaneous segmentation of these structures within both modalities combined. In this work, a
multimodal pixel-classification approach for the segmentation of the optic disc and cup within fundus photographs
and SD-OCT volumes is presented. In particular, after segmentation of other important structures (such as the
retinal layers and retinal blood vessels) and fundus-to-SD-OCT image registration, features are extracted from
both modalities and a k-nearest-neighbor classification approach is used to classify each pixel as cup, rim, or
background. The approach is evaluated on 70 multimodal image pairs from 35 subjects in a leave-10%-out fashion
(by subject). A significant improvement in classification accuracy is obtained using the multimodal approach
over that obtained from the corresponding unimodal approach (97.8% versus 95.2%; p < 0.05; paired t-test).
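The per-pixel k-NN classification step can be illustrated with a minimal NumPy sketch; the feature values and cluster layout below are synthetic stand-ins, not the paper's actual multimodal features:

```python
import numpy as np


def knn_predict(X_train, y_train, X, k=5):
    """Minimal k-nearest-neighbor classifier: majority vote among the k
    closest training feature vectors (0 = background, 1 = rim, 2 = cup)."""
    d = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_train[row]).argmax() for row in nn])


# Synthetic per-pixel feature vectors: three well-separated clusters
# standing in for background, rim, and cup pixels.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(c, 0.3, size=(50, 4)) for c in (0.0, 2.0, 4.0)])
y_train = np.repeat([0, 1, 2], 50)
pred = knn_predict(X_train, y_train,
                   np.array([[0.1, 0.0, 0.1, 0.0], [3.9, 4.1, 4.0, 3.8]]))
```

In practice a library implementation such as scikit-learn's `KNeighborsClassifier` would be used on the real feature vectors extracted from both modalities.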
The introduction of spectral Optical Coherence Tomography (OCT) scanners has enabled acquisition of high
resolution, 3D cross-sectional volumetric images of the retina. 3D-OCT is used to detect and manage eye diseases
such as glaucoma and age-related macular degeneration. To follow up patients over time, image registration is
a vital tool to enable more precise, quantitative comparison of disease states. In this work we present a 3D
registration method based on a two-step approach. In the first step we register both scans in the XY domain using
an Iterative Closest Point (ICP) based algorithm. This algorithm is applied to vessel segmentations obtained
from the projection image of each scan. The distance minimized in the ICP algorithm includes measurements
of the vessel orientation and vessel width to allow for a more robust match. In the second step, a graph-based
method is applied to find the optimal translation along the depth axis of the individual A-scans in the volume to
match both scans. The cost image used to construct the graph is based on the mean squared error (MSE) between
matching A-scans in both images at different translations. We have applied this method to the registration of
Optic Nerve Head (ONH) centered 3D-OCT scans of the same patient. First, 10 3D-OCT scans of 5 eyes with
glaucoma imaged in vivo were registered for a qualitative evaluation of the algorithm performance. Then, 17
OCT data set pairs of 17 eyes with known deformation were used for quantitative assessment of the method's
robustness.
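The attribute-augmented ICP of the first step can be sketched as follows. This simplified version penalizes vessel-width mismatch in the matching distance (orientation could be added the same way) and is not the paper's exact cost; all names are illustrative:

```python
import numpy as np
from scipy.spatial.distance import cdist


def icp_vessels(src_xy, dst_xy, src_w, dst_w, alpha=1.0, n_iter=20):
    """Rigid 2-D ICP on vessel centerline points whose match distance also
    penalizes differences in a per-point attribute (here vessel width)."""
    cur = src_xy.astype(float).copy()
    for _ in range(n_iter):
        # Combined distance: spatial term plus attribute-mismatch term.
        d = cdist(cur, dst_xy) + alpha * np.abs(src_w[:, None] - dst_w[None, :])
        matched = dst_xy[d.argmin(axis=1)]
        # Closed-form rigid alignment (2-D Kabsch) to the matched points.
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        cur = (cur - mu_s) @ R.T + mu_d
    return cur
```

Weighting the attribute term (`alpha`) high makes the correspondence search rely on vessel properties rather than position alone, which is what makes the match more robust.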
Parameters extracted from the retinal vasculature are correlated with various conditions, including diabetic retinopathy, and with cardiovascular diseases such as stroke. Segmentation of the vasculature on the retina has been a topic that has received much attention in the literature over the past decade. Analysis of the segmentation result, however, has received only limited attention, with most works describing methods to accurately measure the width of the vessels. Analyzing the connectedness of the vascular network is an important step towards the characterization of the complete vascular tree. The retinal vascular tree, from an image interpretation point of view, originates at the optic disc and spreads out over the retina. The tree bifurcates and the vessels also cross each other. The points where this happens are key to determining the connectedness of the complete tree. We present a supervised method to detect the bifurcations and crossing points of the vasculature of the retina. The method uses features extracted from the vasculature as well as the image in a location regression approach to find those locations of the segmented vascular tree where a bifurcation or crossing occurs (hereafter, points of interest, POI). We evaluate the method on the publicly available DRIVE database, in which an ophthalmologist has marked the POI.
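A common non-learned baseline for such POI detection simply counts skeleton neighbours; the paper's supervised location-regression approach is more involved, but this heuristic shows what is being detected:

```python
import numpy as np
from scipy.ndimage import convolve


def candidate_poi(skeleton):
    """Candidate bifurcation/crossing pixels: skeleton pixels with three or
    more skeleton neighbours in the 8-neighbourhood. A simple heuristic
    baseline, not the paper's regression method."""
    k = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1]])
    n_neighbours = convolve(skeleton.astype(int), k, mode="constant")
    return (skeleton > 0) & (n_neighbours >= 3)
```

On a one-pixel-wide skeleton, ordinary vessel pixels have two neighbours, end points have one, and bifurcations or crossings have three or more.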
Segmenting vessels in spectral-domain optical coherence tomography (SD-OCT) volumes is particularly challenging
in the region near and inside the neural canal opening (NCO). Furthermore, accurately segmenting them
in color fundus photographs also presents a challenge near the projected NCO. However, both modalities also
provide complementary information to help indicate vessels, such as a better NCO contrast from the NCO-aimed
OCT projection image and a better vessel contrast inside the NCO from fundus photographs. We thus present
a novel multimodal automated classification approach for simultaneously segmenting vessels in SD-OCT volumes
and fundus photographs, with a particular focus on better segmenting vessels near and inside the NCO
by using a combination of their complementary features. In particular, in each SD-OCT volume, the algorithm
pre-segments the NCO using a graph-theoretic approach and then applies oriented Gabor wavelets with oriented
NCO-based templates to generate OCT image features. After fundus-to-OCT registration, the fundus image
features are computed using Gaussian filter banks and combined with OCT image features. A k-NN classifier is
trained on 5 and tested on 10 randomly chosen independent image pairs of SD-OCT volumes and fundus images
from 15 subjects with glaucoma. Using ROC analysis, we demonstrate an improvement over the two closest previous
works performed on single-modal SD-OCT volumes, with an area under the curve (AUC) of 0.87 in the region around
the NCO (versus 0.81 for our and 0.72 for Niemeijer's single-modal approach) and 0.90 outside the NCO (versus
0.84 for our and 0.81 for Niemeijer's single-modal approach).
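Oriented Gabor responses like those used for the OCT features can be sketched with a generic real-valued Gabor kernel; the parameter values below are illustrative defaults, not the paper's wavelet settings:

```python
import numpy as np
from scipy.signal import convolve2d


def gabor_kernel(sigma=2.0, theta=0.0, lam=6.0, gamma=0.5, size=9):
    """Real-valued oriented Gabor kernel: a Gaussian envelope modulating a
    cosine carrier along the direction theta."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lam))


def oriented_responses(img, thetas):
    """Per-pixel feature stack: one filter response per orientation."""
    return np.stack([convolve2d(img, gabor_kernel(theta=t), mode="same")
                     for t in thetas])
```

A vessel-like ridge responds most strongly to the kernel whose oscillation runs across it, so the response stack encodes local vessel orientation.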
Optical coherence tomography (OCT), being a noninvasive imaging modality, has begun to find vast use in
the diagnosis and management of ocular diseases such as glaucoma, where the retinal nerve fiber layer (RNFL)
has been known to thin. Furthermore, the recent availability of the considerably larger volumetric data with
spectral-domain OCT has increased the need for new processing techniques. In this paper, we present an
automated 3-D graph-theoretic approach for the segmentation of 7 surfaces (6 layers) of the retina from 3-D
spectral-domain OCT images centered on the optic nerve head (ONH). The multiple surfaces are detected
simultaneously through the computation of a minimum-cost closed set in a vertex-weighted graph constructed
using edge/regional information, and subject to a priori determined varying surface interaction and smoothness
constraints. The method also addresses the challenges posed by the presence of large blood vessels and the optic
disc. The algorithm was compared to the average manual tracings of two observers on a total of 15 volumetric
scans, and the border positioning error was found to be 7.25 ± 1.08 μm and 8.94 ± 3.76 μm for the normal and
glaucomatous eyes, respectively. The RNFL thickness was also computed for 26 normal and 70 glaucomatous
scans where the glaucomatous eyes showed a significant thinning (p < 0.01, mean thickness 73.7 ± 32.7 μm in
normal eyes versus 60.4 ± 25.2 μm in glaucomatous eyes).
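The idea behind the graph search can be conveyed with a 2-D, single-surface toy version: pick one row per column minimizing total cost under a smoothness constraint. The paper solves the much harder 3-D, multi-surface problem as a minimum-cost closed set, but the constraint structure is analogous:

```python
import numpy as np


def segment_surface(cost, max_jump=1):
    """Single-surface 2-D analogue of the graph search: choose one row per
    column minimizing total cost subject to |row[j] - row[j-1]| <= max_jump
    (dynamic programming; illustrative only)."""
    n_rows, n_cols = cost.shape
    acc = cost.astype(float).copy()           # accumulated path cost
    back = np.zeros((n_rows, n_cols), dtype=int)
    for j in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - max_jump), min(n_rows, r + max_jump + 1)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[r, j] += acc[k, j - 1]
            back[r, j] = k
    rows = np.empty(n_cols, dtype=int)
    rows[-1] = int(np.argmin(acc[:, -1]))
    for j in range(n_cols - 1, 0, -1):        # backtrack the optimal path
        rows[j - 1] = back[rows[j], j]
    return rows
```

The smoothness bound plays the role of the paper's surface interaction and smoothness constraints; the 3-D formulation additionally couples multiple surfaces.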
Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular
diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity
(ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been
performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography
(OCT) volumes, which is capable of providing geometrically accurate vessel models, has, to the best of our
knowledge, not been previously studied. The purpose of this study is to develop and evaluate a method that
can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve
head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well
as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered
OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned
error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal
specialist was 3.4 ± 2.5 voxels (0.10 ± 0.07 mm).
A lower ratio between the width of the arteries and veins (Arteriolar-to-Venular diameter Ratio, AVR) on the
retina is well established to be predictive of stroke and other cardiovascular events in adults, as well as of an
increased risk of retinopathy of prematurity in premature infants. This work presents an automatic method that
detects the location of the optic disc, determines the appropriate region of interest (ROI), classifies the vessels
in the ROI into arteries and veins, measures their widths and calculates the AVR. After vessel segmentation
and vessel width determination, the optic disc is located and the system eliminates all vessels outside the AVR
measurement ROI. The remaining vessels are thinned, and vessel crossing and bifurcation points are removed, leaving
a set of vessel segments containing centerline pixels. Features are extracted from each centerline pixel that are
used to assign them a soft label indicating the likelihood the pixel is part of a vein. As all centerline pixels
in a connected segment should be the same type, the median soft label is assigned to each centerline pixel in
the segment. Next, artery-vein pairs are matched using an iterative algorithm, and the widths of the vessels are
used to calculate the AVR. We train and test the algorithm using a set of 25 high-resolution digital color fundus
photographs and a reference standard that indicates, for the major vessels in the images, whether they are an artery or
a vein. We compared the AVR values produced by our system with those determined using a computer assisted
method in 15 high resolution digital color fundus photographs and obtained a correlation coefficient of 0.881.
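The per-segment labelling and the final ratio can be sketched as below; taking the ratio of mean widths is a simplification of the paper's iterative artery-vein pairing, and the function names are illustrative:

```python
import numpy as np


def segment_labels(pixel_soft_labels, segment_ids):
    """Assign each centerline pixel the median soft label of its segment,
    since all pixels in a connected segment should share one type."""
    out = np.empty_like(pixel_soft_labels, dtype=float)
    for s in np.unique(segment_ids):
        mask = segment_ids == s
        out[mask] = np.median(pixel_soft_labels[mask])
    return out


def avr(widths, vein_likelihood, threshold=0.5):
    """Simplified AVR: mean arteriolar width over mean venular width.
    (The paper pairs vessels iteratively; summary formulas such as
    Knudtson's CRAE/CRVE are a common alternative, not shown here.)"""
    is_vein = vein_likelihood >= threshold
    return widths[~is_vein].mean() / widths[is_vein].mean()
```

The median over each segment suppresses isolated misclassified centerline pixels before the ratio is computed.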
The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution,
3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management
of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image
registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a
registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT)
framework1 to 3D.2 The SIFT feature extractor locates minima and maxima in the difference of Gaussian scale
space to find salient feature points. It then uses histograms of the local gradient directions around each found
extremum in 3D to characterize them in a 4096-element feature vector. Matching points are found by comparing
the distance between feature vectors. We apply this method to the rigid registration of optic nerve head- (ONH)
and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set
pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy
when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0±3.3
voxels was observed. The accuracy was assessed as average voxel distance error in N=1572 matched locations.
The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in
vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
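Matching descriptors by feature-vector distance is standard SIFT practice; the sketch below adds Lowe's ratio test with a common default threshold (not necessarily the value used in the paper):

```python
import numpy as np
from scipy.spatial.distance import cdist


def match_features(desc_a, desc_b, ratio=0.8):
    """Match descriptor rows of desc_a to desc_b by nearest-neighbour
    distance, keeping a match only when the best distance is clearly
    smaller than the second best (Lowe's ratio test)."""
    d = cdist(desc_a, desc_b)
    nn = np.argsort(d, axis=1)[:, :2]       # two closest candidates each
    matches = []
    for i, (j1, j2) in enumerate(nn):
        if d[i, j1] < ratio * d[i, j2]:
            matches.append((i, int(j1)))
    return matches
```

The surviving correspondences would then feed a rigid transform estimate between the ONH- and macula-centered scans.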
Computer-aided Diagnosis (CAD) systems for the automatic identification of abnormalities in retinal images are
gaining importance in diabetic retinopathy screening programs. Large numbers of retinal images are collected
during these programs and they provide a starting point for the design of machine learning algorithms. However,
manual annotations of retinal images are scarce and expensive to obtain. This paper proposes a dynamic CAD
system based on active learning for the automatic identification of hard exudates, cotton wool spots and drusen
in retinal images. An uncertainty sampling method is applied to select samples that need to be labeled by an
expert from an unlabeled set of 4000 retinal images. It reduces the number of training samples needed to obtain
an optimum accuracy by dynamically selecting the most informative samples. Results show that the proposed
method increases the classification accuracy compared to alternative techniques, achieving an area under the
ROC curve of 0.87, 0.82 and 0.78 for the detection of hard exudates, cotton wool spots and drusen, respectively.
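The uncertainty-sampling step reduces to ranking unlabelled images by how ambiguous the current classifier finds them; a minimal binary-case sketch (the selection rule is generic, not the paper's exact criterion):

```python
import numpy as np


def most_uncertain(p_abnormal, n):
    """Uncertainty sampling: return the indices of the n unlabelled samples
    whose predicted probability of being abnormal is closest to 0.5,
    i.e. the most informative ones to send to the expert."""
    uncertainty = 1.0 - 2.0 * np.abs(p_abnormal - 0.5)
    return np.argsort(uncertainty)[::-1][:n]
```

Labelling only these samples is what lets the classifier reach its optimum accuracy with far fewer annotations than random selection.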
Separating the retinal vascular tree into arteries and veins is important for quantifying vessel changes that
preferentially affect either the veins or the arteries. For example, the ratio of arterial to venous diameter, the
retinal a/v ratio, is well established to be predictive of stroke and other cardiovascular events in adults, as well
as the staging of retinopathy of prematurity in premature infants. This work presents a supervised, automatic
method that can determine whether a vessel is an artery or a vein based on intensity and derivative information.
After thinning of the vessel segmentation, vessel crossing and bifurcation points are removed leaving a set of
vessel segments containing centerline pixels. A set of features is extracted from each centerline pixel, and
using these features each pixel is assigned a soft label indicating the likelihood that it is part of a vein. As all
centerline pixels in a connected segment should be of the same type, we average the soft labels and assign this
average label to each
centerline pixel in the segment. We train and test the algorithm using the data (40 color fundus photographs)
from the DRIVE database1 with an enhanced reference standard. In the enhanced reference standard a fellowship
trained retinal specialist (MDA) labeled all vessels for which it was possible to visually determine whether they
were a vein or an artery. After applying the proposed method to the 20 images of the DRIVE test set we obtained
an area under the receiver operator characteristic (ROC) curve of 0.88 for correctly assigning centerline pixels
to either the vein or artery classes.
Glaucoma is a group of diseases which can cause vision loss and blindness due to gradual damage to the optic
nerve. The ratio of the optic disc cup to the optic disc is an important structural indicator for assessing the
presence of glaucoma. The purpose of this study is to develop and evaluate a method which can segment the
optic disc cup and neuroretinal rim in spectral-domain OCT scans centered on the optic nerve head. Our method
starts by segmenting 3 intraretinal surfaces using a fast multiscale 3-D graph search method. Based on one of
the segmented surfaces, the retina of the OCT volume is flattened to have a consistent shape across scans and
patients. Selected features derived from OCT voxel intensities and intraretinal surfaces were used to train a
k-NN classifier that can determine which A-scans in the OCT volume belong to the background, optic disc cup
and neuroretinal rim. Through 3-fold cross validation with a training set of 20 optic nerve head-centered OCT
scans (10 right eye scans and 10 left eye scans from 10 glaucoma patients) and a testing set of 10 OCT scans (5
right eye scans and 5 left eye scans from 5 different glaucoma patients), segmentation results of the optic disc
cup and rim for all 30 OCT scans were obtained. The average unsigned errors of the optic disc cup and rim were
1.155 ± 1.391 pixels (0.035 ± 0.042 mm) and 1.295 ± 0.816 pixels (0.039 ± 0.024 mm), respectively.
Retinal vessel segmentation is a prerequisite for the analysis of vessel parameters such as tortuosity, variation
of the vessel width along the vessel and the ratio between the venous and arterial vessel width. This analysis
can provide indicators for the presence of a wide range of diseases. Different types of approaches have been
proposed to segment the retinal vasculature and two important groups are vessel tracking and pixel processing
based methods. An advantage of tracking-based methods is the guaranteed connectedness of vessel segments,
whereas in pixel-processing-based methods connectedness is not guaranteed. In this work an automated vessel linking
framework is presented. The framework links together separate pieces of the retinal vasculature into a connected
vascular tree. To determine which vessel sections should be linked together the use of a supervised cost function is
proposed. Evaluation is performed on the vessel centerlines. The results show that the vessel linking framework
outperforms other automated vessel linking methods especially for the narrowest vessels.
The optic disc margin is of interest due to its use for detecting and managing glaucoma. We developed a
method for segmenting the optic disc margin of the optic nerve head (ONH) in spectral-domain optical coherence
tomography (OCT) images using a graph-theoretic approach. A small number of slices surrounding the Bruch's
membrane opening (BMO) plane was taken and used for creating planar 2-D projection images. An edge-based
cost function was obtained, more specifically a signed edge-based term favoring a dark-to-bright transition in the
vertical direction of polar projection images (corresponding to the radial direction in Cartesian coordinates).
Information from the segmented vessels was used to suppress the influence of the vasculature by
modifying the polar cost function and remedy the segmentation difficulty due to the presence of large vessels.
The graph search was performed in the modified edge-based cost images. The algorithm was tested on 22
volumetric OCT scans. The segmentation results were compared with expert segmentations on corresponding
stereo fundus disc photographs. We found a signed mean difference of 0.0058 ± 0.0706 mm and an unsigned
mean difference of 0.1083 ± 0.0350 mm between the automatic and expert segmentations.
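The signed edge term can be sketched as a negated vertical derivative: cost is low where intensity increases from top to bottom of the polar image (dark to bright, radially outward). This is a generic form of such a cost, not the paper's exact filter:

```python
import numpy as np


def signed_edge_cost(polar_img):
    """Low (negative) cost at dark-to-bright transitions moving down the
    rows of the polar projection image; high cost for the reverse."""
    c = np.zeros(polar_img.shape, dtype=float)
    # Negated central difference along the radial (row) direction.
    c[1:-1] = -(polar_img[2:].astype(float) - polar_img[:-2].astype(float))
    return c
```

Because the sign is kept, bright-to-dark edges are actively penalized rather than merely ignored, which is what makes the cost favor the disc margin's polarity.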
The latest generation of spectral optical coherence tomography (OCT) scanners is able to image 3D cross-sectional volumes of the retina at a high resolution and high speed. These scans offer a detailed view of the structure of the retina. Automated segmentation of the vessels in these volumes may lead to more objective diagnosis of retinal vascular diseases, including hypertensive retinopathy and retinopathy of prematurity. Additionally, vessel segmentation can allow color fundus images to be registered to these 3D volumes, possibly leading to a better understanding of the structure and localization of retinal structures and lesions. In this paper we present a method for automatically segmenting the vessels in a 3D OCT volume. First, the retina is automatically segmented into multiple layers, using simultaneous segmentation of their boundary surfaces in 3D. Next, a 2D projection of the vessels is produced by only using information from certain segmented layers. Finally, a supervised, pixel classification based vessel segmentation approach is applied to the projection image. We compared the influence of two methods for the projection on the performance of the vessel segmentation on 10 optic nerve head centered 3D OCT scans. The method was trained on 5 independent scans. Using ROC analysis, our proposed vessel segmentation system obtains an area under the curve of 0.970 when compared with the segmentation of a human observer.
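The layer-restricted projection step can be sketched as averaging voxel intensities between two segmented surfaces for each A-scan; the axis order and surface representation below are assumptions for illustration:

```python
import numpy as np


def layer_projection(volume, top, bottom):
    """2-D projection image: mean intensity of each A-scan between two
    segmented surfaces, given as per-(x, y) depth indices into a volume
    indexed as [z, x, y]."""
    z = np.arange(volume.shape[0])[:, None, None]
    mask = (z >= top[None]) & (z < bottom[None])
    # Average only the voxels inside the layer slab for each A-scan.
    return (volume * mask).sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
```

Restricting the projection to layers where vessel shadows are strongest is what gives the 2-D image enough vessel contrast for the pixel classifier.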
In this work we compare the performance of a number of vessel segmentation algorithms on a newly constructed retinal vessel image database. Retinal vessel segmentation is important for the detection of numerous eye diseases and plays an important role in automatic retinal disease screening systems. A large number of methods for retinal vessel segmentation have been published, yet an evaluation of these methods on a common database of screening images has not been performed. To compare the performance of retinal vessel segmentation methods we have constructed a large database of retinal images. The database contains forty images in which the vessel trees have been manually segmented. For twenty of those forty images a second independent manual segmentation is available. This allows for a comparison between the performance of automatic methods and the performance of a human observer. The database is available to the research community. Interested researchers are encouraged to upload their segmentation results to our website (http://www.isi.uu.nl/Research/Databases). The performance of five different algorithms has been compared. Four of these methods have been implemented as described in the literature. The fifth pixel classification based method was developed specifically for the segmentation of retinal vessels and is the only supervised method in this test. We define the segmentation accuracy with respect to our gold standard as the performance measure. Results show that the pixel classification method performs best, but the second observer still performs significantly better.
Conventional methods for the segmentation of lung fields from thorax CT scans are based on thresholding. They rely on a large grey value contrast between the lung parenchyma and surrounding tissues. In the presence of consolidations or other high density pathologies, these methods fail. For the segmentation of such scans, a lung shape should be induced without relying solely on grey level information. We present a segmentation-by-registration approach to segment the lung fields from several thin-slice CT scans (slice-thickness 1 mm) containing high density pathologies. A scan of a normal subject is elastically registered to each of the abnormal scans. Applying the found deformations to a lung mask created for the normal subject, a segmentation of the abnormal lungs is found. We implemented a conventional lung field segmentation method and compared it to the one using non-rigid registration techniques. The results of the algorithms were evaluated against manual segmentations in several slices of each scan. It is shown that the segmentation-by-registration approach can successfully identify the lung regions where the conventional method fails.
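Applying the deformation found by registration to the normal-subject lung mask can be sketched with a coordinate lookup; nearest-neighbour interpolation keeps the warped mask binary (the dense-coordinate deformation format here is an assumption, not the paper's representation):

```python
import numpy as np
from scipy.ndimage import map_coordinates


def warp_mask(mask, coords):
    """Warp a binary mask by sampling it at the source coordinates `coords`
    (shape (2, H, W)) produced by the registration; order=0 keeps labels
    binary, and out-of-image samples become background."""
    return map_coordinates(mask.astype(np.uint8), coords, order=0)
```

Because the lung shape is carried over from the normal scan, the warped mask covers consolidated regions that a grey-value threshold would miss.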
The skeletal maturity of children is usually assessed from a standard radiograph of the left hand and wrist. An established clinical method to determine the skeletal maturity is the Tanner-Whitehouse (TW2) method. This method divides the skeletal development into several stages (labelled A, B, ..., I). We are developing an automated system based on this method. In this work we focus on assigning a stage to one region of interest (ROI), the middle phalanx of the third finger. We classify each ROI as follows. A number of ROIs which have been assigned a certain stage by a radiologist are used to construct a mean image for that stage. For a new input ROI, landmarks are detected by using an Active Shape Model. These are used to align the mean images with the input image. Subsequently the correlation between each transformed mean stage image and the input is calculated. The input ROI can be assigned to the stage with the highest correlation directly, or the values can be used as features in a classifier. The method was tested on 71 cases ranging from stage E to I. The ROI was staged correctly in 73.2% of all cases and in 97.2% of all incorrectly staged cases the error was not more than one stage.
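The direct correlation-based assignment reduces to picking the stage whose aligned mean image correlates best with the input ROI; the Active Shape Model alignment is omitted here and the names are illustrative:

```python
import numpy as np


def stage_by_correlation(roi, stage_means):
    """Assign the TW2 stage whose mean image has the highest Pearson
    correlation with the (already aligned) input ROI."""
    corr = {stage: np.corrcoef(roi.ravel(), mean.ravel())[0, 1]
            for stage, mean in stage_means.items()}
    return max(corr, key=corr.get)
```

As the abstract notes, the per-stage correlation values can also be kept as a feature vector and fed to a classifier instead of taking the maximum directly.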