Purpose: Our study investigates the potential benefits of incorporating prior anatomical knowledge into a deep learning (DL) method designed for the automated segmentation of lung lobes in chest CT scans. Approach: We introduce an automated DL-based approach that leverages anatomical information from the lung's vascular system to guide and enhance the segmentation process. This involves utilizing a lung vessel connectivity (LVC) map, which encodes relevant lung vessel anatomical data. Our study explores the performance of three different neural network architectures within the nnU-Net framework: a standalone U-Net, a multitasking U-Net, and a cascade U-Net. Results: Experimental findings suggest that the inclusion of LVC information in the DL model can lead to improved segmentation accuracy, particularly in the challenging boundary regions of expiration chest CT volumes. Furthermore, our study demonstrates the potential for LVC to enhance the model's generalization capabilities. Finally, the method's robustness is evaluated through the segmentation of lung lobes in 10 cases of COVID-19, demonstrating its applicability in the presence of pulmonary disease. Conclusions: Incorporating prior anatomical information, such as LVC, into the DL model shows promise for enhancing segmentation performance, particularly in boundary regions. However, the extent of this improvement has limitations, prompting further exploration of its practical applicability.
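As an illustration of the multitask variant mentioned above, the sketch below shows a minimal 3D U-Net with two output heads, one for lobe labels and one for an auxiliary LVC map. This is not the authors' nnU-Net configuration; the network depth, channel counts, class counts, and the way the LVC target is encoded are assumptions made for brevity.

```python
# Minimal sketch of a multitask 3D U-Net with an auxiliary lung-vessel-connectivity (LVC)
# head, illustrating the idea of guiding lobe segmentation with vascular anatomy.
# NOT the authors' nnU-Net setup; depth, channels and class counts are illustrative.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.LeakyReLU(),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.LeakyReLU(),
    )

class MultitaskUNet3D(nn.Module):
    def __init__(self, n_lobes=6, n_lvc=2):          # 5 lobes + background; LVC as a binary map
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose3d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.lobe_head = nn.Conv3d(16, n_lobes, 1)    # main task: lobe labels
        self.lvc_head = nn.Conv3d(16, n_lvc, 1)       # auxiliary task: LVC map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.lobe_head(d1), self.lvc_head(d1)

if __name__ == "__main__":
    net = MultitaskUNet3D()
    ct = torch.randn(1, 1, 64, 64, 64)
    lobe_logits, lvc_logits = net(ct)
    print(lobe_logits.shape, lvc_logits.shape)  # (1, 6, 64, 64, 64), (1, 2, 64, 64, 64)
```

In such a setup the two heads would be trained with a joint loss, e.g. a lobe segmentation term plus a weighted auxiliary LVC term; the weighting is an assumption, not a value reported in the abstract.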
Glaucoma is a global disease that leads to blindness through pathological loss of retinal ganglion cell axons in the optic nerve head (ONH). The presented project aims at improving a computational algorithm for estimating the thickness and surface area of the waist of the nerve fiber layer in the ONH. Our currently developed deep learning AI algorithm meets the need for a morphometric parameter that detects glaucomatous change earlier than current clinical follow-up methods. In 3D OCT image volumes, two different AI algorithms identify the Optic nerve head Pigment epithelium Central Limit (OPCL) and the Inner limit of the Retina Closest Point (IRCP) in a 3D grid. Our computational algorithm includes the undulating surface area of the waist of the ONH, as well as waist thickness. In 16 eyes of 16 non-glaucomatous subjects aged 20 to 30 years, the mean difference in minimal thickness of the waist of the nerve fiber layer between our previous and current post-processing strategies was estimated as CIμ(0.95) 0 ± 1 μm (d.f. 15). The mean surface area of the waist of the nerve fiber layer in the optic nerve head was 1.97 ± 0.19 mm². Our computational algorithm yields slightly higher surface areas than published work, but, as expected, this may be because surface undulations of the waist are taken into account. Estimates of the thickness of the waist of the ONH are of the same order as those obtained with our previous computational algorithm.
The present project aims at developing fully automatic software for estimating the waist of the nerve fiber layer in the Optic Nerve Head (ONH), angularly resolved in the frontal plane, as a tool for morphometric monitoring of glaucoma. The waist of the nerve fiber layer is here defined as the Pigment epithelium central limit – Inner limit of the retina – Minimal Distance (PIMD). 3D representations of the ONH were collected with high-resolution OCT in young non-glaucomatous eyes and in glaucomatous eyes. An improved tool for manual annotation was developed in Python. This tool was found to be user friendly and to provide sufficiently precise manual annotation. PIMD was automatically estimated with software consisting of one AI model for detection of the inner limit of the retina and another AI model for localization of the Optic nerve head Pigment epithelium Central limit (OPCL). In the current project, the AI model for OPCL localization was retrained with new data, manually annotated with the improved annotation tool, from both non-glaucomatous and glaucomatous eyes. Finally, automatic annotations were compared to 3 annotations made by 3 independent annotators in an independent subset of both the non-glaucomatous and the glaucomatous eyes. It was found that the fully automatic estimation of PIMD-angle overlapped the 3 manual annotators, with small variation among the annotators. Considering interobserver variation, the improved annotation tool provided less variation than our original tool in non-glaucomatous eyes, suggesting that the variation in glaucomatous eyes is due to variable pathological anatomy that is difficult to annotate without error. The small relative variation, in relation to the substantial overall loss of PIMD in the glaucomatous eyes compared to the non-glaucomatous eyes, suggests that our software for fully automatic estimation of PIMD-angle can now be implemented clinically for monitoring glaucoma progression.
In this paper, an automatic strategy for measuring the thickness of the nerve fiber layer around the optic nerve head is proposed. The strategy uses two independent 2D U-nets that each perform a segmentation task. One network learns to segment the vitreous body in the standard Cartesian image domain, and the second learns to segment a disc around a point of interest in a polar image domain. The outputs from the two networks are then combined to find the thickness of the waist of the nerve fiber layer as a function of the angle around the center of the optic nerve head in the frontal plane. The two networks are trained on a combined data set captured on two separate OCT systems (spectral domain Topcon OCT 2000 and swept source Topcon OCT Triton) and annotated with a semi-automatic algorithm by up to 3 annotators. Initial results show that the automatic algorithm produces results comparable to those of the semi-automatic reference algorithm, in a fraction of the time and independent of the annotator. The automatic algorithm has the potential to replace the semi-automatic algorithm and opens up the possibility of routine clinical estimation of the nerve fiber layer. This would in turn allow loss of the nerve fiber layer to be detected earlier than before, which is anticipated to be important for the detection of glaucoma.
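The per-angle readout described above can be sketched as follows: frontal-plane masks (standing in for the outputs of the two U-nets) are resampled onto a polar grid around the ONH center, and a radial thickness is read off for every angle. The mask names, grid sizes, pixel size and the simple radial thickness definition are illustrative assumptions, not the published post-processing.

```python
# Illustrative Cartesian-to-polar resampling and per-angle thickness readout.
# The two input masks stand in for the outputs of the two U-nets; the radial
# thickness definition used here is a simplification for demonstration only.
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, center, n_angles=360, n_radii=200, max_radius=None):
    """Resample a 2D frontal-plane image onto an (angle, radius) grid around `center`."""
    cy, cx = center
    if max_radius is None:
        max_radius = min(image.shape) / 2
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(0, max_radius, n_radii)
    rr, aa = np.meshgrid(radii, angles)                 # shape (n_angles, n_radii)
    ys = cy + rr * np.sin(aa)
    xs = cx + rr * np.cos(aa)
    return map_coordinates(image.astype(float), [ys, xs], order=1), radii

def thickness_per_angle(inner_mask, outer_mask, center, pixel_size_um=10.0):
    """Radial distance between the outer edges of `inner_mask` and `outer_mask`, per angle."""
    inner_pol, radii = to_polar(inner_mask, center)
    outer_pol, _ = to_polar(outer_mask, center)
    # last radial index where each mask is still present, for every angle
    inner_edge = (inner_pol > 0.5).cumsum(axis=1).argmax(axis=1)
    outer_edge = (outer_pol > 0.5).cumsum(axis=1).argmax(axis=1)
    return (radii[outer_edge] - radii[inner_edge]) * pixel_size_um

# Example with synthetic annular masks centred at (128, 128).
yy, xx = np.mgrid[0:256, 0:256]
r = np.hypot(yy - 128, xx - 128)
inner, outer = (r < 40).astype(np.uint8), (r < 70).astype(np.uint8)
profile = thickness_per_angle(inner, outer, center=(128, 128))
print(profile.shape, profile.mean())                    # 360 angles, ~300 µm with 10 µm pixels
```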
In MRI neuroimaging, the shimming procedure is used before image acquisition to correct for inhomogeneity of the static magnetic field within the brain. To correctly adjust the field, the brain's location and edges must first be identified from quickly acquired low-resolution data. This process is currently carried out manually by an operator, which can be time-consuming and not always accurate. In this work, we implement a quick and automatic brain segmentation technique that could potentially be used during shimming. Our method is based on two main steps. First, a random forest classifier is used to obtain a preliminary segmentation from an input MRI image. Subsequently, a statistical shape model of the brain, previously generated from ground-truth segmentations, is fitted to the output of the classifier to obtain a model-based segmentation mask. In this way, a-priori knowledge of the brain's shape is included in the segmentation pipeline. The proposed methodology was tested on low-resolution images of rat brains and further validated on rabbit brain images of higher resolution. Our results suggest that the method is promising for the intended purpose in terms of time efficiency, segmentation accuracy, and repeatability. Moreover, shape modeling proved particularly useful when handling low-resolution data, which can lead to erroneous classifications when using machine learning-based methods alone.
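A minimal sketch of the two-step pipeline is given below: a voxel-wise random forest produces a preliminary brain mask, which the statistical shape model would then regularize. The hand-crafted features are assumptions, and fit_shape_model is a hypothetical placeholder for the model-based refinement step, which is not implemented here.

```python
# Sketch of the two-step pipeline: voxel-wise random forest -> preliminary mask,
# followed by statistical shape model (SSM) refinement. Features are assumptions;
# `fit_shape_model` is a hypothetical placeholder for the SSM fitting step.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume):
    """Stack simple per-voxel features: intensity, smoothed intensity, gradient magnitude."""
    return np.stack([
        volume,
        gaussian_filter(volume, sigma=2),
        gaussian_gradient_magnitude(volume, sigma=2),
    ], axis=-1).reshape(-1, 3)

def preliminary_segmentation(train_vols, train_masks, test_vol, n_trees=50):
    X = np.concatenate([voxel_features(v) for v in train_vols])
    y = np.concatenate([m.ravel() for m in train_masks])
    clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1).fit(X, y)
    prob = clf.predict_proba(voxel_features(test_vol))[:, 1]
    return prob.reshape(test_vol.shape) > 0.5

# The preliminary mask would then be passed to the shape model, e.g.:
# refined_mask = fit_shape_model(preliminary_mask, ssm)   # hypothetical refinement step
```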
Intensity inhomogeneity is a great challenge for automated organ segmentation in magnetic resonance (MR) images. Many segmentation methods fail to deliver satisfactory results when the images are corrupted by a bias field. Although inhomogeneity correction methods exist, they often fail to remove the bias field completely in knee MR images. We present a new iterative approach that simultaneously predicts the segmentation mask of knee structures using a 3D U-net and estimates the bias field in 3D MR knee images using partial convolution operations. First, the test images are run through a trained 3D U-net to generate a preliminary segmentation result, which is then fed to the partial convolution filter to create a preliminary estimate of the bias field using the segmented bone mask. The estimated bias field is then used to produce bias-field-corrected images as new inputs to the 3D U-net. Through this loop, the segmentation results and the bias field correction are iteratively improved. The proposed method was evaluated on 20 proton-density (PD)-weighted knee MRI scans with manually created segmentation ground truth, using 10-fold cross-validation. In our preliminary experiments, the proposed method outperformed the conventional inhomogeneity-correction-plus-segmentation setup in terms of both segmentation accuracy and speed.
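The iterative loop can be summarized in a few lines, sketched below under the assumption of a fixed number of iterations; unet_segment and estimate_bias_partial_conv are hypothetical placeholders standing in for the trained 3D U-net and the partial-convolution filter, which are not reproduced here.

```python
# Sketch of the iterative loop: segment, estimate the bias field from the segmented bone
# with partial convolutions, correct the image, repeat. `unet_segment` and
# `estimate_bias_partial_conv` are hypothetical placeholders; the fixed iteration count
# used as a stopping rule is an assumption.
import numpy as np

def iterative_bias_corrected_segmentation(volume, unet_segment, estimate_bias_partial_conv,
                                          n_iters=3, eps=1e-6):
    corrected = volume.copy()
    for _ in range(n_iters):
        mask = unet_segment(corrected)                       # preliminary segmentation
        bias = estimate_bias_partial_conv(corrected, mask)   # smooth multiplicative field
        corrected = volume / np.maximum(bias, eps)           # bias-field-corrected input
    return mask, bias, corrected
```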
Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are among the state of the art for segmenting medical images. However, traditional implementations of this approach are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate cortical thickness and cortical porosity of the investigated images. Cortical thickness and cortical porosity are computed using sphere fitting and mathematical morphological operations, respectively. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields similar results to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions, which in turn gives more stable estimates of cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
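For illustration, the sketch below computes the two quantities on a binary cortical mask: thickness from the largest inscribed sphere at a crude medial surface (a simple stand-in for the sphere-fitting procedure) and porosity from morphological closing of the cortex. It does not reproduce the exact recipes used in the study; voxel size, closing radius and the synthetic test object are assumptions.

```python
# Illustrative thickness and porosity measurements on a binary cortical mask.
# Simple stand-ins for the paper's sphere-fitting and morphology recipes.
import numpy as np
from scipy import ndimage

def cortical_thickness_mm(cortex_mask, voxel_size_mm):
    """Approximate mean thickness as twice the mean radius of maximal inscribed spheres."""
    dist = ndimage.distance_transform_edt(cortex_mask, sampling=voxel_size_mm)
    skeleton = dist >= ndimage.maximum_filter(dist, size=3) - 1e-6  # crude medial surface
    radii = dist[np.logical_and(skeleton, cortex_mask)]
    return 2.0 * radii.mean()

def cortical_porosity(cortex_mask, closing_radius_vox=3):
    """Porosity = pore volume inside the closed cortical envelope / envelope volume."""
    structure = ndimage.generate_binary_structure(3, 1)
    envelope = ndimage.binary_closing(cortex_mask, structure, iterations=closing_radius_vox)
    pores = np.logical_and(envelope, np.logical_not(cortex_mask))
    return pores.sum() / envelope.sum()

# Example on a synthetic hollow cylinder with a one-voxel "pore".
zz, yy, xx = np.mgrid[0:60, 0:60, 0:60]
r = np.hypot(yy - 30, xx - 30)
cortex = np.logical_and(r > 15, r < 22)
cortex[30, 48, 30] = False
print(cortical_thickness_mm(cortex, (0.082,) * 3), cortical_porosity(cortex))
```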
In this study we present a non-rigid point set registration method for 3D curves (composed of sets of 3D points). The method was evaluated on the task of registering 3D superficial vessels of the brain, where it was used to match vessel centerline points. It consists of a combination of the Coherent Point Drift (CPD) algorithm and the Thin-Plate Spline (TPS) semilandmark method. CPD is used to perform the initial matching of the 3D centerline points, while the semilandmark method iteratively relaxes/slides the points.
For the evaluation, a Magnetic Resonance Angiography (MRA) dataset was used. Deformations were applied to the extracted vessel centerlines to simulate brain bulging and sinking, using a TPS deformation in which a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation (T2). The errors are measured in the deformed space by transforming the original points with T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data is incomplete, parts of the reference vessels were cut before being deformed. Furthermore, anisotropic normally distributed noise was added.
The results show that the error estimates (root mean square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
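The error measure used above can be sketched in a few lines: the same original centerline points are pushed through the simulated deformation (T1) and the recovered one (T2), and the point-wise distances are summarized as RMSE and mean error. The TPS warps are passed in as callables; the toy warps in the example are assumptions, and the CPD/semilandmark registration itself is not shown.

```python
# Evaluation step only: compare T1- and T2-transformed copies of the original points.
import numpy as np

def registration_errors(points, t1_warp, t2_warp):
    """Return (RMSE, mean error) of distances between T1- and T2-transformed `points` (N x 3)."""
    d = np.linalg.norm(t1_warp(points) - t2_warp(points), axis=1)
    return np.sqrt(np.mean(d ** 2)), d.mean()

# Example with a toy ground-truth warp and a slightly perturbed recovered warp.
rng = np.random.default_rng(0)
pts = rng.uniform(-50, 50, size=(200, 3))                     # mm
t1 = lambda p: p + np.array([2.0, 0.0, 0.0])                  # simulated deformation
t2 = lambda p: p + np.array([2.0, 0.1, 0.0])                  # recovered deformation
rmse, mean_err = registration_errors(pts, t1, t2)
print(f"RMSE {rmse:.2f} mm, mean error {mean_err:.2f} mm")    # ~0.10 mm here
```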
We aim at reconstructing the superficial vessels of the brain; ultimately, they will serve to guide deformation methods that compensate for brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three mono complementary metal-oxide-semiconductor (CMOS) cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used: the process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering step in which the 3-D points are compared to surfaces obtained using the thin-plate spline with a decreasing relaxation parameter. Finally, the points are shifted to their local centroid positions. Evaluation on virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean-square error and mean error) are ∼1 mm.
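A hedged sketch of the Hessian-based direction assignment mentioned above: along a tubular structure, the Hessian eigenvector with the smallest-magnitude eigenvalue points along the vessel. The 2D setting and the scale (sigma) are assumptions chosen for illustration.

```python
# Assign a direction to each centerline point from the image Hessian: the eigenvector
# with the smallest-magnitude eigenvalue points along the tubular structure.
import numpy as np
from scipy.ndimage import gaussian_filter

def centerline_directions(image, points, sigma=2.0):
    """Return a unit direction vector (dy, dx) for each (y, x) centerline point."""
    hyy = gaussian_filter(image, sigma, order=(2, 0))
    hxx = gaussian_filter(image, sigma, order=(0, 2))
    hxy = gaussian_filter(image, sigma, order=(1, 1))
    dirs = []
    for y, x in points:
        H = np.array([[hyy[y, x], hxy[y, x]],
                      [hxy[y, x], hxx[y, x]]])
        w, v = np.linalg.eigh(H)
        dirs.append(v[:, np.argmin(np.abs(w))])    # eigenvector of smallest |eigenvalue|
    return np.array(dirs)

# Example on a synthetic horizontal bright ridge.
img = np.zeros((64, 64)); img[32, :] = 1.0
img = gaussian_filter(img, 1.0)
print(centerline_directions(img, [(32, 20), (32, 40)]))   # ≈ (0, ±1): along the ridge
```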
To accelerate level-set-based abdominal aorta segmentation on CTA data, we propose a periodic monotonic speed function, which allows segments of the contour to expand within one period and to shrink in the next, i.e., coherent propagation. This strategy avoids the contour's local wiggling behavior, which often occurs during propagation when certain points move faster than their neighbors: the curvature force moves them backwards even though the whole neighborhood will eventually move forwards. Using coherent propagation, these faster points instead stay in place, waiting for their neighbors to catch up. A period ends when all the expanding/shrinking segments can no longer expand/shrink, meaning that they have reached the border of the vessel or have been stopped by the curvature force. Coherent propagation also allows us to implement a modified narrow band level set algorithm that prevents endless computation at points that have already reached the vessel border. As the expanding/shrinking trend of these points changes after only a few iterations, the computation in the remaining iterations of a period can focus on the parts that are actually growing. Finally, a new convergence detection method is used to permanently stop updating the local level set function when the zero level set has been stationary in a voxel for several periods. The segmentation stops naturally when all points on the contour are stationary. In our preliminary experiments, a significant speedup (about 10 times) was achieved on 3D data with almost no loss of segmentation accuracy.
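A didactic skeleton of the coherent-propagation bookkeeping described above is sketched below: the motion direction is fixed within each period, voxels whose sign has not changed for several periods are frozen permanently, and the evolution stops when nothing moves. The speed and curvature terms are collapsed into a caller-supplied speed function; this is a simplified sketch, not the paper's level set solver, and all parameter values are assumptions.

```python
# Simplified coherent-propagation bookkeeping: monotonic motion within a period,
# permanent freezing of converged voxels, natural stopping when all voxels are frozen.
import numpy as np

def coherent_propagation(phi, speed, n_periods=50, iters_per_period=20,
                         freeze_after=3, dt=0.5):
    frozen = np.zeros(phi.shape, dtype=bool)
    stationary_periods = np.zeros(phi.shape, dtype=int)
    for _ in range(n_periods):
        sign_before = phi > 0
        direction = np.sign(speed(phi))            # fixed per period -> monotonic motion
        for _ in range(iters_per_period):
            update = dt * np.abs(speed(phi)) * direction
            phi = np.where(frozen, phi, phi + update)
        changed = (phi > 0) != sign_before
        stationary_periods = np.where(changed, 0, stationary_periods + 1)
        frozen |= stationary_periods >= freeze_after   # permanently stop converged voxels
        if frozen.all():                               # natural stopping criterion
            break
    return phi
```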