Extraction of blood vessels from organs is a challenging task in medical image processing. Accurate vessel segmentation is difficult to obtain even with manual labeling by human experts, because blood vessels have complicated structures and large variations that make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment hepatic vessels from computed tomography (CT) images. The proposed deep neural network (DNN) architecture consists of three deep convolutional neural networks that extract features from different planes of the CT data. The three networks share features at the first convolutional layer, learn their own features separately in the second layer, and are joined again at the top layer. To validate the effectiveness and efficiency of the proposed method, we conduct experiments on 12 CT volumes: training data are randomly generated from 5 CT volumes, and the remaining 7 are used for testing. Our network yields an average Dice coefficient of 0.830, whereas a 3D deep convolutional neural network yields around 0.7 and a multi-scale approach yields only 0.6.
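A minimal sketch of such a tri-planar architecture is given below in PyTorch, assuming one shared first convolutional layer, three plane-specific second layers, and a joint top layer that outputs a vessel probability for the center voxel; the patch size, channel counts, and kernel sizes are illustrative assumptions, not the authors' exact configuration.

# Hedged sketch (PyTorch) of a tri-planar vessel-segmentation network:
# one shared first convolution, plane-specific second convolutions, and a
# joint top layer. All layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

class TriPlanarVesselNet(nn.Module):
    def __init__(self, patch=32):
        super().__init__()
        # First convolution layer: weights shared by all three planes.
        self.shared_conv1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2))
        # Second convolution layer: one branch per plane (axial/coronal/sagittal).
        self.branch2 = nn.ModuleList([
            nn.Sequential(nn.Conv2d(16, 32, kernel_size=3, padding=1),
                          nn.ReLU(), nn.MaxPool2d(2))
            for _ in range(3)])
        feat = 32 * (patch // 4) ** 2
        # Top layer joining the three branches; outputs a vessel probability
        # for the center voxel of the three orthogonal patches.
        self.top = nn.Sequential(nn.Linear(3 * feat, 256), nn.ReLU(),
                                 nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, axial, coronal, sagittal):
        feats = []
        for plane, branch in zip((axial, coronal, sagittal), self.branch2):
            x = self.shared_conv1(plane)      # shared features (layer 1)
            x = branch(x)                     # plane-specific features (layer 2)
            feats.append(torch.flatten(x, 1))
        return self.top(torch.cat(feats, dim=1))

In use, each voxel would be classified from its three orthogonal 2D patches, e.g. net(axial_batch, coronal_batch, sagittal_batch), and the resulting per-voxel probabilities thresholded to form the vessel mask.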
Probabilistic atlases based on human anatomical structure have been widely used for organ segmentation. The challenge is how to register the probabilistic atlas to the patient volume. In addition, a conventional probabilistic atlas built from a single reference may introduce a bias toward a specific patient study. Hence, we propose a template matching framework based on an iterative probabilistic atlas for organ segmentation. First, we find a bounding box for the organ based on human anatomical localization. Then, the probabilistic atlas is used as a template to find the organ within this bounding box by template matching. Comparing our method with conventional and recently developed atlas-based methods, our results show an improvement in segmentation accuracy for multiple organs (p < 0.00001).
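A minimal sketch of the template-matching step is shown below, assuming normalized cross-correlation between the probabilistic atlas and the CT intensities inside the anatomically derived bounding box; the function names and the choice of similarity measure are illustrative assumptions and simplify the iterative atlas refinement described above.

# Hedged sketch: slide the probabilistic atlas (as a template) over the
# bounding box obtained from anatomical localization and keep the best match.
import numpy as np
from skimage.feature import match_template

def locate_organ(ct_volume, prob_atlas, bbox):
    """ct_volume: 3-D CT array; prob_atlas: 3-D probabilistic atlas (template);
    bbox: (zmin, zmax, ymin, ymax, xmin, xmax) from anatomical localization."""
    zmin, zmax, ymin, ymax, xmin, xmax = bbox
    roi = ct_volume[zmin:zmax, ymin:ymax, xmin:xmax]
    # Normalized cross-correlation of the atlas template inside the bounding box.
    score = match_template(roi, prob_atlas)
    dz, dy, dx = np.unravel_index(np.argmax(score), score.shape)
    # Best-match corner expressed in full-volume coordinates.
    return zmin + dz, ymin + dy, xmin + dx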
KEYWORDS: Lawrencium, Magnetic resonance imaging, Super resolution, Visualization, Image fusion, Image resolution, Image processing, Associative arrays, Data acquisition, 3D vision
Magnetic resonance imaging (MRI) can only acquire volume data with finite resolution due to various factors. In particular, the resolution in one direction (such as the slice direction) is much lower than in the others (such as the in-plane directions), yielding unrealistic visualizations. This study explores the reconstruction of isotropic-resolution MRI volumes from three orthogonal scans. The proposed super-resolution reconstruction is formulated as a maximum a posteriori (MAP) problem, which relies on a generation model of the acquired scans from the unknown high-resolution (HR) volume. Generally, the ensemble of deviations of the reconstructed HR volume from the available low-resolution (LR) scans is represented in the MAP formulation as a Gaussian distribution, which usually results in noise and artifacts in the reconstructed HR volume. Therefore, this paper investigates a robust super-resolution approach that formulates the deviation set as a Laplace distribution, which assumes sparsity in the deviation ensemble based on the insight that large deviations appear only around some unexpected regions. In addition, to achieve a reliable HR MRI volume, we integrate priors such as bilateral total variation (BTV) and non-local means (NLM) into the proposed MAP framework to suppress artifacts and enrich visual detail. We validate the proposed robust super-resolution strategy using mouse MRI data with high resolution in two directions and low resolution in one direction, imaged in three orthogonal scans: axial, coronal, and sagittal planes. Experiments verify that the proposed strategy achieves much better HR MRI volumes than the conventional MAP method, even with a very high magnification factor of 10.
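As a concrete, assumed form of this robust MAP reconstruction, the objective can be written as an l1 data-fidelity problem with BTV and NLM regularization; the operator notation below is illustrative rather than the authors' own:

\hat{X} = \arg\min_{X} \sum_{k \in \{\mathrm{ax},\,\mathrm{cor},\,\mathrm{sag}\}} \left\| Y_k - D_k B_k M_k X \right\|_1 + \lambda_{\mathrm{BTV}}\,\mathrm{BTV}(X) + \lambda_{\mathrm{NLM}}\,\mathrm{NLM}(X)

Here Y_k are the three orthogonal LR scans, and M_k, B_k, D_k model geometric alignment, blurring, and downsampling along each scan direction. The l1 norm corresponds to the Laplace deviation model and replaces the l2 (Gaussian) data term of the conventional MAP formulation.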
KEYWORDS: Lawrencium, Associative arrays, Magnetic resonance imaging, Super resolution, Chemical species, Prototyping, Data modeling, Image resolution, Image processing, Feature extraction
This study addresses the problem of generating a high-resolution (HR) MRI volume from a single low-resolution (LR) MRI input volume. Recent research has shown that sparse coding can be successfully applied to single-frame super-resolution of natural images; it relies on reconstructing any local image patch as a sparse linear combination of atoms taken from an appropriate over-complete dictionary. This study adapts the basic idea of sparse-coding-based super-resolution (SCSR) to MRI volume data and then improves the dictionary learning strategy of conventional SCSR to achieve a precise sparse representation of HR volume patches. In the proposed MRI super-resolution strategy, we learn only the dictionary of HR MRI volume patches with a sparse coding algorithm, and then propagate the HR dictionary to an LR dictionary by mathematical analysis in order to calculate the sparse representation (coefficients) of any local LR input volume patch. The unknown corresponding HR volume patch can then be reconstructed from the sparse coefficients of the LR volume patch and the corresponding HR dictionary. We validate that the proposed SCSR strategy with dictionary propagation can recover much clearer and more accurate HR MRI volumes than conventional interpolation methods.
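A minimal sketch of this dictionary-propagation idea is given below using scikit-learn sparse coding, assuming 8x8x8 HR volume patches and simple 2x block averaging as the HR-to-LR degradation; the patch handling, the analytical propagation, and all names are assumptions for illustration, not the paper's exact procedure.

# Hedged sketch: learn only the HR dictionary, propagate it to an LR dictionary
# by applying an assumed degradation to each atom, sparse-code LR patches
# against the LR dictionary, and reconstruct HR patches with the same codes.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

def degrade(hr_atoms, p=8, f=2):
    """Propagate HR atoms to LR atoms: reshape to p^3 patches and block-average."""
    n = hr_atoms.shape[0]
    vols = hr_atoms.reshape(n, p, p, p)
    lr = vols.reshape(n, p // f, f, p // f, f, p // f, f).mean(axis=(2, 4, 6))
    return lr.reshape(n, -1)

def train_hr_dictionary(hr_patches, n_atoms=256):
    """Learn the HR dictionary only (hr_patches: rows are flattened 8x8x8 patches)."""
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0)
    return learner.fit(hr_patches).components_          # D_h: (n_atoms, 512)

def super_resolve(lr_patches, d_h):
    """Sparse-code LR patches against the propagated LR dictionary, then
    reconstruct the HR patches from the same coefficients and D_h."""
    d_l = degrade(d_h)                                   # D_l: (n_atoms, 64)
    norms = np.linalg.norm(d_l, axis=1, keepdims=True)
    coder = SparseCoder(dictionary=d_l / norms, transform_algorithm='omp',
                        transform_n_nonzero_coefs=5)
    # Rescale codes so they are valid coefficients of the unnormalized atoms.
    codes = coder.transform(lr_patches) / norms.T
    return codes @ d_h                                   # reconstructed HR patches

In practice, the reconstructed HR patches would be tiled back (with overlap averaging) into the output volume; that assembly step is omitted here for brevity.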