KEYWORDS: Education and training, Ultrasonography, Muscles, Principal component analysis, Statistical analysis, Motion analysis, Data modeling, Diseases and disorders, Design and modelling, Visualization
Purpose: 4D transperineal ultrasound (TPUS) is used to examine female pelvic floor disorders. Muscle movement, such as a muscle contraction or a Valsalva maneuver, can be captured on TPUS. Our work investigates the possibility of unsupervised analysis and classification of TPUS data. Approach: An unsupervised 3D convolutional autoencoder is trained to compress TPUS volume frames into a latent feature vector (LFV) of 128 elements. The (co)variance of the features is analyzed, and statistical tests are performed to analyze how features contribute to storing contraction and Valsalva information. Further dimensionality reduction (principal component analysis or a 2D convolutional autoencoder) is applied to the LFVs of the frames of the TPUS movie to compress the data and analyze the interframe movement. Clustering algorithms (K-means clustering and Gaussian mixture models) are applied to this representation of the data to investigate the possibilities of unsupervised classification. Results: The majority of the features show a significant difference between contraction and Valsalva. The (co)variance of the features from the LFVs was investigated, and the features most prominent in capturing muscle movement were identified. Furthermore, the first principal component of the frames from a single TPUS movie can be used to identify movement between the frames. The best classification results were obtained after applying principal component analysis and Gaussian mixture models to the LFVs of the TPUS movies, yielding 91.2% accuracy. Conclusion: Unsupervised analysis and classification of TPUS data yield relevant information about the type and amount of muscle movement present.
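As a sketch of the best-performing pipeline above — principal component analysis on the latent feature vectors followed by Gaussian mixture model clustering — the snippet below runs the same steps on synthetic stand-in LFVs (the cluster means, sample sizes, and two-component PCA are invented for illustration; this is not the TPUS data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 128-element latent feature vectors (LFVs):
# two synthetic "maneuver types" with different means, 50 movies each.
lfv_contraction = rng.normal(0.0, 1.0, size=(50, 128))
lfv_valsalva = rng.normal(2.0, 1.0, size=(50, 128))
lfvs = np.vstack([lfv_contraction, lfv_valsalva])

# Compress the LFVs with PCA, then cluster with a two-component GMM.
pca = PCA(n_components=2)
compressed = pca.fit_transform(lfvs)
gmm = GaussianMixture(n_components=2, random_state=0).fit(compressed)
labels = gmm.predict(compressed)

# Unsupervised accuracy: best match between cluster labels and maneuvers.
truth = np.array([0] * 50 + [1] * 50)
acc = max(np.mean(labels == truth), np.mean(labels != truth))
print(f"clustering accuracy: {acc:.2f}")
```

On well-separated synthetic clusters this recovers the two maneuvers almost perfectly; the 91.2% reported above is on real, harder data.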
Absolute myocardial perfusion imaging (MPI) can be beneficial in the diagnosis and prognosis of patients with coronary artery disease. However, validation and standardization of perfusion estimates across centers is needed to ensure safe and adequate integration into clinical routine. MPI phantoms can contribute to this clinical need as these models can provide ground truth evaluation of absolute MPI in a simplified, though controlled setup. This work presents verification of phantom design choices, including the justification for using sorbents in mimicking contrast kinetics (i.e., tracer uptake and retention). Moreover, we compare preliminary phantom results obtained with SPECT-MPI with a patient example. Finally, we applied a general two-tissue compartment model to describe the obtained phantom time activity curve data. These evaluation steps support shaping of a suitable verification and validation strategy for the multimodal myocardial perfusion phantom design and realization.
Purpose: Detailed blood flow studies may contribute to improvements in carotid artery stenting. High-frame-rate contrast-enhanced ultrasound followed by particle image velocimetry (PIV), also called echoPIV, is a technique to study blood flow patterns in detail. The performance of echoPIV in the presence of a stent has not yet been studied extensively. We compared the performance of echoPIV in stented and nonstented regions in an in vitro flow setup.
Approach: A carotid artery stent was deployed in a vessel-mimicking phantom. High-frame-rate contrast-enhanced ultrasound images were acquired with various settings. Signal intensities of the contrast agent, velocity values, and flow profiles were calculated.
Results: The results showed decreased signal intensities and correlation coefficients inside the stent; however, PIV analysis in the stent still resulted in plausible flow vectors.
Conclusions: Velocity values and laminar flow profiles can be measured in vitro in stented arteries using echoPIV.
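At its core, the PIV step estimates local displacement as the lag that maximizes the cross-correlation between corresponding windows of successive frames. A minimal 1D sketch on a synthetic speckle-like signal (the signal, its length, and the shift are invented for illustration):

```python
import numpy as np

# Toy particle-image pair: a 1D speckle line and the same line shifted by
# a known amount, standing in for two consecutive contrast-enhanced frames.
rng = np.random.default_rng(1)
frame_a = rng.normal(size=256)
true_shift = 5
frame_b = np.roll(frame_a, true_shift)

# PIV core: the displacement is the lag maximizing the cross-correlation.
corr = np.correlate(frame_b - frame_b.mean(),
                    frame_a - frame_a.mean(), mode="full")
lags = np.arange(-len(frame_a) + 1, len(frame_a))
est_shift = lags[np.argmax(corr)]
print("estimated shift:", est_shift)
```

Dividing the estimated shift by the interframe time gives a velocity, which is why high frame rates are needed to track fast arterial flow.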
Intraoperative margin assessment during prostate cancer (PCa) surgery might reduce the number of positive surgical margins (PSM). Cerenkov Luminescence Imaging (CLI) based on optical imaging of PET radiopharmaceuticals is suitable for this purpose. Previous CLI research has been conducted with fluorine-18 (18F); however, gallium-68 (68Ga) has more favorable CLI properties and can be coupled to a prostate-cancer-specific tracer: the prostate-specific membrane antigen (68Ga-PSMA). Light yield, resolution, and camera sensitivity of 68Ga and 18F for CLI were investigated in a preclinical setting. CLI images were acquired using the LightPath system, with exposure times of 120 s (2×2 binning) and 300 s (8×8 binning). Three Eppendorf tubes (1 mL) with different radioactivity concentrations (2.5, 10, and 40 kBq/mL) of 18F and 68Ga were imaged. For both isotopes, an excellent linear relationship between the radioactivity concentration and detected light yield was observed (R2 = 0.99). 68Ga showed 22× more light yield than 18F, thus enabling lower detectable radioactivity concentration levels (1.2 vs. 23.7 kBq/mL). Based on these promising results, a prospective feasibility study for intraoperative prostate cancer specimen CLI measurements with 68Ga-PSMA was designed, and the first patients were enrolled in this study. The prostate was imaged ex vivo with the LightPath system ~70 minutes after injection of ~100 MBq 68Ga-PSMA. Hotspots on the CLI images were marked for comparison with histopathology and corresponded to a PSM, defined as tumor on ink. In the first patients, CLI correctly identified all patients with a PSM. These encouraging preliminary results motivated continuation of this trial.
Introduction. Treatment choice for extracranial carotid artery widening, also called aneurysm, is difficult. Blood flow simulation and experimental visualization can support clinical decision making and patient-specific treatment prediction. This study aims to simulate and validate the effect of flow-diverting stent placement on blood flow characteristics using numerical and in vitro simulation techniques in simplified carotid artery and aneurysm models. Methods. We have developed a workflow from geometry design to flow simulations and in vitro measurements in a carotid aneurysm model. To show the feasibility of the numerical simulation part of the workflow, which uses an immersed boundary method, we study a model geometry of an extracranial carotid artery aneurysm and place a flow-diverting stent in the aneurysm. We use ultrasound particle image velocimetry (PIV) to visualize experimentally the flow inside the aneurysm model. Results. Feasibility of ultrasound visualization of the flow, virtual flow-diverting stent placement, and numerical flow simulation is demonstrated. Flow is resolved to scales much smaller than the cross section of individual wires of the flow-diverting stent. Numerical analysis of the stented model showed a 25% reduction of the blood flow inside the aneurysm sac. Quantitative comparison of experimental and numerical results showed agreement in 1D velocity profiles. Discussion/conclusion. We find good numerical convergence of the simulations at appropriate spatial resolutions using the immersed boundary method. This allows us to quantify the changes in the flow in model geometries after deploying a flow-diverting stent. We visualized the physiological blood flow in a 1-to-1 aneurysm model using PIV, showing good correspondence to the numerical simulations. The novel workflow enables numerical as well as experimental flow simulations in patient-specific cases before and after flow-diverting stent placement. This may contribute to endovascular treatment prediction.
Computed tomography is a standard diagnostic imaging technique for patients with traumatic brain injury (TBI). A limitation is its poor-to-moderate sensitivity for small traumatic hemorrhages. A pilot study using an automatic method to detect hemorrhages <10 mm in diameter in patients with TBI is presented. We created an average image from 30 normal noncontrast CT scans that were automatically aligned using deformable image registration as implemented in the Elastix software. Subsequently, the average image was aligned to the scans of TBI patients, and the hemorrhages were detected by a voxelwise subtraction of the average image from the CT scans of nine TBI patients. An experienced neuroradiologist and a radiologist in training assessed the presence of hemorrhages in the final images and determined the false positives and false negatives. The 9 CT scans contained 67 small hemorrhages, of which 97% were correctly detected by our system. The neuroradiologist detected three false positives, and the radiologist in training found two false positives. For one patient, our method showed a hemorrhagic contusion that was originally missed. Comparing individual CT scans with a computed average may assist physicians in detecting small traumatic hemorrhages in patients with TBI.
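The detection idea — subtract a registered average of normal scans and flag hyperdense residuals — can be sketched in a few lines on a toy slice (the array sizes, HU values, lesion position, and threshold are invented for illustration; the actual method relies on deformable registration with Elastix):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a registered average "normal" CT slice (HU values).
average = np.full((64, 64), 30.0) + rng.normal(0, 2, size=(64, 64))

# Patient slice: same anatomy plus noise and a small hyperdense "hemorrhage".
patient = average + rng.normal(0, 2, size=(64, 64))
patient[30:34, 40:44] += 40.0  # hypothetical lesion, ~40 HU brighter

# Voxelwise subtraction and a simple threshold highlight the lesion.
difference = patient - average
detection = difference > 20.0
print("suspicious voxels:", int(detection.sum()))
```

In practice the subtraction only works after accurate deformable alignment; residual misregistration, not noise, is the dominant source of false positives.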
Institutional diagnostic workflows regarding coronary artery disease (CAD) may differ greatly. Myocardial perfusion imaging (MPI) is a commonly used diagnostic method in CAD, whereby multiple modalities are deployed to assess relative or absolute myocardial blood flow (MBF) (e.g., with SPECT, PET, MR, CT, or combinations). In line with proper clinical decision-making, it is essential to assess institutional MPI test validity by comparing MBF assessment against a ground truth. Our research focuses on developing such a validation instrument for MPI by simulating controlled myocardial perfusion in a phantom flow setup. A first step was made in the process of method development and validation by specifying basic requirements for the phantom flow setup. First tests in CT-MPI aimed to gain experience in clinical testing, to verify to what extent the set requirements are met, and to evaluate the steps needed to further improve the accuracy and reproducibility of measurements. The myocardium was simulated as a static cylinder and placed in a controllable pulsatile flow circuit, using flow sensors as a reference. First flow experiments were performed for different stroke volumes (20-35 mL/stroke). After contrast injection, dynamic MPI-CT scans (SOMATOM Force, Siemens) were obtained to investigate the relation between first-pass measured and computed flow. We observed a moderate correlation; hence, the required accuracy and reproducibility levels were not met. However, we have gained new insights into factors regarding the measurement setup and MBF computation process that might affect instrument validation, which we will incorporate in future flow setup design and testing.
Introduction: To improve carotid artery stenting (CAS), more information about the functioning of the stent is needed. Therefore, a method that can image the flow near and around a stent is required. The aim of this study was to evaluate the performance of high-frame-rate contrast-enhanced ultrasound (HFR CEUS) in the presence of a stent. Methodology: HFR CEUS acquisitions of a carotid artery phantom, a silicone tube with a diameter of 8 mm, with and without a stent were performed at transmit voltages of 2 V, 4 V, and 10 V using a Verasonics ultrasound system and a C5-2 probe. Different concentrations of ultrasound contrast agent (UCA) were tested in a blood-mimicking fluid (BMF). Particle image velocimetry (PIV) analysis was performed on singular value decomposition (SVD) filtered images. Mean and peak velocities and correlation coefficients were compared between stented and non-stented regions. Experimental results were also compared with theoretical and numerical models. Results: The averaged experimental mean velocity (0.113 m/s) was significantly lower than the theoretical and numerical mean velocity (0.129 m/s). The averaged experimental peak velocity (0.152 m/s) was significantly lower than the theoretical and numerical peak velocity (0.259 m/s). Correlation coefficients and averaged mean velocity values were lower (a difference of 0.022 m/s) in stented regions compared to non-stented regions. Conclusion: In vitro experiments showed an underestimation of mean and peak velocities in stented regions compared to non-stented regions. However, the microbubbles can be tracked efficiently and the expected laminar flow profile can be quantified using HFR CEUS near and around a stent.
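The SVD filtering step used before PIV can be sketched on synthetic data: stack the frames as a Casorati matrix (pixels × frames) and zero the largest singular components, which capture quasi-static tissue and stent echoes (the matrix sizes, amplitudes, and single-component cutoff are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_frames = 400, 60

# Toy ultrasound data: strong static "tissue/stent" clutter plus weak,
# fast-decorrelating "microbubble" signal, as a Casorati matrix.
clutter = np.outer(rng.normal(size=n_pixels), np.ones(n_frames)) * 10.0
bubbles = rng.normal(size=(n_pixels, n_frames))
data = clutter + bubbles

# SVD filter: the largest singular components capture the quasi-static
# clutter; zeroing them leaves the moving blood/contrast signal.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
n_cut = 1  # number of clutter components to remove
s_filtered = s.copy()
s_filtered[:n_cut] = 0.0
filtered = U @ np.diag(s_filtered) @ Vt

residual_clutter = np.linalg.norm(filtered.mean(axis=1))
print(f"static energy after filtering: {residual_clutter:.2f}")
```

Choosing the cutoff is the practical difficulty: too few components leaves stent clutter, too many removes slow-flow contrast signal.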
The application of endovascular aortic aneurysm repair has expanded over the last decade. However, the long-term performance of stent grafts, in particular durable fixation and sealing to the aortic wall, remains the main concern of this treatment. The sealing and fixation are challenged at every heartbeat by downward and radial pulsatile forces. Yet knowledge of the cardiac-induced dynamics of implanted stent grafts is sparse, as it is not measured in routine clinical follow-up. Such knowledge is particularly relevant to perform fatigue tests, to predict failure in the individual patient, and to improve stent graft designs. Using a physical dynamic stent graft model in an anthropomorphic phantom, we have evaluated the performance of our previously proposed segmentation and registration algorithm to detect periodic motion of stent grafts on ECG-gated (3D+t) CT data. Abdominal aortic motion profiles were simulated in two series of Gaussian-based patterns with different amplitudes and frequencies. Experiments were performed on a 64-slice CT scanner with a helical scan protocol and retrospective gating. Motion patterns as estimated by our algorithm were compared to motion patterns obtained from optical camera recordings of the physical stent graft model in motion. Absolute errors of the patterns' amplitude were smaller than 0.28 mm. Even the motion pattern with an amplitude of 0.23 mm was measured, although the amplitude of motion was overestimated by the algorithm by 43%. We conclude that the algorithm performs well for measurement of stent graft motion in the mm and sub-mm range. This is ultimately expected to aid in patient-specific risk assessment and the improvement of stent graft designs.
Sandra van der Velden, Christoph Moenninghoff, Isabel Wanke, Martha Jokisch, Christian Weimar, Rita Lopes Simoes, Anne-Marie van Cappellen van Walsum, Cornelis Slump
Alzheimer's disease (AD) is the most common form of dementia in the elderly. No curative treatment for AD exists at this moment. In the search for an effective medicine, research is directed towards predicting the conversion of mild cognitive impairment (MCI) to AD. White matter hyperintensities (WMHs) have been shown to contain information regarding the development of AD, although inconclusive results are found in the literature. These studies often use qualitative measures to describe WMHs, which is time-consuming and prone to variability. To investigate the relation between WMHs and the development of AD, algorithms to automatically determine quantitative properties in terms of volume and spatial distribution of WMHs are developed and compared between normal controls and MCI subjects. MCI subjects have a significantly higher total volume of WMHs than normal controls. This difference persists when lesions are classified according to their distance to the ventricular wall. Spatial distribution is also described by defining different brain regions based on a common coordinate system. This reveals that MCI subjects have a larger WMH volume in the upper part of the brain compared to normal controls. In four subjects, the change of WMH properties over time is studied in detail. Although such a small dataset cannot be used to draw definitive conclusions, the data suggest that progression of WMHs in subjects with a low lesion load is caused by an increase in the number of lesions and by the progression of juxtacortical lesions. In subjects with a larger lesion load, progression is caused by expansion of pre-existing lesions.
Joint damage in rheumatoid arthritis is frequently assessed using radiographs of hands and feet. Evaluation includes measurements of the joint space width (JSW) and detection of erosions. Current visual scoring methods are time-consuming and subject to inter- and intra-observer variability. Automated measurement methods avoid these limitations and have been fairly successful in hand radiographs. This contribution aims at foot radiographs. Starting from an earlier proposed automated segmentation method, we have developed a novel model-based image analysis algorithm for JSW measurements. This method uses active appearance and active shape models to identify individual bones. The model comprises ten submodels, each representing a specific bone of the foot (metatarsals 1-5, proximal phalanges 1-5). We performed segmentation experiments using 24 foot radiographs, randomly selected from a large database of the rheumatology department of a local hospital: 10 for training and 14 for testing. Segmentation was considered successful if the joint locations were correctly determined. Segmentation was successful in only 14% of cases. To improve results, a step-by-step analysis will be performed. We performed JSW measurements on 14 randomly selected radiographs. JSW was successfully measured in 75% of cases; the mean and standard deviation were 2.30 ± 0.36 mm. This is a first step towards automated determination of the progression of RA and therapy response in feet using radiographs.
The Alberta Stroke Program Early CT Score (ASPECTS) is frequently used for quantifying early ischemic changes (EICs) in patients with acute ischemic stroke in clinical studies. Varying, and often limited, interobserver agreement has been reported. Therefore, our goal was to develop and evaluate an automated brain densitometric method. It divides CT scans of the brain into ASPECTS regions using atlas-based segmentation. EICs are quantified by comparing the brain density between contralateral sides. This method was optimized and validated using CT data from 10 and 63 patients, respectively. The automated method was validated against manual ASPECTS, stroke severity at baseline, and clinical outcome after 7 to 10 days (NIH Stroke Scale, NIHSS) and 3 months (modified Rankin Scale, mRS). Manual and automated ASPECTS showed similar and statistically significant correlations with baseline NIHSS (R = −0.399 and −0.277, respectively) and with follow-up mRS (R = −0.256 and −0.272), except for the follow-up NIHSS. Agreement between automated and consensus ASPECTS reading was similar to the interobserver agreement of manual ASPECTS (differences <1 point in 73% of cases). The automated ASPECTS method could, therefore, be used as a supplementary tool to assist manual scoring.
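The densitometric core of the method — mirroring the brain across the midline and comparing regional densities — can be sketched on a toy slice (the region, HU values, and simple midline mirror are illustrative assumptions; the real method uses atlas-based ASPECTS regions and registered scans):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy axial CT slice (HU): symmetric brain, with one hypodense region on
# the left standing in for early ischemic change.
slice_hu = np.full((64, 64), 35.0) + rng.normal(0, 1.5, size=(64, 64))
slice_hu[20:30, 10:20] -= 6.0  # hypothetical ischemic region, -6 HU

# Mirror the slice across the midline and compare region means, as in
# densitometric scoring (here for one hand-picked region).
mirrored = slice_hu[:, ::-1]
region = (slice(20, 30), slice(10, 20))
asymmetry = slice_hu[region].mean() - mirrored[region].mean()
print(f"left-right density difference: {asymmetry:.1f} HU")
```

A markedly negative left-right difference flags the region as hypodense relative to its contralateral counterpart, which is the cue the automated score thresholds per ASPECTS region.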
In the progressive stages of cancer, metastatic lesions often develop in the femur. The accompanying pain and risk of fracture dramatically affect the quality of life of the patient. Radiotherapy is often administered as palliative treatment to relieve pain and restore the bone around the lesion. It is thought to affect the bone mineralization of the treated region, but the quantitative relation between radiation dose and femur remineralization remains unclear. A new framework for the longitudinal analysis of CT scans of patients receiving radiotherapy is presented to investigate this relationship. The implemented framework is capable of automatic calibration of Hounsfield units to calcium-equivalent values and the estimation of a prediction interval per scan. Other features of the framework are temporal registration of femurs using elastix, transformation of arbitrary regions of interest (ROIs), and extraction of metrics for analysis. Built in MATLAB, the modular approach aids easy adaptation to the pertinent questions in the explorative phase of the research. For validation purposes, an in-vitro model consisting of a human cadaver femur with a milled hole in the intertrochanteric region was used, representing a femur with a metastatic lesion. The hole was incrementally stacked with plates of PMMA bone cement of variable radiopaqueness. Using a Kolmogorov-Smirnov (KS) test, changes in the density distribution due to an increase of the calcium concentration could be discriminated. In a 21 cm3 ROI, changes in 8% of the volume from 888 ± 57 mg·ml−1 to 1000 ± 80 mg·ml−1 could be statistically proven using the proposed framework. In conclusion, the newly developed framework proved to be a useful and flexible tool for the analysis of longitudinal CT data.
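The statistical step can be sketched directly: a two-sample Kolmogorov-Smirnov test comparing voxel density distributions before and after a simulated remineralization of 8% of a region (the density means and spreads loosely follow the numbers above, while the voxel counts are invented for illustration):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)

# Toy voxel densities (mg/ml calcium equivalent) in a region of interest
# before and after a simulated remineralization of 8% of the volume.
baseline = rng.normal(888, 57, size=20000)
followup = baseline.copy()
followup[:1600] = rng.normal(1000, 80, size=1600)  # 8% of voxels densified

# A two-sample Kolmogorov-Smirnov test detects the distribution change.
stat, p_value = ks_2samp(baseline, followup)
print(f"KS statistic {stat:.3f}, p = {p_value:.2e}")
```

The KS test compares whole distributions rather than means, which is why it can pick up a shift confined to a small fraction of the voxels.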
Visual estimation of tumor and stroma proportions in microscopy images yields a strong, Tumor-(lymph)Node-Metastasis (TNM) classification-independent predictor for patient survival in colorectal cancer. Therefore, it is also a potent (contra)indicator for adjuvant chemotherapy. However, quantification of tumor and stroma through visual estimation is highly subject to intra- and inter-observer variability. The aim of this study is to develop and clinically validate a method for objective quantification of tumor and stroma in standard hematoxylin and eosin (H&E) stained microscopy slides of rectal carcinomas. A tissue segmentation algorithm, based on supervised machine learning and pixel classification, was developed, trained, and validated using histological slides that were prepared from surgically excised rectal carcinomas in patients who had not received neoadjuvant chemotherapy and/or radiotherapy. Whole-slide scanning was performed at 20× magnification. A total of 40 images (4 million pixels each) were extracted from 20 whole-slide images at sites showing various relative proportions of tumor and stroma. Experienced pathologists provided detailed annotations for every extracted image. The performance of the algorithm was evaluated using cross-validation by testing on 1 image at a time while using the other 39 images for training. The total classification error of the algorithm was 9.4% (SD = 3.2%). Compared to visual estimation by pathologists, the algorithm was 7.3 times (P = 0.033) more accurate in quantifying tissues, also showing 60% less variability. Automatic tissue quantification was shown to be both reliable and practicable. We ultimately intend to facilitate refined prognostic stratification of (colo)rectal cancer patients and enable better personalized treatment.
The measurement of the blood flow in the middle cerebral artery (MCA) using transcranial Doppler ultrasound (US) imaging is clinically relevant for the study of cerebral autoregulation. Especially in the aging population, impairment of the autoregulation may coincide with or relate to loss of perfusion and consequently loss of brain function. Cerebral autoregulation can be assessed by relating the blood pressure to the blood flow in the brain. Doppler US is a widely used, non-invasive method to measure the blood flow in the MCA. However, Doppler flow imaging is known to produce results that are dependent on the operator. The angle of probe insonation with respect to the centerline of the blood vessel is a well-known source of output variability. In patients, the skull must also be traversed and the MCA detected, both of which influence the US signal intensity. In this contribution we report two studies. First, we describe an in-vitro setup to study the Doppler flow in a situation where the ground truth is known. Second, we report on a study with healthy volunteers in which the effects of small probe displacements on the flow velocity signals are investigated. For the latter purpose, a special probe holder was designed to control the experiment.
Classification methods have been proposed to detect early-stage Alzheimer's disease using magnetic resonance images. In particular, dissimilarity-based classification has been applied using a deformation-based distance measure. However, such an approach is not only computationally expensive, but it also considers only large-scale alterations in the brain. In this work, we propose the use of image histogram distance measures, determined both globally and locally, to detect very mild to mild Alzheimer's disease. Using an ensemble of local patches over the entire brain, we obtain an accuracy of 84% (sensitivity 80% and specificity 88%).
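A minimal sketch of a local histogram distance, the building block of the proposed patch ensemble (the patch contents, bin edges, and the chi-square distance are illustrative assumptions, not the exact choices of the study):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy intensity patches: "healthy" patches share one intensity profile;
# an "atrophic" patch has a shifted profile (more CSF-like voxels).
healthy = rng.normal(100, 10, size=(3, 1000))
atrophic = np.concatenate([rng.normal(100, 10, 700),
                           rng.normal(60, 10, 300)])

bins = np.linspace(0, 160, 33)

def hist_distance(x, y):
    """Chi-square distance between normalized intensity histograms."""
    hx, _ = np.histogram(x, bins=bins, density=True)
    hy, _ = np.histogram(y, bins=bins, density=True)
    denom = hx + hy
    mask = denom > 0
    return 0.5 * np.sum((hx[mask] - hy[mask]) ** 2 / denom[mask])

d_within = hist_distance(healthy[0], healthy[1])
d_between = hist_distance(healthy[0], atrophic)
print(f"within {d_within:.3f}, between {d_between:.3f}")
```

Because a histogram discards spatial layout, this distance is cheap to compute per patch; the ensemble over many patches restores some spatial specificity.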
Aortoiliac occlusive disease (AIOD) may cause disabling claudication due to progression of atherosclerotic plaque. Bypass surgery to treat AIOD has unsurpassed patency results, with 5-year patency rates up to 86%, at the expense of high complication rates (local and systemic morbidity rates of 6% and 16%). Therefore, less invasive, endovascular treatment of AIOD with stents in both iliac limbs is the first choice in many cases, however with limited results (average 5-year patency: 71%, range: 63-82%). Changes in blood flow due to an altered geometry of the bifurcation are likely to be one of the contributing factors. The aim of this study is to compare the geometry and hemodynamics of various aortoiliac stent configurations in vitro. Transparent vessel phantoms mimicking the anatomy of the aortoiliac bifurcation are used to accommodate the stent configurations. Bare metal kissing (BMK) stents, kissing covered (KC) stents, and the Covered Endovascular Reconstruction of the Aortic Bifurcation (CERAB) configuration are investigated. The models are placed inside a flow rig capable of simulating physiologically relevant flow in the infrarenal area. Dye injection reveals flow disturbances near the neobifurcation of both the BMK and KC stents. At the radial mismatch areas of the KC stents, recirculation zones are observed. With the CERAB configuration, no flow reversal or large disturbances are observed. In conclusion, dye injection reveals no significant flow disturbances with the new CERAB configuration, in contrast to the KC and BMK stents.
Late stent graft failure is a serious complication in endovascular repair of aortic aneurysms. Better understanding
of the motion characteristics of stent grafts will be beneficial for designing future devices. In addition, analysis
of stent graft movement in individual patients in vivo can be valuable for predicting stent graft failure in these
patients.
To be able to gather information on stent graft motion in a quick and robust fashion, an automatic segmentation
method is required. In this work we compare two segmentation methods that produce a geometric model
in the form of an undirected graph. The first method tracks along the centerline of the stent and segments the
stent in 2D slices sampled orthogonal to it. The second method uses a modified version of the minimum cost
path (MCP) method to segment the stent directly in 3D.
Using annotated reference data both methods were evaluated in an experiment. The results show that the
centerline-based method and the MCP-based method have an accuracy of approximately 65% and 92%, respectively.
The difference in accuracy can be explained by the fact that the centerline method makes assumptions
about the topology of the stent which do not always hold in practice. This causes difficulties that are hard and
sometimes impossible to overcome. In contrast, the MCP-based method works directly in 3D and is capable of
segmenting a large variety of stent shapes and stent types.
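The minimum cost path idea can be sketched with a plain Dijkstra search on a toy cost image (the grid, costs, and diagonal "wire" are invented for illustration; the actual method runs a modified MCP formulation directly on 3D CT data):

```python
import heapq
import numpy as np

# Toy cost image: low cost along a bright "stent wire", high elsewhere.
cost = np.full((5, 5), 9.0)
for i in range(5):
    cost[i, i] = 1.0  # hypothetical wire along the diagonal

def min_cost_path(cost, start, goal):
    """Dijkstra-style minimum cost path on a 4-connected grid."""
    h, w = cost.shape
    dist = {start: cost[start]}
    prev = {}
    heap = [(cost[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, np.inf):
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + di, node[1] + dj)
            if 0 <= nb[0] < h and 0 <= nb[1] < w:
                nd = d + cost[nb]
                if nd < dist.get(nb, np.inf):
                    dist[nb] = nd
                    prev[nb] = node
                    heapq.heappush(heap, (nd, nb))
    # Backtrack from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

path = min_cost_path(cost, (0, 0), (4, 4))
print(path)
```

The path hugs the cheap diagonal cells, which is exactly why an MCP formulation can follow stent wires of widely varying shape without topological assumptions.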
White matter hyperintensities are known to play a role in the cognitive decline experienced by patients suffering
from neurological diseases. Therefore, accurately detecting and monitoring these lesions is of importance. Automatic
methods for segmenting white matter lesions typically use multimodal MRI data. Furthermore, many
methods use a training set to perform a classification task or to determine necessary parameters. In this work,
we describe and evaluate an unsupervised segmentation method that is based solely on the histogram of FLAIR
images. It approximates the histogram by a mixture of three Gaussians in order to find an appropriate threshold
for white matter hyperintensities. We use a context-sensitive Expectation-Maximization method to determine
the Gaussian mixture parameters. The segmentation is subsequently corrected for false positives using the knowledge
of the location of typical FLAIR artifacts. A preliminary validation against the ground truth in 6 patients
revealed a Similarity Index of 0.73 ± 0.10, indicating that the method is comparable to others in the literature
that require multimodal MRI and/or a preliminary training step.
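A sketch of the histogram model: fit a three-component Gaussian mixture to synthetic FLAIR-like intensities with EM and derive a hyperintensity threshold from the tissue component (the class means, sizes, and two-standard-deviation rule are invented; the actual method uses a context-sensitive EM variant):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Toy FLAIR intensities: CSF (dark), normal brain tissue, and a small
# hyperintense lesion class -- three Gaussian modes.
csf = rng.normal(40, 8, size=3000)
tissue = rng.normal(100, 10, size=6000)
lesions = rng.normal(160, 10, size=500)
intensities = np.concatenate([csf, tissue, lesions]).reshape(-1, 1)

# Fit a three-component mixture with EM and derive a lesion threshold,
# e.g. two standard deviations above the normal-tissue component.
gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
means = gmm.means_.ravel()
stds = np.sqrt(gmm.covariances_.ravel())
tissue_idx = np.argsort(means)[1]  # middle component = normal tissue
threshold = means[tissue_idx] + 2.0 * stds[tissue_idx]
print(f"lesion threshold: {threshold:.1f}")
```

Voxels above the threshold would then be candidate hyperintensities, to be pruned with the artifact-location correction described above.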
Osteoarthritis is one of the leading causes of pain and disability worldwide and a major health problem in
developed countries due to the gradually aging population. Though the symptoms are easily recognized and
described by a patient, it is difficult to assess the level of damage or loss of articular cartilage quantitatively. We
present a novel method for fully automated knee cartilage thickness measurement and subsequent assessment
of the knee joint. First, point correspondence across the pre-segmented training bone models is obtained
using Shape Context based non-rigid surface registration. Then, a single Active Shape Model (ASM) is
used to segment both the Femur and Tibia bones. The surfaces obtained are processed to extract the Bone-Cartilage
Interface (BCI) points, where the proper segmentation of cartilage begins. For this purpose, the cartilage ASM
is trained with cartilage edge positions expressed in 1D coordinates at the normals in the BCI points. The
whole cartilage model is then constructed from the segmentations obtained in the previous step. An absolute
thickness of the segmented cartilage is measured and compared to the mean of all training datasets, yielding
the relative thickness value. The resulting cartilage structure is visualized and related to the segmented
bone. In this way the condition of the cartilage is assessed over the surface. The quality of bone and cartilage
segmentation is validated, and Dice coefficients of 0.92 and 0.86 for the Femur and Tibia bones and 0.45 and
0.34 for the respective cartilages are obtained. The clinical diagnostic relevance of the obtained thickness mapping
is being evaluated retrospectively. Before we can validate it prospectively for prediction of clinical outcome, the
methods require improvements in accuracy and robustness.
The perfusion of the brain is essential to maintain brain function. Stroke is an example of a decrease in blood
flow and reduced perfusion. During ischemic stroke the blood flow to tissue is hampered due to a clot inside
a vessel. To investigate the recovery of stroke patients, follow up studies are necessary. MRI is the preferred
imaging modality for follow-up because of the absence of radiation dose concerns, contrary to CT. Dynamic
Susceptibility Contrast (DSC) MRI is an imaging technique used for measuring perfusion of the brain; however,
it is not routinely applied in clinical practice due to the lack of immediate patient benefit. Several post-processing
algorithms are described in the literature to obtain cerebral blood flow (CBF). The quantification of CBF relies
on the deconvolution of a tracer concentration-time curve in an arterial and a tissue voxel. There are several
methods to obtain this deconvolution based on singular-value decomposition (SVD). This contribution describes
a comparison between the different approaches, as currently there is no best practice for (all) clinically relevant
situations. We investigate the influence of tracer delay, dispersion and recirculation on the performance of the
methods. In the presence of negative delays, the truncated SVD approach overestimates the CBF. Block-circulant
and reformulated SVD are delay-independent. Due to its delay-dependent behavior, the truncated SVD approach
performs worse in the presence of dispersion as well. However, all SVD approaches are dependent on the amount
of dispersion. Moreover, we observe that the optimal truncation parameter varies when recirculation is added to
noisy data, suggesting that, in practice, these methods are not immune to tracer recirculation. Finally, applying
the methods to clinical data resulted in a large variability of the CBF estimates. Block-circulant SVD will work
in all situations and is the method with the highest potential.
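The deconvolution at the heart of these methods can be sketched with plain truncated SVD on noiseless synthetic curves (the AIF shape, residue function, CBF value, and 10% truncation threshold are illustrative choices, not taken from the study):

```python
import numpy as np

# Toy DSC-MRI signals: the tissue curve is the convolution of the arterial
# input function (AIF) with a residue function scaled by CBF.
dt = 1.0
t = np.arange(0, 60, dt)
aif = (t / 4.0) ** 2 * np.exp(-t / 4.0)   # gamma-variate-like AIF
cbf_true = 0.6
residue = np.exp(-t / 10.0)               # exponential residue function
tissue = dt * np.convolve(aif, cbf_true * residue)[: len(t)]

# Truncated SVD deconvolution: build the lower-triangular convolution
# matrix, invert it with small singular values suppressed, and read CBF
# off the peak of the recovered (CBF-scaled) residue function.
n = len(t)
A = dt * np.tril(aif[np.subtract.outer(np.arange(n), np.arange(n))])
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 0.1 * s.max(), 1.0 / s, 0.0)  # 10% truncation
k = Vt.T @ (s_inv * (U.T @ tissue))
cbf_est = k.max()  # truncation typically biases this estimate low
print(f"true CBF {cbf_true}, estimated {cbf_est:.2f}")
```

The truncation that stabilizes the inversion against noise also smooths the recovered residue function, which is one source of the CBF biases discussed above; block-circulant variants change the matrix construction but keep this same inversion step.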
KEYWORDS: Image registration, Image processing, Diffusion, Medical imaging, Image restoration, Image analysis, Analog electronics, Magnetic resonance imaging, Visualization, Medical diagnostics
Multi-modal image registration enables images from different modalities to be analyzed in the same coordinate system. The class of B-spline-based methods that maximize the Mutual Information between images produce satisfactory results in general, but are often complex and can converge slowly. The popular Demons algorithm,
while being fast and easy to implement, produces unrealistic deformation fields and is sensitive to illumination
differences between the two images, which makes it unsuitable for multi-modal registration in its original form.
We propose a registration algorithm that combines a B-spline grid with deformations driven by image forces.
The algorithm is easy to implement and is robust against large differences in the appearance between the images
to register. The deformation is driven by attraction-forces between the edges in both images, and a B-spline grid
is used to regularize the sparse deformation field. The grid is updated using an original approach by weighting
the deformation forces for each pixel individually with the edge strengths. This approach makes the algorithm
perform well even if not all corresponding edges are present.
We report preliminary results by applying the proposed algorithm to a set of (multi-modal) test images.
The results show that the proposed method performs well, but is less accurate than state-of-the-art registration
methods based on Mutual Information. In addition, the algorithm is used to register test images to manually
drawn line images in order to demonstrate the algorithm's robustness.
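The edge-weighted update can be sketched as follows. As a simplification, a Gaussian kernel stands in for the B-spline grid regularization, and the names and interface are hypothetical; the point is the normalized, per-pixel weighting of deformation forces by edge strength, so that pixels without edge support borrow their deformation from nearby edges.

```python
import numpy as np
from scipy import ndimage

def regularized_force_field(force_x, force_y, edge_strength, sigma=8.0):
    """Normalized smoothing of per-pixel deformation forces weighted by
    edge strength. A Gaussian kernel stands in for the paper's B-spline
    grid; names and interface are hypothetical."""
    w = np.asarray(edge_strength, dtype=float)
    eps = 1e-8  # avoid division by zero far from any edge
    den = ndimage.gaussian_filter(w, sigma) + eps
    gx = ndimage.gaussian_filter(force_x * w, sigma) / den
    gy = ndimage.gaussian_filter(force_y * w, sigma) / den
    return gx, gy
```

Because the weighted mean is normalized, missing edges simply contribute zero weight rather than pulling the field toward zero, which mirrors the robustness claim above.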
This contribution describes a novel algorithm for the automated quantification of visceral and subcutaneous
adipose tissue volumes from abdominal CT scans of patients referred for colorectal resection. Visceral and
subcutaneous adipose tissue volumes can accurately be measured with errors of 1.2 and 0.5%, respectively. Also
the reproducibility of CT measurements is good; a disadvantage is the amount of radiation. In this study, the diagnostic CT scans made in the work-up of (colorectal) cancer were used, which implied no extra radiation. For the purpose of segmentation alone, a low-dose protocol can be applied. Obesity is a well-known risk factor
for complications in and after surgery. Body Mass Index (BMI) is a widely accepted indicator of obesity, but
it is not specific for risk assessment of colorectal surgery. We report on an automated method to quantify
visceral and subcutaneous adipose tissue volumes as a basic step in a clinical research project concerning preoperative
risk assessment. The outcomes are to be correlated with the surgery results. The hypothesis is that
the balance between visceral and subcutaneous adipose tissue, together with the presence of calcifications in the major blood vessels, is a predictive indicator for post-operative complications such as anastomotic leak. We
start with four different computer simulated humanoid abdominal volumes with tissue values in the appropriate
Hounsfield range at different dose levels. With satisfactory numerical results for this test, we have applied the algorithm to over 100 patient scans and have compared the results with manual segmentations by an expert for a smaller pilot group; the results agree within 5%. Compared to other studies reported in the literature,
reliable values are obtained for visceral and subcutaneous adipose tissue areas.
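A minimal sketch of the basic measurement step, assuming a typical adipose Hounsfield window of -190 to -30 HU (an illustrative range, not necessarily the thresholds used in this study):

```python
import numpy as np

def adipose_volume_ml(ct_hu, voxel_volume_mm3, lo=-190, hi=-30):
    """Count voxels inside an adipose Hounsfield window and convert to
    millilitres. The window is a commonly cited range, used here for
    illustration only."""
    mask = (ct_hu >= lo) & (ct_hu <= hi)
    return mask.sum() * voxel_volume_mm3 / 1000.0
```

Separating visceral from subcutaneous fat additionally requires delineating the abdominal muscle wall, which is the harder part of the actual algorithm.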
In the reconstruction process of photoacoustic experiments, it was observed that adding a passive element to the experimental setup improves the quality of the reconstruction of the object. This contribution analyzes this effect
in some detail. We consider a cylindrical configuration. We start from an artificial and theoretically constructed
optical absorption distribution that radiates sound waves when interrogated by the optical pulse. We analyze in
the experimental setup the addition of the passive element to this example. The reported investigation is a part
of a larger study on the existence, uniqueness and stability of photoacoustic inverse source reconstructions.
The purpose of our study is the evaluation of an algorithm to determine the physiological relevance of a coronary
lesion as seen in a coronary angiogram. The aim is to extract as much information as possible from a standard coronary angiogram to decide whether an abnormality (percentage of stenosis) seen in the angiogram results in physiological impairment of the blood supply of the region nourished by the coronary artery. Coronary angiography, still the gold standard, is used to determine the cause of angina pectoris based on the demonstration of a significant stenosis in a coronary artery. Dimensions of a lesion, such as length and percentage of narrowing,
can at present easily be calculated by using an automatic computer algorithm such as Quantitative Coronary
Angiography (QCA) techniques resulting in just anatomical information ignoring the physiological relevance of
the lesion. In our study we analyze myocardial perfusion images in standard coronary angiograms in rest and
in artificial hyperemic phases, using a drug e.g. papaverine intracoronary. Setting a Region of Interest (ROI) in
the angiogram without overlying major vessels makes it possible to calculate contrast differences as a function of
time, so-called time-density curves, in the basal and hyperemic phases. To minimize motion artifacts, end-diastolic images are selected based on the ECG in both the basal and hyperemic phase, using an identical ROI in the same angiographic projection. The development of new algorithms for calculating differences in blood supply in the selected region is presented, together with the results of a small clinical case study using the standard angiographic procedure.
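The time-density curve computation can be sketched as follows; the interface is hypothetical, and the contrast here is simply the mean grey value in the ROI per frame:

```python
import numpy as np

def time_density_curve(frames, roi_mask):
    """Mean grey value inside the ROI for every frame of the angiographic
    run -- the time-density curve. `frames` is an iterable of 2-D arrays,
    `roi_mask` a boolean array of the same shape (illustrative interface)."""
    return np.array([frame[roi_mask].mean() for frame in frames])
```

Comparing such curves between the basal and hyperemic runs (e.g., their slopes or areas) is then the basis for the perfusion measure described above.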
Purpose: ECG-gated CTA allows visualization of the aneurysm and stentgraft during the different phases of the cardiac
cycle, although with a lower SNR per cardiac phase than without ECG gating using the same dose. In our institution,
abdominal aortic aneurysm (AAA) is evaluated using non-ECG-gated CTA. Some common CT scanners cannot reconstruct
a non-gated volume from ECG-gated acquired data. In order to obtain the same diagnostic image quality, we propose offline
temporal averaging of the ECG-gated data. This process, though straightforward, is fundamentally different from
taking a non-gated scan, and its result will certainly differ as well. The purpose of this study is to quantitatively investigate
how well off-line averaging approximates a non-gated scan.
Method: Non-gated and ECG-gated CT scans were performed on a phantom (Catphan 500). Afterwards, the phases of the ECG-gated CTA data were averaged to create a third dataset. The three sets are compared with respect to noise properties (NPS) and frequency response (MTF). To study motion artifacts, identical scans were acquired on a programmable
dynamic phantom.
Results and Conclusions: The experiments show that the spatial frequency content is not affected by the averaging
process. The minor differences observed for the noise properties and motion artifacts are in favor of the averaged data.
Therefore the averaged ECG-gated phases can be used for diagnosis. This enables the use of ECG-gating for research on
stentgrafts in AAA, without impairing clinical patient care.
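The effect of off-line temporal averaging on noise can be illustrated with a toy numerical phantom: averaging N phases with independent noise reduces the noise standard deviation by roughly sqrt(N) while leaving the mean signal unchanged. This is a simplification that ignores correlated noise and motion, which is precisely why the phantom comparison above is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
phantom = np.full((64, 64), 100.0)      # noise-free object
n_phases = 10
# Each cardiac phase: same object, independent noise (per-phase SNR is
# lower because the dose is split over the phases)
phases = [phantom + rng.normal(0.0, 30.0, phantom.shape)
          for _ in range(n_phases)]
averaged = np.mean(phases, axis=0)      # off-line temporal averaging
noise_single = np.std(phases[0] - phantom)
noise_avg = np.std(averaged - phantom)  # roughly 30 / sqrt(10)
```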
Photoacoustic imaging is a relatively new medical imaging modality. In principle it can be used to image the
optical absorption distribution of an object by measurements of optically induced acoustic signals. Recently
we have developed a modified photoacoustic measurement system which can be used to simultaneously image
the ultrasound propagation parameters as well. By proper placement of a passive element we obtain isolated
measurements of the object's ultrasound propagation parameters, independent of the optical absorption inside
the object. This passive element acts as a photoacoustic source and measurements are obtained by allowing the
generated ultrasound signal to propagate through the object. Images of the ultrasound propagation parameters,
being the attenuation and speed of sound, can then be reconstructed by inversion of a measurement model.
This measurement model relates the projections non-linearly to the unknown images, due to ray refraction
effects. After estimating the speed of sound and attenuation distribution, the optical absorption distribution
is reconstructed. In this reconstruction problem we take into account the previously estimated speed of sound
distribution. So far, the reconstruction algorithms have been tested using computer simulations. The method
has been compared with existing algorithms and good results have been obtained.
Endovascular aortic replacement (EVAR) is an established technique, which uses stentgrafts to treat aortic
aneurysms in patients at risk of aneurysm rupture. The long-term durability of a stentgraft is affected by the
stresses and hemodynamic forces applied to it, and may be reflected by the movements of the stentgraft itself
during the cardiac cycle. A conventional CT scan (which results in a 3D volume) is not able to visualize these
movements. However, applying ECG-gating does provide insight into the motion of the stentgraft caused by
hemodynamic forces at different phases of the cardiac cycle.
The amount of data obtained is a factor of ten larger than with conventional CT, but the radiation dose
is kept similar for patient safety. This causes the data to be noisy, and streak artifacts are more common.
Algorithms for automatic stentgraft detection must be able to cope with this.
Segmentation of the stentgraft is performed by examining slices perpendicular to the centreline. Regions with
high CT-values exist at the locations where the metallic frame penetrates the slice. These regions are well suited
for detection and sub-pixel localization. Spurious points can be removed by means of a clustering algorithm,
leaving only points on the contour of the stent. We compare the performance of several different point detection
methods and clustering algorithms. The position of the stent's centreline is calculated by fitting a circle through
these points.
The proposed method can detect several stentgraft types, and is robust against noise and streak artifacts.
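The final circle-fitting step can be sketched with the standard algebraic (Kasa) least-squares fit, shown here as a plausible stand-in for the centreline estimation; the abstract does not specify which fitting method is used.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) for cx, cy, r.
    `points` is an (n, 2) array of detected stent-frame points."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)
```

Because the fit is linear least squares, it remains cheap per slice, and the prior clustering step limits the influence of spurious high-intensity points.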
Coronary angiography is the primary technique for diagnosing coronary abnormalities, as it is able to precisely locate coronary artery lesions. However, the clinical relevance of an apparent stenosis is not as easy to assess. In previous
work we have analyzed the myocardial perfusion by comparing basal and hyperemic coronary flow. This comparison is
the basis of a Relative Coronary Flow Reserve (RCFR) measure. In a Region-of-Interest (ROI) on the angiogram the
contrast is measured as a function of time (the so-called
time-density curve). The required hyperemic state of exercise is
induced artificially by the injection of a vasodilator drug e.g. papaverine. In previous work we have presented the results
of a small study of 20 patients. In this paper we present an analysis of the sensitivity of the method for variations in X-ray
exposure between the two runs due to the Automatic Exposure Control (AEC) unit. The AEC is a system unit whose task is to ensure a constant dose rate at the entrance of the detector by making the appropriate adaptations to the X-ray factor settings for patients ranging from slim to obese. We have set up a phantom study to reveal the expected
exposure variations. We present several of the developed phantoms together with a compensation strategy.
Photoacoustic imaging is used to obtain a range of three-dimensional images representing tumor neovascularization
over a 10-day period after subcutaneous inoculation of pancreatic tumor cells in a rat. The images are
reconstructed from data measured with a double-ring photoacoustic detector. The ultrasound data originates
from the optical absorption by hemoglobin of 14 ns laser pulses at a wavelength of 1064 nm. Three-dimensional
data is obtained by using two-dimensional linear scanning. Scanning and motion artifacts are reduced using a
correction method. The data is used to visualize the development of the individual blood vessels around the
growing tumor, blood concentration changes inside the tumor and growth in depth of the neovascularized region.
The three-dimensional vasculature reconstruction is created using VTK, which enables us to create a composition of the vasculature on days seven, eight, and ten, and to interactively measure tumor growth in the near future.
Photoacoustic imaging is an upcoming medical imaging modality with the potential of imaging both optical and
acoustic properties of objects. We present a measurement system and outline reconstruction methods to image
both speed of sound and acoustic attenuation distributions of an object using only pulsed light excitation. These
acoustic properties can be used in a subsequent step to improve the image quality of the optical absorption
distribution. A passive element, which is a high absorbing material with a small cross-section such as a carbon
fiber, is introduced between the light beam and the object. This passive element acts as a photoacoustic source
and measurements are obtained by allowing the generated acoustic signal to propagate through the object. From
these measurements we can extract measures of line integrals over the acoustic property distribution for both
the speed of sound and the acoustic attenuation. Reconstruction of the acoustic property distributions then
comes down to the inversion of a linear system relating the obtained projection measurements to the acoustic
property distributions. We show the results of applying our approach to phantom objects. Satisfactory results
are obtained for both the reconstruction of speed of sound and the acoustic attenuation.
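The inversion of the linear measurement model can be illustrated with a toy example: each time-of-flight measurement is a line integral (here, a weighted sum over grid cells) of the acoustic slowness (reciprocal speed of sound), and the distribution is recovered by a least-squares solve. The ray geometry below is made up for illustration and ignores the refraction effects discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_rays = 16, 40
# System matrix: path length of each ray through each grid cell
L = rng.uniform(0.0, 1.0, (n_rays, n_cells))
slowness_true = rng.uniform(0.6, 0.8, n_cells)   # 1 / speed-of-sound
tof = L @ slowness_true                          # time-of-flight projections
# Least-squares inversion of the linear measurement model
slowness_est = np.linalg.lstsq(L, tof, rcond=None)[0]
```

With noiseless, over-determined data the solve is exact; with real data, regularization and the refraction-aware forward model become essential.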
Photoacoustics is a hybrid imaging technique that combines the contrast available to optical imaging with the resolution of ultrasound imaging. The technique is based on generating ultrasound from absorbing structures in tissue using pulsed light. In photoacoustic (PA) computerized tomography (CT) imaging, reconstruction of the optical absorption in a subject is performed, for example, by filtered backprojection. The
backprojection is performed along circular paths in image space instead of along straight lines as in X-ray CT
imaging. To achieve this, the speed-of-sound through the subject is usually assumed constant. An unsuitable
speed-of-sound can degrade resolution and contrast. We discuss here a method of actually measuring the speed-of-
sound distribution using ultrasound transmission through the subject under photoacoustic investigation. This
is achieved in a simple approach that does not require any additional ultrasound transmitter. The method uses
a passive element (carbon fiber) that is placed in the imager in the path of the illumination which generates
ultrasound by the photoacoustic effect and behaves as an ultrasound source. Measuring the time-of-flight of this ultrasound transient with the same detector used for conventional photoacoustics allows a speed-of-sound image
to be reconstructed. This concept is validated on phantoms.
Our purpose is the automated evaluation of the physiological relevance of lesions in coronary angiograms. We aim to extract as much quantitative information as possible about the physiological condition of the heart from standard angiographic image sequences. Coronary angiography is still the gold standard for evaluating and diagnosing coronary
abnormalities, as it is able to precisely locate coronary artery lesions. The dimensions of a stenosis can nowadays be assessed successfully with image-processing-based Quantitative Coronary Angiography (QCA) techniques. Our
purpose is to assess the clinical relevance of the pertinent stenosis. We therefore analyze the myocardial perfusion as
revealed in standard angiographic image sequences. In a Region-of-Interest (ROI) on the angiogram (without an
overlaying major blood vessel) the contrast is measured as a function of time (the so-called time-density curve). The
required hyperemic state of exercise is induced artificially by the injection of a vasodilator drug e.g. papaverine. In order
to minimize motion artifacts, we select, based on the recorded ECG signal, end-diastolic images in both a basal and a hyperemic run in the same projection to position the ROI. We present the development of the algorithms, together with the results of a small study of 20 patients who were catheterized following the standard protocol.
This paper is about the quantitative prediction of the long term outcome of the endovascular coiling treatment
of a patient's cerebral aneurysm. It is generally believed that the local hemodynamic properties of the patient's
cerebral arteries strongly influence the origin and growth of aneurysms. We describe our approach: modelling the flow in a 3D Rotational Angiography (3DRA) reconstruction of the aneurysm, including the supplying and draining blood vessels, in combination with simulations and measurements on artificial blood vessel phantoms. The goal is to obtain insight into the observed phenomena to support the diagnostic decision process and to predict the outcome of the intervention, possibly by simulating the flow alteration caused by the pertinent intervention.
Radiographic assessment of joint space narrowing in hand radiographs is important for determining the progression
of rheumatoid arthritis at an early stage. Clinical scoring methods are based on manual measurements that are time-consuming and subject to intra-reader and inter-reader variance. The goal is to design an automated method for measuring the joint space width with a higher sensitivity to change1 than manual methods. The large
variability in joint shapes and textures, the possible presence of joint damage, and the interpretation of projection
images make it difficult to detect joint margins accurately. We developed a method that uses a modified
active shape model to scan for margins within a predetermined region of interest. Possible joint space margin
locations are detected using a probability score based on the Mahalanobis distance. To prevent the detection of
false edges, we use a dynamic programming approach. The shape model and the Mahalanobis scoring function
are trained with a set of 50 hand radiographs, in which the margins have been outlined by an expert.
We tested our method on a test set of 50 images. The method was evaluated by calculating the mean absolute
difference with manual readings by a trained person. 90% of the joint margins are detected within 0.12 mm. We
found that, in terms of reproducibility, our joint margin detection method is more precise than manual readings. For cases where the joint space has disappeared, the algorithm is unable to estimate the margins. In
these cases it would be necessary to use a different method to quantify joint damage.
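The Mahalanobis-based probability score can be sketched as follows; the grey-level-profile interface is hypothetical, and in practice the mean and inverse covariance come from the 50 training radiographs:

```python
import numpy as np

def mahalanobis_score(profile, mean, cov_inv):
    """Squared Mahalanobis distance of a candidate grey-level profile to
    the trained margin model; a lower score marks a more likely margin
    location. Illustrative of the scoring step, not the authors' code."""
    d = np.asarray(profile, dtype=float) - mean
    return float(d @ cov_inv @ d)
```

Evaluating this score along each candidate search line, and then choosing a smooth path of minima by dynamic programming, matches the false-edge suppression described above.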
X-ray coronary angiography is widely used to determine the
presence of a stenosis. This paper discusses an approach towards
the detection of the functional severity of a stenosis using the
relative velocity of the contrast agent. The velocity of the
contrast is measured using the arrival time at several locations
on a coronary artery. This is done by placing multiple Regions of Interest (ROIs) equally spaced along the artery. The location of these ROIs varies in time because of the cardiac motion. Therefore, an
artery tracing and tracking algorithm is used to estimate the
location of the ROIs in time. The arrival time of the contrast can
be estimated by measuring the image intensity in these ROIs. Using
the arrival times in several ROIs, a qualitative velocity can be
estimated. Altering the velocity of the blood pharmacologically,
by inducing hyperemic conditions, results in a qualitative change
in velocity detected by the algorithm. No change in velocity may indicate a severe flow-limiting stenosis.
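The velocity estimate from the ROI arrival times can be sketched as a linear fit of arrival time against distance along the artery, whose inverse slope is the contrast-front velocity (an illustrative formulation; the paper reports a qualitative velocity):

```python
import numpy as np

def relative_velocity(distances_mm, arrival_times_s):
    """Fit arrival time against distance along the artery; the inverse
    slope of the fit is the contrast velocity in mm/s. Illustrative
    interface, assuming arrival times at equally spaced ROIs."""
    slope, _ = np.polyfit(distances_mm, arrival_times_s, 1)
    return 1.0 / slope
```

Comparing this estimate between the basal and hyperemic runs gives the qualitative velocity change the algorithm is looking for.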
Robust and accurate segmentation methods are important for the computerized evaluation of medical images. For treatment of rheumatoid arthritis, joint damage assessment in radiographs of hands is frequently used for monitoring disease progression. Current clinical scoring methods are based on visual measurements that are time-consuming and subject to intra- and inter-reader variance. A solution may be found in the development of partially automated assessment procedures. This requires reliable segmentation algorithms. Our work demonstrates a segmentation method based on multiple connected active appearance models (AAM) with multiple search steps using different quality levels. The quality level can be regulated by setting the image resolution and the number of landmarks in the AAMs. We performed experiments using two models of different quality levels for shape and texture information. Both models included AAMs for the carpal region, the metacarpals, and all phalanges. By starting an iterative search with the faster, low-quality model, we were able to determine the initial parameters of the second, high-quality model. After the second search, the results showed successful segmentation for 22 of 30 test images. For these images, 70% of the landmarks were found within 1.3 mm difference from manual placement by an expert. The multi-level search approach resulted in a reduction of 50% in calculation time compared to a search using a single model. Results are expected to improve when the model is refined by increasing the number of training examples and the resolution of the models.
The purpose of our research is to describe the ultimate X-ray detector for angiography. Angiography is a well-established X-ray imaging technique for the examination of blood vessels. Contrast agent is injected, followed by X-ray exposures, and possible obstructions in the blood vessels can be visualized. Standard angiography primarily inspects for possible occlusions and views the vessels as rigid pipes. However, due to the beating heart, the flow in arteries is pulsatile. Healthy arteries are not rigid tubes but adapt to various pressure and flow conditions. Our interest is in the (small) response of the artery to the pulse flow. If an artery responds elastically to the pulse flow, we can expect that it is still healthy. So the detection of artery diameter variations is of interest for the detection of atherosclerosis at an early stage. In this contribution we specify and test a model X-ray detector for its ability to record the responses of arteries to pulsatile propagating flow distributions. Under normal physiological conditions, vessels respond with a temporal increase in arterial internal cross-sectional area of order 10%. This pulse flow propagates along the arteries in response to the left ventricle ejections. We show results of the detection of simulated vessel distensibilities for the model detector and discuss salient parameter features.
A cerebral aneurysm is a persistent localized dilatation of the wall of a cerebral vessel. One of the techniques applied to treat cerebral aneurysms is the Guglielmi detachable coil (GDC) embolization. The goal of this technique is to embolize the aneurysm with a mesh of platinum coils to reduce the risk of aneurysm rupture. However, due to the blood pressure it is possible that the platinum wire is deformed. In this case, re-embolization of the aneurysm is necessary.
The aim of this project is to develop a computer program to estimate the volume of cerebral aneurysms from archived laser hard copies of biplane digital subtraction angiography (DSA) images. Our goal is to determine the influence of the packing percentage, i.e., the ratio between the volume of the aneurysm and the volume of the coil mesh, on the stability of the coil mesh in time. The method we apply to estimate the volume of the cerebral aneurysms is based on the generation of a 3-D geometrical model of the aneurysm from two biplane DSA images. This 3-D model can be seen as a stack of 2-D ellipses. The volume of the aneurysm is the result of performing a numerical integration of this stack. The program was validated using balloons filled with contrast agent. The availability of 3-D data for some of the aneurysms enabled us to compare the results of this method with techniques based on 3-D data.
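The numerical integration of the ellipse stack can be sketched as follows, assuming the two biplane projections supply the two ellipse axes at each height (names and interface are illustrative):

```python
import numpy as np

def aneurysm_volume(widths_frontal, widths_lateral, dz):
    """Volume of a stack of elliptical cross-sections: at each height the
    two biplane DSA projections give the aneurysm widths, taken as the
    ellipse axes, and the slice volumes pi*a*b*dz are summed (midpoint
    rule). Sketch of the geometrical model described above."""
    a = np.asarray(widths_frontal) / 2.0  # semi-axis, frontal projection
    b = np.asarray(widths_lateral) / 2.0  # semi-axis, lateral projection
    return float(np.sum(np.pi * a * b) * dz)
```

As a sanity check, feeding in the slice widths of a sphere recovers 4/3*pi*r^3 to within the discretization error.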
The communicational and computational demands of neural networks are hard to satisfy in a digital technology. Temporal computing addresses this problem by iteration, but leaves a slow network. Spatial computing only became an option with the coming of modern FPGA devices. The paper provides two examples. First the balance between area and time is discussed on the realization of a modular feed-forward network. Second, the design of real-time image processing through a Cellular Neural Network is treated. In both examples, reconfiguration can be applied to provide for a natural and transparent support of learning.
KEYWORDS: Visualization, Cardiovascular magnetic resonance imaging, Heart, Volume rendering, Data modeling, Magnetic resonance imaging, Data acquisition, Tissues, 3D modeling, Cardiovascular system
Cardiac MRI is a technique that provides information about morphology and function of the cardiovascular system in the form of four-dimensional (4D) scalar data sets. Visualization and extraction of clinically relevant parameters from these data sets may help to diagnose cardiac diseases and malfunctions. Some of these parameters are left (right) ventricle volume, ejection fraction, flow measurements, and wall motion and thickening. Although cardiac MRI is a rapidly growing technique, it must overcome several problems (such as poor spatial resolution, flow and motion artifacts, and low signal-to-noise ratio) in order to produce images with sufficient quality to be used in clinical applications. Existing approaches to visualize cardiac MRI data sets in 4D are based on rendering a geometrical model extracted from the data. In most cases, these models are polygon meshes describing the epicardial and endocardial surfaces of the heart. A wide range of different techniques can be found in the literature to achieve this geometrical model extraction. Our approach consists of applying an iso-surface volume rendering technique in order to visualize the data sets. This visualization includes shape visualization and functional mapping. With this technique, the medical data itself is rendered instead of rendering an extracted geometrical model. This technique has been successfully applied to 3D MRI and CT data sets. Even though the extension of this technique to 4D data sets is not straightforward, the preliminary results are very promising.
In our research program that aims to quantify the functional relevance of partly occluded coronary vessels, we need, in one of the approaches to the problem, the 3D structure of the pertinent vessels. The use of standard biplane projection angiograms is limited by the ambiguity about the orientation that is not resolved by the two projections. In this paper we study how to resolve the orientation ambiguity based upon the geometrical unsharpness due to the focal spot of the X-ray tube. We describe the influence of the focal spot on the imaging MTF. We present the analysis of the biplane projection geometry based upon the fan beam and the focal spot. We derive the analytical equation of the MTF due to the focal spot and geometrical magnification. We also analyze and indicate practical situations of coronaries from real angiograms.
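The dependence of the MTF on focal spot size and geometrical magnification can be made concrete with the textbook model for a rectangular focal spot, MTF(f) = |sinc(f F (M-1)/M)|, with F the focal spot width and M the magnification. This is a standard approximation, not necessarily the exact analytical equation derived in the paper:

```python
import numpy as np

def focal_spot_mtf(f, focal_size_mm, magnification):
    """Geometric-unsharpness MTF of a rectangular focal spot at spatial
    frequency f (cycles/mm, in the detector plane). Textbook model, for
    illustration. np.sinc is the normalized sinc: sin(pi x)/(pi x)."""
    arg = f * focal_size_mm * (magnification - 1.0) / magnification
    return np.abs(np.sinc(arg))
```

At unit magnification the focal spot causes no blur (MTF = 1 at all frequencies), and larger magnification pulls the first MTF zero toward lower frequencies, which is the effect exploited for resolving the orientation ambiguity.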
Pijls and De Bruyne (1993) developed a method employing intravascular blood pressure gradients to calculate the Myocardial Fractional Flow Reserve (FFR). This flow reserve is a better indication of the functional severity of a coronary stenosis than the percentage diameter or luminal area reduction provided by traditional Quantitative Coronary Angiography (QCA). However, to use this method, all of the relevant artery segments have to be selected intra-operatively. After the procedure, only the segments for which a pressure reading is available can be graded. We previously introduced another way to assess the functional severity of stenosis using angiographic projections: the Relative Coronary Flow Reserve (RCFR). It is based on standard densitometric blood velocity and flow reserve methods, but without the need to estimate the geometry of the artery. This paper demonstrates that the RCFR method yields, in theory, the same results as the FFR, and can be given an almost identical interpretation. This provides the opportunity to use the RCFR retrospectively, when pressure gradients are not available for the segment(s) of interest.
Our purpose is to assess the pulse flow propagation from series of digital coronary angiograms. The local dilation-contraction pattern along the vessel is a measure for the elasticity and endothelial function. A small distensibility could be an indication for the presence of atherosclerosis, also in cases where the angiogram is not abnormal. We have developed an analytical model of the pulse flow propagation in coronary arteries. In the model, the artery is a straight elastic tube that does not move with the motion of the heart. The pulsatile flow of contrast agent is modeled for clinically relevant parameters, and predictions of coronary angiograms are obtained for various characteristics of elasticity such as modulus, compliance and ratio. In the clinical angiograms, we compute the local vessel diameter from frame to frame. Because of the heart motion, it is not easy to track the vessel diameter at the same spot; motion estimation and compensation are required. Algorithms for these processing steps are implemented. We have obtained satisfactory model simulations and predictions of angiograms. The simulated dilation-contraction patterns help to understand the more complicated clinical angiograms. We have obtained various pulse flow patterns from coronary angiograms of a small patient population.
The long-term goal of this research is to determine the clinical relevance of stenosis. Where most QCA algorithms calculate the decrease in lumen from one angiocardiogram, we seek to determine directly the influence of the stenosis on the blood flow. The method uses only a slightly different clinical approach compared to 'traditional' non-interventional catheterizations. Instead of injecting a steady flow of contrast agent, we propose to inject a string of small droplets. The resulting string of droplets will enable us to estimate the relative blood flow by measuring their time of arrival in some designated regions. Repeating the same procedure after administering a vasodilative drug, we obtain a relative decrease (or smaller increase) in blood flow in one of the two distal branches of the bifurcation due to the presence of the stenosis. From the resulting X-ray image sequence, multiple frames are selected and the information is combined to find the relative blood velocity. The conclusion is that it is possible to use sequences of images instead of just one image to calculate quantitative results. Major problems to overcome are the respiratory and heart motions, and differences in acquisition parameters between runs. The usefulness of the new method in real clinical applications and the coherence with other measures are currently under trial.
In reference 1 we have presented the principle of an X-ray detector based upon a screen coupled to an array of multiple CCD sensors. In reference 2 we focus on the characterization of the image quality: resolution (MTF) and noise behavior in the overlap area. Simple (and cheap) low-F# lenses are likely to show distortion, which means that not all imaged pixels have the same magnification. This may affect resolution. Lenses with (some) barrel distortion have the benefit of less vignetting. The correction of distortion in combination with a rotation adjustment requires interpolation. Interpolation affects the noise properties, so care must be taken to avoid that the noise character of the reconstructed image mosaic, i.e., the noise texture, becomes spatially non-uniform. We present an analysis of the influence of lens distortion and interpolation, in cases of small rotation correction, on the image mosaic. The image processing appears not to diminish the image quality, provided the processing parameters are set correctly. The calibration of the imaging mosaic geometry is crucial; we therefore present a robust extraction algorithm. In this paper our main interest is in the MTF and quantum noise properties. The lab prototype hardware is designed such (using cubic spline interpolation) that the lens distortion can also be compensated. For this purpose, ASICs were designed by the company AEMICS. This enables relatively cheap optical components with a low F# and a short building length. We have obtained and will present radiographic exposures of static phantoms.
In this paper we present a method that characterizes a certain tissue class by the shape of the MR signal intensity versus time course obtained from the dynamic series of images following a Gadolinium-DTPA bolus injection. This characterization is based on a pharmacokinetic model of the perfusion and leakage of contrast agent in the tissue. An objective classification of malignancy using a priori information is based on matching the actual enhancement time course of each pixel to reference time courses. Eigenimage filtering using characteristic time courses as feature vectors is proposed as an approach to reduce the dynamic series of images to a single image in which pixels with a close match to a particular feature are enhanced. This single image can be used as a mask to obtain homogeneous regions for parameterization using the pharmacokinetic model. A newly developed algorithm enables the objective creation of training sets of standard time courses from dynamic series images of lesions with known histology. An automatic segmentation of a new patient scan, without user interaction, is obtained using the training sets as feature vectors in the eigenimage filter.
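The matching step can be sketched as a projection of each pixel's enhancement time course onto a reference course. This normalized-correlation formulation is a simplified stand-in for the eigenimage filter, assuming the dynamic series is available as a NumPy array:

```python
import numpy as np

def eigenimage_score(dynamic_series, reference_course):
    """Score each pixel's time course against a reference course.
    dynamic_series: (T, H, W) array; reference_course: (T,) array.
    High scores mark pixels whose enhancement matches the reference."""
    T, H, W = dynamic_series.shape
    X = dynamic_series.reshape(T, -1)
    # Zero-mean, unit-norm both signals -> normalized correlation.
    X = X - X.mean(axis=0)
    X = X / (np.linalg.norm(X, axis=0) + 1e-12)
    r = reference_course - reference_course.mean()
    r = r / (np.linalg.norm(r) + 1e-12)
    return (r @ X).reshape(H, W)
```

Thresholding the resulting score image would then give the mask of candidate regions for pharmacokinetic parameterization.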
KEYWORDS: Signal processing, Digital signal processing, CRTs, Filtering (signal processing), Optical filters, Video, Electronic filtering, Modulation transfer functions, Electron beams, Digital filtering
In this paper we address the correction of the convergence error of a monochrome multi-beam Cathode Ray Tube (CRT) by means of digital video-signal processing. Correction of the convergence (horizontal misalignment) of the three electron beams with respect to each other in the CRT will improve the resolution and brightness of the CRT. We apply the theory of fractional delay filtering to design a digital Finite Impulse Response (FIR) filter that is capable of interpolating the digital video signal. Emphasis is on small four-tap filters, to reduce the necessary amount of processing power. The variable filter has been implemented on a TMS320C80 signal processor to assess the performance of standard DSP hardware on this type of filtering. The analysis of four-tap fractional delay filters has led to useful designs in our application. The implementation on the TMS320C80 processor shows that the variable filter can be implemented with about 10 (parallel) instructions, yielding a maximum throughput of 16 M pixels/s on the TMS320C80 DSP at 40 MHz. We demonstrate (results of) a real-time DSP (TMS320C80) implementation of variable-delay video processing for horizontal convergence correction. The image quality, MTF and brightness are quite satisfactory and well within the diagnostic application area.
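A standard four-tap fractional-delay design is the cubic Lagrange interpolator; the sketch below illustrates that class of filter and is not the paper's actual coefficient set:

```python
def lagrange_fd_taps(d):
    """Four-tap Lagrange fractional-delay FIR coefficients for a total
    delay of (1 + d) samples, 0 <= d < 1 (cubic interpolation between
    the two middle samples of the four-sample window)."""
    nodes = [0, 1, 2, 3]
    x = 1.0 + d
    taps = []
    for n in nodes:
        h = 1.0
        for k in nodes:
            if k != n:
                h *= (x - k) / (n - k)
        taps.append(h)
    return taps
```

For `d = 0` the filter degenerates to a pure one-sample delay, and for any `d` the taps sum to one, so flat fields pass unchanged.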
Information about local diameter variations as a response to the pulse flow in the human coronary arteries may indicate the development of atherosclerosis before this can be seen as a stenosis on coronary angiograms. This paper describes the design of an image processing tool to measure this diameter variation from a sequence of digital coronary angiograms. If a blood vessel responds less elastically to the pulse flow, this may be an indication of atherosclerosis at an early stage. We have developed an image analysis and processing algorithm which, after vessel segment selection by the user, automatically calculates the vessel diameter variations from a standard sequence of digital angiograms. Several problems are treated. The periodic motion of the vessel segment in the consecutive frames is taken into account by tracking the vessel segment, using a 2D logarithmic search to find the minimum of the mean absolute distance. A robust artery tracing algorithm has been implemented using graph searching techniques. The local diameter is determined by first resampling the image perpendicular to the found trace and then performing edge detection using the Laplacian operator. This is repeated for all frames to show the local diameter variation of the artery segment as a function of time.
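The tracking step can be sketched as a 2D logarithmic block search minimizing the mean absolute distance (MAD). The block size and step schedule below are illustrative assumptions:

```python
import numpy as np

def mad(block_a, block_b):
    """Mean absolute distance between two equally sized image blocks."""
    return float(np.mean(np.abs(block_a.astype(float) - block_b.astype(float))))

def log_search(prev, cur, top, left, size=16, step=8):
    """2D logarithmic search: locate the block at (top, left) of `prev`
    inside `cur`, halving the search step each round; returns (dy, dx)."""
    template = prev[top:top + size, left:left + size]
    dy = dx = 0
    while step >= 1:
        best = (float("inf"), dy, dx)
        for ddy in (-step, 0, step):
            for ddx in (-step, 0, step):
                y, x = top + dy + ddy, left + dx + ddx
                if 0 <= y and 0 <= x and y + size <= cur.shape[0] and x + size <= cur.shape[1]:
                    cost = mad(template, cur[y:y + size, x:x + size])
                    if cost < best[0]:
                        best = (cost, dy + ddy, dx + ddx)
        _, dy, dx = best
        step //= 2
    return dy, dx
```

The logarithmic schedule evaluates only nine candidates per round instead of an exhaustive window search, which matters when every frame of the sequence must be tracked.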
KEYWORDS: Image compression, Interference (communication), Signal to noise ratio, Photons, X-ray imaging, Diagnostics, Imaging systems, Modulation transfer functions, X-rays, Video
In lossy medical image compression, the requirements for the preservation of diagnostic integrity cannot be easily formulated in terms of a perceptual model, especially since human visual perception depends on numerous factors such as the viewing conditions and psycho-visual factors. Therefore, we investigate the possibility of developing alternative measures for data loss, based on the characteristics of the acquisition system, in our case a digital cardiac imaging system. In general, due to the low exposure, cardiac x-ray images tend to be relatively noisy. The main noise contributions are quantum noise and electrical noise. The electrical noise is not correlated with the signal. In addition, the signal can be transformed such that the signal-correlated, Poisson-distributed quantum noise is transformed into an additive zero-mean Gaussian noise source which is uncorrelated with the signal. Furthermore, the system's modulation transfer function imposes a known spatial-frequency limitation on the output signal. Under the assumption that noise which is not correlated with the signal contains no diagnostic information, we have derived a compression measure based on the acquisition parameters of a digital cardiac imaging system. The measure is used for bit assignment and quantization of transform coefficients. We present a blockwise-DCT compression algorithm which is based on the conventional JPEG standard. However, the bit assignment to the transform coefficients is now determined by an assumed noise variance for each coefficient, for a given set of acquisition parameters. Experiments with the algorithm indicate that a bit rate of 0.6 bit/pixel is feasible without apparent loss of clinical information.
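The two ingredients can be sketched as follows. The Anscombe transform is one standard variance-stabilizing choice for Poisson noise (the abstract does not name its transform), and the bit rule below is our illustrative simplification of noise-based bit assignment:

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson counts become
    approximately unit-variance Gaussian noise, uncorrelated with
    the signal level (illustrative choice)."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def bit_assignment(coeff_var, noise_var):
    """Assign bits per transform coefficient: spend bits only on the
    part of the coefficient variance exceeding the assumed noise floor
    (half a bit per doubling of the variance ratio)."""
    coeff_var = np.asarray(coeff_var, dtype=float)
    snr = np.maximum(coeff_var / noise_var, 1.0)
    return np.maximum(np.round(0.5 * np.log2(snr)), 0).astype(int)
```

Coefficients whose variance is at or below the noise floor receive zero bits, which is exactly the "noise carries no diagnostic information" assumption made operational.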
Many techniques for image compression exist and are well described in the literature. For digital coronary angiograms, lossless image compression is limited to compression ratios on the order of 3-4. The purpose of this work is the assessment of the diagnostic image quality of lossy compressed coronary angiograms by means of quantitative coronary angiography (QCA). We measure in the compressed images the diameter of the vessel at several places as a function of the compression ratio and compare this with the original image. The set of representative images is compressed at ratios of 4, 8, 12 and 16. The selected compression algorithms are JPEG, the lapped orthogonal transform (LOT) and the modified fast lapped transform (MFLT). The obtained quantitative diameter values start to deviate at bit rates down around 0.5 bit per pixel, with JPEG giving the greatest differences. LOT and MFLT perform better with respect to the measured size of diagnostically relevant vessels. At the greater compression ratios some blocking artifacts or ringing start to become visible. Somewhat to our surprise, our comparison study has found no great deviations in measured vessel diameter for the compression ratios 4, 8, 12 and 16. At compression ratio 16, JPEG has the largest deviation. According to the changes in the quantitative data, higher compression ratios are certainly feasible.
We have presented the principle of an x-ray detector based upon a screen coupled to an array of multiple CCD sensors. We now focus on the characterization of the image quality: resolution (MTF) and noise behavior in the overlap area. Simple low-F# lenses are likely to show distortion, which means that not all imaged pixels have the same magnification. This may affect resolution. In the overlap area the image is reconstructed by interpolation between two sensors. Interpolation affects the noise properties, so care must be taken to prevent the noise character of the reconstructed image mosaic from becoming spatially non-uniform. We present an analysis of the influence of lens distortion and interpolation in the overlap area on the image mosaic. The image processing appears not to diminish the image quality, provided the processing parameters are set correctly. We therefore present a robust extraction algorithm. In order to evaluate the image quality of the proposed detector system in real time, we are building a 2 by 2 lens-CCD sensor system as a lab prototype. The main interest is in the MTF and quantum noise properties. The hardware is designed such that the lens distortion can also be compensated. This enables relatively cheap optical components with a low F# and a short building length. We have obtained and will present radiographic exposures of static phantoms.
KEYWORDS: CRTs, Electron beams, Video, Modulation transfer functions, Visualization, Surgery, Medical imaging, Prototyping, Calibration, Computing systems
In the field of medical imaging there is a need for high-resolution, high-brilliance monochromatic CRT displays. However, at higher brightness levels the resolution of these displays decreases, due to the increasing spot size. In order to improve the performance of the CRT display a relatively simple method, called the multi-beam concept, is introduced. Using this technique a higher brightness can be realized without an increase of the spot size, and therefore a better display quality can be achieved. However, for successful exploitation of the multi-beam concept it is necessary to minimize the convergence error of the CRT display. For this purpose two circuits have been realized, in which the convergence error is reduced by an analogue and a digital method, respectively. The analogue implementation improves the applicability of the multi-beam concept; however, major image-quality improvement requires the digital system.
In this contribution we propose an alternative x-ray detector based upon multiple screen-CCD sensor combinations. The impinging x-ray quanta are detected by a scintillator screen (e.g., CsI) and converted to light photons (typically 1200 photons per absorbed x-ray quantum). We propose a number of lens-CCD sensors of standard video performance to detect the light photons coming out of an x-ray intensifying screen. Due to the smaller demagnification the coupling efficiency is better, even with moderate-quality (F-number) lenses. We thus obtain a matrix of subimages; the system is constructed such that the subimages partially overlap. With digital image processing we construct a single high-quality image from the subimages. Special hardware (including ASICs) has been developed for imaging at video rates, enabling (almost) fluoroscopy with this new detector. We show a viable digital x-ray imaging detector concept by means of our 2 by 2 CCD camera prototype and real-time processing engine. The image quality, MTF and noise properties are satisfactory and well within the diagnostic application range.
In this paper a novel algorithm is presented for efficient 2D least squares FIR filtering and system identification. Filter masks with general boundaries are allowed. Efficient order-updating recursions are developed by exploiting the spatial shift-invariance property of the 2D data set. In contrast to the existing column (row)-wise 2D recursive schemes based on the Levinson-Wiggins-Robinson multichannel algorithm, the proposed technique offers the greatest maneuverability in the 2D index space in a computationally efficient way. This flexibility can be exploited if the shape of the 2D mask is not known a priori and has to be configured dynamically. The recursive character of the algorithm allows for a continuous reshaping of the filter mask. The search for the optimal filter mask essentially reconfigures the filter mask to achieve an optimal match. The optimal determination of the mask shape offers important advantages in 2D system modeling, filtering and image restoration.
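The paper's efficient order-updating recursions are not reproduced here, but the underlying least-squares problem for an arbitrary mask can be sketched as a direct batch solve (a far less efficient stand-in for the recursion; names are illustrative):

```python
import numpy as np

def ls_fir_2d(x, d, mask):
    """Batch least-squares identification of a 2D FIR filter whose
    support is an arbitrary list of (dy, dx) taps.
    x: input image, d: desired output; returns {tap: coefficient}."""
    H, W = x.shape
    pad = max(max(abs(dy), abs(dx)) for dy, dx in mask)
    rows, target = [], []
    # Build one regression row per interior pixel.
    for i in range(pad, H - pad):
        for j in range(pad, W - pad):
            rows.append([x[i - dy, j - dx] for dy, dx in mask])
            target.append(d[i, j])
    coeffs, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(target), rcond=None)
    return dict(zip(mask, coeffs))
```

Because `mask` is just a list of taps, reshaping the support only means rebuilding the regression; the paper's contribution is doing that update recursively rather than from scratch.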
In cardiology coronary stenoses are in most cases diagnosed by subjective visual interpretation of coronary artery structures, in which contingent stenoses are assessed in terms of percentage luminal area reduction. This results in large intra- and interobserver variability in readings. Moreover, the correlation between the anatomical severity of coronary stenoses and their physiological significance is rather poor. A far better indication of the functional severity of coronary stenoses is coronary flow reserve (CFR). Although good results with densitometric CFR methods have been reported, in clinical practice the current techniques are time consuming and difficult in procedure. This paper presents a less demanding approach to determine densitometrically the relative flow distribution between the two main branches of the left coronary artery. The hypothesis is that comparison of the flow distributions under basal and hyperemic conditions of the heart muscle will provide useful clinical information concerning the physiological relevance of coronary stenoses. The hypothesis is tested by means of in vitro flow experiments with a glass flow phantom representing the proximal part of the left coronary artery. From properly positioned regions of interest (ROIs) within a sequence of temporal digital images, time-density curves have been extracted. It is investigated whether the center of gravity of the density curves is a useful parameter to calculate relative flow rate differences. The flow study results are presented and discussed in this paper.
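The center-of-gravity parameter of a time-density curve is the first temporal moment of the curve. A minimal sketch, with ROI extraction and background correction omitted:

```python
import numpy as np

def center_of_gravity(times, densities):
    """Temporal centroid of a time-density curve: the density-weighted
    mean arrival time of the contrast bolus in an ROI."""
    t = np.asarray(times, dtype=float)
    d = np.asarray(densities, dtype=float)
    return float(np.sum(t * d) / np.sum(d))
```

A later centroid in one branch's ROI relative to the other indicates slower transit, i.e., relatively lower flow in that branch.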
Sampling an analog signal causes aliasing interference if the signal has frequency components higher than the folding frequency, i.e., half the sampling frequency. This distortion originates in the folding of these higher frequency components into the lower signal frequency spectrum, with interference as a result. Usually aliasing artifacts are avoided by analog low-pass filtering of the signal prior to digitization. However, in the area of digitizing video signals from a CCD-based sensor such an anti-alias filter is not feasible. The problem grows in importance due to increasing resolution requirements in many imaging applications pushing for CCD technology. This contribution reports ongoing research to minimize the effects of two alias-based distortions, i.e., noise and moire patterns. In fluoroscopy, the amount of x-ray photons contributing to the image is restricted because of dose regulations. Quantum noise is clearly present in the images. The spectrally white quantum noise of the impinging x-ray photons is shaped by the MTF of the imaging system. The resulting spectrum extends beyond the spatial Nyquist frequency of the CCD sensor. Aliased noise structures obscure diagnostic detail and, especially in real-time sequences, are annoying to look at. Another alias-based distortion is due to the anti-scatter grid, which is applied in order to reduce the number of scattered x-ray photons contributing to the image. Scattered photons give rise to a low-frequency blur of the images. An anti-scatter grid consists of a large number of parallel lead stripes separated by x-ray transparent material and focused on the x-ray point source. The grid period is of the same order of magnitude as the CCD pixel size, which causes moire pattern distortion in the images. In this contribution we discuss the restoration of both distortions. Aliased noise is minimized following a Wiener-type filtering approach; the moire pattern is attacked by inverse filtering. The analysis and simulations are presented, and applications to medical images are shown.
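The Wiener-type noise suppression can be sketched in the Fourier domain. The power spectral densities are assumed known here; in practice they would be estimated from the acquisition parameters:

```python
import numpy as np

def wiener_gain(signal_psd, noise_psd):
    """Frequency-domain Wiener gain S/(S+N): attenuates frequencies
    where the (aliased) noise power dominates the signal power."""
    return signal_psd / (signal_psd + noise_psd)

def wiener_filter(image, signal_psd, noise_psd):
    """Apply a Wiener-type filter to a 2D image in the Fourier domain.
    signal_psd and noise_psd are arrays matching the FFT grid shape."""
    G = wiener_gain(signal_psd, noise_psd)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

With zero assumed noise the gain is one everywhere and the image passes unchanged; as the noise PSD grows at the frequencies beyond the useful MTF passband, those components are suppressed.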