KEYWORDS: Digital breast tomosynthesis, 3D image processing, Breast cancer, Clinical trials, Breast, 3D acquisition, X-rays, Tissues, Data acquisition, Digital mammography
Digital breast tomosynthesis (DBT) is a new volumetric breast cancer screening modality. It is based on the principles of
computed tomography (CT) and shows promise for improving sensitivity and specificity compared to digital
mammography, which is the current standard protocol. A barrier to critically evaluating any new modality, including
DBT, is the lack of patient data from which statistically significant conclusions can be drawn; such studies require large
numbers of images from both diseased and healthy patients. Since the number of detected lesions is low in relation to the
entire breast cancer screening population, there is a particular need to acquire or otherwise create diseased patient data.
To meet this challenge, we propose a method to insert 3D lesions in the DBT images of healthy patients, such that the
resulting images appear qualitatively faithful to the modality and could be used in future clinical trials or virtual clinical
trials (VCTs). The method facilitates direct control of lesion placement and lesion-to-background contrast and is
agnostic to the DBT reconstruction algorithm employed.
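As a rough illustration of the insertion idea, the sketch below blends a synthetic 3D lesion into a reconstructed volume at a chosen voxel location with a chosen lesion-to-background contrast. The additive blending model, the contrast definition, and all array shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming an additive intensity model: blend a 3D lesion
# model into a reconstructed DBT volume with direct control of placement
# (voxel center) and lesion-to-background contrast. Not the paper's code.
import numpy as np

def insert_lesion(volume, lesion_mask, center, contrast):
    """volume      -- reconstructed DBT volume, shape (nz, ny, nx)
    lesion_mask -- lesion model with values in [0, 1], shape (dz, dy, dx)
    center      -- (z, y, x) voxel index of the lesion center
    contrast    -- desired lesion-to-background intensity ratio
    Assumes the lesion fits entirely inside the volume."""
    dz, dy, dx = lesion_mask.shape
    z0, y0, x0 = (c - s // 2 for c, s in zip(center, lesion_mask.shape))
    region = volume[z0:z0 + dz, y0:y0 + dy, x0:x0 + dx]   # view into volume
    background = region[lesion_mask > 0].mean()  # local background level
    region += contrast * background * lesion_mask  # scale relative to background
    return volume

# Example: a spherical lesion of radius 5 voxels inserted at 10% contrast.
zz, yy, xx = np.ogrid[-7:8, -7:8, -7:8]
sphere = (zz**2 + yy**2 + xx**2 <= 25).astype(float)
vol = np.random.rand(64, 128, 128)
vol = insert_lesion(vol, sphere, center=(32, 64, 64), contrast=0.10)
```

Because the blending operates on the reconstructed volume rather than on the projection data, a scheme of this shape stays agnostic to the reconstruction algorithm, which is the property the abstract highlights.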
Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in
creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying each image based on features such as the complexity of the background and the visibility of the disease (lesions). Therefore, an automatic medical background classification tool for mammograms would aid such clinical studies.
This classification tool is based on a multi-content analysis (MCA) framework that was originally developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfactory accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology as well as the assessment and recommendation categories, is used to group the mammograms. Selected features are fed into a decision-tree classification scheme within the MCA framework; each such tree is a so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) that classifies one category. The classification results for one "strong classifier" show good accuracy with a high true positive rate. Across the four categories, the results are TP = 90.38%, TN = 67.88%, FP = 32.12%, and FN = 9.62%.
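As a hedged sketch of the scheme just described, the following uses scikit-learn stand-ins rather than the MCA framework itself: shallow decision trees play the role of the "weak classifiers", AdaBoost combines them into one "strong classifier" for one category, and the TP/TN/FP/FN rates are computed from the predictions. The synthetic features and labels are placeholders for the paper's texture features.

```python
# Sketch only: decision stumps boosted into one per-category "strong
# classifier". Features/labels are synthetic stand-ins for texture features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))                   # placeholder texture features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = "this BI-RADS category"

weak = DecisionTreeClassifier(max_depth=1)       # error rate < 50% suffices
# (parameter is `base_estimator` on scikit-learn < 1.2)
strong = AdaBoostClassifier(estimator=weak, n_estimators=50).fit(X, y)

pred = strong.predict(X)
tp = np.mean(pred[y == 1] == 1)                  # true positive rate
tn = np.mean(pred[y == 0] == 0)                  # true negative rate
print(f"TP={tp:.2%}  TN={tn:.2%}  FP={1 - tn:.2%}  FN={1 - tp:.2%}")
```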
During the European Cantata project (ITEA project, 2006-2009), a Multi-Content Analysis framework for the
classification of compound images in various categories (text, graphical user interface, medical images, other complex
images) was developed at Barco. The framework consists of six parts: a dataset, a feature selection method, a machine-learning-based Multi-Content Analysis (MCA) algorithm, a Ground Truth, an evaluation module based on metrics, and a presentation module. The methodology was built on a cascade of decision-tree-based classifiers combined and trained with the AdaBoost meta-algorithm. To train these classifiers on large datasets without excessively increasing the training time, various optimizations were implemented. These optimizations were
performed at two levels: the methodology itself (feature selection / elimination, dataset pre-computation) and the
decision-tree training algorithm (binary threshold search, dataset presorting and alternate splitting algorithm). These
optimizations have little or no negative impact on the classification performance of the resulting classifiers. As a result,
the training time of the classifiers was significantly reduced, mainly because the optimized decision-tree training
algorithm has a lower algorithmic complexity. The time saved through this optimized methodology was used to compare
the results of a greater number of different training parameters.
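To make the flavour of these training-time optimizations concrete, here is an illustrative sketch (my own assumptions, not the Cantata code) of a presorted binary threshold search: the feature column is sorted once, and the best split threshold for a decision stump is then found in a single linear pass, rather than re-scoring every candidate threshold against the whole dataset. The cost per feature drops to one O(n log n) sort plus an O(n) scan.

```python
# Sketch of presorting + linear-scan threshold search for a decision stump.
import numpy as np

def best_threshold(feature, labels):
    """Return (threshold, error) minimizing 0/1 error for the rule
    'predict 1 if feature > threshold', given binary labels in {0, 1}."""
    order = np.argsort(feature)                 # presorting: done once
    f, y = feature[order], labels[order]
    n, n_pos = len(f), y.sum()
    best_t = f[0] - 1.0                         # everything predicted positive
    best_err = (n - n_pos) / n                  # its error: all negatives wrong
    pos_left = 0                                # positives at or below threshold
    for i in range(n - 1):
        pos_left += y[i]
        if f[i] == f[i + 1]:
            continue                            # cannot split equal values
        # errors = positives sent left + negatives sent right
        err = (pos_left + (n - (i + 1) - (n_pos - pos_left))) / n
        if err < best_err:
            best_err, best_t = err, 0.5 * (f[i] + f[i + 1])
    return best_t, best_err

f = np.array([0.2, 0.8, 0.4, 0.9, 0.1])
y = np.array([0, 1, 0, 1, 0])
print(best_threshold(f, y))                     # (0.6, 0.0): a perfect split
```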
KEYWORDS: Image compression, Medical imaging, Image fusion, Quantization, Image quality, Chemical elements, RGB color model, Video, Matrices, Standards development
In medical networked applications, the server-generated application view, consisting of medical image content and
synthetic text/GUI elements, must be compressed and transmitted to the client. To adapt to the local content
characteristics, the application view is divided into rectangular patches, which are classified into content classes: medical
image patches, synthetic image patches consisting of text on a uniform/natural/medical image background, and synthetic
image patches consisting of GUI elements on a uniform/natural/medical image background. Each patch is thereafter
compressed using a technique yielding perceptually optimal performance for the identified content class. The goal of this
paper is to identify this optimal technique, given a set of candidate schemes. For this purpose, a simulation framework is
used which simulates different types of compression and measures the perceived differences between the compressed
and original images, taking into account the display characteristics. In a first experiment, JPEG is used to code all
patches and the optimal chroma subsampling and quantization parameters are derived for different content classes. The
results show that 4:4:4 chroma subsampling is the best choice, regardless of the content type. Furthermore, frequency-dependent quantization yields better compression performance than uniform quantization, except for content containing a
significant number of very sharp edges. In a second experiment, each patch can be coded using JPEG, JPEG XR or JPEG
2000. On average, JPEG 2000 outperforms JPEG and JPEG XR for most medical images and for patches containing text.
However, for histopathology or tissue patches and for patches containing GUI elements, classical JPEG compression
outperforms the other two techniques.
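A minimal sketch of the per-patch strategy, using Pillow as a stand-in codec interface (the paper's simulation framework and perceptual metric are not reproduced): each patch is compressed with the settings the experiments favoured, i.e. classical JPEG with 4:4:4 chroma subsampling for GUI and histopathology/tissue patches, and JPEG 2000 for medical-image and text patches. The class names and quality settings are illustrative assumptions.

```python
# Sketch: route each classified patch to the codec the experiments favoured.
import io
from PIL import Image

def compress_patch(patch: Image.Image, content_class: str) -> bytes:
    buf = io.BytesIO()
    if content_class in ("gui", "histopathology", "tissue"):
        # classical JPEG won for these classes; subsampling=0 requests
        # 4:4:4 chroma, the best choice found in the first experiment
        patch.save(buf, format="JPEG", quality=90, subsampling=0)
    else:
        # JPEG 2000 won on average for medical-image and text patches
        # (requires a Pillow build with OpenJPEG support)
        patch.save(buf, format="JPEG2000",
                   quality_mode="rates", quality_layers=[20])
    return buf.getvalue()

patch = Image.new("RGB", (64, 64), (200, 200, 200))   # dummy 64x64 patch
print(len(compress_patch(patch, "gui")), "bytes")
```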
In the context of the European Cantata project (ITEA project, 2006-2009), a complete Multi-Content Analysis framework for the detection and analysis of compound images was developed at Barco. The framework consists of: a dataset, a Multi-Content Analysis (MCA) algorithm based on learning approaches, a Ground Truth, an evaluation module based on metrics, and a presentation module. The aim of the MCA methodology presented here is to classify the image content of computer screenshots into different categories: text, Graphical User Interface, medical images, and other complex images. The AdaBoost meta-algorithm was chosen, implemented, and optimized for the classification method, as it fitted the constraints (real-time operation and precision). A large dataset, separated into training and testing subsets, together with its ground truth (in the ViPER metadata format), was collected and generated for the four categories. The outcome of the MCA is a cascade of strong classifiers trained and tested on the different subsets. The resulting framework and its optimizations (binary search, pre-computation of the features, pre-sorting) allow the classifiers to be re-trained as often as needed. The preliminary results are encouraging, with a low false positive rate and a true positive rate close to expectations. Re-injecting false negative examples from new testing subsets into the training phase resulted in better performance of the MCA.
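The re-injection step mentioned at the end can be pictured as follows. This is a schematic sketch under my own assumptions, not the Cantata implementation: false negatives found on a new testing subset are appended to the training set, and the boosted "strong classifier" is retrained on the augmented data.

```python
# Schematic sketch of false-negative re-injection (assumptions mine).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def reinject_false_negatives(clf, X_train, y_train, X_test, y_test):
    pred = clf.predict(X_test)
    fn = (pred == 0) & (y_test == 1)            # missed positive examples
    X_aug = np.vstack([X_train, X_test[fn]])    # augment the training set
    y_aug = np.concatenate([y_train, y_test[fn]])
    return AdaBoostClassifier(n_estimators=clf.n_estimators).fit(X_aug, y_aug)
```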
Rationale and Objective: Due to its limited temporal and spatial resolution, coronary CT angiographic image quality is not optimal for robust and accurate stenosis quantification or for plaque differentiation and quantification. By combining
the high-resolution IVUS images with CT images, a detailed representation of the coronary arteries can be provided in
the CT images. Methods: The two vessel data sets are matched using three steps. First, vessel segments are matched
using anatomical landmarks. Second, the landmarks are aligned in cross-sectional vessel images. Third, the semi-automatically
detected IVUS lumen contours are matched to the CTA data, using manual interaction and automatic
registration methods. Results: The IVUS-CTA fusion tool facilitates a unique combined view of the high-resolution IVUS segmentation of the outer vessel wall and the lumen-intima transitions on the CT images. The cylindrical projection of the CMPR image decreases the analysis time by 50 percent. The automatic registration of the cross-vessel views decreases the analysis time by 85 percent. Conclusions: The fusion of IVUS images and their segmentation results with coronary CT angiographic images provides a detailed view of the lumen and vessel wall of coronary arteries. The
automatic fusion tool makes such a registration feasible for the development and validation of analysis tools.
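The automatic registration of cross-vessel views can be illustrated with a standard least-squares rigid fit. The sketch below is an assumption-laden stand-in, not the paper's tool: it aligns an IVUS lumen contour to a CTA cross-sectional contour with a Procrustes/Kabsch rotation-plus-translation, assuming point correspondences are already known.

```python
# Sketch: 2D rigid (rotation + translation) alignment of matched contours.
import numpy as np

def rigid_align(src, dst):
    """Find R, t minimizing sum ||R @ src_i + t - dst_i||^2 (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)         # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ivus = np.c_[2 * np.cos(theta), np.sin(theta)]          # lumen contour (mm)
R0 = np.array([[np.cos(0.3), -np.sin(0.3)],
               [np.sin(0.3),  np.cos(0.3)]])
cta = ivus @ R0.T + np.array([1.5, -0.8])               # same contour, moved
R, t = rigid_align(ivus, cta)
print(np.allclose(ivus @ R.T + t, cta))                  # True
```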