KEYWORDS: Machine learning, Tumors, Data analysis, Breast, Spatial resolution, Magnetic resonance imaging, Breast cancer, Computer aided diagnosis and therapy
Accurate methods for breast cancer diagnosis are of critical importance for treatment selection and guidance and for optimal patient outcomes. In dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), accurately differentiating benign from malignant breast tumors that present as non-mass enhancing (NME) lesions is challenging and often results in unnecessary biopsies. Here we propose a new approach for the accurate diagnosis of such lesions with high-resolution DCE-MRI that exploits seven robust classification methods to discriminate between malignant and benign NME lesions using their dynamic curves at the voxel level, and we test it on a manually delineated dataset. The tested approaches achieve a diagnostic accuracy of up to 94%, a sensitivity of 99%, and a specificity of 90%, with high temporal resolution sequences outperforming high spatial resolution ones.
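The voxel-level strategy above can be sketched as follows. This is a minimal, hypothetical illustration: the synthetic time-intensity curves, the curve length, and the choice of a random forest (the paper evaluates seven classifiers, not named here) are all assumptions.

```python
# Hypothetical sketch: classify each voxel's DCE-MRI dynamic curve.
# Data, labels, and classifier choice are illustrative, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 200, 12          # one dynamic curve per voxel
X = rng.random((n_voxels, n_timepoints))  # synthetic time-intensity curves
y = rng.integers(0, 2, n_voxels)          # 0 = benign, 1 = malignant (per voxel)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated voxel-wise accuracy
print(scores.mean())
```

On real data, each row of `X` would be the contrast-uptake curve of one voxel inside a delineated lesion, and voxel-level predictions would be aggregated into a lesion-level diagnosis.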
18F-DMFP-PET is a neuroimaging modality that allows analysis of striatal dopamine. It has therefore recently emerged as an effective tool to assist the diagnosis of Parkinsonism and to differentiate among parkinsonian syndromes. However, the analysis of these data, which require specific preprocessing methods, is still poorly covered. In this work we demonstrate a novel methodology based on Hidden Markov Random Fields (HMRF) and the Gaussian distribution to preprocess 18F-DMFP-PET data. First, we performed a selection of voxels based on an analysis of the histogram in order to remove low-signal regions and regions outside the brain. Specifically, we modeled the histogram of intensities of a neuroimage with a mixture of two Gaussians and then, using an HMRF algorithm, discarded the voxels corresponding to the low-intensity Gaussian. This procedure is similar to the tissue segmentation usually applied to magnetic resonance imaging data. Secondly, the intensities of the selected voxels were scaled so that the Gaussian modeling the histogram of each neuroimage has the same mean and standard deviation. This step made the data from different patients comparable without removing the characteristic patterns of each patient's disorder. The proposed approach was evaluated using a computer system based on statistical classification that separated the neuroimages according to the parkinsonian variant they represented. It achieved higher accuracy rates than standard approaches for voxel selection (based on atlases) and intensity normalization (based on the global mean).
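The two preprocessing steps can be sketched as below. This is an illustrative simplification: a plain two-component Gaussian mixture stands in for the full HMRF spatial regularization, and the synthetic intensities and target statistics are assumptions.

```python
# Sketch of histogram-based voxel selection and intensity normalisation.
# A plain 2-component GMM replaces the HMRF step (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# synthetic "neuroimage" intensities: low-signal background + brain tissue
background = rng.normal(10.0, 2.0, 5000)
tissue = rng.normal(50.0, 8.0, 5000)
voxels = np.concatenate([background, tissue])

gmm = GaussianMixture(n_components=2, random_state=1).fit(voxels.reshape(-1, 1))
high = int(np.argmax(gmm.means_.ravel()))   # index of the high-intensity Gaussian
labels = gmm.predict(voxels.reshape(-1, 1))
selected = voxels[labels == high]           # discard low-intensity voxels

# rescale so the retained distribution has a fixed target mean and std,
# making intensities comparable across patients
target_mean, target_std = 0.0, 1.0
normalised = (selected - selected.mean()) / selected.std() * target_std + target_mean
```

The HMRF step would additionally weight each voxel's label by its neighbours' labels, which smooths the segmentation spatially; the histogram modelling itself is unchanged.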
Statistical learning and decision theory play a key role in many areas of science and engineering. Some examples include time series regression and prediction, optical character recognition, signal detection in communications, and biomedical applications for diagnosis and prognosis. This paper deals with learning from biomedical image data in the classification problem. In a typical scenario we have a training set that is employed to fit a prediction model or learner, and a testing set to which the learner is applied in order to predict the outcome for new, unseen patterns. The two processes are usually kept completely separate to avoid over-fitting and because, in practice, the new unseen objects (the testing set) have unknown outcomes. However, the outcome takes one of a discrete set of values, as in the binary diagnosis problem. Thus, assumptions about these outcome values can be established to obtain the most likely prediction model at the training stage, which could improve the overall classification accuracy on the testing set, or at least keep its performance at the level of the selected statistical classifier. In this sense, a novel case-based learning (c-learning) procedure is proposed that combines hypothesis testing over a discrete set of expected outcomes with a cross-validated classification stage.
Wavelet transforms are becoming increasingly important as an image processing technology. Their efficient implementation using commercially available VLSI technology is a subject of continuous study and development. This paper presents the implementation, using modern Altera APEX20K field-programmable logic (FPL) devices, of reduced-complexity, high-performance wavelet architectures by means of the residue number system (RNS). The improvement is achieved by reducing arithmetic operations to modulo operations executed in parallel over small word-length channels. The systems are based on index arithmetic over Galois fields, and the key to attaining low complexity and high throughput is an adequate selection of a small word-width modulus set. The systems are programmable in the sense that their coefficients can be reprogrammed to suit a broad range of applications. FPL-efficient converters are also developed, and the overhead of the input and output conversion is assessed. The design of a reduced-complexity ε-CRT converter renders the conversion overhead of such systems negligible for practical implementations. The proposed structures are compared with traditional systems using 2's complement arithmetic. With this and other innovations, the proposed architectures are about 65% faster than the 2's complement designs and require fewer logic elements in most cases.
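The core RNS idea can be sketched in software as follows. The modulus set here is illustrative; the paper's hardware uses small word-width moduli with index arithmetic over Galois fields and a reduced-complexity ε-CRT converter, none of which is reproduced here.

```python
# Minimal sketch of RNS arithmetic: operations run independently modulo each
# channel and the result is recovered via the Chinese Remainder Theorem.
from math import prod

MODULI = (7, 11, 13)          # pairwise coprime; dynamic range 7*11*13 = 1001

def to_rns(x):
    return tuple(x % m for m in MODULI)

def add_rns(a, b):
    # in hardware, one small modulo adder per channel, all in parallel
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def mul_rns(a, b):
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    # classical CRT reconstruction (the paper uses an epsilon-CRT converter)
    M = prod(MODULI)
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return x % M

result = from_rns(mul_rns(to_rns(23), to_rns(17)))  # 23 * 17 = 391
```

The speed-up comes from each channel's operands fitting in a few bits, so the long carry chains of a wide 2's complement multiplier never arise.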
In this paper, a new parallel hardware architecture dedicated to computing the Gaussian potential function is proposed. This function is commonly used in neural radial basis classifiers for pattern recognition, as described by Lee; Girosi and Poggio; and Musavi et al. Attention is confined to a simplified Gaussian potential function that processes uncorrelated features. The operations of most interest in the Gaussian potential function are the exponential and the square function. Our hardware computes the exponential function and its exponent at the same time, and the contributions of all features to the exponent are computed in parallel. This parallelism reduces the computational delay of the output function, and the duration does not depend on the number of features processed. Software and hardware case studies are presented to evaluate the new CORDIC-based design.
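For uncorrelated features the simplified Gaussian potential function reduces to g(x) = exp(-Σᵢ (xᵢ - cᵢ)² / (2σᵢ²)), which a short sketch makes concrete. The sequential sum below stands in for what the hardware computes in parallel, one channel per feature.

```python
# Sketch of the simplified Gaussian potential function for uncorrelated
# features: g(x) = exp(-sum_i (x_i - c_i)^2 / (2 * sigma_i^2)).
# In the proposed hardware, the per-feature terms of the exponent are
# computed in parallel while the exponential is evaluated concurrently.
import math

def gaussian_potential(x, centre, sigma):
    exponent = sum((xi - ci) ** 2 / (2.0 * si ** 2)
                   for xi, ci, si in zip(x, centre, sigma))
    return math.exp(-exponent)

at_centre = gaussian_potential([1.0, 2.0], [1.0, 2.0], [0.5, 0.5])  # = 1.0
off_centre = gaussian_potential([1.5, 2.0], [1.0, 2.0], [0.5, 0.5])
```

Because each feature contributes an independent squared term, adding features widens the hardware rather than lengthening its critical path, which is why the delay is independent of the feature count.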
FIR filters are routinely used in the implementation of modern digital signal processing systems, such as the discrete wavelet transform. Their efficient implementation using commercially available VLSI technology is a subject of continuous study and development. This paper presents the implementation, using modern Altera APEX20K field-programmable logic (FPL) devices, of reduced-complexity, high-performance FIR filters by means of the residue number system (RNS). Index arithmetic over Galois fields and the quadratic residue number system (QRNS), together with the selection of a small word-width modulus set, are the keys to attaining low complexity and high throughput in real and complex FIR filters. RNS-FPL merged FIR filters were about 65% faster than 2's complement (2C) designs and required fewer logic elements in most cases. An index arithmetic QRNS-based complex FIR filter yielded better results still: it was up to 60% faster than the three-multiplier-per-tap filter and required fewer LEs for filters with more than 8 taps. In particular, a 32-tap filter needed 24% fewer LEs than the classical design.
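The QRNS gain for complex filters comes from the fact that for a prime p ≡ 1 (mod 4), -1 is a quadratic residue mod p, so a complex product decouples into two independent real products per channel instead of three or four. A hedged sketch with an illustrative modulus:

```python
# Sketch of the QRNS mapping: for prime P ≡ 1 (mod 4) there is a j with
# j^2 ≡ -1 (mod P). Mapping (a + ib) -> (z, z*) = (a + jb, a - jb) turns
# complex multiplication into two independent modular multiplications.
P = 13
J = next(j for j in range(2, P) if (j * j) % P == P - 1)  # j = sqrt(-1) mod P

def to_qrns(a, b):
    return ((a + J * b) % P, (a - J * b) % P)

def mul_qrns(u, v):
    # componentwise: just two modular multiplications per complex product
    return (u[0] * v[0] % P, u[1] * v[1] % P)

def from_qrns(z, zs):
    inv2 = pow(2, -1, P)
    a = ((z + zs) * inv2) % P
    b = ((z - zs) * inv2 * pow(J, -1, P)) % P
    return a, b

# check against ordinary complex arithmetic mod P: (2+3i)(4+5i) = -7 + 22i
a, b = from_qrns(*mul_qrns(to_qrns(2, 3), to_qrns(4, 5)))
```

A complex FIR tap then costs two channel multipliers instead of the three (or four) real multipliers of the classical formulation, which is the source of the LE savings reported above.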
Ignacio Blanquer, Vincente Hernandez, Javier Ramirez, Antonio Vidal, Mariano Alcaniz-Raya, Vincente Grau Colomer, Carlos Monserrat, Luis Concepcion, Luis Marti-Bonmati
Clinics currently have to deal with hundreds of 3D images a day, and processing and visualizing them with affordable systems is costly and slow. The present work describes an integrated parallel-computing software package developed at the Universidad Politecnica de Valencia (UPV) under the European project HIPERCIR, which aims to reduce the time and requirements for processing and visualizing 3D images with low-cost solutions, such as networks of PCs running standard operating systems. HIPERCIR targets hospital radiology departments and radiology system providers, offering them a tool that eases day-to-day diagnosis. The project is being developed by a consortium of medical image processing and parallel computing experts from the Computing Systems Department of the UPV, experts on biomedical software, and radiology and tomography clinical experts.