Glioblastoma multiforme (GBM) is the most common and most genetically and phenotypically heterogeneous category of primary brain tumors. Numerous novel chemical, targeted molecular, and immune-active therapies in clinical trials produce promising responses in small, disparate subsets of patients, but which patient will respond to which therapy remains unpredictable. Reliable imaging biomarkers for prediction and early detection of treatment response and survival are critical needs in neuro-oncology. In this study, brain tumor MRI 'deep features' extracted via transfer learning techniques were combined with features derived from an explicitly designed radiomics model to search for MRI markers predictive of overall survival (OS) in GBM patients. Two pre-trained convolutional neural network (CNN) models served as the deep feature extractors, and an elastic net-Cox model was used to stratify GBM patients into two survival groups. Two patient cohorts were included in this study: 50 GBM patients from our hospital and 128 GBM patients from The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA). The combined feature framework was predictive of OS in both data sets (log-rank test p-value < 0.05) and may merit further study toward reproducible prediction of treatment response.
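A minimal sketch of the elastic net-Cox stratification step follows, assuming the combined deep and radiomics features are already assembled into a table with survival times and event indicators. It uses the lifelines library rather than the authors' own implementation, and the column names, penalty settings, and median-risk split are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def stratify_by_risk(df, duration_col="os_months", event_col="death"):
    """Fit an elastic net-penalized Cox model on the combined features and
    split patients at the median predicted risk into two survival groups."""
    # penalizer/l1_ratio give the elastic net penalty; values are placeholders
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
    cph.fit(df, duration_col=duration_col, event_col=event_col)
    risk = cph.predict_partial_hazard(df)
    high = risk > risk.median()
    # Log-rank test between the two predicted survival groups
    result = logrank_test(
        df.loc[high, duration_col], df.loc[~high, duration_col],
        event_observed_A=df.loc[high, event_col],
        event_observed_B=df.loc[~high, event_col],
    )
    return high, result.p_value
```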
An important challenge to using fluorodeoxyglucose-positron emission tomography (FDG-PET) in clinical trials of brain tumor patients is to identify malignant regions whose metabolic activity shows significant changes between pretreatment and posttreatment scans in the presence of high normal-brain background metabolism. This paper describes a semiautomated processing and analysis pipeline that detects such changes objectively at a given false detection rate. Image registration and voxelwise comparison of the pre- and posttreatment images were performed. A key step is adjustment of the observed difference by the estimated background change at each voxel, thereby overcoming the confounding effect of spatially heterogeneous metabolic activity in the brain. Components of the proposed method were validated via phantom experiments and computer simulations. The method achieves a false response volume accuracy of 0.4% at a significance threshold of 3 standard deviations. It is shown that the proposed methodology can detect lesion response with 100% accuracy at tumor-to-background ratios as low as 1.5, and it is not affected by changes in background brain glucose metabolism. We also applied the method to patient FDG-PET images from a clinical trial assessing the treatment effects of lapatinib; the analysis demonstrated significant metabolic changes corresponding to tumor regions.
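The following is a minimal sketch of the background-adjusted voxelwise comparison idea, using only NumPy. It assumes the pre- and posttreatment volumes are already co-registered and that a brain mask is available; it also simplifies the paper's voxelwise background estimate to a single global background shift, so it illustrates the principle rather than the authors' exact method.

```python
import numpy as np

def detect_change(pre, post, brain_mask, z_thresh=3.0):
    """Flag voxels whose post-minus-pre difference exceeds z_thresh
    standard deviations after removing the estimated background change."""
    diff = post - pre
    bg = diff[brain_mask]
    # Simplified (global) estimate of the background metabolic change;
    # the paper estimates this locally at each voxel.
    bg_shift = np.median(bg)
    adjusted = diff - bg_shift
    sigma = np.std(bg - bg_shift)
    z = adjusted / sigma
    return (np.abs(z) >= z_thresh) & brain_mask
```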
The classification of walnut shell and meat has potential applications in industrial walnut processing. A dark-field illumination method is proposed for the inspection of walnuts. Experiments show that the dark-field illuminated images of walnut shell and meat have distinct texture patterns due to differences in their light transmittance. A number of rotation-invariant feature analysis methods are used to characterize and discriminate these unique texture patterns, including the local binary pattern operator, wavelet analysis, circular Gabor filters, the circularly symmetric gray level co-occurrence matrix, and histogram-related features. A support vector machine recursive feature elimination method (SVM-RFE) is used to remove irrelevant and redundant features while training the SVM classifier. Experiments show that, using only the top six ranked features, an average classification accuracy of 99.2% can be achieved.
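Below is a minimal sketch of the SVM-RFE ranking and classification step, assuming the rotation-invariant texture features have already been computed into a feature matrix X with shell/meat labels y. It uses scikit-learn's generic RFE wrapper around a linear SVM as a stand-in for the authors' implementation, and the cross-validation setup is an illustrative choice.

```python
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def top_feature_accuracy(X, y, n_features=6):
    """Rank features with linear-SVM RFE, keep the top n_features,
    and report cross-validated accuracy of an SVM on that subset."""
    clf = make_pipeline(
        StandardScaler(),
        RFE(estimator=SVC(kernel="linear"), n_features_to_select=n_features),
        SVC(kernel="linear"),
    )
    scores = cross_val_score(clf, X, y, cv=5)
    return scores.mean()
```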
A combined laser 3D and X-ray imaging system has been developed for food safety inspection. Two kinds of cameras are used in this system: a CCD camera, which provides an accurate thickness profile of the object, and an X-ray line-scan camera, which acquires a high-resolution X-ray image. A unique three-step calibration procedure is proposed to calibrate these two cameras. First, the CCD camera is calibrated to link CCD pixels to points in the 3D world coordinate system. Second, the X-ray line-scan camera is calibrated to link points in the 3D world coordinate system to the X-ray line sensors; the X-ray fan-beam effect is also compensated in this stage. Finally, direct mapping from CCD pixel to X-ray line sensor is realized using the information from the first two calibration steps. Based on the calibration results, look-up tables are generated to replace expensive runtime computation with simpler lookup operations. Results show that high accuracy is achieved for the whole-system calibration.
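A minimal sketch of the look-up-table idea from the third calibration step follows: once a CCD-pixel-to-world mapping (step one) and a world-to-X-ray-sensor mapping (step two, with fan-beam compensation folded in) are available, their composition is precomputed so the runtime pipeline only performs an array lookup. The two calibration functions are hypothetical placeholders for the fitted models, not the paper's actual calibration code.

```python
import numpy as np

def build_ccd_to_xray_lut(ccd_shape, ccd_to_world, world_to_xray_sensor):
    """Precompute, for every CCD pixel, the index of the corresponding
    X-ray line-sensor element."""
    rows, cols = ccd_shape
    lut = np.empty((rows, cols), dtype=np.int32)
    for r in range(rows):
        for c in range(cols):
            world_pt = ccd_to_world(r, c)               # step 1: CCD calibration
            lut[r, c] = world_to_xray_sensor(world_pt)  # step 2: X-ray calibration
    return lut

# At runtime the expensive projection is replaced by a single lookup:
#   sensor_index = lut[row, col]
```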
Orthogonal moments have been successfully used in the field of pattern recognition and image analysis. However, due to the complexity of their calculation, the problem of fast computation of orthogonal moments has not yet been well solved. This paper presents two fast and efficient algorithms for two-dimensional (2D) Legendre moment computation. Both are based on a block representation of the image and use cumulative and integral methods, respectively. Results on 2D binary images show that these algorithms reduce the computational complexity substantially.
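For concreteness, a direct reference implementation of the 2D Legendre moment definition is sketched below; this is the baseline quantity that the paper's block-based cumulative and integral algorithms accelerate, not the fast algorithms themselves. The pixel-center sampling of [-1, 1] is an assumed convention.

```python
import numpy as np
from scipy.special import eval_legendre

def legendre_moment(image, p, q):
    """Direct 2D Legendre moment of order (p, q) for an image sampled
    on [-1, 1] x [-1, 1]."""
    n_rows, n_cols = image.shape
    x = -1.0 + (2.0 * np.arange(n_cols) + 1.0) / n_cols  # column pixel centers
    y = -1.0 + (2.0 * np.arange(n_rows) + 1.0) / n_rows  # row pixel centers
    Pp = eval_legendre(p, x)   # Legendre polynomial along columns
    Pq = eval_legendre(q, y)   # Legendre polynomial along rows
    norm = (2 * p + 1) * (2 * q + 1) / 4.0
    dx_dy = (2.0 / n_cols) * (2.0 / n_rows)
    return norm * dx_dy * (Pq @ image @ Pp)
```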