This PDF file contains the front matter associated with SPIE Proceedings Volume 9791, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Cancer diagnosis and pharmaceutical research increasingly depend on the accurate quantification of cancer biomarkers. Identification of biomarkers is usually performed through immunohistochemical staining of cancer sections on glass slides. However, combining multiple biomarkers from a wide variety of immunohistochemically stained slides is a tedious process in traditional histopathology, due to the switching of glass slides and re-identification of regions of interest by pathologists. Digital pathology now allows us to apply image registration algorithms to digitized whole-slide images to align the differing immunohistochemical stains automatically. However, registration algorithms need to be robust to changes in color due to differing stains and to severe changes in tissue content between slides. In this work we developed a robust registration methodology that allows fast coarse alignment of multiple immunohistochemical stains to the base hematoxylin and eosin (H&E) stained image. We applied HSD color model conversion to obtain a less stain-color-dependent representation of the whole-slide images. Subsequently, optical density thresholding and connected component analysis were used to identify the relevant regions for registration. Template matching using normalized mutual information was applied to provide initial translation and rotation parameters, after which a cost function-driven affine registration was performed. The algorithm was validated using 40 slides from 10 prostate cancer patients, with landmark registration error as a metric. The median landmark registration error was around 180 microns, which indicates that performance is adequate for practical application. None of the registrations failed, indicating the robustness of the algorithm.
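The region-finding step described above can be sketched as follows. This is a minimal illustration of optical density thresholding followed by connected component labelling, not the authors' implementation; the 0.15 OD threshold, the per-pixel mean over channels, and 4-connectivity are assumptions made for the sketch.

```python
import numpy as np
from collections import deque

def optical_density(rgb, i0=255.0):
    """Convert RGB intensities to optical density: OD = -log10(I / I0)."""
    rgb = np.maximum(rgb.astype(float), 1.0)  # avoid log(0)
    return -np.log10(rgb / i0)

def tissue_mask(rgb, od_threshold=0.15):
    """Pixels whose mean optical density exceeds the threshold are tissue.
    The threshold value is illustrative, not taken from the paper."""
    od = optical_density(rgb)
    return od.mean(axis=2) > od_threshold

def connected_components(mask, min_size=1):
    """4-connected component labelling via breadth-first search."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1
                queue = deque([(sy, sx)])
                labels[sy, sx] = current
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    # discard components smaller than min_size
    for lab in range(1, current + 1):
        if (labels == lab).sum() < min_size:
            labels[labels == lab] = 0
    return labels
```

The components that survive the size filter would then serve as candidate regions for the template-matching stage.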
Physics-based theoretical models have been used to predict developmental patterning processes such as branching morphogenesis for over half a century. While such techniques are quite successful in understanding the patterning processes in organs such as the lung and the kidney, they are unable to accurately model the processes in other organs such as the submandibular salivary gland. One possible reason is the detachment of these models from data that describe the underlying biological process. This hypothesis, coupled with the increasing availability of high-quality data, has made discrete, data-driven models attractive alternatives. These models are based on extracting features from data to describe the patterns and their time-evolving multivariate statistics. These discrete models have low computational complexity and comparable or better accuracy than the continuous models. This paper presents a case study for coupling continuous physics-based and discrete empirical models to address the prediction of cleft formation during the early stages of branching morphogenesis in mouse submandibular salivary glands (SMG). Given a time-lapse movie of a growing SMG, we first build a descriptive model that captures the underlying biological process and quantifies this ground truth. Tissue-scale (global) morphological features are used to characterize the biological ground truth. Second, we formulate a predictive model using the level-set method that simulates branching morphogenesis. This model successfully predicts the topological evolution; however, it is blind to the cellular organization and cell-to-cell interactions occurring inside a gland, information that is available in the image data. Our primary objective in this study is to couple the continuous level-set model with a discrete graph theory model that captures the cellular organization but ignores the forces that determine the evolution of the gland surface, i.e. the formation of clefts and buds.
We compared the prediction accuracy of our model to that of an on-lattice Monte-Carlo simulation model which has been used extensively for modeling morphogenesis and organogenesis. The results demonstrate that the coupled model yields simulations of gland growth comparable to those of the Monte-Carlo simulation model, with significantly lower computational complexity.
It has been shown that the tumour microenvironment plays a crucial role in regulating tumour progression by a number of different mechanisms, including the remodeling of collagen fibres in tumour-associated stroma. It is still unclear, however, whether these stromal changes benefit the host or the tumour. We hypothesise that stromal maturity is an important reflection of tumour biology and can thus be used to predict prognosis. The aim of this study is to develop a texture analysis methodology that automatically classifies stromal regions in images of hematoxylin and eosin (H&E) stained sections into two categories: mature and immature. Subsequently we will investigate whether stromal maturity could be used as a predictor of survival and also as a means to better understand the relationship between the radiological imaging signal and the underlying tissue microstructure. We present initial results for 118 regions-of-interest from a dataset of 39 patients diagnosed with invasive breast cancer.
Early stage estrogen receptor positive (ER+) breast cancer (BCa) treatment is based on the presumed aggressiveness and likelihood of cancer recurrence. The primary conundrum in the treatment and management of early stage ER+ BCa is identifying which of these cancers are candidates for adjuvant chemotherapy and which patients will respond to hormonal therapy alone. This decision could spare some patients the inherent toxicity associated with adjuvant chemotherapy. Oncotype DX (ODX) and other gene expression tests have allowed for distinguishing the more aggressive ER+ BCa requiring adjuvant chemotherapy from the less aggressive cancers benefiting from hormonal therapy alone. However, these gene expression tests tend to be expensive, tissue destructive, and require physical shipping of tissue blocks for the test to be done. Interestingly, breast cancer grade in these tumors has been shown to be highly correlated with the ODX risk score. Unfortunately, studies have shown that the Bloom-Richardson (BR) grade determined by pathologists can be highly variable. One of the constituent categories in BR grading is the quantification of tubules. The goal of this study was to develop a deep learning neural network classifier to automatically identify tubule nuclei from whole slide images (WSI) of ER+ BCa, the hypothesis being that the ratio of tubule nuclei to the overall number of nuclei would correlate with the corresponding ODX risk categories. The performance of the tubule nuclei deep learning strategy was evaluated on a set of 61 high power fields. Under 5-fold cross-validation, the average precision and recall were 0.72 and 0.56, respectively. In addition, the correlation with the ODX risk score was assessed in a set of 7513 high power fields extracted from 174 WSI, each from a different patient (at most 50 high power fields per patient study were used). The ratio between the number of tubule and non-tubule nuclei was computed for each WSI.
The results suggest that for BCa cases with both low ODX score and low BR grade, the mean tubule nuclei ratio was significantly higher than that obtained for BCa cases with both high ODX score and high BR grade (p < 0.01). The low ODX and low BR grade cases also presented a significantly higher average tubule nuclei ratio when compared with the rest of the BCa cases (p < 0.05). Finally, the BCa cases presenting both a high ODX score and high BR grade showed a mean ratio of tubule nuclei to total number of nuclei that was significantly smaller than that obtained for the rest of the BCa cases (p < 0.01).
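The per-slide ratio and group comparison described above can be sketched as follows. The abstract does not state which statistical test produced the reported p-values, so a generic two-sided permutation test on the difference of group means is used here purely as an illustrative stand-in.

```python
import numpy as np

def tubule_ratio(n_tubule, n_total):
    """Fraction of detected nuclei that belong to tubules, per slide."""
    return np.asarray(n_tubule, float) / np.asarray(n_total, float)

def permutation_pvalue(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of group means.
    Stand-in for the (unspecified) test used in the study."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        if diff >= observed:
            count += 1
    # add-one smoothing keeps the estimate strictly positive
    return (count + 1) / (n_perm + 1)
```

Applied to the per-slide ratios of the low ODX/low BR group versus the high ODX/high BR group, a small p-value would correspond to the significant difference reported above.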
According to the American Cancer Society, around 74,000 new cases of bladder cancer were expected during 2015 in the US. To facilitate bladder cancer diagnosis, we present an automatic method to differentiate carcinoma in situ (CIS) from normal/reactive cases that works on hematoxylin and eosin (H&E) stained images of bladder tissue. The method automatically determines the color deconvolution matrix by utilizing the α-shapes of the color distribution in the RGB color space. Then, variations in the boundary of the transitional epithelium are quantified, and the sizes of nuclei in the transitional epithelium are measured. We also approximate the “nuclear to cytoplasmic ratio” by computing the ratio of the average shortest distance between transitional epithelium and nuclei to the average nuclei size. Nuclei homogeneity is measured by computing the kurtosis of the nuclei size histogram. The results show that 30 out of 34 (88.2%) images were correctly classified by the proposed method, indicating that these novel features are viable markers to differentiate CIS from normal/reactive bladder.
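The homogeneity feature mentioned above, the kurtosis of the nuclei size histogram, reduces to the fourth standardized moment of the size distribution. A minimal sketch (Fisher's excess-kurtosis convention is an assumption; the paper does not state which convention was used):

```python
import numpy as np

def kurtosis(x, fisher=True):
    """Fourth standardized moment of a sample. With fisher=True, 3 is
    subtracted so a normal distribution scores 0 (excess kurtosis)."""
    x = np.asarray(x, float)
    centered = x - x.mean()
    variance = (centered ** 2).mean()
    k = (centered ** 4).mean() / variance ** 2
    return k - 3.0 if fisher else k

def nuclei_homogeneity(nuclei_areas):
    """Sharply peaked size distributions (most nuclei near one size, a few
    outliers) give high kurtosis; flat distributions give low kurtosis."""
    return kurtosis(nuclei_areas)
```

For reference, a uniform distribution has an excess kurtosis of -1.2, while a population of nearly identical nuclei with a few outliers scores far higher.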
Digitization of full biopsy slides using whole slide imaging technology has provided new opportunities for understanding the diagnostic process of pathologists and for developing more accurate computer-aided diagnosis systems. However, whole slide images also pose two new challenges to image analysis algorithms. The first is the need for simultaneous localization and classification of malignant areas in these large images, as different parts of the image may have different levels of diagnostic relevance. The second is the uncertainty regarding the correspondence between particular image areas and the diagnostic labels typically provided by pathologists at the slide level. In this paper, we exploit a data set consisting of the recorded actions of pathologists while they interpreted whole slide images of breast biopsies to find solutions to these challenges. First, we extract candidate regions of interest (ROI) from the logs of the pathologists' image screenings based on different actions corresponding to zoom events, panning motions, and fixations. Then, we model these ROIs using color and texture features. Next, we represent each slide as a bag of instances corresponding to the collection of candidate ROIs and a set of slide-level labels extracted from the forms that the pathologists filled out according to what they saw during their screenings. Finally, we build classifiers using five different multi-instance multi-label learning algorithms, and evaluate their performance under different learning and validation scenarios involving various combinations of data from three expert pathologists.
Experiments that compared the slide-level predictions of the classifiers with the reference data showed average precision values up to 62% when the training and validation data came from the same individual pathologist's viewing logs, and an average precision of 64% was obtained when the candidate ROIs and the labels from all pathologists were combined for each slide.
Intraoperative neuropathology of glioma recurrence poses significant visual challenges to pathologists, and the resulting diagnoses carry significant clinical implications. For example, rendering a diagnosis of recurrent glioma can help the surgeon decide to perform a more aggressive resection if surgically appropriate. In addition, the success of recent clinical trials of intraoperative administration of therapies, such as inoculation with oncolytic viruses, suggests that refinement of the intraoperative diagnosis during neurosurgery is an emerging need for pathologists. Typically, these diagnoses require rapid/STAT processing lasting only 20-30 minutes after receipt from neurosurgery. In this relatively short time frame, only dyes, such as hematoxylin and eosin (H&E), can be employed. The visual challenge lies in the fact that these patients have undergone chemotherapy and radiation, both of which induce cytological atypia in astrocytes, and pathologists are unable to use helpful biomarkers in their diagnoses. Therefore, there is a need to help pathologists differentiate between astrocytes that are cytologically atypical due to treatment and infiltrating, recurrent, neoplastic astrocytes. This study focuses on the classification of neoplastic versus non-neoplastic astrocytes, with the long-term goal of providing better neuropathological computer-aided consultation via classification of cells into reactive gliosis versus recurrent glioma. We present a method to detect cells in H&E stained digitized slides of intraoperative cytologic preparations. The method uses a combination of the ‘value’ component of the HSV color space and the ‘b*’ component of the CIE L*a*b* color space to create an enhanced image that suppresses the background while revealing the cells in an image. A composite image is formed based on the morphological closing of the hue-luminance combined image.
Geometrical and textural features extracted using Discrete Wavelet Frames were combined to classify cells into neoplastic and non-neoplastic categories. Experimental results show a strong consensus between the proposed method’s cell detection markings and those of the pathologists. Experiments on 48 images from six patients resulted in F1-scores as high as 87.48%, 88.08% and 86.12% for Reader 1, Reader 2 and the reader consensus, respectively. Classification results showed that for both readers, the binary classification tree and support vector machine performed best, with F1-scores ranging from 0.92 to 0.94.
Visual characterization of histologic specimens is known to suffer from intra- and inter-observer variability. To help address this, we developed an automated framework for characterizing digitized histology specimens based on a novel application of color histogram and color texture analysis. We perform a preliminary evaluation of this framework using a set of 73 trichrome-stained, digitized slides of normal breast tissue which were visually assessed by an expert pathologist in terms of the percentage of collagenous stroma, stromal collagen density, duct-lobular unit density and the presence of elastosis. For each slide, our algorithm automatically segments the tissue region based on the lightness channel in CIELAB colorspace. Within each tissue region, a color histogram feature vector is extracted using a common color palette for trichrome images generated with a previously described method. Then, using a whole-slide, lattice-based methodology, color texture maps are generated using a set of color co-occurrence matrix statistics: contrast, correlation, energy and homogeneity. The extracted feature sets are compared to the visually assessed tissue characteristics. Overall, the extracted texture features have high correlations with both the percentage of collagenous stroma (r=0.95, p<0.001) and the duct-lobular unit density (r=0.71, p<0.001) seen in the tissue samples, and several individual features were associated with collagen density and/or the presence of elastosis (p≤0.05). This suggests that the proposed framework has promise as a means to quantitatively extract descriptors reflecting tissue-level characteristics and thus could be useful in detecting and characterizing histological processes in digitized histology specimens.
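The four co-occurrence statistics named above (contrast, correlation, energy, homogeneity) can be sketched as follows. Note this simplified version builds a gray-level co-occurrence matrix over quantized intensities, whereas the paper computes *color* co-occurrence over a trichrome palette; the offset and level count are illustrative.

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Symmetric, normalized co-occurrence matrix over quantized levels."""
    g = gray.astype(int)
    m = np.zeros((levels, levels))
    h, w = g.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[g[y, x], g[y + dy, x + dx]] += 1
    m = m + m.T                      # make symmetric
    return m / m.sum()

def glcm_stats(p):
    """Contrast, correlation, energy and homogeneity of a normalized GLCM."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return dict(
        contrast=((i - j) ** 2 * p).sum(),
        correlation=((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j),
        energy=(p ** 2).sum(),
        homogeneity=(p / (1.0 + np.abs(i - j))).sum(),
    )
```

A checkerboard, for example, has maximal contrast and perfectly anti-correlated neighbors, which is a quick sanity check on the definitions.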
The grading of neuroendocrine tumors of the digestive system depends on accurate and reproducible assessment of the proliferation within the tumor, either by counting mitotic figures or by counting Ki-67 positive nuclei. At the moment, most pathologists identify the hotspots manually, a practice which is tedious and irreproducible. To better help pathologists, we present an automatic method to detect all potential hotspots in neuroendocrine tumors of the digestive system. The method starts by segmenting Ki-67 positive nuclei by entropy-based thresholding, followed by detection of the centroids of all Ki-67 positive nuclei. Based on geodesic distance, approximated from the nuclei centroids, we compute two maps: an amoeba map and a weighted amoeba map. These maps are later combined to generate a heat map, the segmentation of which yields the hotspots. The method was trained on three and tested on nine whole slide images of neuroendocrine tumors. When evaluated by two expert pathologists, the method reached an accuracy of 92.6%. The current method does not discriminate between tumor, stromal and inflammatory nuclei. The results show that α-shape maps may represent how hotspots are perceived.
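The entropy-based thresholding step can be sketched with Kapur's criterion, which chooses the threshold maximizing the sum of the Shannon entropies of the foreground and background histograms. The abstract does not specify which entropy criterion was used, so Kapur's method is shown here as one common instance, not necessarily the authors' choice.

```python
import numpy as np

def kapur_threshold(gray, bins=256):
    """Entropy-based threshold: pick t that maximizes H(background) +
    H(foreground) of the two normalized sub-histograms."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        w0 = cdf[t - 1]          # background mass
        w1 = 1.0 - w0            # foreground mass
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[:t] / w0
        p1 = p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

On a well-separated bimodal intensity distribution (DAB-positive nuclei versus counterstained background), the criterion places the threshold in the gap between the two modes.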
This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, from it, the refined edge map. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency-weighted foreground and background histograms are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate a threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
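The final interpolation-and-compare step can be sketched as follows. For brevity the per-block threshold here is just the block mean, a placeholder for the paper's saliency-weighted histogram criterion; the block size and the bilinear interpolation over block centres are illustrative assumptions.

```python
import numpy as np

def blockwise_threshold_map(image, block=32, thresh_fn=np.mean):
    """Compute one threshold per block, then interpolate the block-centre
    values bilinearly to obtain a smooth per-pixel threshold image."""
    h, w = image.shape
    nby, nbx = h // block, w // block
    t = np.empty((nby, nbx))
    for by in range(nby):
        for bx in range(nbx):
            t[by, bx] = thresh_fn(image[by*block:(by+1)*block,
                                        bx*block:(bx+1)*block])
    cy = (np.arange(nby) + 0.5) * block    # block-centre coordinates
    cx = (np.arange(nbx) + 0.5) * block
    rows = np.empty((nby, w))
    for by in range(nby):                  # interpolate along x
        rows[by] = np.interp(np.arange(w), cx, t[by])
    full = np.empty((h, w))
    for x in range(w):                     # then along y
        full[:, x] = np.interp(np.arange(h), cy, rows[:, x])
    return full

def foreground(image, block=32):
    """Pixels darker than their local threshold are foreground (nuclei)."""
    return image < blockwise_threshold_map(image, block)
```

Interpolating between block centres avoids the blocking artifacts that a single hard threshold per block would produce at block borders.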
Whole-mount pathology imaging has the potential to revolutionize clinical practice by preserving context that is lost when tissue is cut to fit onto conventional slides. Whole-mount digital images are very large, ranging from 4 GB to more than 50 GB, making concurrent processing infeasible. Block-processing is a commonly used method that divides the image into smaller blocks and processes them individually. This approach is useful for certain tasks, but leads to over-counting of objects located on the seams between blocks, an issue that is exacerbated as the block size decreases. In this work we apply a novel technique to enumerate vessels in whole-mount images, a clinical task that would benefit from automation. Whole-mount sections of rabbit VX2 tumors were digitized. Color thresholding was used to segment the brown CD31-DAB stained vessels. Vessel enumeration was applied to the entire whole-mount image in two distinct phases of block-processing. The first (whole-processing) phase used a basic grid and only counted objects that did not intersect a block's borders. The second (seam-processing) phase used a shifted grid to ensure all blocks captured the block-seam regions of the original grid; only objects touching this seam intersection were counted. For validation, segmented vessels were randomly embedded into a whole-mount image and the technique was tested using 24 different block-widths. Results indicated that the error reaches a minimum at a block-width equal to the maximum vessel length, with no improvement as the block-width increases further. Object-density maps showed very good correlation between the vessel-dense regions and the pathologist-outlined tumor regions.
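The two-phase counting scheme can be sketched with axis-aligned bounding boxes standing in for segmented vessels. This is a minimal illustration of the whole-processing/seam-processing split, not the authors' code; the half-block shift is an assumption, and (consistent with the reported error minimum) it only counts correctly when objects are smaller than half the block-width.

```python
def blocks(img_w, img_h, block, shift=0):
    """Yield (x0, y0, x1, y1) tiles of a grid, optionally shifted."""
    y = -shift
    while y < img_h:
        x = -shift
        while x < img_w:
            yield (max(x, 0), max(y, 0),
                   min(x + block, img_w), min(y + block, img_h))
            x += block
        y += block

def inside(obj, tile):
    ox0, oy0, ox1, oy1 = obj
    tx0, ty0, tx1, ty1 = tile
    return tx0 <= ox0 and ty0 <= oy0 and ox1 <= tx1 and oy1 <= ty1

def touches_seam(obj, block):
    """Does the object cross a border of the base (unshifted) grid?"""
    ox0, oy0, ox1, oy1 = obj
    return (ox0 // block) != ((ox1 - 1) // block) or \
           (oy0 // block) != ((oy1 - 1) // block)

def count_vessels(objects, img_w, img_h, block):
    total = 0
    # phase 1 (whole-processing): objects fully inside a base-grid block
    for tile in blocks(img_w, img_h, block):
        total += sum(inside(o, tile) and not touches_seam(o, block)
                     for o in objects)
    # phase 2 (seam-processing): shifted grid captures every seam region;
    # count only the objects that intersect an original seam
    for tile in blocks(img_w, img_h, block, shift=block // 2):
        total += sum(inside(o, tile) and touches_seam(o, block)
                     for o in objects)
    return total
```

Naive per-block counting would count a seam-straddling object once in every block it intersects; the seam phase counts it exactly once instead.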
We present a method for the automatic segmentation of vascular structures in stacks of serial sections. It was initially motivated by the Virtual Liver Network research project, which aims at creating a multi-scale virtual model of the liver. For this, the vascular systems of several murine livers under different conditions need to be analyzed. To obtain highly detailed datasets, stacks of serial sections of the whole organs are prepared. Due to the huge amount of image data, an automatic approach for segmenting the vessels is required. After registering the slides with an established method, we use a set of Random Forest classifiers to distinguish vessels from tissue. Instead of a pixel-wise approach, we perform the classification on small regions, which allows us to use more meaningful features. Besides basic intensity and texture features, we introduce the concept of context features, which allow the classifiers to also consider the neighborhood of a region. Classification is performed in two stages: in the second stage, the previous classification result of a region and its neighbors is used to refine the decision for that region. The context features and two-stage classification process make our method very successful. It can handle different stainings and can also detect vessels in which residue such as blood cells remained. The specificity reaches 95%-99% for pure tissue, depending on staining and zoom level; only in the direct vicinity of vessels does the specificity decline to 88%-96%. The sensitivity rates range between 89% and 98%.
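The context-feature idea described above amounts to augmenting each region's feature vector with an aggregate over its neighbors, and the two-stage scheme feeds first-stage probabilities of a region and its neighbors back into a second classifier. A minimal sketch of that data preparation (mean aggregation and the feature layout are illustrative assumptions; the classifiers themselves are omitted):

```python
import numpy as np

def add_context_features(features, neighbors):
    """Append the mean feature vector of each region's neighbours, so a
    classifier can take the surrounding tissue into account."""
    out = []
    for i, f in enumerate(features):
        nb = neighbors[i]
        ctx = (np.mean([features[j] for j in nb], axis=0)
               if nb else np.zeros_like(f))
        out.append(np.concatenate([f, ctx]))
    return np.array(out)

def second_stage_inputs(features, first_stage_proba, neighbors):
    """Stage two refines each region's decision using its own first-stage
    vessel probability and those of its neighbours."""
    ctx = add_context_features(first_stage_proba[:, None], neighbors)
    return np.hstack([features, ctx])
```

A Random Forest would then be trained once on the context-augmented features and again on the second-stage inputs.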
The branch of pathology concerned with excess blood serum proteins being excreted in the urine pays particular attention to the glomerulus, a small intertwined bunch of capillaries located at the beginning of the nephron. Normal glomeruli allow a moderate amount of blood proteins to be filtered; proteinuric glomeruli allow large amounts of blood proteins to be filtered. Diagnosis of proteinuric diseases requires time-intensive manual examination of the structural compartments of the glomerulus from renal biopsies. Pathological examination includes cellularity of individual compartments, Bowman’s and luminal space segmentation, cellular morphology, glomerular volume, capillary morphology, and more. Long examination times may increase diagnosis time and/or reduce the precision of the diagnostic process. Automatic quantification holds strong potential to reduce renal diagnostic time. We have developed a computational pipeline capable of automatically segmenting relevant features from renal biopsies. Our method first segments glomerular compartments from renal biopsies by isolating regions with high nuclear density. Gabor texture segmentation is used to accurately define glomerular boundaries. Bowman’s and luminal spaces are segmented using morphological operators. Nuclei structures are segmented using color deconvolution, morphological processing, and bottleneck detection. The average computation time of feature extraction for a typical biopsy, comprising ~12 glomeruli, is ~69 s using an Intel(R) Core(TM) i7-4790 CPU, and is ~65X faster than manual processing. Using images from rat renal tissue samples, automatic glomerular structural feature estimation was reproducibly demonstrated for 15 biopsy images containing 148 individual glomeruli. The proposed method holds immense potential to enhance the information available while making clinical diagnoses.
Analysis and morphological comparison of arteriolar and venular networks are essential to our understanding of multiple diseases affecting every organ system. We have developed and evaluated the first fully automatic software system for differentiation of arterioles from venules on high-resolution digital histology images of the mouse hind limb immunostained for smooth muscle α-actin. Classifiers trained on texture and morphologic features by supervised machine learning provided excellent classification accuracy for differentiation of arterioles and venules, achieving an area under the receiver operating characteristic curve of 0.90 and balanced false-positive and false-negative rates. Feature selection was consistent across cross-validation iterations, and a small set of three features was required to achieve the reported performance, suggesting potential generalizability of the system. This system eliminates the need for laborious manual classification of the hundreds of microvessels occurring in a typical sample, and paves the way for high-throughput analysis of the arteriolar and venular networks in the mouse.
This work evaluates the performance of a multi-stage image enhancement, segmentation, and classification approach for lymphoma recognition in hematoxylin and eosin (H&E) stained histopathology slides of excised human lymph node tissue. In the first stage, the original histology slide undergoes various image enhancement and segmentation operations, creating an additional 5 images for every slide. These new images emphasize unique aspects of the original slide, including dominant staining, staining segmentations, non-cellular groupings, and cellular groupings. For the resulting 6 total images, a collection of visual features is extracted from 3 different spatial configurations. Visual features include the first fully connected layer (4096 dimensions) of the Caffe convolutional neural network trained on ImageNet data. In total, over 200 visual descriptors are extracted for each slide. Non-linear SVMs are trained on each of the over 200 descriptors, and these are then input to a forward stepwise ensemble selection that optimizes a late-fusion sum of logistically normalized model outputs using local hill climbing. The approach is evaluated on a public NIH dataset containing 374 images representing 3 lymphoma conditions: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). Results demonstrate a 38.4% reduction in residual error over the current state of the art on this dataset.
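The fusion stage described above can be sketched as follows: each model's raw SVM scores are logistically normalized, and models are greedily added (with replacement) to the ensemble whenever the fused mean score improves a validation objective. This is a minimal binary-classification sketch, not the authors' implementation; plain accuracy stands in for whatever objective the hill climbing actually optimized.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def greedy_ensemble(scores, labels, max_models=None):
    """Forward stepwise ensemble selection by local hill climbing.

    scores: (n_models, n_samples) raw decision values
    labels: (n_samples,) binary ground truth
    Returns the list of selected model indices and the final accuracy."""
    probs = logistic(np.asarray(scores, float))   # logistic normalization
    n_models = probs.shape[0]
    selected, best_acc = [], -1.0
    while max_models is None or len(selected) < max_models:
        best_m, best_new = None, best_acc
        for m in range(n_models):
            # late fusion: mean of logistically normalized outputs
            fused = probs[selected + [m]].mean(axis=0)
            acc = ((fused > 0.5).astype(int) == labels).mean()
            if acc > best_new:
                best_new, best_m = acc, m
        if best_m is None:        # no model improves the objective: stop
            break
        selected.append(best_m)
        best_acc = best_new
    return selected, best_acc
```

Selecting with replacement lets the hill climber implicitly weight strong models by adding them multiple times.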
Quantitative histomorphometry (QH) is the computerized extraction of features from digitized tissue slide images. Typically these features are used in machine learning classifiers to predict disease presence, behavior, and outcome. Successful, robust classifiers require features that both discriminate between the classes of interest and are stable across data from multiple sites. Feature stability may be compromised by variation in slide staining and scanning procedures; these laboratory-specific variables include dye batch, slice thickness, and the whole slide scanner used to digitize the slide. The key, therefore, is to identify features that not only discriminate between the classes of interest (e.g. cancer and non-cancer, or biochemical recurrence and non-recurrence) but also do not fluctuate wildly across slides representing the same tissue class but coming from different labs and sites. While there have been some recent efforts at understanding feature stability in the context of radiomics applications (i.e. feature analysis of radiographic images), relatively few attempts have been made to study the trade-off between feature stability and discriminability for histomorphometric and digital pathology applications. In this paper we present two new measures, the preparation-induced instability score (PI) and the latent instability score (LI), to quantify feature instability across and within datasets. Dividing PI by LI yields a ratio of how often a feature for a specific tissue class (e.g. low grade prostate cancer) differs between datasets from different sites versus what would be expected from random chance alone. Using this ratio we seek to quantify feature vulnerability to variations in slide preparation and digitization. Since our goal is to identify stable QH features, we evaluate these features for their stability, and thus their suitability for inclusion in machine learning based classifiers, in a use case involving prostate cancer.
Specifically, we examine QH features, extracted from digital slide images of surgically excised tissue specimens, which may predict 5-year biochemical recurrence for prostate cancer patients who have undergone radical prostatectomy, 5-year biochemical recurrence being a strong predictor of disease recurrence. In this study we evaluated the ability of our feature robustness indices to identify the most stable and predictive features of 5-year biochemical recurrence using digitized slide images of surgically excised prostate cancer specimens from 80 patients across 4 different sites. A total of 242 features from 5 different feature families were investigated to identify the most stable QH features in our set. Our feature robustness indices (PI and LI) suggested that five feature families (graph, shape, co-occurring gland tensors, gland sub-graphs, texture) were susceptible to variations in slide preparation and digitization across sites. The family least affected was shape, in which 19.3% of features varied across laboratories, while the most vulnerable family, at 55.6%, was the gland disorder features. However, the disorder features were the most stable within datasets, differing between random halves of a dataset in an average of just 4.1% of comparisons, while the texture features were the most unstable, differing at a rate of 4.7%. We also compared feature stability across two datasets before and after color normalization. Color normalization decreased feature stability, with 8% and 34% of features differing between the two datasets in the two outcome groups prior to normalization, and 49% and 51% differing afterwards. Our results suggest that evaluation of QH features across multiple sites needs to be undertaken to assess robustness, and that class discriminability alone should not be the benchmark for selecting QH features to build diagnostic and prognostic digital pathology classifiers.
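The PI/LI ratio described above can be sketched numerically. In the toy reconstruction below, PI is taken as the fraction of cross-site comparisons in which a feature differs significantly, and LI as the fraction of random within-site splits that differ; the choice of significance test, the sample sizes and the simulated site shift are all illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
alpha = 0.05

# Hypothetical feature values for one tissue class measured at three sites;
# the third site simulates a staining/scanning shift.
sites = [rng.normal(loc=mu, scale=1.0, size=50) for mu in (0.0, 0.1, 0.8)]

# PI: fraction of cross-site comparisons where the feature differs significantly.
pairs = [(i, j) for i in range(len(sites)) for j in range(i + 1, len(sites))]
pi = np.mean([mannwhitneyu(sites[i], sites[j]).pvalue < alpha for i, j in pairs])

# LI: how often random halves of a single dataset differ, i.e. the chance baseline.
def latent_instability(x, n_splits=200):
    hits = 0
    for _ in range(n_splits):
        perm = rng.permutation(x)
        if mannwhitneyu(perm[:len(x) // 2], perm[len(x) // 2:]).pvalue < alpha:
            hits += 1
    return hits / n_splits

li = np.mean([latent_instability(s) for s in sites])
ratio = pi / max(li, 1e-6)   # PI/LI: cross-site instability beyond chance
```

A ratio well above 1 flags a feature whose cross-site variation exceeds what random splits of a single dataset would produce.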
The morphology of intestinal glands is an important indicator of the severity of inflammatory bowel disease, and has also been used routinely by pathologists to evaluate the malignancy and prognosis of colorectal cancers such as adenocarcinomas. The extraction of meaningful information describing the morphology of glands relies on an accurate segmentation method. In this work, we propose a novel technique based on mathematical morphology that characterizes the spatial positioning of nuclei for intestinal gland segmentation in histopathological images. According to their appearance, glands can be divided into two types: hollow glands and solid glands. Hollow glands are composed of lumen and/or goblet cell cytoplasm, or are filled with abscess in some advanced stages of the disease, while solid glands are composed of bunches of cells clustered together and can also be filled with necrotic debris. Given this scheme, an efficient characterization of the spatial distribution of cells is sufficient to carry out the segmentation. In this approach, hollow glands are first identified as regions empty of nuclei and surrounded by thick layers of epithelial cells; solid glands are then identified by detecting regions crowded with nuclei. First, cell nuclei are identified by color classification. Then, morphological maps are generated by means of advanced morphological operators applied to the nuclei objects in order to interpret their spatial distribution and properties, identifying candidates for gland central regions and epithelial layers that are combined to extract the glandular structures.
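The hollow-gland case described above (a nuclei-free region enclosed by a thick epithelial layer) can be illustrated with elementary morphological operators. This is a minimal sketch on a synthetic mask, not the authors' pipeline; the annulus stands in for a dense ring of epithelial nuclei.

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic mask: an annulus of "nuclei" (standing in for a dense epithelial
# ring) around an empty lumen, i.e. a hollow gland.
h, w = 64, 64
yy, xx = np.mgrid[:h, :w]
r = np.hypot(yy - 32, xx - 32)
nuclei = (r > 12) & (r < 18)

# Close small gaps so the epithelial layer becomes watertight.
ring = ndi.binary_closing(nuclei, structure=np.ones((5, 5)))

# Hollow-gland candidates: nuclei-free pixels enclosed by the closed layer.
filled = ndi.binary_fill_holes(ring)
lumen = filled & ~ring

labels, n_glands = ndi.label(lumen)   # one candidate lumen expected here
```

On real tissue, the same closing/fill-holes idea applies per connected epithelial layer, with solid glands detected separately as high-nuclei-density regions.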
A miniature objective designed for digital detection of Mycobacterium tuberculosis (MTB) was evaluated for diagnostic accuracy. The objective was designed for array microscopy, but fabricated and evaluated at this stage of development as a single objective. The counts and diagnoses of patient samples were directly compared between digital detection and standard microscopy. The results were found to be correlated and highly concordant. The evaluation of this lens by direct comparison to standard fluorescence sputum smear microscopy presented unique challenges and led to some new insights into the role played by the system parameters of the microscope. The design parameters, and how they were developed, are reviewed in light of these results. New system parameters are proposed with the goal of easing the challenges of evaluating the miniature objective while maintaining the optical performance that produced the presented results, without over-optimizing. A new design is presented that meets and exceeds these criteria.
We developed a chemically-induced oral cancer animal model and a computer-aided method for tongue cancer diagnosis. The animal model allows us to monitor the progress of the lesions over time. Tongue tissue dissected from mice was sent for histological processing. Representative areas of hematoxylin and eosin stained tissue from tongue sections were captured for classifying tumor and non-tumor tissue. The image set used in this paper consisted of 214 color images (114 tumor and 100 normal tissue samples). A total of 738 color, texture, morphometry and topology features were extracted from the histological images. Combining image features from the epithelial tissue and its constituent nuclei and cytoplasm was demonstrated to improve the classification results. With ten iterations of nested cross-validation, the method achieved an average sensitivity of 96.5% and a specificity of 99% for tongue cancer detection. The next step of this research is to apply this approach to human tissue for computer-aided diagnosis of tongue cancer.
Digital histopathological images provide detailed spatial information about the tissue at micrometer resolution. Among the available contents of pathology images, meso-scale information, such as gland morphology, texture, and distribution, provides useful diagnostic features. In this work, focusing on colorectal cancer tissue samples, we propose a multi-scale, learning-based segmentation scheme for the glands in colorectal digital pathology slides. The algorithm learns the gland and non-gland textures from a set of training images at various scales through a sparse dictionary representation. After the learning step, the dictionaries are used collectively to perform classification and segmentation on new images.
Properties of the microvasculature that contribute to tissue perfusion can be assessed using immunohistochemistry on 2D histology sections. However, the vasculature is inherently 3D, and the ability to measure and visualize the vessel wall components in 3D will aid in detecting focal pathologies. Our objectives were (1) to develop a method for measurement and visualization of the microvasculature in 3D, (2) to compare the normal and regenerated post-ischemia mouse hind limb microvasculature, and (3) to compare 2D and 3D vessel morphology measures. Vessels were stained for smooth muscle using 3,3'-diaminobenzidine (DAB) immunostain for both normal (n = 6 mice) and regenerated vasculature (n = 5 mice). 2D vessel segmentations were reconstructed into 3D using landmark-based registration. No substantial bias was found in the 2D measurements relative to 3D, but larger differences were observed for individual vessels oriented non-orthogonally to the plane of sectioning. Larger values of area, perimeter, and vessel wall thickness were found in the normal vasculature as compared to the regenerated vasculature, for both the 2D and 3D measurements (p < 0.01). Aggregated 2D measurements are sufficient for identifying morphological differences between groups of mice; however, individual 2D measurements must be interpreted with caution if the vessel centerline direction is unknown. Visualization of 3D measurements permits the detection of localized vessel morphology aberrations that are not revealed by 2D measurements. With these 3D vascular measurement and visualization methodologies, we are now capable of locating focal pathologies at the whole-slide level.
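The caution about non-orthogonal vessels can be made concrete: an oblique cut through a roughly cylindrical vessel yields an elliptical profile whose area is inflated by 1/cos(tilt), so only an orthogonal cut recovers the true lumen area. A small illustration with hypothetical numbers:

```python
import numpy as np

def apparent_area(true_area, tilt_deg):
    """Cross-sectional area seen in a 2D section of a cylindrical vessel
    cut at tilt_deg away from orthogonal: the oblique cut is an ellipse
    whose area is inflated by 1 / cos(tilt)."""
    return true_area / np.cos(np.radians(tilt_deg))

# Hypothetical lumen of 100 square microns:
areas = {tilt: apparent_area(100.0, tilt) for tilt in (0, 30, 60)}
# 0 deg -> 100, 30 deg -> ~115.5, 60 deg -> 200
```

A 60-degree tilt thus doubles the apparent area, which is why individual 2D measurements are unreliable without knowing the centerline direction, while averages over many randomly oriented vessels remain comparable between groups.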
We are developing a single-pixel hyperspectral imaging system based on compressive sensing that acquires spatial and spectral information simultaneously. Our spectral imaging system uses autofluorescent emission from collagen (400 nm) and NAD(P)H (475 nm), as well as differences in the optical reflectance spectra, as diagnostics for differentiating between healthy and diseased tissue. In this study, we demonstrate the ability of our imaging system to discriminate between healthy and damaged porcine epidermal tissue. Healthy porcine epidermal tissue samples (n=11) were imaged ex vivo using our hyperspectral system. The amount of NAD(P)H emission and the reflectance properties were approximately constant across the surface of healthy tissue samples. The tissue samples were then thermally damaged using an 1850 nm thulium fiber laser and re-imaged after laser irradiation. The damaged regions were clearly visible in the hyperspectral images, as the thermal damage altered the fluorescent emission of NAD(P)H and changed the scattering properties of the tissue. The extent of the damaged regions was determined from the hyperspectral images, and these estimates were compared to damage extents measured in white light images acquired with a traditional camera. The extent of damage determined via hyperspectral imaging was in good agreement with estimates based on white light imaging, indicating that our system is capable of differentiating between healthy and damaged tissue. Possible applications of our single-pixel hyperspectral imaging system range from real-time determination of tumor margins during surgery to use in the pathology lab to aid with cancer diagnosis and staging.
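The single-pixel principle can be sketched in one dimension: the detector records one number per random illumination pattern, and a sparse scene is recovered from fewer measurements than pixels. The sketch below uses plain iterative soft-thresholding (ISTA) on a toy signal; the actual system's patterns, wavelengths and reconstruction algorithm are not specified here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse "scene": 64 pixels with 3 bright spots (hypothetical toy example).
n, m = 64, 32
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, 0.7, 0.5]

# Single-pixel measurements: one detector value per random binary pattern.
phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = phi @ x_true

def ista(phi, y, lam=0.01, n_iter=500):
    """ISTA: gradient step on the data fit, then soft-threshold for sparsity."""
    L = np.linalg.norm(phi, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(phi.shape[1])
    for _ in range(n_iter):
        g = x + phi.T @ (y - phi @ x) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

x_hat = ista(phi, y)   # sparse scene recovered from m < n samples
```

The same recovery runs per spectral band, which is how spatial and spectral information are acquired together with a single detector.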
Recently, we proposed a novel data-driven offset-sparsity decomposition (OSD) method to increase the colorimetric difference between tissue structures present in color microscopic images of stained specimens in histopathology. The OSD method performs additive decomposition of vectorized spectral images into an image-adapted offset term and a sparse term, whereby the sparse term represents the enhanced image. The method was tested on images of histological slides of human liver stained with hematoxylin and eosin, anti-CD34 monoclonal antibody and Sudan III. Herein, we present further results on the increase in colorimetric difference between tissue structures present in images of human liver specimens with pancreatic carcinoma metastasis stained with Gomori, CK7, CDX2 and LCA, and with colon carcinoma metastasis stained with Gomori, CK20 and PAN CK. The obtained relative increase in colorimetric difference is in the range [19.36%, 103.94%].
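The offset-plus-sparse split can be illustrated loosely. The sketch below is not the published OSD algorithm: the per-channel median stands in for the image-adapted offset, and plain soft-thresholding for the sparsity step, just to show the shape of the additive decomposition.

```python
import numpy as np

def offset_sparsity(img, lam=0.05):
    """Loose illustration of an offset + sparse additive split (NOT the
    published OSD method; offset and sparsity steps are stand-ins)."""
    pixels = img.reshape(-1, img.shape[-1]).astype(float)
    offset = np.median(pixels, axis=0)                 # dominant background colour
    residual = pixels - offset
    # Soft-threshold: small residuals vanish, leaving a sparse enhanced term.
    sparse = np.sign(residual) * np.maximum(np.abs(residual) - lam, 0.0)
    return offset, sparse.reshape(img.shape)

img = np.full((4, 4, 3), 0.6)      # uniform background
img[0, 0] = (0.9, 0.2, 0.6)        # one differently stained pixel
offset, sparse = offset_sparsity(img)
```

After the split, the background collapses into the offset term and only the stained structure survives in the sparse term, which is what makes colorimetric differences easier to see.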
This paper addresses the problem of classifying cells expressing different biomarkers. A deep learning-based method that can automatically localize and count the cells expressing each of the different biomarkers is proposed. To classify the cells, a convolutional neural network (CNN) was employed. The images were taken from digital scans of immunohistochemistry (IHC)-stained cervical tissues, acquired for a clinical trial. More than 4,500 RGB images of cells were used to train the CNN. To evaluate our method, the cells were first manually labeled based on the biomarkers they express. We then performed the classification on 156 randomly selected images of cells that were not used in training the CNN. The accuracy of the classification was 92% on this preliminary data set. The results show that this method has good potential for developing an automatic method for immunohistochemical analysis.
Images of tissue specimens enable evidence-based study of disease susceptibility and stratification. Moreover, staining technologies empower the evidencing of molecular expression patterns through multicolor visualization, thus enabling personalized disease treatment and prevention. However, translating molecular expression imaging into direct health benefits has been slow. Two major factors contribute to this. On the one hand, disease susceptibility and progression are complex, multifactorial molecular processes. Diseases such as cancer exhibit cellular heterogeneity, impeding the differentiation between diverse grades or types of cell formations. On the other hand, the relative quantification of selected features in stained tissue is ambiguous, tedious, time-consuming and prone to clerical error, leading to intra- and inter-observer variability and low throughput. Image analysis of digital histopathology images is a fast-developing and exciting area of disease research that aims to address the above limitations. We have developed a computational framework that extracts unique signatures using color, morphological and topological information and allows the combination thereof. The integration of the above information enables diagnosis of disease with an AUC as high as 0.97. Multiple staining shows significant improvement for most proteins, with an AUC as high as 0.99.
As advances in medical imaging technology are resulting in significant growth of biomedical image data, new techniques are needed to automate the process of identifying images of low quality. Automation is needed because it is very time consuming for a domain expert such as a medical practitioner or a biologist to manually separate good images from bad ones. While there are plenty of de-noising algorithms in the literature, their focus is on designing filters which are necessary but not sufficient for determining how useful an image is to a domain expert. Thus a computational tool is needed to assign a score to each image based on its perceived quality. In this paper, we introduce a machine learning-based score and call it the Quality of Image (QoI) score. The QoI score is computed by combining the confidence values of two popular classification techniques—support vector machines (SVMs) and Naïve Bayes classifiers. We test our technique on clinical image data obtained from cancerous tissue samples. We used 747 tissue samples that are stained by four different markers (abbreviated as CK15, pck26, E_cad and Vimentin) leading to a total of 2,988 images. The results show that images can be classified as good (high QoI), bad (low QoI) or ugly (intermediate QoI) based on their QoI scores. Our automated labeling is in agreement with the domain experts with a bi-modal classification accuracy of 94%, on average. Furthermore, ugly images can be recovered and forwarded for further post-processing.
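The combination step behind the QoI score can be sketched as follows; the equal weighting of the two classifiers and the good/ugly/bad cut-offs are illustrative choices, not values from the paper.

```python
import numpy as np

def qoi_score(p_svm, p_nb, w=0.5):
    """Blend the two classifiers' confidences into a single QoI score.

    p_svm / p_nb are each classifier's probability that an image is usable;
    the equal weighting is an illustrative assumption.
    """
    return w * np.asarray(p_svm) + (1.0 - w) * np.asarray(p_nb)

def triage(score, lo=0.4, hi=0.7):
    """Good / ugly (borderline, recoverable) / bad, with illustrative cut-offs."""
    return np.where(score >= hi, "good", np.where(score >= lo, "ugly", "bad"))

scores = qoi_score([0.95, 0.55, 0.10], [0.90, 0.60, 0.20])
labels = triage(scores)   # classified as good, ugly, bad respectively
```

Images landing in the intermediate band are exactly the "ugly" ones that can be forwarded for further post-processing rather than discarded.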
The Gleason score is the most common architectural and morphological assessment of prostate cancer severity and prognosis. Numerous quantitative techniques have been developed to approximate and duplicate the Gleason scoring system, most of them in standard H&E brightfield microscopy. Immunofluorescence (IF) image analysis of tissue pathology has recently been proven to be extremely valuable and robust in developing prognostic assessments of disease, particularly in prostate cancer. There have been significant advances in the literature in quantitative biomarker expression as well as in the characterization of glandular architectures in discrete gland rings. In this work we leverage a new method of segmenting gland rings in IF images for predicting the pathological Gleason grade, both the clinical and the image-specific grade, which may not necessarily be the same. We combine these measures with nucleus-specific characteristics as assessed by the MST algorithm. Our individual features correlate well univariately with the Gleason grades, and in a multivariate setting achieve an accuracy of 85% in predicting the Gleason grade. Additionally, these features correlate strongly with clinical progression outcomes (CI of 0.89), significantly outperforming the clinical Gleason grades (CI of 0.78). This work presents the first assessment of morphological gland-unit features from IF images for predicting the Gleason grade.
In histopathological classification and diagnosis of cancer cases, pathologists perform visual assessments of immunohistochemistry (IHC)-stained biomarkers in cells to determine tumor versus non-tumor tissues. One of the prerequisites for such assessments is the correct identification of regions of interest (ROIs) with relevant histological features. Advances in image processing and machine learning give rise to the possibility of full automation of ROI identification based on image features such as colors and textures. Such computer-aided diagnostic systems could enhance research output and efficiency in identifying the pathology (normal, non-tumor or tumor) of a tissue pattern from ROI images. In this paper, a computational method using color-texture-based extreme learning machines (ELM) is proposed for automatic tissue tumor classification. Our approach consists of three steps: (1) ROIs are manually identified and annotated from individual cores of tissue microarrays (TMAs); (2) color and texture features are extracted from the ROI images; (3) ELM is applied to the extracted features to classify the ROIs into non-tumor or tumor categories. The proposed approach is tested on 100 sets of images from a kidney cancer TMA, and the results show that ELM is able to achieve classification accuracies of 91.19% and 88.72% with a Gaussian radial basis function (RBF) kernel and a linear kernel, respectively, which is superior to using SVM with the same kernels.
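The speed advantage of ELM comes from its training procedure: hidden-layer weights are fixed at random and only the output weights are solved in closed form. A minimal sketch on toy two-class data follows; this shows the basic random-hidden-layer variant rather than the kernel (RBF/linear) formulation used in the paper, and the feature dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_train(X, y, n_hidden=50):
    """Basic ELM: random hidden layer, closed-form least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # random nonlinear features (never trained)
    beta = np.linalg.pinv(H) @ y    # only the output weights are solved
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for tumor vs non-tumor ROI feature vectors.
X = np.vstack([rng.normal(-1.0, 0.3, (40, 2)), rng.normal(1.0, 0.3, (40, 2))])
y = np.r_[-np.ones(40), np.ones(40)]
acc = np.mean(np.sign(elm_predict(elm_train(X, y), X)) == y)
```

A single pseudo-inverse replaces iterative weight updates, which is what makes ELM training fast relative to backpropagation or SVM optimization.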
This paper describes a novel graph-based method for efficient representation and subsequent classification in histological whole-slide images of gastric cancer. Her2/neu immunohistochemically stained and haematoxylin and eosin stained histological sections of gastric carcinoma are digitized. Immunohistochemical staining is used in practice by pathologists to determine the extent of malignancy; however, it is laborious to visually discriminate the corresponding malignancy levels in the more commonly used haematoxylin and eosin stain, and this study attempts to solve this problem using a computer-based method. Cell nuclei are first isolated at high magnification using an automatic cell nuclei segmentation strategy, followed by construction of cell nuclei attributed relational graphs of the tissue regions. These graphs represent tissue architecture comprehensively, as they contain information about cell nuclei morphology as vertex attributes, along with knowledge of neighborhood in the form of edge linking and edge attributes. Global graph characteristics are derived and ensemble learning is used to discriminate between three malignancy levels, namely non-tumor, Her2/neu positive tumor and Her2/neu negative tumor. Performance is compared with state-of-the-art methods including four texture feature groups (Haralick, Gabor, Local Binary Patterns and Varma-Zisserman features), color and intensity features, and Voronoi diagram and Delaunay triangulation. Texture, color and intensity information is also combined with graph-based knowledge, followed by correlation analysis. Quantitative assessment is performed using two cross-validation strategies. The experimental results indicate that the proposed method provides a promising way for computer-based analysis of histopathological images of gastric cancer.
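Graphs of the kind described can be built from segmented nuclei centroids in a few lines; the sketch below derives a Delaunay-based edge set and two global characteristics (mean degree, mean edge length) from hypothetical centroids. The paper's actual attributed relational graphs carry much richer vertex and edge attributes.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, size=(60, 2))   # hypothetical nuclei centroids

# Edge set of the cell graph, from the Delaunay triangulation of the nuclei.
edges = set()
for simplex in Delaunay(pts).simplices:
    for i in range(3):
        a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        edges.add((a, b))

# Two global graph characteristics of the kind fed to a classifier.
degree = np.zeros(len(pts))
lengths = []
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
    lengths.append(np.linalg.norm(pts[a] - pts[b]))

mean_degree = degree.mean()
mean_edge_length = float(np.mean(lengths))
```

In a planar triangulation the mean degree is always below 6, so deviations in such global statistics between tissue regions reflect genuine architectural differences rather than graph-construction artifacts.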
Oğuzhan Oğuz, Cem Emre Akbaş, Maen Mallah, Kasım Taşdemir, Ece Akhan Güzelcan, Christian Muenzenmayer, Thomas Wittenberg, Ayşegül Üner, A. Enis Cetin, et al.
In this article, algorithms for cancer stem cell (CSC) detection in liver cancer tissue images are developed. Conventionally, a pathologist examines cancer cell morphologies under a microscope. Computer-aided diagnosis (CAD) systems aim to help pathologists in this tedious and repetitive work. The first algorithm locates CSCs in CD13-stained liver tissue images; the method also includes an online learning algorithm to improve the accuracy of detection. The second family of algorithms classifies cancer tissues stained with H&E, which is clinically routine and more cost-effective than the immunohistochemistry (IHC) procedure. The algorithms utilize 1D-SIFT and eigen-analysis based feature sets as descriptors. Normal and cancerous tissues can be classified with 92.1% accuracy in H&E-stained images, while the classification accuracy for low- and high-grade cancerous tissue images is 70.4%. This study therefore paves the way for diagnosing cancerous tissue and grading its level using H&E-stained microscopic tissue images.
The characteristics of immune cells in the tumor microenvironment of breast cancer capture clinically important information. Despite the heterogeneity of tumor-infiltrating immune cells, it has been shown that the degree of infiltration assessed by visual evaluation of hematoxylin-eosin (H&E) stained samples has prognostic and possibly predictive value. However, quantification of the infiltration in H&E-stained tissue samples currently depends on visual scoring by an expert. Computer vision enables automated characterization of the components of the tumor microenvironment, and texture-based methods have successfully been used to discriminate between different tissue morphologies and cell phenotypes. In this study, we evaluate whether local binary pattern texture features with superpixel segmentation and classification with a support vector machine can be utilized to identify immune cell infiltration in H&E-stained breast cancer samples. Guided by the pan-leukocyte CD45 marker, we annotated training and test sets from 20 primary breast cancer samples. In the training set of arbitrarily sized image regions (n=1,116), a 3-fold cross-validation resulted in 98% accuracy and an area under the receiver-operating characteristic curve (AUC) of 0.98 for discriminating between immune cell-rich and immune cell-poor areas. In the test set (n=204), we achieved an accuracy of 96% and an AUC of 0.99 in labeling cropped tissue regions correctly as immune cell-rich or -poor. The obtained results demonstrate strong discrimination between immune cell-rich and -poor tissue morphologies. The proposed method provides a quantitative measurement of the degree of immune cell infiltration and can be applied to digitally scanned H&E-stained breast cancer samples for diagnostic purposes.
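The local binary pattern descriptor at the heart of the texture features can be sketched directly; this is the basic radius-1, 8-neighbour variant without the rotation-invariant or uniform-pattern mappings often used in practice.

```python
import numpy as np

def lbp8(img):
    """Basic radius-1, 8-neighbour local binary patterns.

    Each interior pixel receives an 8-bit code recording which neighbours
    are at least as bright as the centre; histograms of these codes form
    the texture descriptor fed to the classifier.
    """
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

flat = np.full((5, 5), 7.0)   # a uniform patch: every neighbour ties the centre
codes = lbp8(flat)            # all codes are 255 on a uniform patch
```

Because the code depends only on sign comparisons with the centre pixel, the descriptor is invariant to monotonic intensity changes, which helps across staining variation.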
Automatic segmentation of histological images is an important step for increasing throughput while maintaining high accuracy, avoiding variation from subjective bias, and reducing the costs for diagnosing human illnesses such as cancer and Alzheimer's disease. In this paper, we present a novel method for unsupervised segmentation of cell nuclei in stained histology tissue. Following an initial preprocessing step involving color deconvolution and image reconstruction, the segmentation step consists of multilevel thresholding and a series of morphological operations. The only parameter required for the method is the minimum region size, which is set according to the resolution of the image. Hence, the proposed method requires no training sets or parameter learning. Because the algorithm requires no assumptions or a priori information with regard to cell morphology, the automatic approach is generalizable across a wide range of tissues. Evaluation across a dataset consisting of diverse tissues, including breast, liver, gastric mucosa and bone marrow, shows superior performance over four other recent methods on the same dataset in terms of F-measure with precision and recall of 0.929 and 0.886, respectively.
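The thresholding step can be illustrated with the two-class Otsu criterion, the building block that multilevel thresholding generalizes; the bimodal test signal below is synthetic.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Single-level Otsu threshold: pick the histogram cut that maximizes
    the between-class variance.  (The paper applies the multilevel
    generalization; this sketch shows the two-class case.)"""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability per cut
    mu = np.cumsum(p * edges[:-1])          # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return edges[np.nanargmax(sigma_b)]

rng = np.random.default_rng(4)
# Synthetic bimodal intensities: background around 0.2, nuclei around 0.8.
values = np.r_[rng.normal(0.2, 0.03, 500), rng.normal(0.8, 0.03, 500)]
t = otsu_threshold(values)   # lands between the two modes
```

Multilevel thresholding repeats this variance criterion with several cuts at once, which is why no training set is needed for the segmentation step.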
Traditional histopathology quantifies disease through the study of glass slides, i.e. two-dimensional samples that are representative of the overall process. We hypothesize that 3D reconstruction can enhance our understanding of histopathologic interpretations. To test this hypothesis, we perform a pilot study of the risk model for oral cavity cancer (OCC), which stratifies patients into low, intermediate and high risk for locoregional disease-free survival. Classification is based on study of hematoxylin and eosin (H&E) stained tissues sampled from the resection specimens. In this model, the Worst Pattern of Invasion (WPOI) is assessed, representing specific architectural features at the interface between cancer and non-cancer tissue. Currently, assessment of WPOI is based on 2D sections of tissue, representing complex 3D structures of tumor growth. We believe that by reconstructing a 3D model of tumor growth and quantifying the tumor-host interface, we can obtain important diagnostic information that is difficult to assess in 2D. Therefore, we introduce a pilot study framework for visualizing tissue architecture and morphology in 3D from serial sections of histopathology. This framework can be used to enhance predictive models for diseases where severity is determined by 3D biological structure. In this work we utilize serial H&E-stained OCC resections obtained from 7 patients exhibiting WPOI-3 (low risk of recurrence) through WPOI-5 (high risk of recurrence). A supervised classifier automatically generates a map of tumor regions on each slide, which are then co-registered using an elastic deformation algorithm. A smooth 3D model of the tumor region is generated from the registered maps, which is suitable for quantitative extraction of tumor interface morphology features. We report our preliminary models created with this system and suggest further enhancements to traditional histology scoring mechanisms that take spatial architecture into consideration.
Purpose: Automatic cell segmentation plays an important role in reliable diagnosis and prognosis of patients. Most state-of-the-art cell detection and segmentation techniques focus on complicated methods to separate foreground cells from the background. In this study, we introduce a preprocessing method that leads to better detection and segmentation results compared to a well-known state-of-the-art work. Method: We transform the original red-green-blue (RGB) space into a new space defined by the top eigenvectors of the RGB space. Stretching is done by manipulating the contrast of each pixel value to equalize the color variances. The new pixel values are then inverse-transformed to the original RGB space, and this altered RGB image is used to segment cells. Result: Validation of our method against a well-known state-of-the-art technique revealed a statistically significant improvement on an identical validation set. We achieved a mean F1-score of 0.901. Conclusion: Preprocessing steps that decorrelate colorspaces may improve cell segmentation performance.
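The Method paragraph describes what amounts to a decorrelation stretch. A minimal sketch, assuming the eigenvector transform is computed from the pixelwise RGB covariance; variable names are illustrative, not the authors' code.

```python
import numpy as np

def decorrelation_stretch(img):
    """img: (H, W, 3) float array; returns a variance-equalized RGB image."""
    pixels = img.reshape(-1, 3)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # top eigenvectors of the RGB space
    scores = centered @ eigvecs                   # project onto the new space
    scores /= np.sqrt(eigvals + 1e-12)            # equalize the color variances
    stretched = scores @ eigvecs.T + mean         # inverse transform to RGB
    return stretched.reshape(img.shape)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
out = decorrelation_stretch(img)                  # decorrelated, equal-variance RGB
```

After the stretch, the per-channel covariance of the output pixels is (up to regularization) the identity, which is what makes the subsequent segmentation less sensitive to correlated stain colors.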
Recent advances in computer vision enable increasingly accurate automated pattern classification. In the current study we evaluate whether a convolutional neural network (CNN) can be trained to predict disease outcome in patients with colorectal cancer based on images of tumor tissue microarray samples. We compare the prognostic accuracy of CNN features extracted from the whole, unsegmented tissue microarray spot image with that of CNN features extracted from the epithelial and non-epithelial compartments, respectively. The prognostic accuracy of visually assessed histologic grade is used as a reference. The image data set consists of digitized hematoxylin-eosin (H&E) stained tissue microarray samples obtained from 180 patients with colorectal cancer. The patient samples represent a variety of histological grades and are accompanied by data on a series of clinicopathological variables, including long-term outcome, as well as ground-truth annotations performed by experts. The CNN features extracted from images of the epithelial tissue compartment significantly predicted outcome (hazard ratio (HR) 2.08; 95% CI 1.04-4.16; area under the curve (AUC) 0.66) in a test set of 60 patients, compared with the CNN features extracted from unsegmented images (HR 1.67; 95% CI 0.84-3.31; AUC 0.57) and visually assessed histologic grade (HR 1.96; 95% CI 0.99-3.88; AUC 0.61). In conclusion, a deep-learning classifier can be trained to predict the outcome of colorectal cancer based on images of H&E-stained tissue microarray samples, and CNN features extracted from the epithelial compartment alone provide prognostic discrimination comparable to that of visually determined histologic grade.
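The comparison at the heart of the study (features pooled from the epithelial compartment only versus the whole spot, scored on a held-out test set) can be mimicked on synthetic data. Random vectors stand in for CNN activations and a logistic model for the outcome analysis, so this only illustrates the shape of the evaluation, not the authors' pipeline; all names and numbers are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 180, 32                                    # 180 patients, 32 "CNN" features
outcome = rng.integers(0, 2, n)
epithelial = rng.normal(size=(n, d)) + 0.8 * outcome[:, None]  # informative signal
whole_spot = epithelial + rng.normal(scale=2.0, size=(n, d))   # signal diluted

def auc_for(features):
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, outcome, test_size=60, random_state=0)       # 60-patient test set
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

auc_epi, auc_whole = auc_for(epithelial), auc_for(whole_spot)
```

The toy construction makes the compartment-restricted features cleaner than the whole-spot features, mirroring the direction of the reported result.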
Extracting nuclei is one of the most actively studied topics in digital pathology research. Most studies search for nuclei (or nucleus seeds) directly at the finest resolution available. While such approaches utilize the richest information available, they sometimes find it difficult to address the heterogeneity of nuclei in different tissues. In this work, we propose a hierarchical approach that starts at a lower resolution level and adaptively adjusts its parameters while progressing to finer and finer resolutions. The algorithm is tested on brain and lung cancer images from The Cancer Genome Atlas data set.
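A coarse-to-fine sketch of the described strategy: candidate regions are found at a downsampled level, then each candidate is revisited at full resolution with a threshold adapted to the local patch. The synthetic image and all parameters are illustrative, not the authors' algorithm.

```python
import numpy as np
from scipy import ndimage

def detect_hierarchical(img, factor=4):
    """Find candidates at low resolution, refine each at full resolution."""
    coarse = img.reshape(img.shape[0] // factor, factor,
                         img.shape[1] // factor, factor).mean(axis=(1, 3))
    candidates, _ = ndimage.label(coarse > coarse.mean() + coarse.std())
    centers = []
    for region in ndimage.find_objects(candidates):
        # Map the coarse window back to full resolution and re-threshold there.
        sl = tuple(slice(s.start * factor, s.stop * factor) for s in region)
        patch = img[sl]
        mask = patch > 0.5 * patch.max()          # parameter adapted to the patch
        cy, cx = ndimage.center_of_mass(mask)
        centers.append((sl[0].start + cy, sl[1].start + cx))
    return centers

img = np.zeros((64, 64))
img[20:28, 20:28] = 1.0                           # two synthetic "nuclei"
img[40:46, 48:54] = 1.0
centers = detect_hierarchical(img)
```

The key point is that the global threshold only has to be right at the coarse level; per-candidate parameters are then re-estimated locally, which is how the heterogeneity between tissues is absorbed.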
High-definition (HD) Fourier transform infrared (FT-IR) spectroscopic imaging is an emerging technique that enables chemistry-based visualization of tissue constituents and label-free extraction of biochemical information, while its higher spatial detail makes it a potentially useful platform for digital pathology. This methodology, along with fast and efficient data analysis, can enable pathology that is both quantitative and automated. Here we demonstrate a combination of HD FT-IR spectroscopic imaging of breast tissue microarrays (TMAs) with data analysis algorithms to perform histologic analysis. The samples comprise four tissue states, namely hyperplasia, dysplasia, cancer, and normal. We identify various cell types that could act as biomarkers for breast cancer detection and differentiate between them using statistical pattern recognition tools, namely Random Forest (RF) and Bayesian algorithms. Feature optimization is carried out for the RF algorithm, reducing computation time as well as redundant spectral features. We achieved an order-of-magnitude reduction in the number of features with prediction accuracy comparable to that of the original feature set. Together, the demonstration of histology and the selection of features pave the way for future applications in more complex models and rapid data acquisition.
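The feature-optimization step can be sketched with scikit-learn: rank spectral features by Random Forest importance and retrain on a small subset. Synthetic "spectra" replace the FT-IR data; the informative-band construction and all parameters are illustrative, not the authors' protocol.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_features = 300, 200                          # samples x spectral features
X = rng.normal(size=(n, n_features))
y = rng.integers(0, 2, n)
X[:, :10] += 1.5 * y[:, None]                     # 10 informative "bands"

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:20]   # ~10x fewer features
acc_full = cross_val_score(rf, X, y, cv=3).mean()
acc_top = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X[:, top], y, cv=3).mean()                    # accuracy on the reduced set
```

Because the importance ranking concentrates on the informative bands, the reduced feature set keeps prediction accuracy comparable while cutting computation roughly in proportion to the feature count.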
A major focus area for precision medicine is managing the treatment of newly diagnosed prostate cancer patients. For patients with a positive biopsy, clinicians aim to develop an individualized treatment plan based on a mechanistic understanding of the disease factors unique to each patient. Recently, there has been a movement towards a multi-modal view of the cancer through the fusion of quantitative information from multiple sources, imaging and otherwise. Simultaneously, there have been significant advances in machine learning methods for medical prognostics, which integrate a multitude of predictive factors to develop an individualized risk assessment and prognosis for patients. An emerging area of research is semi-supervised approaches that transduce the appropriate survival time for censored patients. In this work, we apply a novel semi-supervised approach for support vector regression to predict the prognosis of newly diagnosed prostate cancer patients. We integrate clinical characteristics of a patient's disease with imaging-derived metrics for biomarker expression as well as glandular and nuclear morphology. In particular, our goal was to explore the performance of nuclear and glandular architecture within the transduction algorithm and to assess their predictive power when compared with the Gleason score manually assigned by a pathologist. Our analysis in a multi-institutional cohort of 1027 patients indicates that not only do glandular and morphometric characteristics improve the predictive power of the semi-supervised transduction algorithm, but they also perform better when the pathological Gleason score is absent. This work represents one of the first assessments of quantitative prostate biopsy architecture versus the Gleason grade in the context of a data fusion paradigm that leverages a semi-supervised approach for risk prognosis.
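A minimal transduction loop in the spirit described: censored survival times are imputed as the maximum of the model's prediction and the observed censoring time, then fed back into the regressor. This is an illustrative sketch on synthetic data, not the authors' semi-supervised SVR formulation; all variable names are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))                       # clinical + morphometric features
true_t = 5 + X[:, 0] + rng.normal(scale=0.5, size=n)   # underlying survival time
censored = rng.random(n) < 0.4
obs_t = np.where(censored, true_t * rng.uniform(0.3, 0.9, n), true_t)

# Start from the uncensored patients, then transduce the censored ones.
model = SVR().fit(X[~censored], obs_t[~censored])
for _ in range(3):                                # transduction iterations
    imputed = np.maximum(model.predict(X[censored]), obs_t[censored])
    y_all = obs_t.copy()
    y_all[censored] = imputed                     # censoring time is a lower bound
    model = SVR().fit(X, y_all)
preds = model.predict(X)
```

The censoring time acts as a lower bound on the unknown survival time, which is why the imputed target is clamped from below rather than replaced outright.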
Our group is developing a method to examine biological specimens in cellular detail using synchrotron microCT. The method can acquire 3D images of tissue at micrometer-scale resolutions, allowing individual cell types to be visualized in the context of the entire specimen. For model organism research, this tool will enable the rapid characterization of tissue architecture and cellular morphology from every organ system. This characterization is critical for proposed and ongoing "phenome" projects that aim to phenotype whole-organism mutants and diseased tissues from different organisms, including humans. With the envisioned collection of hundreds to thousands of images for a phenome project, it is important to develop quantitative image analysis tools for the automated scoring of organism phenotypes across organ systems. Here we present a first step towards that goal, demonstrating the use of support vector machines (SVMs) in detecting retinal cell nuclei in 3D images of wild-type zebrafish. In addition, we apply the SVM classifier to a mutant zebrafish to examine whether SVMs can capture phenotypic differences in these images. The long-term goal of this work is to allow cellular and tissue morphology to be characterized quantitatively for many organ systems at the whole-organism level.
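SVM-based nucleus detection in a volume can be sketched as voxel classification on simple local-patch statistics. The synthetic volume, features, and labels below are all illustrative stand-ins for the microCT data and the authors' actual feature set.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.1, size=(16, 16, 16))     # synthetic microCT-like volume
vol[4:8, 4:8, 4:8] += 1.0                         # one bright "nucleus"
labels = np.zeros(vol.shape, dtype=int)
labels[4:8, 4:8, 4:8] = 1

def patch_features(v, r=1):
    """Mean and std of the (2r+1)^3 patch around each interior voxel."""
    feats, ys = [], []
    for z in range(r, v.shape[0] - r):
        for y in range(r, v.shape[1] - r):
            for x in range(r, v.shape[2] - r):
                p = v[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
                feats.append([p.mean(), p.std()])
                ys.append(labels[z, y, x])
    return np.array(feats), np.array(ys)

X, y = patch_features(vol)
clf = SVC().fit(X, y)
acc = clf.score(X, y)                             # training accuracy of the sketch
```

Comparing the classifier's detections between wild-type and mutant volumes is then a matter of running the same feature extraction on both and contrasting the predicted nucleus distributions.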
In this paper, we propose a method to automatically segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) in fluorescence microscopy images, based on shape classification, bottleneck detection, and an accelerated Dijkstra algorithm. The proposed method includes four main steps. First, a thresholding filter and morphological operations are applied to reduce noise. Second, a shape classifier decides whether a connected component needs to be split; in this step, an AdaBoost classifier is applied to a set of shape features. Third, bottleneck positions are found based on the contours of the connected components. Finally, cell segmentation and counting are completed using the accelerated Dijkstra algorithm, guided by gradient information between the bottleneck positions. The results show the feasibility and efficiency of the proposed method.
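The bottleneck idea can be sketched as a contour search: the pair of points that are close in space but far apart along the contour marks the likely split line between touching cells. This simplified version scores pairs by the ratio of the two distances; the scoring function and parameters are illustrative, not the paper's exact criterion.

```python
import numpy as np

def find_bottleneck(contour, min_sep=5):
    """contour: (N, 2) ordered points; returns the index pair of the bottleneck."""
    n = len(contour)
    best, pair = np.inf, (0, 0)
    for i in range(n):
        for j in range(i + min_sep, n):
            along = min(j - i, n - (j - i))       # distance along the contour
            if along < min_sep:
                continue
            d = np.hypot(*(contour[i] - contour[j]))
            score = d / along                     # small gap + long detour -> small
            if score < best:
                best, pair = score, (i, j)
    return pair

# A dumbbell-shaped contour: two lobes joined by a narrow neck near x = 0.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contour = np.column_stack([np.cos(t) * 2,
                           np.sin(t) * (0.3 + np.abs(np.cos(t)))])
i, j = find_bottleneck(contour)                   # indices straddling the neck
```

In the full method, the segment between the two bottleneck points would then be traced through the image gradient with the accelerated Dijkstra step rather than cut as a straight line.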
In this paper, we present a technique for automatically classifying human carcinoma cell images using textural features. An image dataset containing microscopy biopsy images from different patients for 14 distinct cancer cell line types is studied. The images are captured using an RGB camera attached to an inverted microscope. Texture-based Gabor features are extracted from the multispectral input images. An SVM classifier is used to generate a descriptive model for cell line classification. The experimental results show satisfactory performance, and the proposed method is robust across various microscopy magnification options.
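A sketch of the described pipeline: a small Gabor filter bank yields texture features (mean and variance of each filter response), and an SVM is trained on them. Synthetic oriented textures stand in for the microscopy images; the filter parameters are illustrative, not the paper's settings.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def gabor_kernel(theta, freq=0.125, sigma=3.0, size=15):
    """Real Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def gabor_features(img):
    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        resp = ndimage.convolve(img, gabor_kernel(theta))
        feats += [resp.mean(), resp.var()]        # two stats per orientation
    return feats

rng = np.random.default_rng(0)
stripes = np.sin(np.arange(32) * 0.8)
imgs, labels = [], []
for label, pattern in [(0, np.tile(stripes[:, None], (1, 32))),   # horizontal
                       (1, np.tile(stripes[None, :], (32, 1)))]:  # vertical
    for _ in range(20):
        imgs.append(pattern + rng.normal(scale=0.3, size=(32, 32)))
        labels.append(label)

X = np.array([gabor_features(im) for im in imgs])
clf = SVC().fit(X, labels)
acc = clf.score(X, labels)
```

Because each orientation channel responds selectively, the variance features separate the two synthetic "cell lines" cleanly; the real pipeline does the same over 14 classes and multiple spectral channels.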
Histopathology forms the gold standard for cancer diagnosis and therapy, and generally relies on manual examination of microscopic structural morphology within tissue. Fourier transform infrared (FT-IR) imaging is an emerging vibrational spectroscopic imaging technique, especially in a high-definition (HD) format, that provides the spatial specificity of microscopy at magnifications used in diagnostic surgical pathology. It has been shown for standard imaging that IR absorption by tissue creates a strong signal in which the spectrum at each pixel is a quantitative "fingerprint" of the molecular composition of the sample; here we show that this fingerprint also enables direct digital pathology in HD imaging, without the need for stains or dyes. An assessment of the potential of HD imaging to improve diagnostic pathology accuracy is presented.
The detection of cell nuclei plays a key role in various histopathological image analysis problems. Considering the high variability of its applications, we propose a novel generic and trainable detection approach. Adaptation to specific nuclei detection tasks is done by providing training samples. A trainable deconvolution and classification algorithm generates a probability map indicating the presence of a nucleus. The map is then processed by an extended watershed segmentation step to identify nucleus positions. We have tested our method on data sets with different stains and target nucleus types, obtaining F1-measures between 0.83 and 0.93.
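A simplified sketch of the final stage: the nucleus-probability map is thresholded and its local maxima are taken as nucleus positions (a stand-in for the extended watershed step, which additionally resolves touching maxima). The synthetic probability map and parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

def nuclei_positions(prob, threshold=0.5, footprint=5):
    """Return centers of confident local maxima in a nucleus-probability map."""
    peaks = prob == ndimage.maximum_filter(prob, size=footprint)
    peaks &= prob > threshold                     # keep only confident peaks
    labeled, n = ndimage.label(peaks)
    return ndimage.center_of_mass(peaks, labeled, range(1, n + 1))

# Two Gaussian bumps mimic the classifier's probability output.
yy, xx = np.mgrid[:64, :64]
prob = (np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / 30)
        + np.exp(-((yy - 45) ** 2 + (xx - 40) ** 2) / 30))
centers = nuclei_positions(prob)                  # one center per bump
```

In the full method, the watershed on the probability map also separates overlapping nuclei whose peaks merge, which simple peak picking cannot do.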