Automatic upper airway segmentation in static and dynamic MRI via deep convolutional neural networks
Localization and segmentation of optimal slices for chest fat quantification in CT via deep learning
Super-mask-based object localization for auto-contouring in head and neck radiation therapy planning
The purpose of this paper is to introduce new features of the AAR-recognition approach (abbreviated as AAR-R from now on): combining texture and intensity information in the recognition procedure, and using an optimal spanning tree as the recognition hierarchy to minimize recognition errors; we also illustrate recognition performance on large-scale test computed tomography (CT) data sets. The data sets pertain to 216 non-serial (planning) and 82 serial (re-planning) studies of head and neck (H&N) cancer patients undergoing radiation therapy, involving a total of ~2600 object samples. The texture property “maximum probability of occurrence,” derived from the co-occurrence matrix, was determined to be the best property and is utilized in conjunction with intensity properties in AAR-R. An optimal spanning tree is found in the complete graph whose nodes are individual objects, and this tree is then used as the hierarchy in recognition. Texture information combined with intensity can significantly reduce location error for gland-related objects (parotid and submandibular glands). We also report recognition results by considering image quality, which is a novel concept. AAR-R with the new features achieves a location error of less than 4 mm (~1.5 voxels in our studies) on good-quality images for both serial and non-serial studies.
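A minimal sketch of the two ingredients named above, assuming scikit-image and SciPy: the co-occurrence-matrix “maximum probability” feature, and a spanning tree over a pairwise object graph. The quantization level, the single GLCM offset, and the toy cost matrix are illustrative placeholders; the paper's actual edge costs, GLCM settings, and optimality criterion are not given here, so a standard minimum spanning tree stands in for the optimal spanning tree.

```python
# Hedged sketch: GLCM "maximum probability" feature + spanning-tree hierarchy.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from skimage.feature import graycomatrix

def max_probability(patch_u8, levels=64):
    """Max co-occurrence probability of an 8-bit grayscale patch (one offset)."""
    q = (patch_u8.astype(np.uint16) * levels // 256).astype(np.uint8)  # quantize
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels, normed=True)
    return glcm[:, :, 0, 0].max()

# Toy pairwise "recognition cost" between 4 hypothetical objects; lower cost
# means one object predicts the other's location more reliably.
cost = np.array([[0, 2, 9, 4],
                 [2, 0, 6, 3],
                 [9, 6, 0, 1],
                 [4, 3, 1, 0]], dtype=float)
tree = minimum_spanning_tree(cost)  # sparse matrix; its edges define the hierarchy
print(tree.toarray())
```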
In this study, CT scans were converted into the STereoLithography (STL) file format. The resulting STL files were transformed into 3D-printable G-Code using the Slic3r software, which allowed us to customize the print parameters; we chose a layer thickness of 0.1 mm. A desktop 3D bioprinter (BioBot 1) was then used to construct the scaffold.
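The pipeline can be sketched in a few lines, with heavy hedging: the file names, HU threshold, and voxel spacing below are placeholder assumptions, and generic marching cubes stands in for whatever segmentation the study used; only the 0.1 mm layer height comes from the text. The Slic3r call relies on its documented command-line --layer-height and --output options.

```python
# Hedged sketch of the described pipeline: CT volume -> STL surface -> G-Code.
import subprocess
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl package

volume = np.load("ct_volume.npy")  # hypothetical pre-loaded CT volume (HU values)
verts, faces, _, _ = measure.marching_cubes(
    volume,
    level=300,               # assumed HU threshold, not from the study
    spacing=(0.5, 0.5, 0.5)  # assumed voxel size in mm
)
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, f in enumerate(faces):
    surface.vectors[i] = verts[f]  # copy the triangle's three vertices
surface.save("scaffold.stl")

# Slice with a 0.1 mm layer height, as in the study.
subprocess.run(["slic3r", "scaffold.stl", "--layer-height", "0.1",
                "--output", "scaffold.gcode"], check=True)
```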
This method produced a PCL scaffold that precisely matched the patient’s nasal septal defect in both size and shape. This is the first step toward our goal of creating patient-specific tissue-engineered nasal septal cartilage grafts for NSP repair.
In this study, we tested the two methods of G-Code generation on the application of synthetic bone graft scaffold generation. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03 mm using a high-resolution peripheral quantitative computed tomography (HR-pQCT) scanner. The resulting Digital Imaging and Communications in Medicine (DICOM) images were then processed by two methods. In each method, slices and print regions were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, the processed images were converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, the processed images were run through our JAVA program and converted directly to G-Code. File size, processing time, and print time were measured for each method.
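The study's converter is a JAVA program; purely to illustrate the direct idea, the hypothetical Python sketch below rasters one thresholded DICOM slice straight into G-Code moves, skipping the STL step. The file name, threshold, feed behavior, and extrusion value are placeholders, not values from the study.

```python
# Illustrative sketch only: direct DICOM slice -> raster G-Code conversion.
import numpy as np
import pydicom
from scipy.ndimage import gaussian_filter

ds = pydicom.dcmread("slice_0001.dcm")                          # hypothetical file
img = gaussian_filter(ds.pixel_array.astype(float), sigma=1.0)  # smoothing step
mask = img > 400                                                # assumed threshold

px, py = 0.03, 0.03  # mm per pixel, matching the HR-pQCT resolution
with open("slice_0001.gcode", "w") as out:
    out.write("G21\nG90\n")  # millimeters, absolute positioning
    for r, row in enumerate(mask):
        cols = np.flatnonzero(row)
        if cols.size == 0:
            continue
        # one print segment per contiguous run of in-mask pixels in this row
        runs = np.split(cols, np.flatnonzero(np.diff(cols) > 1) + 1)
        for run in runs:
            y = r * py
            out.write(f"G0 X{run[0] * px:.3f} Y{y:.3f}\n")      # travel move
            out.write(f"G1 X{run[-1] * px:.3f} Y{y:.3f} E1\n")  # print move (placeholder E)
```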
We found that the new method produced a significant reduction in G-Code file size as well as in processing time (a 92.23% reduction). This allows for more rapid 3D printing from medical images.
The aim of this study is to explore image-based features of thoracic adipose tissue on pre-operative chest CT to distinguish between the above two groups of patients. 140 unenhanced chest CT images from three lung transplant centers (Columbia, Penn, and Duke) are included in this study; 124 patients are in the successful group and 16 in the failure group. Chest CT slices at the T7 and T8 vertebral levels are captured to represent the thoracic fat burden by using a standardized anatomic space (SAS) approach. Fat (subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT)) intensity and texture properties (1142 in total) are collected for each patient, and an optimal feature set is then selected to maximize feature independence and separation between the two groups. Leave-one-out and leave-ten-out cross-validation strategies are adopted to test the prediction ability based on the selected features, all of which came from VAT texture properties. The method achieves accuracy of prediction (ACC), sensitivity (SEN), specificity (SPE), and area under the curve (AUC) of 0.87/0.97, 0.87/0.97, 0.88/1.00, and 0.88/0.99, respectively. The optimal feature set includes only 5 features (all from VAT), which might suggest that thoracic VAT plays a more important role than SAT in predicting PGD in lung transplant recipients.
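A minimal sketch of the validation scheme, assuming scikit-learn, and not the study's actual pipeline: leave-one-out cross-validation over a small selected feature set. The inputs X (140 × 5 selected VAT texture features) and y (success/failure labels) are assumed, and the linear-SVM classifier is an illustrative choice, since the abstract does not name one.

```python
# Hedged sketch: leave-one-out cross-validation over a selected feature set.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def loo_accuracy(X, y):
    """Fraction of held-out cases classified correctly under leave-one-out."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    hits = 0
    for train, test in LeaveOneOut().split(X):
        clf.fit(X[train], y[train])
        hits += int(clf.predict(X[test])[0] == y[test][0])
    return hits / len(y)

rng = np.random.default_rng(0)  # toy stand-in for the real feature matrix
X = rng.normal(size=(140, 5))
y = np.r_[np.ones(124, dtype=int), np.zeros(16, dtype=int)]  # 124 success / 16 failure
print(f"LOO accuracy: {loo_accuracy(X, y):.2f}")
```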
and the skeleton and pleural spaces used as reference objects
Affinity functions: recognizing essential parameters in fuzzy connectedness based image segmentation
3D imaging defines, visualizes, manipulates, and analyzes information captured in multidimensional image data. This course provides a systematic overview of the principles underlying the techniques for some of these key operations, highlights some of the current hurdles, and provides some medical application examples by way of illustration.