Approaches to object information extraction from images should attempt to use the fact that images are fuzzy. In past image segmentation research, the notion of `hanging togetherness' of image elements specified by their fuzzy connectedness has been lacking. We present a theory of fuzzy objects for n-dimensional digital spaces based on a notion of fuzzy connectedness of image elements. Although our definitions lead to problems of enormous combinatorial complexity, the theoretical results allow us to reduce this dramatically. We demonstrate the utility of the theory and algorithms in image segmentation based on several practical examples.
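The abstract above does not include the authors' algorithms, but the core idea of fuzzy connectedness lends itself to a brief sketch: the strength of a path between image elements is the weakest affinity along it, and the connectedness of a voxel to a seed is the strongest such path, which can be computed with a Dijkstra-style propagation. The sketch below assumes a hypothetical affinity function supplied by the caller and a 6-connected 3D grid; it is an illustration, not the paper's method.

```python
import heapq
import numpy as np

def fuzzy_connectedness(image, seed, affinity):
    """Dijkstra-style propagation of fuzzy connectedness from a seed voxel.

    `affinity(a, b)` returns a value in [0, 1] for two neighboring intensities;
    it is a placeholder for whatever fuzzy affinity the application defines."""
    conn = np.zeros(image.shape, dtype=float)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                     # max-heap via negated strengths
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while heap:
        neg_strength, voxel = heapq.heappop(heap)
        strength = -neg_strength
        if strength < conn[voxel]:
            continue                          # stale queue entry
        for off in offsets:
            nb = tuple(v + o for v, o in zip(voxel, off))
            if any(c < 0 or c >= s for c, s in zip(nb, image.shape)):
                continue
            # A path's strength is its weakest link.
            cand = min(strength, affinity(image[voxel], image[nb]))
            if cand > conn[nb]:
                conn[nb] = cand
                heapq.heappush(heap, (-cand, nb))
    return conn

# Example with a toy intensity-similarity affinity.
img = np.random.rand(16, 16, 16)
kappa = lambda a, b: np.exp(-abs(float(a) - float(b)) / 0.1)
connectedness = fuzzy_connectedness(img, (8, 8, 8), kappa)
```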
Building 3D models from unstructured data is a fundamental problem that arises increasingly in the medical field as new 3D scanning technology is able to produce large and complex databases of full 3D information. In addition, the huge efforts put into segmenting entire sets of 2D images demand robust tools that are then able to reconstruct any arbitrary 3D surface segmented from the images. In this paper we propose an algorithmic methodology that automatically produces a simplicial surface from a set of points in R^3 about which we have no topological knowledge. Our method uses a spatial decomposition and a surface tracking algorithm to produce a rough approximation S' of the unknown manifold S. The produced surface S' serves as a robust initialization for a physically based modeling technique that incorporates the fine details of S and improves the quality of the reconstruction. The result of the reconstruction is a dense triangulation S' that undergoes a stage of mesh decimation to produce a compact representation of S.
A general framework for fast visualization of multispectral volume data is presented. Dedicated hardware with a non-numeric coprocessor is utilized in the first step of the rendering pipeline to process the volume data and extract voxels according to feature characteristics. This capability is used to select voxels according to automatic classification results or real-time descriptions of regions of interest supplied by the user in an interactive environment. By this step we can, in real time, reduce the number of voxels that have to be considered in the rendering and increase the speed of the volume rendering accordingly. The selected voxels are generated in a front-to-back (or back-to-front) order and projected to the view plane, where a 3D rendering is accumulated with an adaptation of the shell rendering technique proposed by Udupa and Odhner. The paper includes an overview of the underlying hardware architecture and presents numerical experiments with a software simulator.
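The dedicated hardware and the shell technique are not described in detail in the abstract, so the following is only a rough software sketch of the two stages mentioned: culling voxels with a feature-based mask and compositing the survivors in front-to-back order onto a view plane. The axis-aligned view direction and the opacity mapping are assumptions.

```python
import numpy as np

def select_and_render(volume, keep_mask, opacity):
    """Cull voxels with a precomputed feature mask (e.g. a classification
    result or a region of interest), then composite the surviving voxels
    front to back along axis 0, used here as a stand-in view direction."""
    acc_color = np.zeros(volume.shape[1:])
    acc_alpha = np.zeros(volume.shape[1:])
    for z in range(volume.shape[0]):              # front-to-back traversal
        slab = volume[z]
        alpha = np.where(keep_mask[z], opacity(slab), 0.0)
        # Standard front-to-back "over" compositing.
        acc_color += (1.0 - acc_alpha) * alpha * slab
        acc_alpha += (1.0 - acc_alpha) * alpha
    return acc_color

vol = np.random.rand(64, 64, 64)
mask = vol > 0.7                                  # hypothetical voxel selection
image = select_and_render(vol, mask, lambda v: np.clip(v, 0.0, 1.0))
```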
Shell rendering uses a recently described data structure, called a shell, for representing semi-transparent volumes. The shell is a generalization of the semi-boundary representation used for fast visualization, manipulation, and analysis of binary volumetric objects. Shell manipulation is an extension of the previously developed manipulation operations on binary objects to fuzzily defined objects. The paper describes algorithms for plane and curved cut-away, separation, and segmental movement operations. These operations are done interactively on volume renditions of the objects in a standard workstation environment using portable software.
Intraoperative brain mapping of cortical language sites has generated a need for accurate visualization of the cortical surface with its associated arteries and veins. We describe a technique for registering multiple magnetic resonance imaging studies, segmenting those studies, and visualizing the combined vessels and cortical surface. The patient receives three pre-operative MRI scans corresponding to the three tissue types to be visualized. The studies are read into three volumes in the visualization software, and voxel size information is then used to interpolate each of the volumes, producing cubic voxels. The volumes are then cropped and translated interactively so that they have exactly the same dimensions and orientation. A region growing algorithm is applied to the most homogeneous volume (usually the vein data) to produce a mask which approximates the cortical region. A morphological dilation is performed on the mask, expanding it to include features on or near the cortical surface. The mask is applied to the cortical surface, artery, and vein volumes, and they then become the green, red, and blue channels of a composite RGB volume. The resulting volume is selectively weighted and input to a ray-tracing algorithm which produces the final image. This technique provides neurosurgeons with an image containing the landmarks necessary to record intraoperative brain mapping data. Example results are presented which show that the generated cortical surface, including surface veins and arteries, corresponds closely to intraoperative photographs of the exposed cortical surface taken at the time of surgery. When combined with an image database and integrated into an interactive program, this technique allows neurosurgeons to obtain accurate 3-D stimulation maps of functional areas on the brain surface.
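As a rough sketch of the masking and channel-combination steps described above, assuming the three studies are already interpolated and registered to identical grids: the region-growing step is replaced here by a simple threshold plus largest connected component, and the threshold and dilation depth are hypothetical.

```python
import numpy as np
from scipy import ndimage

def build_rgb_composite(cortex, artery, vein, vein_threshold=0.5, dilation_iters=3):
    """Approximate the cortical region from the (most homogeneous) vein
    volume, dilate the mask to pick up features on or near the surface,
    and place the masked volumes into an RGB composite."""
    # Stand-in for the region-growing step: threshold, keep largest component.
    rough = vein > vein_threshold
    labels, n = ndimage.label(rough)
    if n > 0:
        sizes = ndimage.sum(rough, labels, index=range(1, n + 1))
        rough = labels == (int(np.argmax(sizes)) + 1)
    # Morphological dilation expands the mask beyond the rough cortical region.
    mask = ndimage.binary_dilation(rough, iterations=dilation_iters)
    rgb = np.zeros(cortex.shape + (3,))
    rgb[..., 0] = np.where(mask, artery, 0.0)   # red   = arteries
    rgb[..., 1] = np.where(mask, cortex, 0.0)   # green = cortical surface
    rgb[..., 2] = np.where(mask, vein, 0.0)     # blue  = veins
    return rgb
```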
SPECT is a powerful clinical tool. However, the low spatial resolution and ill-defined boundaries associated with SPECT require special consideration in visualization. Quantitative geometric and magnitude information are areas of particular usefulness in evaluating disease states. In this paper, we describe a set of practical 3D visualization tools to display and analyze SPECT data, and present interactive methods to measure (1) the relative position, size and shape of regions of interest and (2) the magnitude and distribution of radioactive count information. Interactive pick tools allow users to extract values at selected points, distance between points, or value profiles along selected line segments. In the three-dimensional reconstruction, transparent and opaque isosurfaces are formed simultaneously at specified activity levels, and the volume enclosed by the opaque surface is displayed. The utility of these tools is demonstrated with two types of patient studies: those using tumor-avid agents to identify active tumor in the chest and abdomen, and those used for evaluating the volume of perfused myocardium.
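Two of the quantitative operations mentioned are easy to illustrate: estimating the volume enclosed by an iso-activity surface by counting voxels at or above the chosen level, and sampling a value profile along a user-picked line segment. The sketch below is illustrative only; the function names, voxel sizes, and interpolation order are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def enclosed_volume(spect, level, voxel_size_mm):
    """Estimate the volume (in mL) enclosed by the isosurface at `level`
    by counting voxels whose activity is at or above that level."""
    dx, dy, dz = voxel_size_mm
    n_voxels = int(np.count_nonzero(spect >= level))
    return n_voxels * dx * dy * dz / 1000.0      # mm^3 -> mL

def profile_along_segment(spect, p0, p1, n_samples=100):
    """Sample activity values along the segment p0 -> p1 (voxel coordinates),
    as an interactive 'value profile' pick tool might."""
    t = np.linspace(0.0, 1.0, n_samples)
    coords = np.outer(1 - t, p0) + np.outer(t, p1)   # (n_samples, 3)
    return map_coordinates(spect, coords.T, order=1)
```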
Real-time volume rendering of medical image datasets on commercial hardware became possible in 1993. We have developed an application, SeeThru, that allows real-time volume visualization under the interactive control of the physician. This ability enables the physician to look inside the patient's body to visually comprehend the information from radiological procedures, resulting in improved treatment planning. We report on preliminary results from two areas: (1) cardiothoracic surgical planning from spiral computed tomography (CT) and (2) staging of breast cancer from magnetic resonance imaging (MRI). We compared different rendering methods (projection, maximum intensity projection, opacity blended, and opacity combined with gradient blended) and chose opacity blending as the most effective for both applications. In the cardiothoracic surgical planning experiment we found that the ability to interactively control and view 3D direct volume visualizations resulted in improvements in surgical plans and in the surgeon's confidence in the plan. In the MR breast experiment we found that 3D visualization of the subtraction images improved comprehension and identification of tumor lesions that are difficult to appreciate on mammograms. Overall, we believe that interactive, real-time volume rendering significantly adds to clinical understanding and improves treatment planning for the patient.
Maximum intensity projection (MIP) is widely used in obtaining 2D projections (angiograms) of MR angiographic (MRA) data. The flowing blood appears bright and the stationary tissues show up dark in the data. With the MIP algorithm, parallel rays are cast through the MR slices (image volume) and the maximum intensity along each ray is displayed in the projection image. In the ray casting process, conventional methods resample the image volume at evenly spaced locations along the projection ray using either the nearest neighbor method or the 3D tri-linear interpolation method. Higher order interpolation is expected to give a more accurate approximation in resampling but at a higher computational cost. In this paper, we introduce a low-cost modified MIP algorithm which enjoys the performance of higher order interpolation and gives a projection image with better contrast, higher vessel continuity and visibility, and fewer jagged vessel artifacts than the conventional methods. In addition, in order to increase the spatial resolution of the projection image, methods such as pre-zooming before the MIP, post-zooming after the MIP, and sampling the projection image at a higher rate are discussed and compared.
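For reference, the conventional baseline that the authors modify can be sketched as follows: parallel rays are cast through the volume, samples are taken at evenly spaced locations with nearest-neighbor (order 0) or trilinear (order 1) resampling, and the maximum along each ray forms the projection. The modified low-cost algorithm itself is not reproduced here; the projection geometry below (rays starting at the z = 0 face, projection onto the x-y plane, non-negative intensities) is a simplifying assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def mip(volume, direction=(1.0, 0.0, 0.0), n_steps=None, order=1):
    """Conventional MIP: cast one parallel ray per projection pixel, resample
    the volume at evenly spaced points along the ray (order=0: nearest
    neighbor, order=1: trilinear), and keep the maximum along each ray.
    Assumes non-negative intensities, as in MRA magnitude data."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    nz, ny, nx = volume.shape
    n_steps = n_steps or nz
    ys, xs = np.mgrid[0:ny, 0:nx].astype(float)
    proj = np.zeros((ny, nx))
    for k in range(n_steps):
        t = k * nz / float(n_steps)
        coords = np.stack([np.full((ny, nx), t * d[0]),   # z along the ray
                           ys + t * d[1],                 # y
                           xs + t * d[2]])                # x
        sample = map_coordinates(volume, coords, order=order, cval=0.0)
        proj = np.maximum(proj, sample)
    return proj

vol = np.random.rand(64, 128, 128)
angiogram = mip(vol)            # axial projection with trilinear resampling
```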
Computer assisted 3D visualization of the human cerebro-vascular system can help to locate blood vessels during diagnosis and to approach them during treatment. Our aim is to reconstruct the human cerebro-vascular system from the partial information collected from a variety of medical imaging instruments and to generate a 3D graphical representation. This paper describes a tool developed for 3D visualization of cerebro-vascular structures. It also describes a symbolic approach to modeling vascular anatomy. The tool, called Ispline, is used to display the graphical information stored in a symbolic model of the vasculature. The vascular model was developed to assist image processing and image fusion. The model consists of a structural symbolic representation using frames and a geometrical representation of vessel shapes and vessel topology. Ispline has proved to be useful for visualizing both the synthetically constructed vessels of the symbolic model and the vessels extracted from a patient's MR angiograms.
We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.
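The reconstruction software itself is not described in the abstract; the sketch below only illustrates the generic freehand-3D idea of mapping each tracked 2D frame's pixels into a world-aligned volume using the position sensor's pose, with a hypothetical 4x4 image-to-world matrix per frame and simple nearest-voxel binning.

```python
import numpy as np

def reconstruct_volume(frames, poses, pixel_size_mm, volume_shape, voxel_size_mm, origin_mm):
    """Bin tracked 2D power Doppler frames into a 3D volume.

    frames: list of 2D arrays; poses: list of 4x4 image-to-world transforms
    from the position sensor; pixel_size_mm = (sx, sy). Nearest-voxel binning
    with averaging where several pixels land in the same voxel."""
    acc = np.zeros(volume_shape)
    hits = np.zeros(volume_shape)
    vsize = np.asarray(voxel_size_mm, dtype=float)
    origin = np.asarray(origin_mm, dtype=float)
    for img, T in zip(frames, poses):
        ny, nx = img.shape
        ys, xs = np.mgrid[0:ny, 0:nx]
        # Homogeneous pixel positions in the image plane (z = 0 in image space).
        pts = np.stack([xs.ravel() * pixel_size_mm[0],
                        ys.ravel() * pixel_size_mm[1],
                        np.zeros(xs.size),
                        np.ones(xs.size)])
        world = T @ pts                                   # 4 x N, in mm
        idx = np.round((world[:3].T - origin) / vsize).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(volume_shape)), axis=1)
        idx, vals = idx[ok], img.ravel()[ok]
        np.add.at(acc, (idx[:, 0], idx[:, 1], idx[:, 2]), vals)
        np.add.at(hits, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return np.divide(acc, hits, out=np.zeros_like(acc), where=hits > 0)
```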
We have designed and implemented a computer-based system for three-dimensional stereotactic planning of minimally invasive neurosurgical procedures. The system integrates rapid acquisition of digital medical images, segmentation, multi-modality registration, and three-dimensional planning capabilities. Emphasis on real-time planning is central to our system: imaging, pre-processing and planning are performed on the morning of surgery in clinically useful times. We have tested the system on procedures such as needle biopsies, depth electrode placements, pallidotomies, thalamotomies and craniectomies for arteriovenous malformations, aneurysms and tumors. We describe in this paper the core algorithms of our system, and discuss issues related to implementation, validation and user acceptance.
Thousands of radical prostatectomies for prostate cancer are performed each year. Radical prostatectomy is a challenging procedure due to anatomical variability and the adjacency of critical structures, including the external urinary sphincter and neurovascular bundles that subserve erectile function. Because of this, there are significant risks of urinary incontinence and impotence following this procedure. Preoperative interaction with three-dimensional visualization of the important anatomical structures might allow the surgeon to understand important individual anatomical relationships of patients. Such understanding might decrease the rate of morbidities, especially for surgeons in training. Patient specific anatomic data can be obtained from preoperative 3D MRI diagnostic imaging examinations of the prostate gland utilizing endorectal coils and phased array multicoils. The volumes of the important structures can then be segmented using interactive image editing tools and then displayed using 3-D surface rendering algorithms on standard workstations. Anatomic relationships can be visualized using surface displays and 3-D colorwash and transparency to allow internal visualization of hidden structures. Preoperatively a surgeon and radiologist can interactively manipulate the 3-D visualizations. Important anatomical relationships can better be visualized and used to plan the surgery. Postoperatively the 3-D displays can be compared to actual surgical experience and pathologic data. Patients can then be followed to assess the incidence of morbidities. More advanced approaches to visualize these anatomical structures in support of surgical planning will be implemented on virtual reality (VR) display systems. Such realistic displays are `immersive,' and allow surgeons to simultaneously see and manipulate the anatomy, to plan the procedure and to rehearse it in a realistic way. Ultimately the VR systems will be implemented in the operating room (OR) to assist the surgeon in conducting the surgery. Such an implementation will bring to the OR all of the pre-surgical planning data and rehearsal experience in synchrony with the actual patient and operation to optimize the effectiveness and outcome of the procedure.
The CIeMed electronic brain atlas system contains electronic versions of multiple paper brain atlases with 3D extensions; some other 3D brain atlases are under development. Its primary goal is to provide automatic labeling and quantification of brains. The atlas data are digitized, enhanced, color coded, labeled, and organized into volumes. The atlas system provides several tools for registration, 3D display and real-time manipulation, object extraction/editing, quantification, image processing and analysis, reformatting, anatomical index operations, and file handling. The two main stereotactic atlases provided by the system are electronic and enhanced versions of Atlas of Stereotaxy of the Human Brain by Schaltenbrand and Wahren and Co-Planar Stereotactic Atlas of the Human Brain by Talairach and Tournoux. Each of these atlases has its own strengths and their combination has several advantages. First, complementary information is merged and provided to the user. Second, the user can register data with a single atlas only, as the Schaltenbrand-Wahren-Talairach-Tournoux registration is data-independent. And last but not least, a direct registration of the Schaltenbrand-Wahren microseries with MRI data may not be feasible, since cerebral deep structures are usually not clearly discernible on MRI images. This paper addresses registration of the Schaltenbrand-Wahren and Talairach-Tournoux brain atlases. A modified proportional grid system transformation is introduced and suitable sets of landmarks identifiable in both atlases are defined. The accuracy of registration is discussed. Continuous navigation in the multi-atlas/patient data space is presented.
Image guided surgery is defined as invasive therapy that is partially or wholly guided by images of the part of the body undergoing treatment, produced by mechanical or electronic means. Since modern diagnostic scanners extensively use computers to produce images, surgery based on these images requires the use of cathode ray tube (CRT) monitors to display the images. Such monitors are not well suited for surgery. We describe the integration of a head mounted display (HMD) into a surgical localization system and our experience with use of the device in 8 patients undergoing intracranial procedures. Use of the HMD facilitates use of the localization system to delineate tumor tissue from the surrounding normal brain.
Registration of image space and physical space lies at the heart of any interactive, image-guided neurosurgery system. We have developed a localization technique that enables permanently implanted fiducial markers to be used for the registration of these spaces. Permanently implanted markers are desirable for surgical follow-up, monitoring of therapy efficacy, fractionated stereotactic radiosurgery and improved patient comfort over stereotactic frames. Bone-implanted extrinsic fiducial markers represent a potential long-term mechanism for registration. The major challenge to using implanted markers is the localization of the markers in physical space after implantation. We have developed and tested an A-mode ultrasound technique for determining the location of small cylindrical markers (3.7 mm in diameter, 3 mm in height). Accuracy tests were conducted on a phantom of a patient's head. The accuracy of the system was characterized by comparing the location of a marker analogue with an optically tracked pointer and with the ultrasound localization. Analyzing the phantom in several orientations revealed a mean system accuracy of 0.5 mm with a +/- 0.1 mm 95% confidence interval. These initial tests indicate that transcutaneous localization of implanted fiducial markers is possible with a high degree of accuracy. The results of these experiments will help in determining a final marker design.
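As a small worked example of the statistic reported above (not the authors' analysis code), the mean localization error and a normal-approximation 95% confidence interval on that mean could be computed as follows.

```python
import numpy as np

def localization_accuracy(ultrasound_pts, reference_pts):
    """Mean 3D distance between ultrasound-localized marker positions and the
    optically tracked reference positions, plus the half-width of a
    normal-approximation 95% confidence interval on that mean."""
    errors = np.linalg.norm(np.asarray(ultrasound_pts) - np.asarray(reference_pts), axis=1)
    mean = errors.mean()
    sem = errors.std(ddof=1) / np.sqrt(len(errors))
    return mean, 1.96 * sem

# A result of roughly (0.5, 0.1) would correspond to the figures quoted above.
```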
We have developed a 3D wavelet compression algorithm for medical images that achieves good reconstruction quality at high compression ratios. The algorithm applies a 3D wavelet transformation to a volume image set, followed by scalar quantization and entropy coding of the wavelet coefficients. We also implemented a parallel version of the 3D compression algorithm in a local area network environment. Multiple processors on different workstations on the network are utilized to speed up the compression or decompression process. The 3D wavelet transform has been applied to 3D MR volume images and the results are compared with those obtained using a 2D wavelet compression. Compression ratios achieved with the 3D algorithm are 40 - 90% higher than those of the 2D compression algorithm. The results of applying parallel computing to the 3D compression algorithm indicate that the efficiency of the parallel algorithm ranges from 80 - 90%.
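A minimal single-process sketch of the transform-quantize-reconstruct path is shown below using PyWavelets; the wavelet, decomposition level, and uniform quantization step are assumptions, and the entropy-coding and network-parallel stages of the actual algorithm are omitted.

```python
import numpy as np
import pywt

def compress_decompress(volume, wavelet="db4", level=3, q_step=8.0):
    """3D wavelet transform, uniform scalar quantization of all subbands,
    and reconstruction. A real codec would entropy code the quantized
    integers; here only the lossy reconstruction is returned."""
    coeffs = pywt.wavedecn(volume, wavelet, level=level)
    coeffs[0] = np.round(coeffs[0] / q_step) * q_step          # approximation band
    for detail in coeffs[1:]:                                  # detail bands per level
        for key in detail:
            detail[key] = np.round(detail[key] / q_step) * q_step
    return pywt.waverecn(coeffs, wavelet)

vol = np.random.rand(64, 64, 64).astype(np.float32) * 255
rec = compress_decompress(vol)[:64, :64, :64]                  # crop any padding
mse = float(np.mean((rec - vol) ** 2))
```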
An original 3D subband coding scheme based on a separable 3D wavelet transform is proposed. The 3D images (volumes) are produced with a new true 3D x-ray scanner called the `Morphometer.' The Morphometer can generate 256^3 discrete volumes with isotropic voxels of 356 microns. A separable 3D wavelet decomposes the original volume. A distortion minimization algorithm selects the best number of decompositions and chooses for each subvolume the most appropriate quantization approach.
The newly developed discrete wavelet transform (DWT) compression method is far superior to the previous full-frame discrete cosine transform (FFDCT) as well as the industry-standard JPEG. Due to its localization properties in both the spatial and transform domains, the quantization error introduced in the DWT does not propagate globally as it does in the FFDCT. The DWT is also a global technique that avoids JPEG-type block artifacts. As in all techniques, correlation among pixels makes compression possible. In volumetric image sets, such as CT and MR, inter-slice correlation can be exploited in addition to in-slice correlation. In this 3D DWT study, inter-slice correlation was also investigated for CT and MR image sets. Different numbers of slices are grouped together to perform the wavelet transform in the transaxial direction as a means of testing the relationship between correlation and compression efficiency. The 3D DWT was developed on a UNIX platform. A significantly higher compression ratio is achieved by compressing CT data as a volume rather than one slice at a time. The DWT is an excellent technique for exploiting inter-slice correlation to gain additional compression efficiency.
At the foreground of computerized radiology and the filmless hospital are the possibilities for easy image retrieval, efficient storage, and rapid image communication. This paper represents the authors' continuing efforts in compression research on the full-frame discrete wavelet transform (FFDWT) and the full-frame discrete cosine transform (FFDCT) for medical image compression. Prior to coding, it is important to evaluate the global entropy in the decomposed space, because maximum compression efficiency is achieved at minimum entropy. In this study, each image was split into a top three most-significant-bit (3MSB) image and a remaining remapped least-significant-bit (RLSB) image. The 3MSB image was compressed by an error-free contour coding and achieved an average of 0.1 bit/pixel. The RLSB image was transformed to either the multi-channel wavelet or the cosine transform domain for entropy evaluation. Ten x-ray chest radiographs and ten mammograms were randomly selected from our clinical database and used for the study. Our results indicate that the coding scheme in the FFDCT domain performs better than in the FFDWT domain for high-resolution digital chest radiographs and mammograms. From this study, we found that the decomposition efficiency in the DCT domain for relatively smooth images is higher than that in the DWT domain. However, both schemes worked equally well for low-resolution digital images. We also found that the image characteristics of the `Lena' image commonly used in the compression literature are very different from those of radiological images; the compression outcome for radiological images cannot be extrapolated from compression results based on `Lena.'
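The bit-plane split described above is simple to illustrate; in the toy sketch below the "remapping" is just keeping the remaining bits as an image, and the contour coder and transform coders are not shown.

```python
import numpy as np

def split_msb_rlsb(image, total_bits=12, msb_bits=3):
    """Split each pixel into its top `msb_bits` (to be coded losslessly,
    e.g. by contour coding) and the remaining least significant bits
    (to be sent to a transform coder)."""
    shift = total_bits - msb_bits
    msb = image >> shift                  # 3MSB image, values 0..7
    rlsb = image & ((1 << shift) - 1)     # remaining LSB image
    return msb, rlsb

def recombine(msb, rlsb, total_bits=12, msb_bits=3):
    return (msb << (total_bits - msb_bits)) | rlsb

img = np.random.randint(0, 4096, size=(256, 256), dtype=np.uint16)
m, r = split_msb_rlsb(img)
assert np.array_equal(recombine(m, r), img)   # the split is lossless
```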
This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.
The JPEG lossless compression technique uses pixel value prediction based on the nearest-neighbor pixel values. Usually a single predictor is used for the entire image. Recent work has shown that better compression performance can be achieved by choosing the predictors adaptively depending on the context of surrounding pixel or predictor values, but this method is computationally expensive and memory intensive. In mammograms the image content can be separated into three distinct visual classes: background, smooth, and textured, corresponding to three classes of predictors available in JPEG. This paper discusses an approach that exploits these classes directly for predictor choice.
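For reference, the seven JPEG lossless predictors operate on the left (a), above (b), and above-left (c) neighbors. A per-pixel choice driven by a class map, roughly in the spirit of the approach described above, might look like the sketch below; the class map and the class-to-predictor assignment are assumed inputs, and image borders are left unpredicted for brevity.

```python
import numpy as np

# The seven JPEG lossless predictors; a = left, b = above, c = above-left.
JPEG_PREDICTORS = {
    1: lambda a, b, c: a,
    2: lambda a, b, c: b,
    3: lambda a, b, c: c,
    4: lambda a, b, c: a + b - c,
    5: lambda a, b, c: a + (b - c) // 2,
    6: lambda a, b, c: b + (a - c) // 2,
    7: lambda a, b, c: (a + b) // 2,
}

def residuals_with_class_map(image, class_map, class_to_predictor):
    """Prediction residuals with the predictor chosen per pixel from a class
    map (e.g. background / smooth / textured) rather than one global
    predictor; `class_to_predictor` maps a class label to a predictor id."""
    img = image.astype(np.int32)
    res = np.zeros_like(img)
    for y in range(1, img.shape[0]):
        for x in range(1, img.shape[1]):
            a, b, c = img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]
            pred = JPEG_PREDICTORS[class_to_predictor[class_map[y, x]]]
            res[y, x] = img[y, x] - pred(a, b, c)
    return res
```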
Despite over a decade of research and development, medical image compression has not yet been widely implemented on clinical picture archiving and communication systems (PACS). We have developed a prototype interface which incorporates both lossless and lossy compression into a browsing system that enables the efficient use of network and storage resources. Such a system allows a user to quickly browse through a large set of image icons created by lossy compression and selectively retrieve the original images for diagnosis from the optical disk that contains the losslessly compressed image data. For lossless compression, we implemented modality-specific techniques that combine preprocessing, adaptive prediction and entropy coding, giving a compression improvement of 20% over the JPEG predictors. The lossy compression algorithm consists of subsampling followed by wavelet transform coding and achieves compressed CR images of sufficient quality for browsing at a compression ratio of about 2000:1.
Image data compression can be useful for storage or transmission of cardiac angiograms. In clinical systems, images are recorded in a `raw' format and are usually processed with an edge-enhancement filter to improve the visibility of medical information. The raw images are needed for other processing, including quantitative measurements, and their enhanced version is used for display. We report on a compression scheme based on a full-frame DCT which allows the integration of enhancement in the codec. We investigated whether the raw or the enhanced image should be compressed. We studied an inverse filter and integrated it in the decompression process, so that a non-enhanced image can be derived after enhancement and compression. The de-enhancement filter acts as a low pass filter for the quantization noise. We propose to improve the inverse filter using a regularized signal restoration technique.
In previous papers, we presented an overlapped transform-coding method for efficient data compression of medical x-ray image series. In this paper, we address two improvements to this method. First, we had so far applied the method to so-called enhanced x-ray images, i.e., images in which the middle and high frequencies have been emphasized; here we explain how to code raw instead of enhanced images, under the constraint that enhancement of the coded raw images does not lead to clearly visible coding artefacts. Second, the coding artefacts introduced at high compression ratios by the previously published method are more clearly visible in the dark than in the bright areas of an image; we explain what provisions can be added to our data-compression system to achieve a better balance between the perceptual image quality in dark and bright areas.
We present a predictive learning tree-structured vector quantization technique for medical image compression. A multi-layer perceptron (MLP) based vector predictor is employed to remove first- as well as higher-order correlations that exist among neighboring pixels. We use a learning tree-structured vector quantization (LTSVQ) scheme, based on a competitive learning (CL) algorithm, to encode the residual vector. The LTSVQ algorithm is computationally very efficient, easy to implement, and provides performance comparable to that of the LBG (Linde, Buzo and Gray) algorithm. We use computerized image analysis (image segmentation) as well as mean square error (MSE) and signal-to-noise ratio (SNR) to evaluate the quality of the compressed images. We apply the neural network based predictive LTSVQ to mammographic and magnetic resonance (MR) images, and evaluate the quality of images at different compression ratios.
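The tree structure and the MLP predictor are not reproduced here, but the competitive-learning codebook training underlying LTSVQ can be sketched as a flat CL vector quantizer; the learning-rate schedule and initialization below are assumptions.

```python
import numpy as np

def train_cl_codebook(vectors, codebook_size=64, epochs=10, lr0=0.1, seed=0):
    """Competitive-learning VQ: for each training vector, move the closest
    ("winning") codeword a little toward it, with a decaying learning rate.
    `vectors` is an (N, d) array of residual vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / float(epochs))
        for v in vectors[rng.permutation(len(vectors))]:
            winner = np.argmin(np.sum((codebook - v) ** 2, axis=1))
            codebook[winner] += lr * (v - codebook[winner])
    return codebook

def encode(vectors, codebook):
    """Index of the nearest codeword for each vector."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d, axis=1)
```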
To make the transition from film to CRT viewing of radiologic images, it is necessary to fully understand what the viewer requires in order to make a confident diagnostic decision. As a preliminary step to installing an image display workstation in our neonatal (NICU) and pediatric (PICU) ICU areas, a requirements analysis was conducted. Interviews were conducted to determine what would be desired in a display workstation, and detailed observations were made of daily procedures in the pediatric and neonatal ICUs. Portable diagnostics (i.e., CR images) constitute the greatest number of images taken. Very few images from other modalities are taken on a regular basis, although traditional film images are taken somewhat frequently. The data indicate that the majority of PICU and NICU images which are of concern to the attending ICU clinicians (i.e., CR) would be available directly for softcopy display on a workstation. A workstation in the radiology reading room would, however, require access to all possible types of images.
This work considers the use of digital halftones in the display of medical images. One might assume that the use of halftone rendering (as opposed to continuous-tone image rendering) will degrade the information in medical images; it is therefore interesting to study what degree of degradation is unacceptable in medical images. We analyze various halftoning techniques quantitatively by first generating low-contrast detail diagrams (CDDs) made to represent computed tomography (CT), magnetic resonance (MR), and ultrasound (US) modality images. These are then halftoned and printed using error diffusion, Bayer's method, blue noise mask, and centered weighted dots. The contrast areas in the diagram are randomly placed on a 5 X 5 grid. A single observer is used to determine the minimum-contrast `lesion' that could be observed. The results for minimum detectable contrast depend on resolution (dots per inch), modality, and halftoning technique. It is shown that acceptable halftone rendering, with small degradation, can be achieved under certain conditions.
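Of the halftoning methods compared above, error diffusion is the simplest to sketch; a minimal Floyd-Steinberg variant (one common error-diffusion kernel, used here only as an illustration) is shown below.

```python
import numpy as np

def floyd_steinberg(image):
    """Binary halftone of a grayscale image in [0, 1] by Floyd-Steinberg
    error diffusion: threshold each pixel and push the quantization error
    onto its right and lower neighbors."""
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```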
The challenge in providing picture archiving and communication systems (PACS) in the modern managed care environment is to justify their cost while still providing the required services. The only solution which achieves the economic goal is the elimination of film and its associated costs in favor of PACS. Recognizing that some hardcopy will always be desired, an acceptable, inexpensive paper print alternative is required. Our initial networked paper-print solution has replaced all film use in nuclear medicine while still providing economical color and monochrome paper output when required. Extensions of this print architecture to ultrasound, CT and MR are planned for the near future.
Initial measurements of certain display characteristics of a number of cathode ray tubes (CRTs) from different manufacturers are presented. Our aim was to compare the performance of these devices with respect to several critical parameters that affect the display of medical images. We designed a custom CCD imaging system to capture these display characteristics as images. Our CCD imaging system is reasonably portable so that it can travel to a clinical setting. Since a thorough calibration of the CCD imaging system has not been completed, the results are reported for comparison purposes only. It was observed that most of the gray scale CRTs that we have studied have similar performance. Results obtained when a color CRT (shadow mask tube) is used to display a gray scale image are also presented. Two sections of the lung region of a digitally acquired computed radiography (CR) PA view chest image were presented on soft-copy and hard-copy displays. Images of these displays were acquired using the CCD imaging system and are presented in this paper.
The CRT is commonly used to display digital medical image data. The brightness as a function of input signal is different for each CRT, is nonlinear and poorly matched to the human eye's perception of brightness change, decreases over time, and is easily changed with hardware contrast and brightness adjustments. Previous studies have suggested display brightness functions which are based on human visual experiments and which produce the same contrast perception for small display value changes at all brightness levels. The best scale depends on CRT luminance and scene-dependent variables. We have developed an X window based client-server approach to maintain perceptually linear display scales using unique transformation tables for many display devices. Color use on X window systems is described. Perceptually linear human visual models are described. Finally we present a method to implement and maintain these models on a networked collection of X displays.
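A minimal sketch of building such a transformation table is shown below, assuming a measured luminance-versus-driving-level curve and approximating the perceptual response with log luminance; the published human visual models and the X window client-server machinery are not reproduced here.

```python
import numpy as np

def perceptual_lut(measured_luminance, n_input_levels=256):
    """Build a lookup table so that equal input steps give approximately equal
    perceived-brightness steps. `measured_luminance[k]` is the luminance
    (cd/m^2) measured at display driving level k."""
    lum = np.asarray(measured_luminance, dtype=float)
    perceived = np.log(lum)        # crude perceptual model, a stand-in for a full HVS model
    target = np.linspace(perceived[0], perceived[-1], n_input_levels)
    # For each desired perceptual level, find the driving level that produces it.
    lut = np.interp(target, perceived, np.arange(len(lum)))
    return np.round(lut).astype(int)

# Example: a display whose luminance roughly follows a gamma-2.2 curve.
levels = np.arange(256)
lum = 0.5 + 170.0 * (levels / 255.0) ** 2.2
lut = perceptual_lut(lum)          # image value -> driving level
```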
Picture archiving and communication systems (PACS) use large numbers of video monitors distributed throughout a hospital to display images to physicians. While luminance transfer characteristics of monitors may be measured in some institutions, resolution and other image quality factors are frequently ignored, except in research settings. This work centered on developing a quality control tool and identifying a set of measurements with which to measure display image quality. The tool consisted of an EG&G Gamma Scientific telemicroscope on a translation table, controlled by a portable computer and positioned on a movable cart. With this tool, we were able to make highly accurate measurements on monitors at multiple locations in the hospital. We settled on measurements consisting of spatial resolution, luminance uniformity, and stability of the display function under different conditions. We concentrated on three pairs of high quality monitors, each pair used together in a clinical or research setting.
Medical images are increasingly being presented on soft-copy displays such as CRTs but without, in our opinion, consistent visualization of the medical image data. Reaction to earlier calls for implementing a display standard for medical images has been slow. This has prompted us to write a tutorial which we hope will accelerate the acceptance of standardized image presentation on soft-copy displays in electronic radiology. The types of medical images and their visualization (luminance tone scale and dynamic range) are discussed. The impact of ambient lighting on the observed tone scale is also analyzed. Since the human observer is the detector of medical images, we review the critical parameters that characterize the human visual system, HVS [H. Blume, S. Daly, and E. Muka, 'Presentation of Medical Images on CRT Displays -- A Renewed Proposal for a Display Function Standard,' Proc. SPIE Vol. 1897, Image Capture, Formatting, and Display, pp. 215 - 231 (1993)]. We provide additional information regarding the proposed mathematical representation of the HVS `display function' and threshold contrast modulation and show how they are related. We discuss the differences between the desired visualization of a set of medical data versus the display function of a soft-copy display such as a CRT. To facilitate the objective that images can be consistently rendered, we repeat our call for a standardized display function for soft-copy displays and believe that it should be based on the HVS. We discuss which medical image data should be perceptually linearized throughout the medical data dynamic range. These points are demonstrated using typical CT images and digitized projection radiographs presented with different gray scales.
The high fidelity display of digital medical radiographs requires devices with high detail (3000 X 3500 arrays, 120-micrometer pixels), high brightness (2000 cd/m2), and high dynamic range (400). Medical radiographic film meets these requirements when transilluminated with bright illuminators. Currently available electronic displays using CRT technology are not able to provide the needed fidelity. New flat panel emissive display technologies offer potential solutions, particularly plasma displays and vacuum microelectronic displays. The NCAICM at Sandia National Laboratories is focusing particular attention on emissive display technologies. Commercial flat panel color systems with improved fidelity are now becoming available. Static, monochrome designs using vacuum microelectronic technology offer the potential to provide a high fidelity display for medical radiography which meets these requirements.
The Telemedicine Spacebridge to Moscow was a series of intercontinental sessions sponsored jointly by NASA and the Moscow Academy of Medicine. To improve the quality of medical images presented, the MDIS Project developed a workstation for acquisition, storage, and interactive display of radiology and pathology images. The workstation was based on a Macintosh IIfx platform with a laser digitizer for radiographs and video capture capability for microscope images. Images were transmitted via the Russian Lyoutch Satellite, which had only a single video channel available and no high speed data channels. Two workstations were configured -- one for use at the Uniformed Services University of Health Sciences in Bethesda, MD, and the other for use at the Hospital of the Interior in Moscow, Russia. The two workstations were used many times during 16 sessions. As clinicians used the systems, we modified the original configuration to improve interactive use. This project demonstrated that numerous acquisition and output devices could be brought together in a single interactive workstation. The video images were satisfactory for remote consultation in a grand rounds format.
Building on the success of the MediaStation 5000 (MS5000) multimedia system, which was developed in our laboratory in 1994, we have developed a prototype telemedicine workstation that is programmable and supports high-bandwidth telecommunications links to connect many medical treatment facilities. The system can support various telemedicine and consultation functions to collaboratively transfer, manipulate, and view radiological images, image sequences, audio, and video. The requirements for a telemedicine workstation include high performance, flexibility, and upgradability. The unique components of our workstation include an advanced parallel processor, highly integrated multimedia support circuitry, a high-speed network interface, and a graphical user interface. Each of these is a key ingredient of a successful telemedicine system.
Intracranial aneurysms are the primary cause of non-traumatic subarachnoid hemorrhage. Morbidity and mortality remain high even with current endovascular intervention techniques. It is presently impossible to identify which aneurysms will grow and rupture; however, hemodynamics are thought to play an important role in aneurysm development. With this in mind, we have simulated blood flow in laboratory animals using three-dimensional computational fluid dynamics software. The data output from these simulations is three-dimensional, complex, and transient. Visualization of 3D flow structures with a standard 2D display is cumbersome, and may be better performed using a virtual reality system. We are developing a VR-based system for visualization of the computed blood flow and stress fields. This paper presents the progress to date and future plans for our clinical VR-based intervention simulator. The ultimate goal is to develop a software system that will be able to accurately model an aneurysm detected on clinical angiography, visualize this model in virtual reality, predict its future behavior, and give insight into the type of treatment necessary. An associated database will give historical and outcome information on prior aneurysms (including dynamic, structural, and categorical data) that will be matched to any current case, and assist in treatment planning (e.g., natural history vs. treatment risk, surgical vs. endovascular treatment risks, cure prediction, complication rates).
Medical imaging applications have growing processing requirements, and scalable multicomputers are needed to support these applications. Scalability -- performance speedup equal to the increased number of processors -- is necessary for a cost-effective multicomputer. We performed tests of performance and scalability on one through 16 processors on a RACE multicomputer using Parallel Application System (PAS) software. Data transfer and synchronization mechanisms introduced a minimum of overhead to the multicomputer's performance. We implemented magnetic resonance (MR) image reconstruction and multiplanar reformatting (MPR) algorithms, and demonstrated high scalability; the 16-processor configuration was 80% to 90% efficient, and the smaller configurations had higher efficiencies. Our experience is that PAS is a robust and high-productivity tool for developing scalable multicomputer applications.
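As a small worked example of the scalability figures quoted above (not the PAS measurement harness itself), speedup and parallel efficiency follow directly from run times.

```python
def speedup_and_efficiency(t_serial, t_parallel, n_processors):
    """Speedup = T(1) / T(n); efficiency = speedup / n. An efficiency of
    80-90% on 16 processors means T(16) lies roughly between
    T(1)/(16*0.9) and T(1)/(16*0.8)."""
    speedup = t_serial / t_parallel
    return speedup, speedup / n_processors

# e.g. a hypothetical 100 s reconstruction that takes 7.4 s on 16 processors:
s, e = speedup_and_efficiency(100.0, 7.4, 16)   # ~13.5x speedup, ~84% efficiency
```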
A cardiology review station for clinical use has been developed which can decompress, zoom, and display full resolution JPEG-encoded cardiac angiograms and echocardiograms in real-time (30 frames/second). The review station is installed in a network which includes a digital image archival system throughout Duke University Medical Center. The review station consists of a DEC 3000/600 AXP workstation with a DEC J300 sight and sound multimedia board. The J300 is used to decompress up to 45 true color JPEG-encoded frames/sec. at normal (512 X 512) resolution and 30 frames/sec. at 1.5 X zoom. Digitally acquired 512 X 512 X 8 bit monochrome angiograms as well as 600 X 430 X 8 bit color and monochrome echocardiograms are transferred to the workstation where they are JPEG-encoded in real time using the J300 board. A compression factor of approximately 15:1 is being used. A graphical user interface (GUI) developed using OSF/Motif 1.2 enables a clinical user to simultaneously display and control several image sequences. A sequence can be retrieved in under three seconds and displayed dynamically in forward or reverse directions with instantaneous speed control. Utilizing a commercial relational database (Sybase), the GUI organizes image sequences for a patient by image modality, location, time, and view. A schematic representation of cardiac anatomy allows a user to view specific angiographic image sequences by selecting appropriate objects in the anatomic diagram.
This paper describes the architecture and capabilities of a computer-aided neurodiagnosis workstation prototype developed to integrate a picture archiving and communication system (PACS), automated image registration, and volumetric visualization to aid the non-invasive workups of epilepsy patients. This new approach of marrying volume visualization and image registration with a large scale PACS archive would make a significant potential contribution to neurological diagnoses and pre-operative planning. This medical workstation can access and analyze multimodal brain images and patient records archived in the standardized PACS. Brain imaging modalities currently under study include magnetic resonance imaging, positron emission tomography, magnetic resonance spectroscopy, and magnetoencephalography. The graphical user interface (GUI) is written on top of the popular X-window environment. It enables the physician to extract functional and structural information from multimedia patient data stored in PACS and textual information systems, as well as archive them to a remote database server for future image indexing.
With the development of UWGSP7, the University of Washington Image Computing Systems Laboratory has a real-time workstation for continuous-wave (cw) optical reflectance imaging. Recent discoveries in optical science and imaging research have suggested potential practical use of the technology as a medical imaging modality and identified the need for a machine to support these applications in real time. The UWGSP7 system was developed to provide researchers with a high-performance, versatile tool for use in optical imaging experiments, with the eventual goal of bringing the technology into clinical use. One of several major applications of cw optical reflectance imaging is tumor imaging, which uses a light-absorbing dye that preferentially sequesters in tumor tissue. This property could be used to locate tumors and to identify tumor margins intraoperatively. Cw optical reflectance imaging consists of illuminating a target with a band-limited light source and monitoring the light transmitted by or reflected from the target. While the target is continuously illuminated, a control image is acquired and stored. A dye is injected into the subject, and a sequence of data images is acquired and processed. The data images are aligned with the control image and then subtracted to obtain a signal representing the change in optical reflectance over time. This signal can be enhanced by digital image processing and displayed in pseudo-color. This type of emerging imaging technique requires a computer system that is versatile and adaptable. The UWGSP7 uses a VESA local bus PC as a host computer running the Windows NT operating system and includes ICSL-developed add-on boards for image acquisition and processing. The image acquisition board digitizes and formats the analog signal from the input device into digital frames and averages frames into images. To accommodate different input devices, the camera interface circuitry is designed on a small mezzanine board that supports the RS-170 standard. The image acquisition board is connected to the image-processing board through a direct connect port that provides a 66 Mbytes/s channel independent of the system bus. The image processing board uses the Texas Instruments TMS320C80 Multimedia Video Processor chip. This chip is capable of 2 billion operations per second, giving the UWGSP7 the capability to perform real-time image processing functions such as median filtering, convolution, and contrast enhancement. This processing power allows interactive analysis of experiments, as compared with the current practice of off-line processing and analysis. Because of its flexibility and programmability, the UWGSP7 can be adapted to various research needs in intraoperative optical imaging.
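The processing chain described above (average frames into images, subtract the pre-injection control image, display the change in pseudo-color) can be sketched in a few lines. The sketch below is schematic: registration is reduced to a placeholder step, and all array contents are synthetic; it is not the UWGSP7 firmware.

```python
# Schematic sketch of the cw reflectance processing chain described above:
# average frames into images, subtract the pre-injection control image from
# each data image, and map the difference to a pseudo-color display range.
# Registration is reduced to an identity step here; real alignment would be
# more involved. All array contents are synthetic.
import numpy as np

def average_frames(frames: np.ndarray) -> np.ndarray:
    """Average a stack of digitized frames (T, H, W) into one image."""
    return frames.mean(axis=0)

def reflectance_change(control: np.ndarray, data: np.ndarray) -> np.ndarray:
    """Change in optical reflectance relative to the control image."""
    aligned = data  # placeholder for the alignment step
    return aligned - control

def to_pseudocolor_index(delta: np.ndarray, levels: int = 256) -> np.ndarray:
    """Scale the signed difference into an index for a pseudo-color LUT."""
    lo, hi = delta.min(), delta.max()
    scaled = (delta - lo) / (hi - lo + 1e-12)
    return (scaled * (levels - 1)).astype(np.uint8)

rng = np.random.default_rng(0)
control = average_frames(rng.normal(100.0, 2.0, size=(16, 64, 64)))
data = average_frames(rng.normal(100.0, 2.0, size=(16, 64, 64)))
data[20:40, 20:40] -= 5.0                      # simulated dye uptake region
index_image = to_pseudocolor_index(reflectance_change(control, data))
print(index_image.shape, index_image.dtype)
```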
Computer-aided diagnosis (CADx) is a promising technology for the detection of breast cancer in screening mammography. A number of different approaches that achieve significant levels of performance have been developed in CADx research. Research teams now recognize the need for a careful and detailed evaluation study of these approaches to accelerate the development of CADx, to make CADx more clinically relevant, and to optimize CADx algorithms based on unbiased evaluations. The results of such a comparative study may provide each of the participating teams with new insights into the optimization of their individual CADx algorithms. This consortium of experienced CADx researchers is working as a group to compare the results of the algorithms and to optimize the performance of CADx algorithms by learning from each other. Each institution will contribute an equal number of cases collected under a standard protocol for case selection, truth determination, and data acquisition to establish a common and unbiased database for the evaluation study. An evaluation procedure for the comparison studies is being developed to analyze the results of individual algorithms for each of the test cases in the common database. Optimization of individual CADx algorithms can then be made based on the comparison studies. The consortium effort is expected to accelerate the eventual clinical implementation of CADx algorithms at participating institutions.
A high resolution mammographic display station is implemented for clinical diagnosis and for a digital teaching file. The display consists of a specially designed, high resolution mammographic station which contains a connection to a 50 micron (variable spot size) laser film digitizer, two 2 K X 2.5 K display monitors, an image processor, a host computer, and a disk array for high speed image transfer to the display monitors. After digitization on a separate host computer, the files are immediately transferred to the display station and post-processed for viewing. The algorithm for post-processing of the digitized image applies a non-linear LUT to mimic the original film characteristics while taking into account the luminosity of the display monitors in an attempt to produce the highest digital image quality possible. Image processing functions for enhancing calcification and soft tissue are also available to assist the human observer in classification of objects within the image. Windowing and level controls are seamlessly integrated for each monitor, as well as magnification capabilities. For an image display at its full resolution (e.g., digitized at 100 microns), the magnification is accomplished with a roaming window utilizing simple 2X pixel replication. This has been found to be acceptable in preliminary tests with clinicians. Measurements of features on the 2 K displays are possible, as well. The display format accurately simulates mammographic viewing arrangements with automatic side-by-side historical, current, left and right craniocaudal, mediolateral, etc., view comparisons. This high resolution mammographic display is found to be essential for fast and accurate display of high resolution digitized mammograms. A digital mammographic teaching file has been designed and tested using this display architecture. The teaching file presents the case questions on the host display monitor, and the related images for each question are presented on the high resolution displays. The full functionality of the aforementioned high resolution mammogram display is held intact, so that the images can be examined with the full range of tools available: image processing, magnification, window/level control, and feature measurement.
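Two of the display steps mentioned above, the non-linear look-up table and the 2X pixel-replication magnification, are simple to illustrate. The specific S-shaped curve below is an assumption of this sketch; the paper's film-mimicking LUT is not given in the abstract.

```python
# Minimal sketch of the two display steps mentioned above: a non-linear
# look-up table applied to the digitized mammogram, and 2X pixel replication
# for the roaming magnification window. The specific sigmoid used here is an
# assumption; the paper's film-mimicking curve is not specified in the abstract.
import numpy as np

def film_like_lut(bits_in: int = 12, bits_out: int = 8) -> np.ndarray:
    """Build a sigmoid LUT mapping digitizer values to display values."""
    x = np.linspace(0.0, 1.0, 2 ** bits_in)
    y = 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))      # assumed S-shaped response
    y = (y - y.min()) / (y.max() - y.min())
    return (y * (2 ** bits_out - 1)).astype(np.uint8)

def replicate_2x(tile: np.ndarray) -> np.ndarray:
    """2X magnification by simple pixel replication (no interpolation)."""
    return np.repeat(np.repeat(tile, 2, axis=0), 2, axis=1)

lut = film_like_lut()
image12 = np.random.default_rng(1).integers(0, 4096, size=(256, 256))
display8 = lut[image12]                              # LUT applied per pixel
roi = display8[100:164, 100:164]                     # roaming window contents
print(replicate_2x(roi).shape)                       # -> (128, 128)
```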
Computed tomography using cone-beam x rays generates a great amount of image data, and reconstruction of the three-dimensional (3D) image requires considerable processing power. We have developed a parallel computer utilizing up to 64 processors that performs the 3D reconstruction algorithm very quickly; the maximum processing power is 3.2 GFLOPS. Data acquisition is done by a video camera at video rate. To accommodate the large amount of image data, we also developed a frame buffer that can be used as shared memory by all of the parallel processors.
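One common way to distribute cone-beam reconstruction over many processors is to give each processor its own slab of the output volume while all of them read projections from a shared frame buffer. The sketch below only illustrates that partitioning idea with a trivial stand-in for backprojection; it is not the reconstruction algorithm running on the machine described above.

```python
# Sketch of one common way to parallelize cone-beam backprojection: each
# worker backprojects all views into its own slab of the volume, reading
# projections from a shared frame buffer. The "backprojection" below is a
# trivial accumulation stand-in so the partitioning logic stays visible.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

VOLUME_Z, NPROC = 64, 4

def slab_bounds(z_total: int, nproc: int):
    """Split z indices [0, z_total) into nproc contiguous slabs."""
    edges = np.linspace(0, z_total, nproc + 1).astype(int)
    return list(zip(edges[:-1], edges[1:]))

def backproject_slab(args):
    (z0, z1), projections = args
    slab = np.zeros((z1 - z0, 64, 64))
    for p in projections:              # placeholder for filtered backprojection
        slab += p[np.newaxis, :, :]
    return z0, slab

if __name__ == "__main__":
    projections = np.random.default_rng(6).random((8, 64, 64))  # frame buffer
    volume = np.zeros((VOLUME_Z, 64, 64))
    work = [(b, projections) for b in slab_bounds(VOLUME_Z, NPROC)]
    with ProcessPoolExecutor(max_workers=NPROC) as pool:
        for z0, slab in pool.map(backproject_slab, work):
            volume[z0:z0 + slab.shape[0]] = slab
    print(volume.shape)
```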
A method is proposed for a pseudo-colored, smooth representation of evaluated EEG parameters on a three-dimensional reconstruction of a proband's cortex. The EEG data are obtained through standard measurements and are subsequently Fourier analyzed in order to transform them into parameters representing the signals' power and coherence changes with respect to the averaged EEG at rest. The morphological data for the 3D reconstruction of the brain are obtained from MRI scans of the head. The three-dimensional reconstruction of the cortex is achieved by means of a gray-level gradient shading method. During rendering, each brain surface voxel (volume element) is associated with a suitable parameter value determined through an inverse distance-weighted interpolation scheme from the values evaluated for its neighbors in the Delaunay triangulation mesh between the electrodes. Following the interpolation, the calculated surface value is mapped into the HSV color space in order to achieve an expressively colored brain surface with clearly perceptible, distinct activation regions and smooth transitions between them. The presented method provides a possibility for direct, visual comparisons of the activated brain regions of a healthy individual.
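The two numerical ingredients of the coloring step, inverse distance weighting of electrode values and a mapping of the result through HSV, can be sketched as follows. The electrode positions, parameter values, and blue-to-red hue ramp are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the surface-coloring idea described above: each surface point gets
# an inverse-distance-weighted combination of the parameter values at nearby
# electrodes, and the result is mapped through HSV to an RGB color.
import colorsys
import numpy as np

def idw(point: np.ndarray, sites: np.ndarray, values: np.ndarray,
        power: float = 2.0) -> float:
    """Inverse distance weighted interpolation at one surface point."""
    d = np.linalg.norm(sites - point, axis=1)
    if np.any(d < 1e-9):                       # point coincides with a site
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

def value_to_rgb(v: float, v_min: float, v_max: float) -> tuple:
    """Map a scalar to a blue-to-red hue ramp in HSV, return RGB in [0, 1]."""
    t = (v - v_min) / (v_max - v_min + 1e-12)
    hue = (1.0 - t) * (240.0 / 360.0)          # 240 deg (blue) .. 0 deg (red)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

electrodes = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
power_change = np.array([-0.2, 0.5, 1.1])      # synthetic EEG parameters
surface_point = np.array([0.4, 0.3, 1.0])
v = idw(surface_point, electrodes, power_change)
print(v, value_to_rgb(v, power_change.min(), power_change.max()))
```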
The graphic locator is an application for planning MRI image acquisitions. The planned protocol, represented by a three-dimensional model, is drawn relative to previously acquired images, and can be manipulated while displayed in several images simultaneously. We describe visualization methods to accurately position acquisitions within anatomic volumes, though the techniques apply to any volumetric application.
In neurosciences, 3D renderings of the human brain cortex based on MR tomographical measurements are often used to study cortical structures, their similarity or variability, or to depict surface distribution of a given physical quantity. We have developed a method for producing maps of the human cortex depicting the complete brain surface in one view. The mapping is based on casting rays normal to the skin surface of the head. The projection surface is then remapped to the plane. An analytical model of the head consisting of four Bezier patches is used for generating the normal rays. The contribution describes the structure of the model and its computation, the projection geometry of the mapping, and the details of the rendering phase. Examples of possible applications of the method are presented.
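The analytic head model above is built from bicubic Bezier patches, and the rays are cast along the surface normals of that model. As a minimal illustration of that ingredient, the sketch below evaluates a bicubic Bezier patch and its unit normal (cross product of the partial derivatives); the control points are an arbitrary example, not the four-patch head model from the paper.

```python
# Evaluate a bicubic Bezier patch and its surface normal, the quantity that
# normal-ray generation from such an analytic model would be based on.
import numpy as np

def bernstein3(t: float) -> np.ndarray:
    return np.array([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t), t ** 3])

def bernstein3_deriv(t: float) -> np.ndarray:
    return np.array([-3 * (1 - t) ** 2, 3 * (1 - t) ** 2 - 6 * t * (1 - t),
                     6 * t * (1 - t) - 3 * t ** 2, 3 * t ** 2])

def patch_point_and_normal(ctrl: np.ndarray, u: float, v: float):
    """ctrl has shape (4, 4, 3); returns surface point and unit normal."""
    bu, bv = bernstein3(u), bernstein3(v)
    du, dv = bernstein3_deriv(u), bernstein3_deriv(v)
    point = np.einsum('i,j,ijk->k', bu, bv, ctrl)
    tang_u = np.einsum('i,j,ijk->k', du, bv, ctrl)
    tang_v = np.einsum('i,j,ijk->k', bu, dv, ctrl)
    normal = np.cross(tang_u, tang_v)
    return point, normal / np.linalg.norm(normal)

# Gently bulging example patch: heights over a 4x4 grid of control points.
grid = np.stack(np.meshgrid(np.linspace(0, 3, 4), np.linspace(0, 3, 4),
                            indexing='ij'), axis=-1)
heights = np.array([[0, 1, 1, 0], [1, 2, 2, 1], [1, 2, 2, 1], [0, 1, 1, 0]])
ctrl = np.concatenate([grid, heights[..., None]], axis=-1).astype(float)
print(patch_point_and_normal(ctrl, 0.5, 0.5))
```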
In previous papers, we presented an overlapped transform-coding method for efficient data compression of medical x-ray image series. The proposed method is a lossy compression method. The distortion that is introduced by the compression is determined by the step size with which the transform coefficients are quantized. The number of bits produced per image depends on the amount of detail in the image. In principle, highly detailed images produce higher bit rates than less detailed images. For applications in which only a small number of images are recorded, the amount of time needed to store or transmit these images may not be an essential factor. In that case, some fluctuation in the bit rate may be tolerable. But for applications in which image series have to be stored or transmitted in real time, a constant bit rate is often preferred. In this paper, we explain how compression at a constant bit rate can be achieved. We propose a bit-rate control method that is capable of realizing a constant number of bits per image with a homogeneous distribution of the quantization errors over the coded image.
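The core idea, adjusting the quantizer step size so that each image produces (close to) a fixed number of bits, can be illustrated with a toy control loop. The bit-production model and the damped multiplicative correction below are generic stand-ins, not the coder or control law from the paper.

```python
# Generic illustration of per-image bit-rate control: adjust the quantizer
# step size until the number of bits produced for the image falls close to
# the target budget.

def bits_produced(step_size: float, image_activity: float) -> float:
    """Toy model: more detail and finer quantization -> more bits."""
    return image_activity / step_size

def control_step_size(target_bits: float, image_activity: float,
                      step: float = 1.0, iters: int = 20) -> float:
    """Multiplicatively adjust the step size toward the target bit budget."""
    for _ in range(iters):
        ratio = bits_produced(step, image_activity) / target_bits
        step *= ratio ** 0.5                    # damped correction
    return step

for activity in (2.0e6, 8.0e6):                 # low- vs high-detail image
    q = control_step_size(target_bits=1.0e6, image_activity=activity)
    print(f"activity {activity:.1e}: step {q:.3f}, "
          f"bits {bits_produced(q, activity):.3e}")
```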
We have developed a fast, reversible compression algorithm based on wavelet-like filter banks and arithmetic coding (AC) for primary diagnostic teleradiology using inexpensive, off-the-shelf hardware. This new method offers routine 2:1 to 4:1 compression ratios while requiring only a few seconds of processing time on a microcomputer. This compression performance will make affordable primary diagnostic teleradiology possible over standard phone lines or ISDN.
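The abstract does not spell out the filter bank, so the sketch below uses the integer Haar (S) transform as a stand-in: a minimal example of the kind of perfectly reversible subband step such lossless coders are built on, shown here with an exact round-trip check.

```python
# Integer Haar (S) transform: a minimal example of a perfectly reversible
# subband decomposition, used here only to illustrate losslessness. This is
# a stand-in, not the paper's filter bank or arithmetic coder.
import numpy as np

def s_transform_forward(x: np.ndarray):
    """Split a 1-D signal of even length into integer low/high subbands."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    high = a - b
    low = b + (high >> 1)        # floor average, kept as an integer
    return low, high

def s_transform_inverse(low: np.ndarray, high: np.ndarray) -> np.ndarray:
    """Exactly invert the forward transform."""
    b = low - (high >> 1)
    a = high + b
    out = np.empty(low.size + high.size, dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

x = np.random.default_rng(2).integers(0, 4096, size=256)   # 12-bit samples
lo, hi = s_transform_forward(x)
assert np.array_equal(s_transform_inverse(lo, hi), x)       # lossless
print("round-trip exact:", True)
```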
Utilizing advances in camera technology and electronic components while developing an optimized system architecture resulted in a 2048 line by 2048 pixel by 10-bit high-speed image processor for digital fluoroscopy. The image processor is capable of acquiring progressive or interlaced 2 K images at 7.5 frames per second, as well as true progressive or interlaced 1 K by 1 K images at 30 frames per second. High-speed components, some specifically designed for the system, are applied to perform 2048 line by 2048 pixel image processing at the required speeds. A multimode high-resolution TV camera with a 2000 line Plumbicon tube is used, and the input video is sampled at 40 MHz to provide 10-bit digital image data. High-speed BTL imaging busses, 2 K video RAMs, and multiple processors are used within the system architecture to provide the required processing bandwidth. Images are compressed using 2:1 lossless compression, and optionally lossy compression, to increase system performance and provide a cost-effective method of achieving the required image storage capacity. A high resolution monitor is used for image display, and a standard digital interface for hardcopy is provided that is capable of 2 K image transfer. A VME-based CPU with a real-time multitasking operating system is used for system control and image management. The system architecture provides multiple image processing busses designed to allow simultaneous acquisition, review, and hardcopy operations. Functionally, the system architecture supports image acquisition and digitization, real-time image processing and display, image storage to RAM, archival to a hard drive, and hardcopy of an image to a digital laser. In addition, interfaces with the x-ray generator and user interface devices are provided. The system may be configured to support multiple fluoroscopic suites, display configurations, and user interface stations. The 2048 line by 2048 pixel high-speed image processor described has been implemented, released, and shipped. Systems being used for general fluoroscopy, interventional fluoroscopy, and angiographic procedures are producing clinical results. The results confirm the increased resolution, system performance, and multiprocessing capabilities not previously achieved in a cost-effective digital image processor.
This paper addresses both the adaptation of the JPEG baseline system to medical images, by optimizing the normalization array and Huffman tables, and its extension by adapting the block size to the image correlation lengths. Adapting the JPEG algorithm to each medical image or modality results in a significant improvement in decompressed image quality at a small additional computing cost.
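The abstract does not list the optimized normalization arrays, so the sketch below only shows the generic mechanism being tuned: an 8x8 quantization table (here the example luminance table from the JPEG specification, Annex K) that is scaled by a quality factor and then divides the DCT coefficients of each block. The IJG-style quality scaling is an illustrative choice, not the paper's optimization.

```python
# Generic JPEG quantization mechanism: scale a base table, then divide and
# round the DCT coefficients of each 8x8 block.
import numpy as np

# Example luminance quantization table from the JPEG specification (Annex K).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def scaled_table(quality: int) -> np.ndarray:
    """IJG-style scaling of the base table (quality in 1..100)."""
    s = 5000 / quality if quality < 50 else 200 - 2 * quality
    return np.clip(np.floor((Q50 * s + 50) / 100), 1, 255)

def quantize_block(dct_block: np.ndarray, table: np.ndarray) -> np.ndarray:
    return np.round(dct_block / table).astype(np.int32)

block = np.random.default_rng(3).normal(0, 50, size=(8, 8))
print(quantize_block(block, scaled_table(75)))
```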
A computer-controlled light box (14" X 17") with uniform luminance across the field at selectable levels was assembled as part of a study to determine the combined effects of brightness and resolution on observer performance. Six horizontal and two vertical hot-cathode type (T8) lamps were used with direct current drive to eliminate flicker. The polarity of the voltage across each lamp is alternated to prevent migration of the mercury to one end. An optical feedback system accurately controls the overall luminance: silicon detectors monitor the light from the first of two diffusing screens and provide analog feedback for rapid luminance correction. The light box is connected to and controlled via the parallel port of the computer that handles the observer performance databases and scoring forms. The computer selects one of three luminance levels, and a check of luminance is done prior to every case read in the study. The luminance spatial variation is within 5% over the central 80% of the field, and luminance levels are maintained within 5% of the predetermined levels of 225, 75, and 25 ft-L used in the observer performance study. Using conventional technology, uniform luminance across the field at selectable levels is achieved, eliminating potential effects of non-uniformities on observer performance.
A cost effectiveness study on the feasibility of using computed radiography (CR) instead of screen-film methods for portable radiographs indicates that we could only justify CR if film were eliminated. Before purchasing CR equipment, we needed to evaluate the use of softcopy to replace film for routine clinical use. The evaluation had to cover image quality, human factors, and efficiency measures. Screen-film radiographs were digitized and used to simulate CR in two studies. The first study evaluated the quality of digitized images and the workstation user interface. Twenty-one radiographs were selected at random from scopes in the radiology department, digitized, and transferred to a megascan workstation. Five radiologists were asked to assess the quality of the images and the ease of operation of the workstation while an observer recorded their comments and scores. The second study evaluated the feasibility of using the workstation in a clinical environment. Four radiologists read adult and pediatric portable images in film and softcopy format. Reports were evaluated for differences, and timing statistics were kept. The results of the first study indicate that image quality may be acceptable for diagnostic purposes and suggest some changes in the user interface. Newborn infant images were the least acceptable in softcopy, largely due to magnification artifacts introduced when viewing very small images. The evaluation was based on a digitizer as a simulator for a CR unit, and the digitizer did not exhibit the same resolution characteristics as CR; films that were unacceptable from the digitizer are expected to be acceptable with CR. The results of the second study indicated that the high resolution diagnostic workstation could be used in a clinical setting, and that the diagnostic readings were not significantly different between film and softcopy displays. The results also indicated that, depending on the radiologist and the type of images, more time was required to read from the workstation and that the increased time was spent using window/level and magnification/roam functions. This preliminary study suggests that the high resolution workstation developed at the University of Florida has adequate quality and functionality to be used for diagnostic interpretation of portable radiographs, given high resolution images. However, further investigation is indicated before we eliminate film in a CR environment.
Imaginer is a graphical user interface currently being developed for automated analysis of emission images. It will be the first application to implement a new feature extraction method of contiguous volume analysis on an unlimited number of image formats. Its development was prompted by the desire to simplify the steps involved in that analysis and to improve visualization and interpretation of results through a graphical user interface. This paper discusses difficulties that have arisen in generalizing the method of contiguous volume analysis to work with an unlimited number of image formats, as well as in abstracting the visualization techniques to effectively represent all types of data used during analysis. Issues in creating a flexible, intuitive, and extensible user interface for scientific investigation and clinical use are discussed, along with several usability issues that have arisen during development. Prototypes of Imaginer and its software components are described. Designed for ease of use, flexibility, extensibility and portability, Imaginer will enable users to assess the appropriateness of this method of feature extraction for various clinical and research purposes, and to use it in contexts for which it is found to be appropriate.
A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode computed tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.
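The selection principle behind complexity- and entropy-constrained design can be illustrated generically: for each subband, choose the candidate quantizer that minimizes a Lagrangian cost combining distortion, rate, and complexity. The candidate list, weights, and cost numbers below are made up; this is not the paper's design algorithm.

```python
# Generic constrained selection: minimize D + lambda_R * R + lambda_C * C
# over a small set of candidate quantizers for one subband.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    distortion: float   # e.g. mean squared error
    rate: float         # bits per pixel
    complexity: float   # relative operations count

def best_quantizer(cands, lam_rate: float, lam_cplx: float) -> Candidate:
    """Pick the candidate with the smallest Lagrangian cost."""
    return min(cands, key=lambda c: c.distortion
               + lam_rate * c.rate + lam_cplx * c.complexity)

subband_candidates = [
    Candidate("coarse", distortion=40.0, rate=0.3, complexity=1.0),
    Candidate("medium", distortion=18.0, rate=0.8, complexity=1.5),
    Candidate("fine",   distortion=6.0,  rate=1.9, complexity=2.5),
]
for lam in (5.0, 20.0, 60.0):
    choice = best_quantizer(subband_candidates, lam_rate=lam, lam_cplx=2.0)
    print(f"lambda_R = {lam:5.1f}: choose {choice.name}")
```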
We present a new interface technique which augments a 3D user interface based on the physical manipulation of tools, or props, with a touchscreen. This hybrid interface intuitively and seamlessly combines 3D input with more traditional 2D input in the same user interface. Example 2D interface tasks of interest include selecting patient images from a database, browsing through axial, coronal, and sagittal image slices, or adjusting image center and window parameters. Note the facility with which a touchscreen can be used: the surgeon can move in 3D using the props, and then, without having to put the props down, the surgeon can reach out and touch the screen to perform 2D tasks. Based on previous work by Sears, we provide touchscreen users with visual feedback in the form of a small cursor which appears above the finger, allowing targets much smaller than the finger itself to be selected. Based on our informal user observations to date, this touchscreen stabilization algorithm allows targets as small as 1.08 mm X 1.08 mm to be selected by novices, and makes possible selection of targets as small as 0.27 mm X 0.27 mm after some training. Based on implemented prototype systems, we suggest that touchscreens offer not only intuitive 2D input which is well accepted by physicians, but that touchscreens also offer fast and accurate input which blends well with 3D interaction techniques.
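The two ingredients named above, drawing the selection cursor a fixed distance above the finger contact point and smoothing successive touch samples, are easy to sketch. The offset value and the exponential smoothing below are assumptions for illustration, not the parameters of the implemented prototype.

```python
# Sketch of lift-above-finger cursor placement plus smoothing of successive
# touch samples, so that sub-finger-sized targets can be hit.

CURSOR_OFFSET_MM = 10.0      # assumed lift of the cursor above the finger
SMOOTHING = 0.3              # weight given to each new touch sample

def stabilized_cursor(touch_samples, offset_mm=CURSOR_OFFSET_MM,
                      alpha=SMOOTHING):
    """Yield cursor positions (x, y) in mm for a stream of touch samples."""
    fx, fy = touch_samples[0]
    for x, y in touch_samples:
        fx = (1 - alpha) * fx + alpha * x       # exponential smoothing
        fy = (1 - alpha) * fy + alpha * y
        yield fx, fy - offset_mm                # cursor appears above finger

samples = [(50.0, 80.0), (50.4, 80.3), (49.8, 79.9), (50.1, 80.1)]
for cx, cy in stabilized_cursor(samples):
    print(f"cursor at ({cx:5.2f}, {cy:5.2f}) mm")
```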
Wavelet-based image compression is receiving significant attention, largely because of its potential for good image quality at low bit rates. In medical applications, low bit rate coding may not be the primary concern, and it is not obvious that wavelet techniques are significantly superior to more established techniques at higher quality levels. In this work we present a straightforward comparison between a wavelet decomposition and the well-known discrete cosine transform decomposition (as used in the JPEG compression standard), using comparable quantization and encoding strategies to isolate fundamental differences between the two methods. Our focus is on the compression of single-frame, monochrome images taken from several common modalities (chest and bone x-rays and mammograms).
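The two decompositions being compared can be contrasted in a few lines of numpy: an 8x8 block DCT (as in JPEG) versus a one-level separable 2-D wavelet split of the same image. Haar stands in here for whichever wavelet the study used, and the test image is synthetic; quantization and entropy coding are omitted.

```python
# Contrast an 8x8 block DCT with a one-level 2-D Haar wavelet split.
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def block_dct(image: np.ndarray, n: int = 8) -> np.ndarray:
    """Apply the n x n DCT independently to each block of the image."""
    c = dct_matrix(n)
    out = np.empty_like(image, dtype=np.float64)
    for r in range(0, image.shape[0], n):
        for s in range(0, image.shape[1], n):
            out[r:r+n, s:s+n] = c @ image[r:r+n, s:s+n] @ c.T
    return out

def haar_level(image: np.ndarray):
    """One level of the separable 2-D Haar transform (LL, LH, HL, HH)."""
    a = (image[0::2, :] + image[1::2, :]) / np.sqrt(2)
    d = (image[0::2, :] - image[1::2, :]) / np.sqrt(2)
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

image = np.random.default_rng(4).normal(128, 30, size=(64, 64))
coeffs_dct = block_dct(image)
ll, lh, hl, hh = haar_level(image)
print(coeffs_dct.shape, ll.shape)   # (64, 64) and (32, 32)
```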
The Talairach-Tournoux Stereotaxic Atlas of the human brain is a frequently consulted resource in stereotaxic neurosurgery and computer-based neuroradiology. Its primary application lies in the 2-D analysis and interpretation of neurological images. However, for the analysis and visualization of shapes and forms, accurate mensuration of volumes, or 3-D model matching, a 3-D representation of the atlas is essential. This paper proposes and describes a 3-D geometric extension of the atlas, along with the difficulties involved. We introduce a `zero-potential' surface smoothing technique, together with a space-dependent convolution kernel and space-dependent normalization. The mesh-based atlas structures are hierarchically organized and anatomically conform to the original atlas. Structures and their constituents can be independently selected and manipulated in real time within an integrated system. The extended atlas may be navigated by itself or interactively registered with patient data using the proportional grid system (piecewise linear) transformation. Visualization of the geometric atlas along with patient data gives a remarkable visual `feel' for the biological structures that is not usually perceivable to the untrained eye in conventional 2-D atlas-to-image analysis.
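The proportional grid registration mentioned above is, per axis, a piecewise-linear map: landmark coordinates measured in the patient volume are sent to the corresponding atlas landmarks, and points in between are scaled linearly. The sketch below illustrates that idea for one axis; the landmark values are invented for illustration and are not the Talairach-Tournoux dimensions.

```python
# One-axis piecewise-linear (proportional grid) mapping between landmark sets.
import numpy as np

def proportional_axis_map(patient_landmarks, atlas_landmarks):
    """Return a function mapping a patient axis coordinate to atlas space."""
    p = np.asarray(patient_landmarks, dtype=float)
    a = np.asarray(atlas_landmarks, dtype=float)
    return lambda x: np.interp(x, p, a)   # piecewise linear between landmarks

# Hypothetical y-axis landmarks (mm): posterior edge, PC, AC, anterior edge.
patient_y = [-72.0, -28.0, -4.0, 66.0]
atlas_y = [-79.0, -24.0, 0.0, 68.0]
to_atlas_y = proportional_axis_map(patient_y, atlas_y)
print(to_atlas_y(np.array([-50.0, -4.0, 30.0])))
```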
In recent years, the medical community has seen a growing need for rapid and efficient techniques for the storage and transmission of medical images. Although several well known compression techniques exist, many of them require computationally intensive algorithms. In addition, many image compression techniques introduce unwanted attributes such as blocking effects or noise. This is a major problem in medical imaging, where image degradation may be critical. The aim of our research was to combine two recent techniques, wavelet transforms (WT) and variable block size coding (VBSC), to improve compression ratios as well as visual quality. Multiresolution wavelet transforms are capable of extracting salient features from the image and thus allow the coding block size to be chosen adaptively. In addition, this hybrid technique reduces noise in the reconstructed images. Image quality is judged by criteria such as entropy, mean square error, signal-to-noise ratio, and human visual perception. The performance of the hybrid technique is assessed on the above criteria and compared with that of the standard JPEG compression technique. This hybrid compression technique yields improved compression while retaining high visual quality for specific medical images such as cervical radiographs.
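The decision the hybrid scheme rests on, letting wavelet detail energy drive the choice of coding block size (small blocks where detail is high, large blocks in smooth regions), can be sketched simply. The Haar details and the energy threshold below are assumptions for illustration, not the paper's criterion.

```python
# Pick a coding block size from one-level Haar detail energy.
import numpy as np

def detail_energy(region: np.ndarray) -> float:
    """Energy of one-level Haar detail coefficients of a square region."""
    a = (region[0::2, :] - region[1::2, :]) / np.sqrt(2)
    b = (region[:, 0::2] - region[:, 1::2]) / np.sqrt(2)
    return float((a ** 2).sum() + (b ** 2).sum())

def choose_block_size(region: np.ndarray, threshold: float,
                      min_size: int = 4) -> int:
    """Recursively halve the block size while the detail energy is high."""
    size = region.shape[0]
    if size <= min_size or detail_energy(region) < threshold:
        return size
    return choose_block_size(region[:size // 2, :size // 2], threshold, min_size)

smooth = np.full((32, 32), 100.0)
busy = np.random.default_rng(7).normal(100.0, 25.0, size=(32, 32))
print(choose_block_size(smooth, threshold=500.0),   # large block suffices
      choose_block_size(busy, threshold=500.0))     # falls back to small blocks
```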
In this paper we present two new algorithms for the visualization of multi-attribute medical images. The aim of the algorithms is to provide as much information as possible from the multi-attribute image in one gray-scale or color image without making any rigid classification into different tissue categories. Gray-scale images are of special interest, as the human eye is considerably more sensitive to spatial variations in intensity than to chromatic variations. A nonlinear mapping is made from the original N-dimensional feature space to an M-dimensional output space, where M < N and M ∈ {1, 2, 3}. Two different nonlinear projection methods are investigated for this purpose. We first present a method based on Sammon's nonlinear projection algorithm. Sammon's algorithm is a gradient descent strategy that aims to preserve inter-pattern distances by minimizing a cost function that measures the so-called Sammon stress. To reduce computational complexity, we first find a set of X reference vectors in feature space by using a standard clustering technique such as the c-means algorithm. Each feature vector in N-space is associated with its nearest reference vector, and the reference vectors are then mapped to a lower dimensional M-space using Sammon's algorithm. Finally, we introduce a new algorithm which can be used to create gray-scale images when the number of reference vectors is sufficiently small. The original multi-attribute data are then projected onto a curve in feature space defined by an ordered set of reference vectors, and a gray scale is mapped along this curve. The optimal ordering of the reference vectors is found as a minimal-cost permutation, where the cost function is a weighted sum of inter-pattern distances in N-space. Our algorithms are compared to principal component analysis (PCA) and a recently published algorithm based on Kohonen's self-organizing maps. The usefulness of the new algorithms is demonstrated for visualization of both reproducible synthetic images and real MR images.
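The Sammon-mapping step can be sketched as follows: a small set of reference vectors in N-dimensional feature space is projected to 2-D by descending the gradient of the Sammon stress (the normalized sum of squared relative errors in the inter-pattern distances). The data, plain gradient descent, step size, and iteration count are illustrative choices; this is not the paper's full pipeline (no c-means step, no gray-scale curve ordering).

```python
# Plain gradient descent on the Sammon stress for a small set of reference
# vectors, projecting them from N-dimensional feature space to 2-D.
import numpy as np

def sammon(points: np.ndarray, dim: int = 2, iters: int = 500,
           lr: float = 0.1, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    d_in = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    d_in[d_in == 0] = 1e-9
    y = rng.normal(scale=1e-2, size=(n, dim))
    c = d_in.sum() / 2                      # normalizing constant of the stress
    for _ in range(iters):
        diff = y[:, None] - y[None, :]
        d_out = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d_out, 1.0)
        # Gradient of the Sammon stress with respect to each output point.
        w = (d_out - d_in) / (d_in * d_out)
        np.fill_diagonal(w, 0.0)
        grad = 2.0 / c * (w[:, :, None] * diff).sum(axis=1)
        y -= lr * grad
    return y

refs = np.random.default_rng(5).normal(size=(12, 6))   # 12 reference vectors
print(sammon(refs).shape)                              # -> (12, 2)
```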
Laparoscopic and endoscopic surgery rely uniquely on the high-quality display of acquired images, but a multitude of problems plague the researcher who attempts to reproduce such images for educational purposes. Some of these are intrinsic limitations of current laparoscopic/endoscopic visualization systems, while others are artifacts solely of the process used to acquire and reproduce such images. Whatever the genesis of these problems, a glance at the current literature will reveal the extent to which endoscopy suffers from an inability to reproduce what the surgeon sees during a procedure. The major intrinsic limitation to the acquisition of high-quality still images from laparoscopic procedures lies in the inability to couple a camera directly to the laparoscope. While many systems have this capability, it is useful mostly for otolaryngologists, who do not maintain a sterile field around their scopes. For procedures in which a sterile field must be maintained, one trial method has been to use a beam splitter to send light both to the still camera and to the digital video camera. This is no solution, however, since it results in low-quality still images as well as a degradation of the image that the surgeon must use to operate, something no surgeon tolerates lightly. Researchers thus must currently rely on other methods for producing images from a laparoscopic procedure. Most manufacturers provide an optional slide or print maker that produces hardcopy output from the processed composite video signal. The results achieved with such devices are marginal, to say the least. This leaves only one avenue for image production: the videotape record of an endoscopic or laparoscopic operation. Video frame grabbing is at least a problem to which industry has devoted considerable time and effort. Our own experience with computerized enhancement of videotape frames has been very promising. Computer enhancement allows the researcher to correct several of the shortcomings of both laparoscopic video systems and videotapes, namely color imperfections, scan-line problems, and lack of image resolution for later display. We present a history of laparoscopic imaging, the current state of the art, and future prospects for high-resolution images from laparoscopic and endoscopic systems.
This paper presents the design of an improved image compression algorithm based on an optimal spatial and frequency decomposition of images. The use of spatially varying wavelet packets for a generalized wavelet decomposition of images was recently introduced by Asai, Ramchandram and Vetterli. They use a `double tree' algorithm to obtain the optimal set of bases for a given image, through a joint optimization with respect to frequency decomposition by a wavelet packet and spatial decomposition based on a quad-tree structure. In this paper, we present a `double-tree' frequency and spatial decomposition algorithm that extends the existing algorithm in three areas. First, instead of the quad-tree structure, our algorithm uses a more flexible merging scheme for the spatial decomposition of the image. Second, instead of a scalar quantizer, we use a pyramidal lattice vector quantizer to represent each subband of each wavelet packet, which improves the coding efficiency of the representation. Both of these extensions yield an improved rate-distortion (R-D) performance. Finally, our algorithm uses a scheme that gives a good initial value for the slope of the R-D curve, reducing the total computations needed to obtain the optimum decompositions.
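The rate-distortion pruning logic that drives such tree-structured decompositions can be illustrated in miniature: a node is split only if the combined Lagrangian cost D + lambda * R of its children beats the cost of coding the node whole. The cost numbers below are synthetic, and the real algorithm additionally searches over wavelet-packet frequency splits, the flexible merging scheme, and lattice-VQ codebooks.

```python
# Toy Lagrangian split-or-merge decision over a decomposition tree.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    distortion: float
    rate: float
    children: List["Node"] = field(default_factory=list)

def prune(node: Node, lam: float) -> float:
    """Return the best Lagrangian cost for this subtree; drop useless splits."""
    cost_here = node.distortion + lam * node.rate
    if not node.children:
        return cost_here
    cost_split = sum(prune(c, lam) for c in node.children)
    if cost_split < cost_here:
        return cost_split
    node.children = []          # merging is cheaper: keep the node whole
    return cost_here

root = Node(distortion=100.0, rate=1.0, children=[
    Node(distortion=10.0, rate=0.6),
    Node(distortion=60.0, rate=0.2),
])
print(prune(root, lam=50.0), "split kept:", bool(root.children))
```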