A protocol for photoacoustic imaging (PAI) has been developed to assess pixel-based oxygen saturation (sO2) distributions of experimental tumor models. The protocol was applied to evaluate the dependence of PAI results on measurement settings, the reproducibility of PAI, and the oxygenation status of experimental prostate tumor sublines (Dunning R3327-H, -HI, -AT1) implanted subcutaneously in male Copenhagen rats. Three-dimensional (3-D) PA data acquired at two wavelengths were used to estimate sO2 distributions. Provided the PA signal was sufficiently strong, the distributions were independent of signal gain, threshold, and positioning of the animals. Reproducibility of the sO2 distributions with respect to shape and median values was demonstrated over several days. The three tumor sublines were characterized by the shapes of their sO2 distributions and by their temporal response to external changes of the oxygen supply (100% O2 or air breathing and clamping of the tumor-supplying artery). The established protocol proved suitable for detecting temporal changes in tumor oxygenation as well as differences in oxygenation between tumor sublines. PA results were in accordance with histology for hypoxia, perfusion, and vasculature. The presented protocol for the assessment of pixel-based sO2 distributions provides more detailed information than conventional region-of-interest-based analysis of PAI, especially with respect to the detection of temporal changes and tumor heterogeneity.
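As an illustration of the two-wavelength sO2 estimation underlying such a protocol, the following sketch performs pixel-wise linear spectral unmixing of oxy- and deoxyhemoglobin. The wavelengths (750/850 nm) and the molar extinction coefficients are illustrative placeholders, not values taken from the protocol above.

```python
import numpy as np

# Approximate molar extinction coefficients [cm^-1 M^-1] at two illustrative
# wavelengths; columns are [eps_HbO2, eps_Hb]. Values are placeholders and
# should be replaced with tabulated data for the actual wavelengths used.
EPS = np.array([[518.0, 1405.0],    # 750 nm (assumed)
                [1058.0, 691.0]])   # 850 nm (assumed)

def estimate_so2(pa_750, pa_850):
    """Pixel-wise sO2 from PA amplitudes at two wavelengths.

    Solves EPS @ [c_HbO2, c_Hb] = [pa_750, pa_850] per pixel (least squares)
    and returns sO2 = c_HbO2 / (c_HbO2 + c_Hb).
    """
    pa_750 = np.asarray(pa_750, float)
    pa_850 = np.asarray(pa_850, float)
    signals = np.stack([pa_750, pa_850], axis=0).reshape(2, -1)
    conc, *_ = np.linalg.lstsq(EPS, signals, rcond=None)   # (2, n_pixels)
    c_hbo2, c_hb = conc
    total = c_hbo2 + c_hb
    so2 = np.where(total > 0, c_hbo2 / total, np.nan)      # undefined where no signal
    return so2.reshape(pa_750.shape)
```

In practice, pixels with insufficient PA signal would be masked before unmixing, in line with the signal-strength condition stated above.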
To solve the inverse problem involved in fluorescence-mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, x-ray CT data of the phantom were acquired to provide structural context. The phantom was fitted with various fluorochrome (Cy5.5) inclusions, and optical data at 60 projections over 360 degrees were acquired for each configuration. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation, and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that are smoother than those obtained with classical MLEM without regularization; including the floating default prior significantly reduces this bias.
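The following sketch illustrates one plausible form of the regularized Richardson-Lucy/MLEM update described above, using a one-step-late entropic penalty against a floating default obtained by Gaussian smoothing of the current estimate. The exact coupling of the entropy term and the default prior in the published algorithm may differ; the forward/backward operators and parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rl_mlem_entropy(g, forward, backward, n_iter=50, beta=1e-3, sigma=2.0):
    """Richardson-Lucy style MLEM with entropic regularization and a floating
    default prior (schematic, one-step-late form).

    g        : measured data (1-D array)
    forward  : callable f -> H @ f   (system forward model)
    backward : callable r -> H.T @ r (adjoint / back projection)
    beta     : regularization weight (assumed value)
    sigma    : std. dev. of the Gaussian kernel defining the floating default
    """
    sens = backward(np.ones_like(g))                 # sensitivity image H^T 1
    f = np.ones_like(sens)                           # constant initial condition
    for _ in range(n_iter):
        m = np.maximum(gaussian_filter(f, sigma), 1e-12)   # floating default
        ratio = g / np.maximum(forward(f), 1e-12)          # data / model
        grad_entropy = np.log(np.maximum(f, 1e-12) / m)    # cross-entropy gradient vs default
        f *= backward(ratio) / np.maximum(sens + beta * grad_entropy, 1e-12)
    return f
```

With beta = 0 the update reduces to the classical RL/MLEM iteration; the entropy term pulls the estimate towards its own smoothed version rather than towards a fixed flat prior, which is the intent of the floating default.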
KEYWORDS: Magnetic resonance imaging, Sensors, Signal to noise ratio, Magnetism, Imaging systems, Resonators, Light sources, Scanners, Spatial resolution, Magnetic sensors
A noncontact optical detector for in vivo imaging has been developed that is compatible with magnetic resonance imaging (MRI). The optical detector employs microlens arrays and may be classified as a plenoptic camera. As a result of its design, the detector has a slim profile and is self-shielding against radio frequency (RF) pulses. For the experimental investigation, a total of six optical detectors were arranged in a cylindrical fashion, with the imaged object positioned at the center of this assembly. A purpose-designed RF volume resonator coil was developed and incorporated within the optical imaging system. The whole assembly was placed into the bore of a 1.5 T patient-sized MRI scanner. Simple-geometry phantom studies were performed to assess compatibility and performance characteristics of both the optical and MR imaging systems. A bimodal ex vivo nude mouse measurement was conducted. From the MRI data, the subject surface was extracted, and the optical images were projected onto this surface by means of an inverse mapping algorithm. Simultaneous measurements did not reveal any influence of the magnetic field or RF pulses on optical detector performance (spatial resolution, sensitivity), and no significant influence of the optical imaging system on MRI performance was detectable.
KEYWORDS: Sensors, Microlens, Data acquisition, Image sensors, In vivo imaging, 3D image processing, Geometrical optics, Imaging systems, Optical engineering, Binary data
This article proposes a surface reconstruction method from multiview projectional data acquired by means of a rotationally mounted microlens array-based light detector (MLA-D). The technique is adapted for in vivo small animal imaging, specifically for imaging of nude mice, and does not require an additional imaging step (e.g., by means of a secondary structural modality) or additional hardware (e.g., laser-scanning approaches). Any potential point within the field of view (FOV) is evaluated by a proposed photo-consistency measure, utilizing sensor image light information as provided by elemental images (EIs). As the superposition of adjacent EIs yields depth information for any point within the FOV, the three-dimensional surface of the imaged object is estimated by a graph cuts-based method through global energy minimization. The proposed surface reconstruction is evaluated on simulated MLA-D data, incorporating a reconstructed mouse data volume as acquired by x-ray computed tomography. Compared with a previously presented back projection-based surface reconstruction method, the proposed technique yields a significantly lower error rate. Moreover, while the back projection-based method may not be able to resolve concave surfaces, this approach does. Our results further indicate that the proposed method achieves high accuracy at a low number of projections.
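The photo-consistency measure is illustrated below as a simple normalized-variance criterion over the elemental images that observe a candidate point; the actual measure used may be normalized differently. The resulting cost volume would then serve as the data term of the graph cuts-based energy minimization, which is not reproduced here, and the project() interface is hypothetical.

```python
import numpy as np

def photo_consistency(point, elemental_images, project):
    """Variance-based photo-consistency of a candidate 3-D point.

    point            : 3-D coordinate within the FOV
    elemental_images : list of 2-D arrays (one per microlens / EI)
    project          : callable (point, ei_index) -> (row, col) or None if the
                       point is not visible in that EI (hypothetical interface)

    Low normalized variance across the EIs that see the point indicates that
    the point lies on the object surface.
    """
    samples = []
    for k, ei in enumerate(elemental_images):
        pix = project(point, k)
        if pix is None:
            continue
        r, c = int(round(pix[0])), int(round(pix[1]))
        if 0 <= r < ei.shape[0] and 0 <= c < ei.shape[1]:
            samples.append(ei[r, c])
    if len(samples) < 2:
        return np.inf                      # not enough observations
    samples = np.asarray(samples, float)
    return samples.var() / (samples.mean() ** 2 + 1e-12)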
KEYWORDS: Luminescence, 3D image processing, Imaging systems, Single photon emission computed tomography, Data acquisition, 3D acquisition, Image restoration, In vivo imaging, Computed tomography, Tomography
The adoption of axially oriented line-illumination patterns for fluorescence excitation in small animals is investigated for fluorescence surface imaging (FSI) and fluorescence optical tomography (FOT). A trimodal single-photon-emission-computed-tomography/computed-tomography/optical-tomography (SPECT-CT-OT) small animal imaging system was modified to employ point- and line-laser excitation sources, which can be arbitrarily positioned around the imaged object. The line source is set to illuminate the object along its entire axial extent. A comparative evaluation of point and line illumination patterns for FSI and FOT is provided, involving phantom as well as mouse data. Given the trimodal setup, CT data are used to guide the optical approaches by providing boundary information, and FOT results are also compared to SPECT. Results show that line-laser illumination yields a larger axial field of view (FOV) in FSI mode, hence faster data acquisition, and practically acceptable FOT reconstruction throughout the whole animal. Furthermore, superimposed SPECT and FOT data provide additional information on similarities as well as differences in the distribution and uptake of both probe types, and fused CT data further enhance the anatomical localization of the tracer distribution in vivo. The feasibility of line-laser excitation for three-dimensional fluorescence imaging and tomography is demonstrated with the aim of initiating further research, not with the intention of replacing one illumination scheme with the other.
KEYWORDS: Sensors, Imaging systems, Positron emission tomography, Data acquisition, In vivo imaging, Optical imaging, Magnetic resonance imaging, Microlens array, Tomography, 3D image reconstruction
A complete mathematical framework for preclinical optical imaging (OI) support comprising bioluminescence imaging (BLI), fluorescence surface imaging (FSI), and fluorescence optical tomography (FOT) is presented, in which optical data are acquired by means of a microlens array (MLA) based light detector (MLA-D). The MLA-D has been developed to enable unique OI, especially in synchromodal operation with secondary imaging modalities (SIM) such as positron emission tomography (PET) or magnetic resonance imaging (MRI). An MLA-D consists of a (large-area) photon sensor array, a matched MLA for field-of-view definition, and a septum mask of specific geometry made of anodized aluminum that is positioned between the sensor and the MLA to suppress light cross-talk and to shield the sensor against radio-frequency interference (essential when operating inside an MRI system). The software framework, while freely parameterizable for any MLA-D, is tailored towards an OI prototype system for preclinical SIM application comprising a multitude of cylindrically assembled, gantry-mounted, simultaneously operating MLA-Ds. Besides the MLA-D specifics, the framework incorporates excitation and illumination light-source declarations of large-field and point geometry to facilitate multispectral FSI and FOT as well as three-dimensional object recognition. When used in synchromodal operation, reconstructed tomographic SIM volume data can be used for co-modal image fusion and also as a prior for estimating the imaged object's 3D surface by means of gradient vector flow. Superimposed planar (without object prior) or surface-aligned inverse mapping can be performed to estimate the emission light map and fuse it with the boundary of the imaged object. Triangulation and subsequent optical reconstruction (FOT) or constrained flow estimation (BLI), both optionally including SIM priors, can be performed to estimate the internal three-dimensional emission light distribution. The framework exposes a number of parameters controlling convergence and computational speed. Utilization and performance are illustrated on experimentally acquired data employing the OI prototype system in stand-alone operation and when integrated into an unmodified preclinical PET system performing synchromodal BLI-PET in vivo imaging.
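As an illustration of the gradient vector flow step used for surface estimation, a minimal two-dimensional sketch of the classical Xu-Prince iteration is given below; the framework applies the same idea in three dimensions to SIM volume data, and the parameter values are assumptions.

```python
import numpy as np

def laplacian(a):
    """Five-point Laplacian with replicated borders."""
    p = np.pad(a, 1, mode='edge')
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * a

def gradient_vector_flow(edge_map, mu=0.2, n_iter=200, dt=0.5):
    """2-D gradient vector flow (Xu & Prince), illustrative sketch only.

    edge_map : 2-D array, e.g. gradient magnitude of a binary object mask
               (assumed normalized to [0, 1])
    mu       : regularization weight (assumed value)
    """
    fy, fx = np.gradient(edge_map)          # gradients of the edge map
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()
    for _ in range(n_iter):
        # diffuse the gradient field while keeping it close to (fx, fy)
        # wherever the edge map is strong
        u += dt * (mu * laplacian(u) - (u - fx) * mag2)
        v += dt * (mu * laplacian(v) - (v - fy) * mag2)
    return u, v
```

The resulting vector field extends the boundary information of the SIM volume into homogeneous regions, which is what allows a deformable surface to converge onto the object boundary.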
A micro-lens-array-based optical detector (MLA-D) has been developed for preclinical in vivo optical imaging applications. While primarily intended for detecting signals from molecular optical probes within living subjects (mice), the MLA-D can also be used effectively to capture the three-dimensional surface of the imaged object from only a few projection angles, a feature that is very important for in vivo optical imaging. In order to study the shape recognition ability of the MLA-D design, we have developed a ray-tracing simulation framework. The impact of the following physical MLA-D parameters on surface recognition efficiency can be studied: micro-lens diameter, micro-lens focal length, and sensor pixel size. Using this framework, the performance of two surface recognition algorithms, the optical flow method and the multi-projection surface reconstruction (back-projection) method, has been assessed within the specific context of preclinical imaging. By way of example, the commonly used DigiMouse dataset is adopted to generate simulated raw image data. Results of the simulation framework conform well to depth-of-field theory, and both surface recognition methods yield comparable, but unsatisfactory, results. Whereas the optical flow method reveals the relative shape of the phantom at comparatively lower spatial and depth resolution, the back-projection method, while providing higher-resolution data, could not resolve concave regions in all cases, which needs further investigation. Very promising preliminary results have, however, been attained with a most recently implemented multi-view stereo algorithm.
KEYWORDS: Sensors, Spatial resolution, Modulation transfer functions, Spatial frequencies, In vivo imaging, Prototyping, Geometrical optics, Radio optics, Solids, Cameras
In order to validate and optimize the imaging capabilities of a micro-lens-array (MLA) based optical detector dedicated to preclinical in vivo small animal imaging applications, a numerical investigation framework has been developed. The framework is laid out to study the following MLA detector parameters: micro-lens diameter (D) and focal length (f), as well as sensor pixel size (A). Two mathematical models are implemented for light modeling: line-based and cone-based ray projections. Since the MLA detector requires mathematical postprocessing, specifically inverse mapping for image formation, this step is fully integrated into the framework. MLA detector designs have been studied within valid parameter ranges, yielding sub-millimeter spatial resolution for in vivo imaging of mice at detector-object distances (t) of up to 50 mm. In summary, the detector's spatial resolution depends non-linearly on D and f for any given t, whereas detector efficiency depends strongly on f. Regardless of mathematical postprocessing, the following set of intrinsic detector parameters was found to be optimal for the intended application: D = 0.336 mm, f = 4.0 mm, A = 0.048 mm.
When mathematical postprocessing is involved, particularly three-dimensional surface recognition, increasing f (or, analogously, decreasing D) yields incoming rays at angles closer to 90° and thus decreases the spatial depth information contained in the elementary images. Hence, a setup with D not larger than 0.5 mm and f between 2.0 mm and 3.0 mm is recommended.
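A back-of-the-envelope geometric-optics estimate, not the line- or cone-based ray-projection models of the framework, already reproduces the sub-millimeter figure for the quoted parameter set. Combining the back-projected pixel footprint with the defocus blur in quadrature is an assumption made here for illustration only.

```python
def geometric_resolution(D, f, A, t):
    """Simplified geometric-optics estimate of the object-space resolution of
    a single microlens with the sensor placed in its focal plane (i.e.,
    focused at infinity). Rough sketch, not the framework's actual model.

    D : micro-lens diameter [mm]
    f : focal length [mm]
    A : sensor pixel size [mm]
    t : detector-object distance [mm]
    """
    pixel_footprint = A * t / f   # one sensor pixel back-projected into object space
    defocus_blur = D              # object-space blur of a lens focused at infinity
    return (pixel_footprint**2 + defocus_blur**2) ** 0.5

# Quoted parameter set (D = 0.336 mm, f = 4.0 mm, A = 0.048 mm) at t = 50 mm:
# approximately 0.69 mm, i.e. sub-millimeter, consistent with the statement above.
print(geometric_resolution(0.336, 4.0, 0.048, 50.0))
```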
Recently, we have presented a thin optical detector assembly consisting of a microlens array (MLA) coupled to a large-area CMOS sensor through a septum mask. The sensor is placed in the physical focal plane of the MLA. Each lens of the MLA forms a small image on the sensor surface, with individual images being separated from each other by the septum mask. The resulting sensor image thus shows a multitude of small sub-images. A low-resolution image can be attained by extracting only those pixels that are located on the optical axis of a microlens, as reported previously. Herein we describe an improved post-processing method to extract images of higher resolution (which can be focused to an arbitrary plane) from a single raw sensor image: each lens of the MLA defines a mapping from points in object space to corresponding sensor pixels. By tracing the light paths back from the sensor pixels through the lenses onto an arbitrary focal plane in object space, this mapping can be inverted. Intensities captured on individual sensor pixels can then be attributed to virtual pixels on that focal plane using the computed inverse mapping.
As a result, from a single acquisition by the detector, images focused to any plane in object space can be calculated. In contrast to the approach of extracting focal-point intensities, the spatial resolution is not limited by the microlens pitch. We present experimental examples of extracted images at various object-plane distances and studies determining the spatial resolution.
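A minimal sketch of the described inverse mapping is given below: each sensor pixel is traced back through the center of its microlens onto a chosen object plane at distance z and accumulated into a virtual pixel there. Tracing only the chief ray and the fixed sub-image half-width are simplifications; the interface and parameter names are illustrative.

```python
import numpy as np

def refocus_inverse_mapping(raw, lens_centers, f, z, sensor_pitch,
                            out_pitch, out_size, half=5):
    """Compute a refocused image at object-plane distance z by inverse mapping.

    raw          : 2-D raw sensor image
    lens_centers : iterable of (row, col) sensor-pixel coordinates of the
                   microlens optical axes (assumed known from calibration)
    f            : MLA focal length [mm] (sensor sits in the focal plane)
    z            : distance of the virtual focal plane in object space [mm]
    sensor_pitch : sensor pixel pitch [mm]
    out_pitch    : pixel pitch of the virtual (refocused) image [mm]
    out_size     : (rows, cols) of the virtual image
    half         : half-width of each sub-image in pixels (assumed; ~10 pixels
                   per lens for 480 um lens pitch and 48 um sensor pitch)
    """
    out = np.zeros(out_size)
    weight = np.zeros(out_size)
    for (cr, cc) in lens_centers:
        for r in range(int(cr) - half, int(cr) + half + 1):
            for c in range(int(cc) - half, int(cc) + half + 1):
                if not (0 <= r < raw.shape[0] and 0 <= c < raw.shape[1]):
                    continue
                # lateral offset of the pixel from the lens axis, in mm
                dy = (r - cr) * sensor_pitch
                dx = (c - cc) * sensor_pitch
                # chief ray through the lens center hits the object plane at
                # the lens position shifted by -offset * z / f (similar triangles)
                oy = cr * sensor_pitch - dy * z / f
                ox = cc * sensor_pitch - dx * z / f
                vr = int(round(oy / out_pitch))
                vc = int(round(ox / out_pitch))
                if 0 <= vr < out_size[0] and 0 <= vc < out_size[1]:
                    out[vr, vc] += raw[r, c]
                    weight[vr, vc] += 1.0
    return out / np.maximum(weight, 1.0)   # normalize by number of contributions
```

Repeating the computation for several values of z yields a focal stack from the same single raw acquisition, which is the property exploited above.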
An optical detector suitable for inclusion in tomographic arrangements for non-contact in vivo bioluminescence and fluorescence imaging applications is proposed. It consists of a microlens array (MLA) intended for field-of-view definition, a large-field complementary metal-oxide-semiconductor (CMOS) chip for light detection, a septum mask for cross-talk suppression, and an exchangeable filter to block excitation light. Prototype detector units with sensitive areas of 2.5 cm x 5 cm each were assembled. The CMOS sensor constitutes a 512 x 1024 photodiode matrix at 48 μm pixel pitch. Refractive MLAs with plano-convex lenses of 480 μm in diameter and pitch were selected, resulting in a 55 x 105 lens matrix. The CMOS sensor is aligned to the focal plane of the MLA at a distance of 2.15 mm. To separate the individual microlens images, an opaque multi-bore septum mask of 2.1 mm in thickness with bore diameters of 400 μm at 480 μm pitch, aligned with the lens pattern, is placed between the MLA and the CMOS sensor. Intrinsic spatial detector resolution and sensitivity were evaluated experimentally as functions of the detector-object distance. Due to their small overall dimensions, such detectors can be favorably packed for tomographic imaging (optical diffusion tomography, ODT), yielding complete 2π field-of-view coverage. We also present a design study of a device intended to simultaneously image positron-labeled substrates (positron emission tomography, PET) and optical molecular probes in small animals such as mice and rats. It consists of a cylindrical arrangement of optical detector units forming an inner detector ring, with PET detector blocks mounted in radial extension, thus gaining complementary information in a single, intrinsically coregistered data acquisition. Finally, in a second design study, we propose a method for integrated optical and magnetic resonance imaging (MRI) that yields in vivo functional/molecular information intrinsically registered with the anatomy of the imaged object.
Can time-resolved, high-resolution data as acquired by an intensified gated CCD camera (ICCD) aid in the tomographic reconstruction of fluorescence concentration? It is usually argued that fluorescence is a linear process and thus does not require non-linear, time-dependent reconstruction algorithms, unless absorption and scattering coefficients need to be determined as well. Furthermore, the acquisition of a number of time frames is usually prohibitive for fluorescence measurements, at least in small animals, due to the increased total measurement time. On the other hand, it is obvious that diffusion is less pronounced in images at early gates, due to the selective imaging of photons of lower scatter order. This will also be the case for photons emitted by fluorescent sources. Early-gated imaging might therefore increase the contrast in acquired images and could possibly improve fluorescence localization. Herein, we present early-gated fluorescence images obtained from phantoms and compare them to continuously acquired data. Increased contrast between background and signal maximum can be observed in time-gated images as compared to continuous data. To make use of the properties exhibited by early gated frames, a modified reconstruction algorithm is necessary. We propose a variant of the well-known Born approximation to the diffusion equation that allows single time frames to be taken into account. The system matrix for the time-dependent Born approach is more complex to calculate; however, the complexity of the actual inverse problem (and the acquisition times) of single-frame reconstructions remains the same as in continuous mode.
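For reference, a sketch of the standard continuous-wave Born weight matrix for an idealized infinite homogeneous medium is given below; the proposed single-time-frame variant would replace the CW Green's functions with time-resolved ones evaluated at the selected gate, which is not reproduced here.

```python
import numpy as np

def born_weight_matrix(sources, detectors, voxels, mu_a, mu_s_prime, dv):
    """Continuous-wave Born weight matrix for fluorescence,
    W[(s, d), j] ~ G(r_s, r_j) * G(r_j, r_d) * dV,
    assuming an infinite homogeneous medium with identical optical properties
    at excitation and emission wavelengths (a simplification).

    sources, detectors, voxels : (Ns, 3), (Nd, 3), (Nv, 3) position arrays [cm]
    mu_a, mu_s_prime           : absorption / reduced scattering [cm^-1]
    dv                         : voxel volume [cm^3]
    """
    sources, detectors, voxels = map(np.asarray, (sources, detectors, voxels))
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))            # diffusion coefficient
    k = np.sqrt(mu_a / D)                            # effective attenuation

    def G(a, b):
        r = np.maximum(np.linalg.norm(a - b, axis=-1), 1e-6)
        return np.exp(-k * r) / (4.0 * np.pi * D * r)

    W = np.zeros((len(sources) * len(detectors), len(voxels)))
    for i, rs in enumerate(sources):
        for j, rd in enumerate(detectors):
            W[i * len(detectors) + j] = G(rs, voxels) * G(voxels, rd) * dv
    return W
```

In the CW case the fluorochrome distribution is then obtained by inverting W against the normalized fluorescence measurements; in the single-frame case only the entries of W change, so the size and conditioning of the inverse problem remain comparable, as stated above.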
Non-contact detection schemes for optical or fluorescence tomography offer several advantages over classical approaches, most importantly the ability to obtain images with a CCD in the absence of a matching fluid or fiber optics. This allows the acquisition of high-density datasets as well as simplified experimental procedures. Herein we formulate a unified framework for contact and non-contact detection schemes and present experimental results that demonstrate the ability of the non-contact method to quantify the concentration of fluorochromes hidden in turbid media, as well as the improvement in image quality of non-contact over conventional detection.
By interpreting the segmentation process as an assignment of labels to objects subject to spatial constraints, image segmentation can be described as a constraint satisfaction problem (CSP). Starting from this model, a new technique for the segmentation of medical images is presented: the constraint satisfaction synergetic potential network (CSSPN). In the CSSPN, the labels currently admissible for an object are represented by singular points of synergetic potential systems. The fuzzy-algorithmic initialization model of the CSSPN allows a label-number-independent dimensioning of the network with n² nodes. The parallel relaxation dynamics of the CSSPN, controlled by interactions of the potential systems, drive label selection or evolution for the input image through fully deterministic or stochastically perturbed equations of motion of the potential systems. Constraint functions are central to the relaxation dynamics and to the segmentation result within an object adjacency; through them, information from the image model, such as the image semantics or the optimization strategy for the network parameters, is mapped onto the CSP. Experimental comparative analyses of the segmentation results demonstrate the efficiency of the technique and confirm that the CSSPN is a very promising method for image segmentation.
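The synergetic potential dynamics of the CSSPN are not reproduced here. As a conceptual analogue of constraint-driven label assignment, the sketch below implements classical relaxation labeling (Rosenfeld-Hummel-Zucker), with label compatibilities playing the role of the constraint functions; it is explicitly not the CSSPN itself.

```python
import numpy as np

def relaxation_labeling(p, compat, n_iter=20):
    """Classical relaxation labeling as a simplified analogue of
    constraint-driven label assignment.

    p      : (n_objects, n_labels) initial label probabilities
    compat : (n_objects, n_labels, n_objects, n_labels) compatibility
             ("constraint") coefficients between neighboring assignments
    """
    for _ in range(n_iter):
        # support q[i, l] = sum_{j, m} compat[i, l, j, m] * p[j, m]
        q = np.einsum('iljm,jm->il', compat, p)
        p = np.clip(p * (1.0 + q), 0, None)      # reinforce compatible labels
        p /= p.sum(axis=1, keepdims=True)        # renormalize per object
    return p
```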