Quantitative photoacoustic imaging (QPAI) is a hybrid imaging technique aimed at reconstructing optical parameters from photoacoustic signals detected around biological tissues. The recovery of optical parameters is a nonlinear, ill-posed inverse problem, usually solved by iterative optimization methods based on error minimization. Most of these iterative algorithms are empirical and computationally expensive, which limits their performance in practical applications. In this work, we propose a deep learning-based QPAI approach that efficiently recovers the optical absorption coefficient of biological tissues from the reconstructed initial pressure. The method employs a U-Net architecture based on a fully convolutional neural network. Monte Carlo simulation with wide-field illumination was used to generate training data for the network. The feasibility of the proposed method was demonstrated through numerical simulations, and its ability to quantitatively reconstruct the distribution of optical absorption in practical situations was further verified in phantom experiments. The high accuracy, efficiency and fidelity achieved in both simulated and experimental results suggest great potential for future biomedical applications.
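The quantitative inversion that such a network learns can be illustrated with the underlying photoacoustic relation p0 = Γ·μa·Φ(μa), where Γ is the Grüneisen parameter and Φ the light fluence. The following is a minimal 1-D sketch with a toy Beer–Lambert fluence model; all numbers and the fluence model are illustrative assumptions, not the paper's actual simulation setup:

```python
import numpy as np

# Photoacoustic forward relation: p0 = Gamma * mu_a * Phi(mu_a), where
# Gamma is the Gruneisen parameter, mu_a the absorption coefficient and
# Phi the (mu_a-dependent) light fluence. Hypothetical 1-D example.

Gamma = 0.2                            # Gruneisen parameter (assumed known)
depth = np.linspace(0.0, 1.0, 50)      # depth in cm
mu_a_true = np.full_like(depth, 0.3)   # background absorption, 1/cm
mu_a_true[20:30] = 1.0                 # absorbing inclusion

# Toy fluence: exponential decay with accumulated absorption along depth
dz = depth[1] - depth[0]
Phi = np.exp(-np.cumsum(mu_a_true) * dz)

p0 = Gamma * mu_a_true * Phi           # initial pressure (network input)

# Dividing by the *true* fluence recovers mu_a exactly; in practice Phi
# is unknown and depth-dependent, which is the nonlinearity a trained
# network implicitly compensates for.
mu_a_rec = p0 / (Gamma * Phi)
assert np.allclose(mu_a_rec, mu_a_true)
```

The sketch shows why the mapping from p0 to μa is nonlinear: the fluence itself depends on the unknown absorption, so a simple pixelwise division is not available in practice.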
The characteristics of the transducer, such as its shape, have a significant impact on image performance in optoacoustic (photoacoustic) imaging. Several reconstruction algorithms have incorporated the transducer shape into the optoacoustic reconstruction process, showing improved image quality compared to reconstruction procedures that use a point-detector approximation. One flexible approach models the transducer surface as a set of surface elements. However, this approach suffers from long computation times and excessive memory consumption, especially for model-based reconstruction strategies. Herein, we present a modified model-based reconstruction algorithm using a virtual parallel-projection method for optoacoustic imaging systems with a flat detector. In this case, the sum of the surface elements' model matrices can be replaced by a single virtual parallel-projection model matrix, reducing both reconstruction time and memory consumption. The proposed method has been evaluated on numerical simulations, phantom experiments with microspheres of 200 μm diameter, and in vivo experiments in mice. The reconstruction results of the proposed method show image quality similar to that of the traditional reconstruction using surface elements, while computation time and memory requirements are substantially decreased.
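The matrix replacement described above can be sketched in a toy setting: instead of applying every surface-element model matrix in each model-based iteration, the summed matrix is formed once and applied as a single operator. Matrix sizes and contents here are illustrative, not the paper's actual acoustic model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_samples, n_elements = 64, 128, 16

# Hypothetical per-surface-element model matrices A_i, each mapping the
# absorption image x to the signal contribution recorded by element i.
A_elements = [rng.standard_normal((n_samples, n_pixels))
              for _ in range(n_elements)]
x = rng.standard_normal(n_pixels)

# Element-wise approach: apply every matrix and sum the contributions,
# which costs n_elements matrix-vector products per iteration.
y_naive = sum(A @ x for A in A_elements)

# Virtual-projection idea: precompute the summed matrix once, so each
# iteration needs a single matrix-vector product and only one matrix
# is kept in memory.
A_virtual = sum(A_elements)
y_fast = A_virtual @ x

assert np.allclose(y_naive, y_fast)
```

The equivalence follows from linearity of the forward model; the saving comes from moving the summation out of the iteration loop and storing one matrix instead of one per surface element.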
Quantitative photoacoustic tomography (q-PAT) is a nontrivial technique that can be used to reconstruct the absorption image with high spatial resolution. Several attempts have been made using point sources or fixed-angle illuminations. However, in practical applications these schemes normally suffer from low signal-to-noise ratio (SNR) or poor quantification, especially for large domains, owing to the ANSI safety limit on incident fluence and the incompleteness of the data acquisition. We herein present a q-PAT implementation that uses multi-angle light-sheet illuminations and a calibrated iterative multi-angle reconstruction. The approach acquires more complete information on the intrinsic absorption, together with SNR-boosted photoacoustic signals at selected planes, from multi-angle wide-field light-sheet excitations. The sliced absorption maps over the whole body can therefore be recovered in a measurement-flexible, noise-robust and computation-economic way. The proposed approach is validated by phantom experiments, exhibiting promising performance in image fidelity and quantitative accuracy.
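The iterative multi-angle reconstruction can be caricatured as a fixed-point update that averages fluence-corrected estimates over the illumination angles: μa ← mean_k[p0,k / (Γ·Φk(μa))]. The sketch below uses a 1-D toy with two opposing light-sheet directions and a Beer–Lambert fluence; the geometry, fluence model and all parameters are hypothetical stand-ins for the paper's calibrated scheme:

```python
import numpy as np

Gamma = 0.2          # Gruneisen parameter (assumed known)
dz = 0.02            # grid spacing, cm
mu_a_true = np.full(50, 0.3)   # background absorption, 1/cm
mu_a_true[20:30] = 1.0         # absorbing inclusion

def fluence(mu_a, reverse):
    """Toy Beer-Lambert fluence for light entering from one of two sides."""
    m = mu_a[::-1] if reverse else mu_a
    phi = np.exp(-np.cumsum(m) * dz)
    return phi[::-1] if reverse else phi

angles = [False, True]  # two opposing light-sheet illumination directions
p0 = [Gamma * mu_a_true * fluence(mu_a_true, r) for r in angles]

# Fixed-point iteration: re-estimate the fluence from the current
# absorption map, correct each angle's p0, and average over angles.
mu_a = np.full_like(mu_a_true, 0.5)  # initial guess
for _ in range(50):
    mu_a = np.mean([p / (Gamma * fluence(mu_a, r))
                    for p, r in zip(p0, angles)], axis=0)

assert np.allclose(mu_a, mu_a_true, atol=1e-3)
```

Averaging over angles is what makes the toy update robust where any single illumination direction is strongly attenuated; the paper's calibrated reconstruction pursues the same goal in the full tomographic setting.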