1. Introduction

Regulation and coordination of cell shapes are central to native physiology at all stages of organismal existence. Recent advances in optical imaging have provided mechanistic insights into such phenomena by revealing cellular features and processes in previously unimagined detail.1,2,3 Central to the accurate analysis of such intricate biological processes is the precise segmentation of cellular images. Quantifying cellular morphology, such as shape, area, circularity, and aspect ratio, starts with segmenting the cells in a given field of view. Given its unquestionable significance, much work has been done to standardize the process. Well-developed open-source software suites, notably CellProfiler4 and CellPose,5 perform such segmentation tasks with great accuracy. The recent update to CellProfiler adds three-dimensional (3D) image segmentation, and it is currently the most widely used tool for such tasks. However, since the current workhorse for biological imaging is fluorescence microscopy, standard segmentation software is optimized for and targeted to the analysis of fluorescence images. Yet, contrast-agent-free microscopy is highly desirable for studying the dynamics and physiological activity of various structures in living cells. Quantitative phase imaging (QPI) measures optical field images using laser-based interferometry and has rapidly emerged as a viable imaging alternative because it offers an objective measure of morphology and dynamics in a label-free manner.1 In addition to the amplitude images provided by conventional intensity-based microscopy techniques, QPI measures optical phase delay maps governed by the refractive index (RI) distribution of a sample.
Since the endogenous RI distribution is strongly related to the structural and biochemical characteristics of the cell type, the acquired field images can be analyzed for systematic discovery of cell type-specific morphological and biophysical fingerprints encoded in the images. Over the past two decades, QPI has provided important insights into diverse biological phenomena, ranging from the membrane dynamics of red blood cells6 to neuronal activity,7 cell–nanoparticle interactions,8,9 and cell–drug interactions.10 Recently, it has also been shown that QPI images can be mapped to fluorescence images using deep learning techniques, a concept coined image-to-image translation. Prediction of stains (i.e., where specific fluorophores/stains would bind in an unlabeled specimen) using a combination of QPI and machine learning has been successfully demonstrated,11–13 and more stains are gradually being added to the library. Indeed, phase imaging with computational specificity has allowed precise measurements of the growth of nuclei and cytoplasm independently, over many days, without loss of viability. Central to many of the aforementioned and other emergent applications is QPI's intrinsic ability to measure single-cell volume and mass non-destructively and ultra-sensitively over arbitrary periods of time in both adherent and flowing cell populations.1 A critical step in such analysis is the accurate segmentation of the tomographic images of the cell populations. Since QPI is still a relatively new technique in cell biology, its analysis pipeline is not as developed as that for fluorescence images. The toolbox developed for fluorescence image segmentation does not work well with QPI images, as fluorescence contrast is much sharper than RI contrast.
Also, in some segmentation procedures, the stained nucleus is used as a fiducial marker to define the respective cytoplasm boundary; consequently, such algorithms cannot be directly applied to QPI. This has motivated researchers to develop segmentation algorithms tailored for QPI images,14 yet their applicability has so far been limited to two-dimensional images. The state-of-the-art method for 3D QPI cell segmentation is an Otsu-based 3D watershed algorithm15 (hereafter referred to as the Otsu threshold algorithm in this work). This algorithm works very well for isolated cells; however, it struggles to draw boundaries when cells are clumped. The process is also memory intensive, since it requires computation on a full 3D stack of images. As a point of reference, using the current state-of-the-art software module, this cell segmentation takes ca. 10 s for a 3D stack of images on a workstation equipped with an 8th-generation Intel i7 processor running at 3.7 GHz, 64 GB RAM, and an NVIDIA GeForce GTX 1080 (8 GB) graphics card. Another recently developed method for 3D QPI cell segmentation is an AI-based cell segmentation tool.16 While this should, in principle, overcome the limitations of the Otsu threshold algorithm, such an implementation would require an extensive training dataset with enough complexity to handle all the edge cases. Preparing such training datasets would also require manual annotations, besides needing separate datasets for different cell types. While a trained AI system can run on a modest system with a GPU, training an AI-based segmentation tool would necessitate state-of-the-art computational resources. To overcome these drawbacks, we report here a fast and lightweight algorithm for 3D QPI image-based cell segmentation.
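For reference, the Otsu-based 3D watershed baseline discussed above can be sketched in a few lines with NumPy, SciPy, and scikit-image. This is a generic reconstruction of that class of algorithm, not the vendor's implementation; the seeding heuristic (thresholding the distance transform) is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def otsu_watershed_3d(ri_stack):
    """Baseline 3D segmentation: Otsu threshold on the RI stack,
    then a distance-transform watershed to split touching cells."""
    # Foreground = voxels whose RI exceeds the global Otsu threshold
    mask = ri_stack > threshold_otsu(ri_stack)
    # Seed the watershed from the cores of the distance transform
    distance = ndi.distance_transform_edt(mask)
    seeds, _ = ndi.label(distance > 0.5 * distance.max())
    return watershed(-distance, seeds, mask=mask)
```

Because the threshold, distance transform, and flooding all operate on the full 3D stack, this baseline inherits the memory cost noted above.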
Our cell segmentation algorithm is inspired by the process of gemstone extraction, where a stonemason starts by making a coarse extrusion cut along a given axis and then gradually carves away near the surface of the gemstone to reveal the final form [Fig. 1(a)]. The algorithm first extrudes a 3D image from a two-dimensional (2D) segmented mask, which can be seen as creating a rough shape of the cell structure. A 2D image is generated through maximum intensity projection (MIP) of the 3D image, and a 2D cell segmentation algorithm is used to make a 2D mask. This allows us to identify the boundary region for segmentation in the plane. Next, we take advantage of the continuity of cells and the absence of abrupt changes between consecutive slices of the z-stack to perform the complete 3D segmentation. This step can be likened to the chiseling process in gemstone extraction, where the stonemason uses finer tools to carve away at the surface. The implications of such a method are vast, as it can speed up high-throughput single-cell analysis and reduce the computational burden on the hardware. This also lowers the barrier to the adoption of QPI in the biological sciences. The growing advancements in hardware development for high-throughput imaging in QPI17,18 also offer significant utility to this method.

2. Materials and Methods

2.1. Cell Culture Method

U937 cells were seeded in TomoDishes at about per dish with complete media and PMA (Sigma-Aldrich 79346, ). The complete media consisted of RPMI-1640 (Gibco, 11875-093), 10% heat-inactivated FBS (Corning, 35-010-CV), 1% P/S, 1% L-Glu (Gibco, 200 mM, 25030081), and 1% HEPES (Gibco, 1 M, 15630-080). The seeding day is counted as day 0. For M1 polarized cells, on day 3, PMA media was removed, and cells were washed with PBS. Thereafter, polarizing media consisting of complete media with LPS (List Biological Laboratories, 421, ) and IFN-γ (Sigma-Aldrich, SRP3093, ) was added to the dish. The cells were incubated for another 2 days.
On day 5, the media was removed, and cells were washed with PBS before fixing. For M2 polarized cells, on day 3, PMA media was removed, and cells were washed with PBS. Thereafter, polarizing media consisting of complete media with IL-4 (Sigma-Aldrich, SRP3093, ) and IL-13 (R&D Systems, 213-ILB-005, ) was added to the dish. The cells were incubated for another 2 days. On day 5, the media was removed, and cells were washed with PBS before fixing. The cells were fixed with 4% PFA at room temperature for 15 min and then washed twice with PBS. PBS was once again added to fully submerge the fixed cells before imaging.

2.2. Imaging Method

The measurements were performed on a QPI system (HT-1H, Tomocube Inc., Republic of Korea) comprising a 60× water immersion objective (1.2 NA), an off-axis Mach–Zehnder interferometer with a 532 nm laser, and a digital micromirror device for tomographic scanning of each cell.19 The 3D RI distribution of the cells was reconstructed from the interferograms using the Fourier diffraction theorem as described previously.20 TomoStudio (Tomocube Inc., Republic of Korea) was used to reconstruct and visualize 3D RI maps and their 2D MIPs.

2.3. Design Principle and Workflow
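The extrude-and-carve idea described in the Introduction can be sketched as follows. This is a minimal illustration, assuming a fixed RI threshold for the 2D MIP mask and a simple slice-overlap test for z-continuity; the published CellSNAP code uses its own 2D segmentation step and a connectivity matrix, so treat the details here as placeholders.

```python
import numpy as np
from scipy import ndimage as ndi

def extrude_and_carve(ri_stack, n_medium=1.337, delta=0.005):
    """Sketch of the extrude-and-carve workflow:
    (1) MIP along z -> 2D mask -> extrude into a rough 3D boundary;
    (2) carve each slice, keeping only components that overlap the
        previously accepted slice (z-continuity of the cell body)."""
    # Coarse cut: segment the maximum intensity projection
    mip = ri_stack.max(axis=0)
    mask2d = ndi.binary_fill_holes(mip > n_medium + delta)

    # Fine carving: threshold each slice inside the extruded boundary
    mask3d = np.zeros(ri_stack.shape, dtype=bool)
    prev = mask2d  # continuity reference, seeded by the MIP mask
    for z in range(ri_stack.shape[0]):
        candidate = (ri_stack[z] > n_medium + delta) & mask2d
        labels, n = ndi.label(candidate)
        keep = np.zeros_like(candidate)
        for i in range(1, n + 1):
            comp = labels == i
            if (comp & prev).any():  # connected to the accepted slice below
                keep |= comp
        mask3d[z] = keep
        if keep.any():
            prev = keep
    return mask3d
```

Because each slice only requires a 2D labeling pass inside the extruded boundary, the memory footprint stays far below that of a full 3D threshold-and-watershed computation.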
Volume is calculated by counting the number of non-zero voxels in the 3D segmentation mask.

3. Results and Discussion

We tested our QPI cell segmentation algorithm against the current gold standard, the Otsu-based 3D watershed algorithm provided by the manufacturer with their microscopy software suite. We also used the recent AI-powered segmentation tool22 for cases where the watershed algorithm did not provide satisfactory segmentation. The AI-powered segmentation tool is currently available for a limited number of cell types, including macrophages. Our comparative study, therefore, uses U937-derived M1 and M2 macrophages. Since label-free volume and dry mass measurements are unique to QPI, these two metrics act as suitable benchmarks for comparing the performance of our algorithm. Cell experiments were conducted on sparsely populated cell populations in a petri dish, with a confluency of ∼65% [Fig. 5(a)]. This enabled us to obtain high-quality images featuring a single cell within a given field of view. As a result, we could compare our algorithm to the current gold standard with minimal concern for its accuracy, as the latter is plagued by robustness issues in the presence of multiple cells within a given field of view. We observed a dry mass difference of ∼5% for both M1 and M2 cells [Figs. 5(b) and 5(c)], whereas the difference in volume was slightly higher, with a mean value of 8% for M1 and 6% for M2 cells [Figs. 5(d) and 5(e)]. Since the methods used to perform cell segmentation differed significantly, it is encouraging that the mean differences in dry mass and volume were both small [Figs. 5(f) and 5(g)]. In addition, we have included a table (Table 1) comparing the computation time required by each of these algorithms, demonstrating that our algorithm significantly outperforms the current gold standard even on a platform with modest computational hardware.
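Given a binary 3D mask and the RI tomogram, both benchmark quantities follow directly: volume is the voxel count times the voxel volume, and dry mass integrates the RI excess over the medium via the refractive-index increment. The sketch below assumes a typical increment α ≈ 0.19 mL/g (equivalently µm³/pg) and a medium RI of 1.337; neither value is taken from this paper, so substitute the values appropriate to your system.

```python
import numpy as np

def volume_and_dry_mass(ri_stack, mask, voxel_um3, n_medium=1.337, alpha=0.19):
    """Volume = voxel count x voxel volume (um^3); dry mass (pg) from the
    RI excess over the medium via the RI increment alpha (~0.19 mL/g,
    i.e., um^3/pg) -- assumed values, not taken from this paper."""
    volume = float(mask.sum()) * voxel_um3                # um^3
    density = (ri_stack - n_medium) / alpha               # pg / um^3
    dry_mass = float(np.clip(density, 0, None)[mask].sum() * voxel_um3)  # pg
    return volume, dry_mass
```

Note the unit bookkeeping: with α in mL/g, the quotient (n − n_medium)/α comes out in g/mL, which equals pg/µm³, so multiplying by the voxel volume in µm³ yields picograms directly.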
To put our study in the context of the cell segmentation literature, we compared our method against the absolute gold standard of manual segmentation. Each 3D cell image in our analysis consisted of a high-resolution stack of 208 slices. To establish a ground truth dataset, we manually created annotated masks for 10 cells from M1 (2080 images) and 10 cells from M2 (2080 images). Subsequently, we conducted a segmentation accuracy comparison using our CellSNAP algorithm. To ensure thoroughness, we also assessed performance against Cellpose, a currently popular human-in-the-loop deep learning tool for cell segmentation. Segmentation accuracy was determined by calculating the ratio of correctly classified pixels to the total number of pixels inside the manually segmented cells. We note that CellSNAP has a consistent accuracy above 90% for most of the cells. One likely explanation for this high accuracy is that human annotators base their decision to include a particular area as part of the cell on its continuous presence; regions that appear fragmented are typically classified as cell debris. In the CellSNAP algorithm, we employ a connectivity matrix to facilitate precise segmentation by capturing the connectivity patterns within the cell. The performance of the Cellpose algorithm varies, succeeding in certain cases while exhibiting significant failures in others. This discrepancy arises because the network has not been trained on our dataset, and we are using the default weights. Although retraining the Cellpose network for our data would likely enhance its performance, this approach proves time-consuming, as each cell image necessitates manual annotation of 208 image slices. In the Supplementary Material, we have provided a comparison based on other metrics, such as intersection over union (Fig. S1 in the Supplementary Material), Dice coefficient (Fig.
S2 in the Supplementary Material), and maximum Hausdorff distance (Fig. S3 in the Supplementary Material) for the cells shown in Fig. 3. We embarked on developing a cell segmentation algorithm due to the limitations of the current method in segmenting cells that are clumped together within a field of view. Moreover, the current method's performance is hampered under suboptimal imaging conditions. Given that QPI is widely used for longitudinal studies, temperature drifts, stage drifts, and calibration drifts are inevitable, rendering the current segmentation algorithms unsuitable for such scenarios; the resulting imaging data are either unusable or require extensive manual processing. In this regard, we present two instances where our algorithm's robustness is evident. In cases where multiple cells are clumped together and share boundaries, the existing Otsu thresholding fails to distinguish between the cells, resulting in a single unit being segmented [Fig. 4(a)]. Although the AI segmentation tool offers some promise, as evidenced by the accurate segmentation of one cell in Fig. 4(a), it fails to accurately segment the remaining cells. By contrast, our segmentation algorithm accurately segments the cells and outperforms existing methods in terms of computational speed. The segmentation accuracy for each image slice can be viewed in Video 1.

Table 1. Speed comparison for different methods.
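The accuracy, overlap, and boundary metrics referenced above can all be computed directly from a pair of binary masks. The following is a generic sketch (not the paper's evaluation script), using SciPy's directed Hausdorff distance for the boundary metric.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def segmentation_metrics(pred, truth):
    """Metrics between a predicted mask and a manual ground truth:
    pixel accuracy (fraction of ground-truth pixels recovered),
    intersection over union, Dice coefficient, and max Hausdorff distance."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = (pred & truth).sum()
    accuracy = tp / truth.sum()
    iou = tp / (pred | truth).sum()
    dice = 2 * tp / (pred.sum() + truth.sum())
    # Symmetric (max) Hausdorff distance between the two voxel point sets
    p, t = np.argwhere(pred), np.argwhere(truth)
    hausdorff = max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
    return accuracy, iou, dice, hausdorff
```

Accuracy as defined here penalizes only missed ground-truth pixels, whereas IoU and Dice also penalize over-segmentation, which is why the supplementary figures report them separately.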
In another example, the captured image exhibits interferogram patterns. These typically arise from calibration drift when performing time-lapse imaging over extended periods. Removing this error with just an Otsu threshold proves challenging, given the similarity between pixel values in the interferogram shadows and the cell. A pattern-recognition conditional statement could be combined with the Otsu threshold to remove this error, but we are not aware of any implementation of such an algorithm. While AI segmentation tools can be trained on datasets featuring interferogram artifacts, such implementations are computationally intensive and demanding. Moreover, such errors are prevalent in tools based on trained models when presented with unique cases.23 Introducing enough variation in the training dataset and retraining the algorithm can overcome this, but the process has limitations, as machine learning models seldom reveal the basis for their decisions.24 Debugging is also difficult if the retrained model affects other cases. Nevertheless, CellSNAP offers the possibility of segmenting such cell images as well [Fig. 4(b)]. While promising, CellSNAP's straightforward approach also brings certain limitations. One notable limitation is its inability to segment multiple cells along the z-axis, as the algorithm assumes the presence of only one cell in the z-stack. This restriction renders the algorithm unsuitable for scenarios where multiple cells are clustered along the z-axis, a situation commonly found in spheroids and organoids. To the best of our knowledge, the only way to perform 3D segmentation of spheroids or organoids with quantitative phase images is via deep learning,25 where the training dataset comprised paired QPI and fluorescence images. The QPI images were translated to fluorescence images through a U-Net architecture, which was then used to segment cells, skipping the need for manual segmentation of QPI images.
Another limitation arises in cases of high clustering density, where the algorithm struggles to segment cells due to its assumption of a single cell along the extruded MIP volume. In instances of high-density clustering with interleaved cell structures along the z-stack, the segmentation becomes challenging. The third scenario where the algorithm faces challenges is when the continuity of the dry components of cells is disrupted. Certain cell types, during differentiation or specific functional states, may take up water, resulting in a significant increase in volume. In QPI images, the dark appearance of the water background leads to a discontinuous distribution of dry components, causing CellSNAP to fail to segment cells appropriately in such cases.

4. Conclusion

QPI is a powerful imaging tool that is making major strides and has found numerous applications in basic biological studies and applied clinical research. As QPI utilizes the optical path length as intrinsic contrast, the imaging is noninvasive and thereby allows live cell samples to be monitored over several days without concerns of degraded viability. Therefore, significant recent attention has been focused on developing robust analysis pipelines for quantitative phase images, including the application of convolutional neural networks for computationally substituting chemical stains, extracting biomarkers of interest, and enhancing imaging quality. Yet, 3D segmentation of cells, particularly clumped cells, presents a major challenge, as existing methods work well only for isolated cells. In this work, we have shown that our cell segmentation algorithm for QPI images outperforms the existing gold standard in both speed and robustness. Our algorithm takes about 2 s per cell on a single-core processor to perform the segmentation. This can easily be parallelized on a multi-core system for a further improvement in speed.
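Because each cell is segmented independently, the per-cell pipeline maps naturally onto a worker pool. The sketch below uses a placeholder threshold in place of the full algorithm and hypothetical function names (`segment_one`, `segment_batch`); swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` gives true multi-core scaling of CPU-bound NumPy work.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def segment_one(ri_stack, n_medium=1.337, delta=0.005):
    """Placeholder per-cell segmentation (simple threshold); in practice
    this would be the full ~2 s-per-cell pipeline."""
    return ri_stack > n_medium + delta

def segment_batch(stacks, workers=4):
    # Cells are independent, so the batch maps cleanly onto a pool of workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(segment_one, stacks))
```

With four workers, a batch of N cells finishes in roughly N/4 of the single-core time, up to scheduling overhead.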
For cases where segmentation is possible with the existing standard method, our algorithm has a mean error of 5% for dry mass and 8% for volume measurements. Further morphological analysis, such as the determination of surface area, aspect ratio, and circularity, can be performed with standard function files on the segmented 3D mask images generated by our algorithm. Therefore, this work can lead to wider adoption of QPI for high-throughput analysis, which was earlier stymied by a lack of suitable cell segmentation tools, and lower the barrier to the adoption of the QPI imaging modality in the biological sciences.

Disclosures

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article.

Code and Data Availability

The code for analysis with a sample dataset can be found at https://github.com/Lconway4C/QPI-cell-segmentation.git. An additional dataset containing RI tomogram TIFF files can be found at https://doi.org/10.6084/m9.figshare.23547087.

Acknowledgments

Figures 1 and 5 were partially created with Ref. 32. We would like to thank Professor Denis Wirtz and his students Haonan Xu and Bartholomew Starich for their help with macrophage cell culture. We acknowledge support from the Air Force Office of Scientific Research (FA9550-22-1-0334), the National Institute of General Medical Sciences (1R35GM149272), and the National Cancer Institute (R01-CA238025).

References

1. Y. Park, C. Depeursinge, and G. Popescu, "Quantitative phase imaging in biomedicine," Nat. Photonics 12(10), 578–589 (2018). https://doi.org/10.1038/s41566-018-0253-x
2. G. Zheng, R. Horstmeyer, and C. Yang, "Wide-field, high-resolution Fourier ptychographic microscopy," Nat. Photonics 7(9), 739–745 (2013). https://doi.org/10.1038/nphoton.2013.187
3. S. Jiang et al., "Resolution-enhanced parallel coded ptychography for high-throughput optical imaging," ACS Photonics 8(11), 3261–3271 (2021). https://doi.org/10.1021/acsphotonics.1c01085
4. D. R. Stirling et al., "CellProfiler 4: improvements in speed, utility and usability," BMC Bioinformatics 22(1), 433 (2021). https://doi.org/10.1186/s12859-021-04344-9
5. C. Stringer et al., "Cellpose: a generalist algorithm for cellular segmentation," Nat. Methods 18(1), 100–106 (2021). https://doi.org/10.1038/s41592-020-01018-x
6. G. Popescu et al., "Imaging red blood cell dynamics by quantitative phase microscopy," Blood Cells Mol. Dis. 41(1), 10–16 (2008). https://doi.org/10.1016/j.bcmd.2008.01.010
7. P. Marquet, C. Depeursinge, and P. J. Magistretti, "Review of quantitative phase-digital holographic microscopy: promising novel imaging technique to resolve neuronal network activity and identify cellular biomarkers of psychiatric disorders," Neurophotonics 1(2), 020901 (2014). https://doi.org/10.1117/1.NPh.1.2.020901
8. N. A. Turko et al., "Detection and controlled depletion of cancer cells using photothermal phase microscopy," J. Biophotonics 8(9), 755–763 (2015). https://doi.org/10.1002/jbio.201400095
9. K. Eder et al., "Morphological alterations in primary hepatocytes upon nanomaterial incubation assessed by digital holographic microscopy and holotomography," Proc. SPIE 11970, 119700H (2022). https://doi.org/10.1117/12.2610171
10. T. Srichana et al., "Flow cytometric analysis, confocal laser scanning microscopic, and holotomographic imaging demonstrate potentials of levofloxacin dry powder aerosols for TB treatment," J. Drug Deliv. Sci. Technol. 84, 104464 (2023). https://doi.org/10.1016/j.jddst.2023.104464
11. M. E. Kandel et al., "Phase imaging with computational specificity (PICS) for measuring dry mass changes in sub-cellular compartments," Nat. Commun. 11(1), 6256 (2020). https://doi.org/10.1038/s41467-020-20062-x
12. Y. R. He et al., "Cell cycle stage classification using phase imaging with computational specificity," ACS Photonics 9(4), 1264–1273 (2022). https://doi.org/10.1021/acsphotonics.1c01779
13. C. Hu et al., "Live-dead assay on unlabeled cells using phase imaging with computational specificity," Nat. Commun. 13(1), 713 (2022). https://doi.org/10.1038/s41467-022-28214-x
14. N. O. Loewke et al., "Automated cell segmentation for quantitative phase microscopy," IEEE Trans. Med. Imaging 37(4), 929–940 (2018). https://doi.org/10.1109/TMI.2017.2775604
15. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979). https://doi.org/10.1109/TSMC.1979.4310076
16. Y. Jo et al., "Quantitative phase imaging and artificial intelligence: a review," IEEE J. Sel. Top. Quantum Electron. 25(1), 6800914 (2019). https://doi.org/10.1109/JSTQE.2018.2859234
17. B. Ge et al., "Single-frame label-free cell tomography at speed of more than 10,000 volumes per second," (2022).
18. C. Zheng et al., "High spatial and temporal resolution synthetic aperture phase microscopy," Adv. Photonics 2(6), 065002 (2020). https://doi.org/10.1117/1.AP.2.6.065002
19. S. Shin et al., "Active illumination using a digital micromirror device for quantitative phase imaging," Opt. Lett. 40(22), 5407 (2015). https://doi.org/10.1364/OL.40.005407
20. K. Kim et al., "Diffraction optical tomography using a quantitative phase imaging unit," Opt. Lett. 39(24), 6935 (2014). https://doi.org/10.1364/OL.39.006935
21. J. W. Wallis et al., "Three-dimensional display in nuclear medicine," IEEE Trans. Med. Imaging 8(4), 297–303 (1989). https://doi.org/10.1109/42.41482
22. J. Choi et al., "Label-free three-dimensional analyses of live cells with deep-learning-based segmentation exploiting refractive index distributions," (2021).
23. B. van Giffen, D. Herhausen, and T. Fahse, "Overcoming the pitfalls and perils of algorithms: a classification of machine learning biases and mitigation methods," J. Bus. Res. 144, 93–106 (2022). https://doi.org/10.1016/j.jbusres.2022.01.076
24. C. Rudin, "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead," Nat. Mach. Intell. 1(5), 206–215 (2019). https://doi.org/10.1038/s42256-019-0048-x
25. X. Chen et al., "Artificial confocal microscopy for deep label-free imaging," Nat. Photonics 17(3), 250–258 (2023). https://doi.org/10.1038/s41566-022-01140-6
26. H. Rezatofighi et al., "Generalized intersection over union: a metric and a loss for bounding box regression," in IEEE/CVF Conf. Comput. Vision and Pattern Recognit. (CVPR), 658–666 (2019).
27. H. Abu Alhaija et al., "Augmented reality meets computer vision: efficient data generation for urban driving scenes," Int. J. Comput. Vision 126(9), 961–972 (2018). https://doi.org/10.1007/s11263-018-1070-x
28. P. L. Jeune and A. Mokraoui, "Rethinking intersection over union for small object detection in few-shot regime," (2023).
29. A. Carass et al., "Evaluating white matter lesion segmentations with refined Sørensen-Dice analysis," Sci. Rep. 10(1), 8242 (2020). https://doi.org/10.1038/s41598-020-64803-w
30. A. P. Zijdenbos et al., "Morphometric analysis of white matter lesions in MR images: method and validation," IEEE Trans. Med. Imaging 13(4), 716–724 (1994). https://doi.org/10.1109/42.363096
31. P. Cignoni, C. Rocchini, and R. Scopigno, "Metro: measuring error on simplified surfaces," Computer Graphics Forum 17(2), 167–174 (1998). https://doi.org/10.1111/1467-8659.00236
32. BioRender, "BioRender: scientific image and illustration software," https://www.biorender.com