CellSNAP: a fast, accurate algorithm for 3D cell segmentation in quantitative phase imaging
Piyush Raj, Santosh Kumar Paidi, Lauren Conway, Arnab Chatterjee, Ishan Barman
Abstract

Significance

Three-dimensional quantitative phase imaging (QPI) has rapidly emerged as a complementary tool to fluorescence imaging, as it provides an objective measure of cell morphology and dynamics, free of variability due to contrast agents. It has opened up new directions of investigation by providing systematic and correlative analysis of various cellular parameters without the limitations of photobleaching and phototoxicity. While current QPI systems allow the rapid acquisition of tomographic images, the pipeline to analyze these raw three-dimensional (3D) tomograms is not well developed. We focus on a critical, yet often underappreciated, step of the analysis pipeline: the 3D segmentation of cells from the acquired tomograms.

Aim

We report the CellSNAP (Cell Segmentation via Novel Algorithm for Phase Imaging) algorithm for the 3D segmentation of QPI images.

Approach

The cell segmentation algorithm mimics the gemstone extraction process, beginning with a coarse 3D extrusion from a two-dimensional (2D) segmented mask to outline the cell structure. A 2D image is generated via maximum intensity projection, and a segmentation algorithm identifies the cell boundary in the x-y plane. Leveraging cell continuity across consecutive z-stacks, a refined 3D segmentation, akin to fine chiseling in gemstone carving, completes the process.

Results

The CellSNAP algorithm outstrips the current gold standard in speed, robustness, and ease of implementation, achieving cell segmentation in under 2 s per cell on a single-core processor. The implementation of CellSNAP can easily be parallelized on a multi-core system for further speed improvements. For the cases where segmentation is possible with the existing standard method, our algorithm displays an average difference of 5% for dry mass and 8% for volume measurements. We also show that CellSNAP can handle challenging image datasets where cells are clumped and marred by interferogram drifts, which pose major difficulties for all QPI-focused AI-based segmentation tools.

Conclusion

Our proposed method is less memory-intensive and significantly faster than existing methods, and it can easily be run on a student laptop. Because the approach is rule-based, there is no need to collect large volumes of imaging data and manually annotate them to train a machine learning model. We envision our work will lead to broader adoption of QPI imaging for high-throughput analysis, which has, in part, been stymied by a lack of suitable image segmentation tools.

1. Introduction

Regulation and coordination of cell shapes are central to native physiology during all stages of organismal existence. Recent advances in optical imaging have provided mechanistic insights into such phenomena by revealing cellular features and processes in previously unimagined detail.1–3 Central to the accurate analysis of such intricate biological processes is the precise segmentation of cellular images. Quantifying cellular morphology, such as shape, area, circularity, aspect ratio, and others, starts with first segmenting the cells in a given field of view. Due to its unquestionable significance, much work has been done to standardize the process. There are well-developed open-source software suites, notably CellProfiler4 and Cellpose,5 that perform such segmentation tasks with great accuracy. The recent update to CellProfiler includes the functionality of three-dimensional (3D) image segmentation and is currently the most widely used tool to perform such tasks. However, since the current workhorse for biological imaging is fluorescence microscopy, all standard segmentation software packages are optimized for and targeted to the analysis of fluorescence images.

Yet, contrast-agent-free microscopy is highly desirable to study the dynamics and physiological activity of various structures in living cells. Quantitative phase imaging (QPI) measures optical field images using laser-based interferometry and has rapidly emerged as a viable imaging alternative because it offers an objective measure of morphology and dynamics in a label-free manner.1 In addition to the amplitude images provided by conventional intensity-based microscopy techniques, QPI measures optical phase delay maps governed by the refractive index (RI) distribution of a sample. Since the endogenous RI distribution is strongly related to the structural and biochemical characteristics of the cell type, the acquired field images can be analyzed for systematic discovery of cell type-specific morphological and biophysical fingerprints encoded in the images.
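For concreteness, the standard QPI relation linking the measured phase delay to the RI distribution can be written as follows; this is a textbook identity added here for reference (with λ the illumination wavelength and h the sample thickness), not a formula stated explicitly by the authors:

$$\phi(x, y) = \frac{2\pi}{\lambda} \int_0^h \left[ n(x, y, z) - n_{\mathrm{medium}} \right] dz$$

where n(x, y, z) is the sample RI and n_medium is the RI of the surrounding medium; this is consistent with the optical path difference used for dry mass estimation in Sec. 2.3.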

Over the past two decades, QPI has provided important insights into diverse biological phenomena, ranging from membrane dynamics of red blood cells6 to neuronal activity,7 cell-nanoparticle interactions,8,9 and cell-drug interactions.10 Recently, it has also been shown that QPI images can be mapped to fluorescence images using deep learning techniques, a concept coined image-to-image translation. Prediction of stains (i.e., where specific fluorophores/stains would bind in an unlabeled specimen) using a combination of QPI and machine learning has been successfully demonstrated,11–13 and gradually, more stains are being added to the library. Indeed, phase imaging with computational specificity has allowed precise measurements of the growth of nuclei and cytoplasm independently, over many days, without loss of viability.

Central to many of the aforementioned and other emergent applications is QPI’s intrinsic ability to measure single-cell volume and mass non-destructively and ultra-sensitively over arbitrary periods of time in both adherent and flowing cell populations.1 A critical step in undertaking such analysis is the accurate segmentation of the tomographic images of the cell populations. Since QPI is still a relatively new technique in the field of cell biology, the analysis pipeline is not as developed as that for fluorescence images. The toolbox developed for fluorescence image segmentation does not work well with QPI images, as fluorescence contrast is much sharper than RI contrast. Also, in some segmentation procedures, the stained nucleus is used as a fiducial marker to define the respective cytoplasm boundary; consequently, such algorithms cannot be directly implemented for QPI. This has motivated researchers to develop segmentation algorithms tailored for QPI images,14 yet their applicability has thus far been limited to two-dimensional images.

The state-of-the-art method for 3D QPI cell segmentation is an Otsu-based 3D watershed algorithm15 (hereafter referred to as the Otsu threshold algorithm in this work). This algorithm works very well for isolated cell images; however, it struggles to draw boundaries when cells are clumped. The process is also memory intensive, since it requires computation on a full 3D stack of images. As a point of reference, using the current state-of-the-art software module, this cell segmentation takes ca. 10 s for a 3D stack of images (484×484×208) on a workstation equipped with an 8th-generation i7 processor running at 3.7 GHz, 64 GB RAM, and an Nvidia GeForce GTX 1080 (8 GB) graphics card. Another recently developed approach to 3D QPI cell segmentation is an AI-based segmentation tool.16 While this should, in principle, be able to overcome the limitations of the Otsu threshold algorithm, such an implementation would require an extensive training dataset with enough complexity to handle all the edge cases. The preparation of such training datasets would also require manual annotation, besides needing separate datasets for different cell types. While a trained model can be run on a modest system with a GPU, training an AI-based segmentation tool would necessitate state-of-the-art computational resources.
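For orientation only, a generic Otsu-threshold-plus-3D-watershed pipeline of the kind referenced above can be sketched in a few lines of MATLAB; this reflects the general recipe, not the vendor's actual implementation, and stack is an illustrative variable holding the RI tomogram:

    % Generic Otsu threshold + 3D watershed baseline (illustrative sketch).
    vol = rescale(double(stack));              % normalize RI tomogram to [0, 1]
    bw  = imbinarize(vol, graythresh(vol(:))); % global Otsu threshold -> 3D mask
    D   = -bwdist(~bw);                        % negated distance transform
    D(~bw) = -Inf;                             % background = one deep basin
    L   = watershed(D);                        % 3D watershed labeling
    L(~bw) = 0;                                % one label per putative cell

Because the watershed operates on the full 3D distance transform, this baseline must hold the whole stack in memory, which is the memory cost noted above.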

To overcome these drawbacks, we report here a fast and lightweight algorithm for 3D QPI image-based cell segmentation. Our cell segmentation algorithm is inspired by the process of gemstone extraction, where a stonemason starts by making a coarse extrusion cut along a given axis and then gradually carves away near the surface of the gemstone to reveal the final form [Fig. 1(a)]. The algorithm first extrudes a 3D image from a two-dimensional (2D) segmented mask, which can be seen as creating a rough shape of the cell structure. A 2D image is generated through maximum intensity projection (MIP) of the 3D image, and a 2D cell segmentation algorithm is used to make a 2D mask. This allows us to identify the boundary region for segmentation in the x-y plane. Next, we take advantage of the continuity of cells and the absence of abrupt changes between consecutive z-slices of the 3D image to perform the complete 3D segmentation. This step can be likened to the chiseling process in gemstone extraction, where the stonemason uses finer tools to carve away at the surface. The implications of such a method are vast, as it can speed up high-throughput single-cell analysis and reduce the computational burden on the hardware. It also lowers the barrier to the adoption of QPI in the biological sciences. Growing advancements in hardware for high-throughput QPI17,18 further extend the utility of this method.

2. Materials and Methods

2.1. Cell Culture Method

U937 cells were seeded in TomoDishes at about 5×10⁴ cells per dish with complete media and PMA (Sigma-Aldrich, 79346, 1 μM). The complete media consisted of RPMI-1640 (Gibco, 11875-093), 10% heat-inactivated FBS (Corning, 35-010-CV), 1% P/S, 1% L-Glu (Gibco, 200 mM, 25030081), and 1% HEPES (Gibco, 1 M, 15630-080). The seeding day is counted as day 0.

For M1-polarized cells, on day 3, the PMA-containing media was removed, and the cells were washed with PBS. Thereafter, polarizing media consisting of complete media with LPS (List Biological Laboratories, 421, 100 ng/mL) and IFN-γ (Sigma-Aldrich, SRP3093, 50 ng/mL) was added to the dish. The cells were incubated for another 2 days. On day 5, the media was removed, and the cells were washed with PBS before fixing.

For M2-polarized cells, on day 3, the PMA-containing media was removed, and the cells were washed with PBS. Thereafter, polarizing media consisting of complete media with IL-4 (Sigma-Aldrich, SRP3093, 50 ng/mL) and IL-13 (R&D Systems, 213-ILB-005, 50 ng/mL) was added to the dish. The cells were incubated for another 2 days. On day 5, the media was removed, and the cells were washed with PBS before fixing.

The cells were fixed with 4% PFA at room temperature for 15 min and then washed twice with PBS. PBS was once again added to fully submerge the fixed cells before imaging.

2.2. Imaging Method

The measurements were performed on a QPI system (HT-1H, Tomocube Inc., Republic of Korea) comprising a 60× water-immersion objective (1.2 NA), an off-axis Mach–Zehnder interferometer with a 532 nm laser, and a digital micromirror device for tomographic scanning of each cell.19 The 3D RI distribution of the cells was reconstructed from the interferograms using the Fourier diffraction theorem, as described previously.20 TomoStudio (Tomocube Inc., Republic of Korea) was used to reconstruct and visualize the 3D RI maps and their 2D MIPs.

2.3. Design Principle and Workflow

  • Step 1: The first stage involves gathering and exporting all unprocessed 3D image stacks from the imaging software. Our code is tailored to operate on a TIFF stack whose matrix values represent RI, saved in uint16 format. Nevertheless, the algorithm is compatible with any file format, provided the matrix values are a scalar multiple of the RI values.

  • Step 2: We produce an MIP image of the 3D TIFF stack along the z-axis. MIP is a technique for visualizing 3D data along a chosen axis, in which only the voxels with maximum intensity along that axis are projected onto the image. Originally developed for nuclear medicine by Jerold Wallis, it has since been applied to various tomographic imaging modalities, such as CT scans and X-ray imaging.21 Most 3D imaging software offers the option to export 2D MIP images, and standard image-processing libraries can also perform this step.

  • Step 3: In this step, we take advantage of well-established 2D image analysis libraries, which work very well for cell segmentation in 2D images. Specifically, we use CellProfiler4 to perform 2D cell segmentation on the MIP images. The image mask files were saved in a separate folder. While CellProfiler was our tool of choice because of its ease of use and flexibility, any 2D segmentation tool can be used as long as it can generate mask images.

  • Step 4: The 2D mask generated by CellProfiler is then extruded to 3D, such that the number of slices in the mask stack equals the number of z-planes in the original RI image.

  • Step 5: The process begins by selecting the mask of each cell within the field of view individually and setting the matrix value of that mask to 1, while the background and the masks of all other cells are set to 0. The resulting matrix is then multiplied by the original RI image. This process is repeated iteratively for all cells in the given field of view. In addition, all pixels with RI values at or below the background RI value are set to zero. The resulting matrix, referred to as the rough_segmentation_matrix, shows a rough outline of the cell with noisy pixels around it, as shown in Fig. 1.

  • Step 6: In this step, we use the continuity of the 3D cell image and apply 2D and 3D connectivity matrices to eliminate noisy pixels around the cell. First, we take the rough_segmentation_matrix and plot the number of non-zero pixels (normalized) against the z-slice index. We set a hard threshold at the z-slice where the normalized non-zero-pixel count is 0.5. Instead of hard-coding the z-stack number to identify the bottom of the cell, we used a threshold condition because our microscope system has an axial resolution of 300 nm, which is finer than the surface roughness of commonly used petri dishes. Next, for each 2D plane, we label the connected components using the 2D connectivity matrix algorithm, achieved with the bwlabel function in MATLAB. We retain only the label with the highest pixel count in each 2D slice to remove extra noise from cell debris [Fig. 2(a)]. After 2D noise removal, we use the bwlabeln function on the 3D image to eliminate all the noise in the 3D structure, removing all disconnected structures in the process and resulting in cleanly segmented cells [Fig. 2(b)]. A consolidated MATLAB sketch of steps 4 to 7 follows this list.

  • Step 7: The dry mass is calculated using the following equation:

    $$\text{dry mass} = \frac{1}{\alpha} \iint_S \mathrm{OPD}(x, y)\, dx\, dy, \qquad \text{where } \mathrm{OPD}(x, y) = \int_0^h \left[ n(x, y, z) - n_{\mathrm{medium}} \right] dz \text{ and } \alpha = 0.19\ \mu\mathrm{m}^3\, \mathrm{pg}^{-1}$$

Volume is calculated by counting the number of non-zero voxels in the 3D segmentation mask.
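To make the workflow concrete, the following is a minimal MATLAB sketch of steps 4 to 7, built around the bwlabel/bwlabeln functions named above. Variable names, the RI scaling, and the handling of edge cases are illustrative assumptions, not the authors' released implementation (see the GitHub repository for the actual code):

    % Illustrative sketch of CellSNAP steps 4-7 (hypothetical variable names).
    % Assumes: stack  - 3D double array of RI values (for steps 1-2, the MIP
    %                   fed to CellProfiler is simply  mip = max(stack, [], 3);)
    %          mask2d - 2D labeled mask from CellProfiler (0 = background)
    %          n_bg   - background (medium) refractive index
    %          dx, dy, dz - voxel dimensions in micrometers
    alpha = 0.19;                                % RI increment, um^3/pg
    nz = size(stack, 3);
    for id = unique(mask2d(mask2d > 0))'         % iterate over cell labels
        % Step 4: extrude this cell's 2D mask along z.
        mask3d = repmat(mask2d == id, [1 1 nz]);

        % Step 5: rough segmentation - keep this cell's voxels and zero out
        % everything at or below the background RI.
        rough = stack .* double(mask3d);
        rough(rough <= n_bg) = 0;

        % Step 6: find the cell bottom where the normalized non-zero pixel
        % count per slice crosses 0.5, and discard slices below it.
        counts = squeeze(sum(sum(rough > 0, 1), 2));
        counts = counts / max(counts);
        zb = find(counts >= 0.5, 1, 'first');
        if isempty(zb), continue; end            % empty mask: skip this label
        rough(:, :, 1:zb-1) = 0;

        % Step 6 (2D connectivity): keep the largest component per slice.
        for k = zb:nz
            L2 = bwlabel(rough(:, :, k) > 0);
            if max(L2(:)) > 1
                areas = regionprops(L2, 'Area');
                [~, big] = max([areas.Area]);
                rough(:, :, k) = rough(:, :, k) .* (L2 == big);
            end
        end

        % Step 6 (3D connectivity): keep the largest connected 3D structure.
        L3 = bwlabeln(rough > 0);
        if max(L3(:)) == 0, continue; end        % nothing left for this label
        vox = regionprops3(L3, 'Volume');
        [~, big3] = max(vox.Volume);
        seg = rough .* (L3 == big3);

        % Step 7: dry mass from the OPD integral and volume by voxel count.
        opd = sum((seg - n_bg) .* (seg > 0), 3) * dz;   % OPD(x,y) in um
        drymass = sum(opd(:)) * dx * dy / alpha;        % pg
        volume  = nnz(seg) * dx * dy * dz;              % um^3
        fprintf('Cell %d: %.1f pg, %.1f um^3\n', id, drymass, volume);
    end

Because each cell is processed independently from its own extruded sub-volume, the per-cell loop is what makes the method easy to parallelize across cores.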

Fig. 1

(a) Our method draws inspiration from gemstone extraction, where a stone block resembles a 3D raw image matrix. Coarse segmentation along the z-plane corresponds to step 4 in (b), whereas fine segmentation aligns with steps 5 and 6 in (b). (b) Step-by-step workflow for cell segmentation using QPI images.


Fig. 2

(a) The segmentation process involves utilizing a 2D connectivity matrix to label segments in each 2D slice, followed by the selection of the segment with the highest pixel count. Small segments, depicted in different colors within each slice, are subsequently discarded. (b) Analogously, the 3D segmentation process involves labeling distinct segments using a 3D connectivity matrix and selecting the segment with the highest voxel count.


3. Results and Discussion

We tested our QPI cell segmentation algorithm against the current gold standard, the Otsu-based 3D watershed algorithm provided by the manufacturer with their microscopy software suite. We also used the recent AI-powered segmentation tool22 for cases where the watershed algorithm did not provide satisfactory segmentation. The AI-powered segmentation tool is currently available for a limited number of cell types, including macrophages. Our comparative study, therefore, uses U937-derived M1 and M2 macrophages. Since label-free volume and dry mass measurements are unique to QPI images, these two metrics act as suitable benchmarks for comparing the performance of our algorithm. Cell experiments were conducted on sparsely populated cell populations in a petri dish, with a confluency of ∼65% [Fig. 5(a)]. This enabled us to obtain high-quality images featuring a single cell within a given field of view. As a result, we could compare our algorithm to the current gold standard with minimal concern for the latter's accuracy, which is plagued by robustness issues when multiple cells are present within a given field of view. We observed a dry mass difference of ∼5% for both M1 and M2 cells [Figs. 5(b) and 5(c)], whereas the difference in volume was slightly higher, with a mean value of 8% for M1 and 6% for M2 cells [Figs. 5(d) and 5(e)]. Since the methods used to perform cell segmentation differed significantly, it is encouraging that the mean differences in dry mass and volume were both <8% [Figs. 5(f) and 5(g)]. In addition, we have included a table (Table 1) comparing the computation time required by each of these algorithms, demonstrating that our algorithm significantly outperforms the current gold standard even on modest computational hardware.

To put our study in the context of the cell segmentation literature, we also compared our method against the absolute gold standard of manual segmentation. Each 3D cell image in our analysis consisted of a high-resolution stack comprising 208 slices. To establish a ground truth dataset, we manually created annotated masks for 10 cells from M1 (2080 images) and 10 cells from M2 (2080 images). Subsequently, we conducted a segmentation accuracy comparison using our CellSNAP algorithm. To ensure thoroughness, we also assessed performance against Cellpose, a popular human-in-the-loop deep learning tool for cell segmentation. The segmentation accuracy was determined by calculating the ratio of correctly classified pixels to the total number of pixels inside the manually segmented cells. We note that CellSNAP maintains an accuracy above 90% for most of the cells. One likely explanation for this high accuracy is that human annotators base their decision to include a particular area as part of the cell on its continuous presence; regions that appear fragmented are typically classified as cell debris. In the CellSNAP algorithm, we employ a connectivity matrix that captures exactly these connectivity patterns within the cell. The performance of the Cellpose algorithm varies, succeeding in certain cases while exhibiting significant failures in others. This discrepancy arises because the network has not been specifically trained on our dataset, and we used the default weights. Although retraining the Cellpose network on our data would likely enhance its performance, this approach proves time-consuming, as each cell image necessitates manual annotation of 208 image slices. In the Supplementary Material, we provide a comparison based on other metrics, namely intersection over union (Fig. S1), the Dice coefficient (Fig. S2), and the max Hausdorff distance (Fig. S3), for the cells shown in Fig. 3.
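As a point of reference, the accuracy definition above and the supplementary overlap measures can be computed in a few lines of MATLAB from a manual mask and an algorithm mask; gt and pred are illustrative names for logical 3D arrays of identical size, not variables from the released code:

    % Voxel-wise comparison of an algorithm mask against manual annotation.
    tp   = nnz(gt & pred);                   % true-positive voxels
    acc  = tp / nnz(gt);                     % accuracy as defined in the text
    iou  = tp / nnz(gt | pred);              % intersection over union (Fig. S1)
    dice = 2 * tp / (nnz(gt) + nnz(pred));   % Dice coefficient (Fig. S2)
    % Max Hausdorff distance in voxel units (Fig. S3) via distance transforms.
    Dgt = bwdist(gt);                        % distance to nearest gt voxel
    Dpr = bwdist(pred);                      % distance to nearest pred voxel
    hd  = max([max(Dgt(pred)), max(Dpr(gt))]);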

Fig. 3

Segmentation accuracy compared against manually annotated and segmented data for (a) M1 cells and (b) M2 cells, for the Cellpose and CellSNAP algorithms.


We embarked on developing a cell segmentation algorithm due to the limitations of the current method in segmenting cells when they are clumped together within a field of view. Moreover, the current method’s performance is hampered in suboptimal imaging conditions. Given that QPI imaging is widely used for longitudinal studies, temperature drifts, stage drifts, and calibration drifts are inevitable, rendering the current segmentation algorithms unsuitable for such scenarios, with resulting imaging data being either unusable or requiring extensive manual processing. In this regard, we present two instances where our algorithm’s robustness is evident. In cases where multiple cells are clumped together and share boundaries, the existing Otsu thresholding fails to distinguish between the cells, resulting in a single unit being segmented [Fig. 4(a)]. Although the AI segmentation tool offers some promise, as evidenced by the accurate segmentation of one cell in Fig. 4(a), it fails to accurately segment the remaining cells. By contrast, our segmentation algorithm accurately segments the cells and outperforms existing methods in terms of computational speed. The segmentation accuracy for each image slice can be viewed in Video 1.

Fig. 4

The efficacy of our algorithm in handling challenging scenarios encountered during QPI imaging. (a) Multiple cells in proximity pose a segmentation challenge for conventional methods, such as Otsu threshold and AI-based segmentation. However, our algorithm can accurately segment individual cells even in such a clumped condition. (b) In suboptimal imaging conditions, interferogram noise impedes accurate segmentation by traditional techniques. In contrast, our algorithm is resilient to such noise and can accurately segment cells (Video 1, MP4, 17.6 MB [URL: https://doi.org/10.1117/1.JBO.29.S2.S22706.s1]).


Fig. 5

(a) Cell experiments with sparsely populated cells in the petri dish and a sample cell image obtained using the QPI microscope. (b) Comparison of dry mass values for M1 cells. (c) Comparison of dry mass values for M2 cells. (d) Comparison of volume for M1 cells. (e) Comparison of volume for M2 cells. For a statistical comparison between the two algorithms, the difference in the percentage of dry mass and volume was calculated for (f) M1 and (g) M2 cells with the current gold standard and our proposed algorithm.


Table 1

Speed comparison for different methods.

Method name             | Computation time                      | Total time                                     | System configuration
Watershed algorithm     | File loading: 3 s; processing: 7 s    | 10 s                                           | i7-8700K CPU @ 3.7 GHz, 64 GB RAM, Nvidia GeForce GTX 1080 (8 GB)
AI-enabled segmentation | Running time after training: 6 s      | 6 s                                            | i7-8700K CPU @ 3.7 GHz, 64 GB RAM, Nvidia GeForce GTX 1080 (8 GB)
Cellpose                | (not reported)                        | 350 s                                          | i5-7200 @ 2.7 GHz, 16 GB RAM
CellSNAP                | 2D segmentation: 1 s; remaining: 1 s  | 2 s per cell on a single core (parallelizable) | i5-7200 @ 2.7 GHz, 16 GB RAM

In another example, the captured image has interferogram patterns superimposed on it. This typically arises from calibration drift when performing time-lapse imaging over extended periods. Removing this error with just an Otsu threshold proves challenging, given the similarity between the pixel values in the interferogram shadows and those in the cell. A pattern-recognition conditional statement could be used alongside the Otsu threshold to remove this error, but we are not aware of any implementation of such an algorithm. While AI segmentation tools can be trained with datasets featuring interferogram artifacts, such implementations are computationally intensive and demanding. Moreover, such errors are prevalent for tools based on trained models when presented with unique cases.23 Introducing enough variations in the training dataset and retraining the algorithm can overcome this, but the process has limitations, as machine learning models seldom reveal how they arrive at their outputs.24 Debugging is also difficult if retraining the model affects performance on other cases. Nevertheless, CellSNAP offers the possibility of segmenting such cell images as well [Fig. 4(b)].

While promising, CellSNAP’s straightforward approach also brings about certain limitations. One notable limitation is its inability to segment multiple cells along the z-axis, as the algorithm assumes the presence of only one cell in the z-stack. This restriction renders the algorithm unsuitable for scenarios where multiple cells are clustered along the z-axis, a situation commonly found in spheroids and organoids. To the best of our knowledge, the only way to perform 3D segmentation in spheroids or organoids with quantitative phase images is via deep learning,25 where the training dataset comprises paired QPI and fluorescence images. The QPI images are translated to fluorescence images through a U-Net architecture, which is then used to segment cells, skipping the need for manual segmentation of QPI images. Another limitation arises in cases of high clustering density, where the algorithm struggles to segment cells due to its assumption of a single cell along the extruded MIP volume. In instances of high-density clustering with interleaved cell structures along the z-stack, the segmentation becomes challenging. The third scenario where the algorithm faces challenges is when the continuity of the dry components of cells is disrupted. Certain cell types, during differentiation or specific functional states, may experience water uptake, resulting in a significant increase in volume. In QPI images, the dark appearance of the water background leads to a discontinuous distribution of dry components, causing CellSNAP to fail to appropriately segment cells in such cases.

4. Conclusion

QPI is a powerful imaging tool that is making major strides and has found numerous applications in basic biological studies and applied clinical research. Because QPI uses the optical path length as intrinsic contrast, the imaging is noninvasive and thereby allows live cell samples to be monitored over several days without concerns of degraded viability. Significant recent attention has therefore been focused on developing robust analysis pipelines for quantitative phase images, including the application of convolutional neural networks for computationally substituting chemical stains, extracting biomarkers of interest, and enhancing imaging quality. Yet, 3D segmentation of cells, particularly clumped cells, presents a major challenge, as existing methods work well only for isolated cells. In this work, we have shown that our cell segmentation algorithm for QPI images outperforms the existing gold standard in both speed and robustness. Our algorithm takes about 2 s per cell on a single-core processor to perform the segmentation and can easily be parallelized on a multi-core system for a further improvement in speed. For the cases where segmentation is possible with the existing standard method, our algorithm shows a mean difference of 5% for dry mass and 8% for volume measurements. Further morphological analysis, such as determination of surface area, aspect ratio, circularity, and others, can be performed with standard functions on the segmented 3D mask images generated by our algorithm. Therefore, this work can lead to wider adoption of QPI for high-throughput analysis, which was earlier stymied by a lack of suitable cell segmentation tools, and lower the barrier to the adoption of the QPI modality in the biological sciences.

Disclosures

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article.

Code and Data Availability

The code for analysis, with a sample dataset, can be found at https://github.com/Lconway4C/QPI-cell-segmentation.git.

An additional dataset containing RI tomogram TIFF files can be found at https://doi.org/10.6084/m9.figshare.23547087.

Acknowledgments

Figures 1 and 5 were partially created with BioRender (Ref. 32). We would like to thank Professor Denis Wirtz and his students Haonan Xu and Bartholomew Starich for their help with macrophage cell culture. We acknowledge support from the Air Force Office of Scientific Research (FA9550-22-1-0334), the National Institute of General Medical Sciences (1R35GM149272), and the National Cancer Institute (R01-CA238025).

References

1. Y. Park, C. Depeursinge, and G. Popescu, "Quantitative phase imaging in biomedicine," Nat. Photonics 12(10), 578–589 (2018). https://doi.org/10.1038/s41566-018-0253-x

2. G. Zheng, R. Horstmeyer, and C. Yang, "Wide-field, high-resolution Fourier ptychographic microscopy," Nat. Photonics 7(9), 739–745 (2013). https://doi.org/10.1038/nphoton.2013.187

3. S. Jiang et al., "Resolution-enhanced parallel coded ptychography for high-throughput optical imaging," ACS Photonics 8(11), 3261–3271 (2021). https://doi.org/10.1021/acsphotonics.1c01085

4. D. R. Stirling et al., "CellProfiler 4: improvements in speed, utility and usability," BMC Bioinformatics 22(1), 433 (2021). https://doi.org/10.1186/s12859-021-04344-9

5. C. Stringer et al., "Cellpose: a generalist algorithm for cellular segmentation," Nat. Methods 18(1), 100–106 (2021). https://doi.org/10.1038/s41592-020-01018-x

6. G. Popescu et al., "Imaging red blood cell dynamics by quantitative phase microscopy," Blood Cells Mol. Dis. 41(1), 10–16 (2008). https://doi.org/10.1016/j.bcmd.2008.01.010

7. P. Marquet, C. Depeursinge, and P. J. Magistretti, "Review of quantitative phase-digital holographic microscopy: promising novel imaging technique to resolve neuronal network activity and identify cellular biomarkers of psychiatric disorders," Neurophotonics 1(2), 020901 (2014). https://doi.org/10.1117/1.NPh.1.2.020901

8. N. A. Turko et al., "Detection and controlled depletion of cancer cells using photothermal phase microscopy," J. Biophotonics 8(9), 755–763 (2015). https://doi.org/10.1002/jbio.201400095

9. K. Eder et al., "Morphological alterations in primary hepatocytes upon nanomaterial incubation assessed by digital holographic microscopy and holotomography," Proc. SPIE 11970, 119700H (2022). https://doi.org/10.1117/12.2610171

10. T. Srichana et al., "Flow cytometric analysis, confocal laser scanning microscopic, and holotomographic imaging demonstrate potentials of levofloxacin dry powder aerosols for TB treatment," J. Drug Deliv. Sci. Technol. 84, 104464 (2023). https://doi.org/10.1016/j.jddst.2023.104464

11. M. E. Kandel et al., "Phase imaging with computational specificity (PICS) for measuring dry mass changes in sub-cellular compartments," Nat. Commun. 11(1), 6256 (2020). https://doi.org/10.1038/s41467-020-20062-x

12. Y. R. He et al., "Cell cycle stage classification using phase imaging with computational specificity," ACS Photonics 9(4), 1264–1273 (2022). https://doi.org/10.1021/acsphotonics.1c01779

13. C. Hu et al., "Live-dead assay on unlabeled cells using phase imaging with computational specificity," Nat. Commun. 13(1), 713 (2022). https://doi.org/10.1038/s41467-022-28214-x

14. N. O. Loewke et al., "Automated cell segmentation for quantitative phase microscopy," IEEE Trans. Med. Imaging 37(4), 929–940 (2018). https://doi.org/10.1109/TMI.2017.2775604

15. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979). https://doi.org/10.1109/TSMC.1979.4310076

16. Y. Jo et al., "Quantitative phase imaging and artificial intelligence: a review," IEEE J. Sel. Top. Quantum Electron. 25(1), 6800914 (2019). https://doi.org/10.1109/JSTQE.2018.2859234

17. B. Ge et al., "Single-frame label-free cell tomography at speed of more than 10,000 volumes per second" (2022).

18. C. Zheng et al., "High spatial and temporal resolution synthetic aperture phase microscopy," Adv. Photonics 2(6), 065002 (2020). https://doi.org/10.1117/1.AP.2.6.065002

19. S. Shin et al., "Active illumination using a digital micromirror device for quantitative phase imaging," Opt. Lett. 40(22), 5407 (2015). https://doi.org/10.1364/OL.40.005407

20. K. Kim et al., "Diffraction optical tomography using a quantitative phase imaging unit," Opt. Lett. 39(24), 6935 (2014). https://doi.org/10.1364/OL.39.006935

21. J. W. Wallis et al., "Three-dimensional display in nuclear medicine," IEEE Trans. Med. Imaging 8(4), 297–303 (1989). https://doi.org/10.1109/42.41482

22. J. Choi et al., "Label-free three-dimensional analyses of live cells with deep-learning-based segmentation exploiting refractive index distributions" (2021).

23. B. van Giffen, D. Herhausen, and T. Fahse, "Overcoming the pitfalls and perils of algorithms: a classification of machine learning biases and mitigation methods," J. Bus. Res. 144, 93–106 (2022). https://doi.org/10.1016/j.jbusres.2022.01.076

24. C. Rudin, "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead," Nat. Mach. Intell. 1(5), 206–215 (2019). https://doi.org/10.1038/s42256-019-0048-x

25. X. Chen et al., "Artificial confocal microscopy for deep label-free imaging," Nat. Photonics 17(3), 250–258 (2023). https://doi.org/10.1038/s41566-022-01140-6

26. H. Rezatofighi et al., "Generalized intersection over union: a metric and a loss for bounding box regression," in IEEE/CVF Conf. Comput. Vision and Pattern Recognit. (CVPR), 658–666 (2019).

27. H. Abu Alhaija et al., "Augmented reality meets computer vision: efficient data generation for urban driving scenes," Int. J. Comput. Vision 126(9), 961–972 (2018). https://doi.org/10.1007/s11263-018-1070-x

28. P. L. Jeune and A. Mokraoui, "Rethinking intersection over union for small object detection in few-shot regime" (2023).

29. A. Carass et al., "Evaluating white matter lesion segmentations with refined Sørensen-Dice analysis," Sci. Rep. 10(1), 8242 (2020). https://doi.org/10.1038/s41598-020-64803-w

30. A. P. Zijdenbos et al., "Morphometric analysis of white matter lesions in MR images: method and validation," IEEE Trans. Med. Imaging 13(4), 716–724 (1994). https://doi.org/10.1109/42.363096

31. P. Cignoni, C. Rocchini, and R. Scopigno, "Metro: measuring error on simplified surfaces," Computer Graphics Forum 17(2), 167–174 (1998). https://doi.org/10.1111/1467-8659.00236

32. BioRender, "BioRender: scientific image and illustration software," https://www.biorender.com


CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Piyush Raj, Santosh Kumar Paidi, Lauren Conway, Arnab Chatterjee, and Ishan Barman "CellSNAP: a fast, accurate algorithm for 3D cell segmentation in quantitative phase imaging," Journal of Biomedical Optics 29(S2), S22706 (18 April 2024). https://doi.org/10.1117/1.JBO.29.S2.S22706
Received: 9 November 2023; Accepted: 28 March 2024; Published: 18 April 2024
KEYWORDS: Image segmentation, 3D image processing, Image processing algorithms and systems, Evolutionary algorithms, Biological imaging, 3D mask effects, Education and training