Presentation + Paper
3 April 2024 Developing an image-domain transformation technique for adapting deep learning algorithms: preliminary work using simulated tomosynthesis of breast patches
Abstract
When image data changes due to routine imaging machine updates, the performance of previously trained deep learning (DL) algorithms can degrade. To mitigate this potential degradation, we introduced an image-domain transfer approach using a conditional generative adversarial network (CGAN) that transforms images acquired after the update to match images acquired before it. Prior studies have proposed domain adaptation (DA) methods that adapt DL algorithms from the old system to the new system. In contrast, we investigated the suitability of a DA method that transfers images from the current system to the image domain of the previous system, so that an existing DL algorithm can be applied to the current system without retraining. We validated our domain transfer approach using 1,000 digital breast tomosynthesis (DBT) patch volumes (500 lesions; 500 normal) with two distinct image qualities under a virtual clinical trial framework. We curated two DBT image sets of breast patches with two distinct image qualities using different image reconstruction settings, simulating two different systems (previous and current). We then divided the data into training, validation, and test sets with a ratio of 0.8:0.1:0.1. Using the training set, we developed two CGANs (one for normal and one for lesion patches) for image-domain transformation from the current to the previous system. We fine-tuned a DenseNet121 network as a reference classifier for distinguishing lesion from normal DBT patch volumes using the training set from the previous system. We evaluated our domain-transfer method by testing the reference model on test sets of three qualities: a) previous (SP), b) current (SC), and c) domain-transferred (DTC2P) images. The reference model, which achieved an AUC of 1.0 on the previous images, degraded to an AUC of 0.88 on the current images (SC vs. SP: p < 0.005), but its performance was restored to an AUC of 0.97 on the domain-transferred images (SC vs. DTC2P: p < 0.005).
This result demonstrates that our domain transfer approach effectively restores the reference model's original performance on images of the current quality.
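The evaluation protocol above can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: it shows the 0.8:0.1:0.1 case split applied to the 1,000 patch volumes and an empirical AUC (rank-based, equivalent to the Mann-Whitney statistic) for comparing classifier scores across the three test conditions. The function names and the use of a seeded shuffle are assumptions for illustration.

```python
import random

def split_dataset(ids, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle case IDs and split into train/val/test sets at the paper's 0.8:0.1:0.1 ratio."""
    rng = random.Random(seed)
    ids = list(ids)
    rng.shuffle(ids)
    n_train = int(ratios[0] * len(ids))
    n_val = int(ratios[1] * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

def empirical_auc(lesion_scores, normal_scores):
    """Empirical AUC: probability that a lesion score outranks a normal score (ties count 0.5)."""
    wins = sum((p > q) + 0.5 * (p == q) for p in lesion_scores for q in normal_scores)
    return wins / (len(lesion_scores) * len(normal_scores))

# 1,000 patch volumes -> 800 training, 100 validation, 100 test cases.
train, val, test = split_dataset(range(1000))

# The reference classifier would be scored on three versions of the test set:
# SP (previous quality), SC (current quality), and DTC2P (CGAN-transferred).
# empirical_auc(scores_on_lesions, scores_on_normals) gives the AUC per condition.
```

In the paper's setting, this AUC comparison (1.0 on SP, 0.88 on SC, 0.97 on DTC2P) is what quantifies both the degradation from the system update and the recovery achieved by the domain transfer.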
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Md Belayat Hossain, Bruno Barufaldi, Andrew D. A. Maidment, Robert M. Nishikawa, and Juhun Lee "Developing an image-domain transformation technique for adapting deep learning algorithms: preliminary work using simulated tomosynthesis of breast patches", Proc. SPIE 12927, Medical Imaging 2024: Computer-Aided Diagnosis, 129271O (3 April 2024); https://doi.org/10.1117/12.3007467
KEYWORDS: Digital breast tomosynthesis, Image quality, Breast, Artificial intelligence, Algorithm development, Deep learning, Imaging systems
