Impact of Concatenation of Digital Craniocaudal Mammography Images on a Deep-Learning Breast-Density Classifier Using Inception-V3 and ViT
- Resource Type
- Conference
- Authors
- Testagrose, Conrad; Gupta, Vikash; Erdal, Barbaros S.; White, Richard D.; Maxwell, Robert W.; Liu, Xudong; Kahanda, Indika; Elfayoumy, Sherif; Klostermeyer, William; Demirer, Mutlu
- Source
- 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 3399-3406, Dec. 2022
- Subject
- Bioengineering
- Computing and Processing
- Signal Processing and Analysis
- Deep learning
- Measurement
- Receivers
- Transformers
- Mammography
- Breast cancer
- Data systems
- Breast Density
- Breast Imaging
- Radiology
- Deep Learning
- Vision Transformer
- Abstract
Breast density is an indicator of a patient’s predisposed risk of breast cancer. Although the underlying mechanism is not fully understood, increased breast density is associated with an increased likelihood of developing breast cancer. Accurately assessing breast density from mammogram images is a challenging task for radiologists. A patient’s breast density is assigned to one of four categories outlined by the Breast Imaging Reporting and Data System (BI-RADS). There have been efforts to develop automated approaches that assist radiologists in classifying a patient’s breast density, and interest in using deep learning for this purpose has increased significantly in recent years. The preprocessing techniques used to develop these deep-learning approaches often have a profound impact on a model’s accuracy and clinical viability. In this paper, we outline a novel image preprocessing technique in which we concatenate individual mammogram images, and we compare the results obtained with this technique between Inception-v3 and a vision transformer (ViT). The results are compared using the area under the receiver operating characteristic (ROC) curve (AUC) and traditional accuracy metrics.
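As a minimal sketch of the concatenation preprocessing idea the abstract describes, the snippet below joins two craniocaudal-view images side by side after rescaling them to a common height. All specifics here are assumptions: the function name is hypothetical, the 299-pixel target height is chosen only because it matches Inception-v3's expected input size, and the paper's actual resizing, ordering, and normalization choices are not given in the abstract.

```python
import numpy as np

def concatenate_cc_views(left_cc, right_cc, target_height=299):
    """Rescale both CC-view images to a common height and join them
    side by side into one input image.

    Uses naive nearest-neighbor sampling to avoid external image
    libraries; 299 px is an assumption based on Inception-v3's input
    size, not a parameter reported in the abstract."""
    def resize_to_height(img, h):
        scale = h / img.shape[0]
        w = max(1, int(round(img.shape[1] * scale)))
        # Map each output row/column back to its nearest source index.
        rows = (np.arange(h) / scale).astype(int).clip(0, img.shape[0] - 1)
        cols = (np.arange(w) / scale).astype(int).clip(0, img.shape[1] - 1)
        return img[rows][:, cols]

    a = resize_to_height(left_cc, target_height)
    b = resize_to_height(right_cc, target_height)
    return np.hstack([a, b])

# Toy example: two synthetic grayscale "mammograms" of different sizes.
left = np.random.rand(512, 400)
right = np.random.rand(600, 420)
combined = concatenate_cc_views(left, right)
print(combined.shape)  # (299, 443): widths scale to 234 and 209 px
```

The combined array can then be replicated across three channels and fed to a standard Inception-v3 or ViT input pipeline; that downstream step is omitted here since the abstract does not describe it.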