TY - JOUR
T1 - BraNet: a mobil application for breast image classification based on deep learning algorithms
T2 - Medical and Biological Engineering and Computing
AU - Jiménez Gaona, Yuliana
AU - Álvarez, María José Rodríguez
AU - Castillo Malla, Darwin
AU - García Jaen, Santiago
AU - Carrión Figueroa, Diana
AU - Corral Domínguez, Patricio
AU - Lakshminarayanan, Vasudevan
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/9
Y1 - 2024/9
AB - Mobile health apps are widely used for breast cancer detection using artificial intelligence algorithms, providing radiologists with second opinions and reducing false diagnoses. This study aims to develop an open-source mobile app named “BraNet” for 2D breast imaging segmentation and classification using deep learning algorithms. During the offline phase, an SNGAN model was trained for synthetic image generation, and these images were subsequently used to pre-train the SAM and ResNet18 segmentation and classification models. During the online phase, the BraNet app was developed using the React Native framework, offering a modular deep-learning pipeline for digital mammography (DM) and ultrasound (US) breast image classification. The application operates on a client–server architecture and was implemented in Python for iOS and Android devices. Two diagnostic radiologists were then given a reading test of 290 original ROI images to assign the perceived breast tissue type, and inter-reader agreement was assessed using the kappa coefficient. The BraNet mobile app exhibited its highest accuracy in classifying benign and malignant US images (94.7%/93.6%) compared to DM during training I (80.9%/76.9%) and training II (73.7%/72.3%). These results contrast with the radiologists’ accuracy, which was 29% for DM classification and 70% for US for both readers, indicating higher reader accuracy for US ROIs than for DM images. The kappa values indicate fair agreement (0.3) for DM images and moderate agreement (0.4) for US images for both readers. This suggests that the amount of data is not the only essential factor in training deep learning algorithms; the variety of abnormalities must also be considered, especially in mammography data, where several BI-RADS categories (microcalcifications, nodules, masses, asymmetry, and dense breasts) are present and can affect the accuracy of the model.
KW - Breast cancer
KW - Deep learning
KW - Mammography
KW - Mobil app
KW - Ultrasound
UR - https://www.scopus.com/pages/publications/85192016315
UR - https://link.springer.com/journal/11517
U2 - 10.1007/s11517-024-03084-1
DO - 10.1007/s11517-024-03084-1
M3 - Article
AN - SCOPUS:85192016315
SN - 0140-0118
VL - 62
SP - 2737
EP - 2756
JO - Medical and Biological Engineering and Computing
JF - Medical and Biological Engineering and Computing
IS - 9
ER -