COVID-19 Detection through Transfer Learning Using Multimodal Imaging Data

Published Version (1MB)
Available under license: Creative Commons Attribution

Horry, Michael J ORCID: 0000-0001-6691-9739, Chakraborty, Subrata ORCID: 0000-0002-0102-5424, Paul, Manoranjan ORCID: 0000-0001-6870-5056, Ulhaq, Anwaar ORCID: 0000-0002-5145-7276, Pradhan, Biswajeet ORCID: 0000-0001-9863-2054, Saha, Manas ORCID: 0000-0003-1677-9867 and Shukla, Nagesh ORCID: 0000-0002-8421-3972 (2020) COVID-19 Detection through Transfer Learning Using Multimodal Imaging Data. IEEE Access, 8. ISSN 2169-3536

Abstract

Detecting COVID-19 early may help in devising an appropriate treatment plan and disease containment decisions. In this study, we demonstrate how transfer learning from deep learning models can be used to perform COVID-19 detection using images from the three most commonly used medical imaging modes: X-ray, ultrasound, and CT scan. The aim is to provide over-stressed medical professionals a second pair of eyes through intelligent deep learning image classification models. We identify a suitable Convolutional Neural Network (CNN) model through an initial comparative study of several popular CNN models. We then optimize the selected VGG19 model for each image modality to show how the models can be used with the highly scarce and challenging COVID-19 datasets. We highlight the challenges (including dataset size and quality) in utilizing currently available public COVID-19 datasets for developing useful deep learning models, and how these challenges adversely impact the trainability of complex models. We also propose an image pre-processing stage to create a trustworthy image dataset for developing and testing the deep learning models. This new approach aims to reduce unwanted noise in the images so that the deep learning models can focus on detecting diseases from their specific features. Our results indicate that ultrasound images provide superior detection accuracy compared to X-ray and CT scans. The experimental results highlight that, with limited data, most of the deeper networks struggle to train well and provide less consistency across the three imaging modes. The selected VGG19 model, extensively tuned with appropriate parameters, performs at a considerable level of COVID-19 detection against pneumonia or normal cases for all three lung image modes, with a precision of up to 86% for X-ray, 100% for ultrasound, and 84% for CT scans.


Item type Article
URI https://vuir.vu.edu.au/id/eprint/42557
DOI 10.1109/ACCESS.2020.3016780
Official URL https://ieeexplore.ieee.org/document/9167243
Subjects Current > FOR (2020) Classification > 4603 Computer vision and multimedia computation
Current > Division/Research > College of Science and Engineering
Keywords COVID-19, coronavirus, transfer learning, multimodal imaging, lung, health
Citations in Scopus 274
