Abstract
Multimodal medical information fusion has emerged as a promising approach in intelligent healthcare, enabling a more complete view of patient well-being and tailored treatment strategies. However, existing approaches often produce erroneous findings and struggle with early-stage brain tumour prediction in MRI images. In healthcare, accurate and reliable classification of brain images is essential for diagnosis and strategic decision-making. At present, the semantic gap is the main obstacle in brain tumour image classification: traditional machine learning models rely on handcrafted, low-level features and computationally intensive feature-extraction and classification pipelines, which struggle to capture high-level semantics. In recent years, deep learning has brought substantial improvements to automated image classification; Recurrent Neural Networks (RNNs) and deep Convolutional Neural Networks (CNNs) have been particularly effective for multimodal image classification. Hence, this paper presents the Multimodal Fusion Model-assisted Convolutional Neural Network and Recurrent Neural Network (MFM-CNN-RNN) for automatic image classification in smart healthcare. The aim of this study is to determine whether a fused CT and MRI brain scan is normal or abnormal. To improve the accuracy of brain tumour image classification, the method exploits multimodality information within CNNs and RNNs by extracting and fusing distinctive and complementary features from the different modalities. Within this framework, a CNN extracts the features, while an RNN models the dependencies among them and performs the classification. By design, the LSTM excels at sequential processing, making it well suited to analysing the ordered feature representations in this framework.
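The fusion pipeline described above (per-modality CNN feature extraction, fusion, then LSTM-based classification into normal/abnormal) can be sketched as follows. This is a minimal illustrative PyTorch implementation, not the authors' actual model: the layer sizes, the projection dimension, and the choice to feed the two modality features to the LSTM as a length-2 sequence are all assumptions made for the sketch, since the abstract does not specify the architecture.

```python
import torch
import torch.nn as nn

class MFMCNNRNN(nn.Module):
    """Hypothetical sketch of the MFM-CNN-RNN idea: one small CNN per
    modality (CT and MRI), a shared projection that fuses both feature
    maps into a short sequence, and an LSTM that models dependencies
    and classifies the scan as normal or abnormal."""

    def __init__(self, num_classes=2, feat_dim=64):
        super().__init__()

        def make_cnn():
            # Tiny CNN feature extractor; depths are illustrative only.
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )

        self.ct_cnn = make_cnn()
        self.mri_cnn = make_cnn()
        self.proj = nn.Linear(32 * 4 * 4, feat_dim)
        # The LSTM consumes the two modality features as a sequence,
        # capturing cross-modality dependencies before classification.
        self.lstm = nn.LSTM(feat_dim, 32, batch_first=True)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, ct, mri):
        f_ct = self.proj(self.ct_cnn(ct).flatten(1))
        f_mri = self.proj(self.mri_cnn(mri).flatten(1))
        seq = torch.stack([f_ct, f_mri], dim=1)   # (batch, 2, feat_dim)
        out, _ = self.lstm(seq)
        return self.fc(out[:, -1])                # logits: normal vs abnormal

model = MFMCNNRNN()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```

In this sketch the CNN plays the role the abstract assigns to feature retrieval, while the LSTM handles dependency modelling and the final decision; any real reproduction would need the paper's actual layer configuration and training details.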
