Abstract
Background
Liver cancer remains one of the leading causes of cancer-related death worldwide. Accurate segmentation of liver tumors from computed tomography (CT) images is critical for diagnosis, treatment planning, and follow-up. Conventional segmentation techniques frequently struggle with the complexity of medical images, motivating the use of advanced artificial intelligence (AI) methods to improve accuracy and efficiency.
Objective
The main objective of this study is to develop and evaluate an improved U-Net model (AM-UNet) that incorporates an attention mechanism to enhance the segmentation and classification of liver tumors in CT images. The method aims to outperform existing techniques in accuracy, precision, recall, and F1-score.
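The abstract does not detail AM-UNet's internal design, so the following is only a minimal sketch of the kind of additive attention gate popularized by Attention U-Net, which reweights encoder skip connections using the decoder's gating signal; all layer names and channel sizes are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an additive attention gate (Attention U-Net style).
# Assumption: this approximates the "attention mechanism" named in
# the abstract; AM-UNet's actual design may differ.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, gate_channels: int, skip_channels: int, inter_channels: int):
        super().__init__()
        # Project the decoder (gating) signal and the encoder skip
        # connection into a shared intermediate space.
        self.w_gate = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.w_skip = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        # Collapse to a single-channel attention map in [0, 1].
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, gate: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        # Additive attention: align, combine, squash, then reweight the
        # skip features so the decoder attends to tumor regions.
        # Assumes gate and skip already share spatial dimensions.
        attn = self.psi(self.relu(self.w_gate(gate) + self.w_skip(skip)))
        return skip * attn
```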
Methods
The dataset comprises 194 liver tumor CT scans, with 131 subjects used for training and 70 for testing. The open-source 3DIRCAD-B dataset, which is incorporated into LiTS, contains images of both normal and pathological livers. Preprocessing with Median Filtering (MF) and Histogram Equalization (HE) was applied to reduce noise and improve contrast. The AM-UNet model was then used to segment the tumors, which were subsequently classified as malignant or benign. Performance was assessed using accuracy, precision, recall, F1-score, and the Receiver Operating Characteristic (ROC) curve.
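As a minimal sketch of the preprocessing step described above, the snippet below applies median filtering followed by histogram equalization to a single CT slice. The kernel size and the Hounsfield windowing values are illustrative assumptions; the abstract does not report them.

```python
# Sketch of the MF + HE preprocessing pipeline; parameters are
# assumptions, not the paper's reported settings.
import cv2
import numpy as np

def to_uint8(hu: np.ndarray, center: float = 60.0, width: float = 200.0) -> np.ndarray:
    # Window Hounsfield units to a typical liver range and rescale
    # to 8-bit, as cv2.equalizeHist requires a uint8 image.
    lo, hi = center - width / 2, center + width / 2
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

def preprocess_slice(ct_slice_hu: np.ndarray) -> np.ndarray:
    """Denoise and contrast-enhance one CT slice given in HU."""
    img = to_uint8(ct_slice_hu)
    # Median filtering suppresses salt-and-pepper noise while
    # preserving organ boundaries better than a mean filter.
    denoised = cv2.medianBlur(img, ksize=3)
    # Histogram equalization spreads intensities across the full
    # 0-255 range, improving liver/tumor contrast.
    return cv2.equalizeHist(denoised)
```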
Results
The proposed AM-UNet model achieved strong results: 92% accuracy, 94% precision, 95% recall, and a 93% F1-score. These metrics indicate that the model outperforms conventional techniques in segmenting and classifying liver tumors in CT images.
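For reference, the snippet below shows how such classification metrics are conventionally computed for a benign-vs-malignant label set; the arrays are toy placeholders, not the study's data.

```python
# Sketch of the standard metric computation; y_true, y_pred, and
# y_prob are placeholder values, not results from the paper.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

y_true = [0, 1, 1, 0, 1]            # 0 = benign, 1 = malignant (toy labels)
y_pred = [0, 1, 1, 0, 0]            # model predictions (toy values)
y_prob = [0.1, 0.9, 0.8, 0.3, 0.4]  # predicted malignancy scores

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
print("ROC AUC  :", roc_auc_score(y_true, y_prob))
```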
Conclusion
The AM-UNet model improves the segmentation and classification of liver tumors, delivering substantially better performance metrics than traditional methods. Its adoption could strengthen liver cancer diagnosis by assisting physicians in accurate tumor identification and treatment planning, leading to improved patient outcomes.
