Abstract
This paper presents a liver image segmentation method based on the EABRDeNet model, which adopts the encoder–decoder architecture of U-Net. In the encoding stage, EfficientNet-B0 serves as the backbone network, combined with the SEBlock mechanism and residual blocks to strengthen the extraction of key liver features and alleviate gradient-related problems. In the decoding stage, deconvolution and upsampling are used to produce precise segmentation. Liver images collected from open-source websites are pre-processed and augmented to construct the G dataset. The EABRDeNet model is trained on this dataset, and segmentation results are obtained through the weight-sharing mechanism. To verify the model's effectiveness, comparative experiments are conducted on the G dataset against U-Net and U-Net + Dice_Focal_Loss, with accuracy, loss, and the Dice coefficient used as evaluation metrics. The experimental results show that the EABRDeNet model outperforms the other two models. Specifically, on the training set, the EABRDeNet model achieves an average loss of 0.0025979, an average accuracy of 0.9876249, and an average Dice coefficient of 0.985658; on the test set, the average loss is 0.0029967, the average accuracy is 0.987649, and the average Dice coefficient is 0.9845206. In contrast, the U-Net and U-Net + Dice_Focal_Loss models exhibit higher losses and lower accuracies and Dice coefficients, indicating that the EABRDeNet model offers better performance and stability in liver image segmentation tasks.
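The Dice coefficient used as an evaluation metric above measures the overlap between a predicted segmentation mask and the ground-truth mask. A minimal sketch of this metric on flattened binary masks is shown below; the function name and the smoothing term `eps` are illustrative choices, not taken from the paper:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks given as flat 0/1 sequences.

    Dice = 2 * |pred ∩ target| / (|pred| + |target|), with a small eps
    added to numerator and denominator to avoid division by zero when
    both masks are empty.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)
```

For example, a prediction that covers one of two foreground pixels and adds no false positives yields a Dice score of 2/3; identical masks score 1.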
