Abstract
Background
Deep learning has advanced medical image segmentation, but limited dataset diversity constrains its performance. The AMOS22 dataset addresses this by providing large-scale, varied clinical data to improve algorithm robustness.
Purpose
This study develops and validates CADSTransN-Net (Convolutional Attention and Deep Supervision TransN-Net) to optimize abdominal organ segmentation for the AMOS22 challenge.
Methods
CADSTransN-Net integrates three core innovations: a novel N-shaped feature flow path (departing from symmetric architectures for efficient encoder-decoder fusion), a convolutional attention mechanism (prioritizing anatomically relevant regions), and layer-wise deep supervision (promoting stable gradient propagation and faster convergence).
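The sketch below is a minimal PyTorch illustration of two of these ideas, a convolutional attention gate and a layer-wise deep-supervision loss. It is an assumption-based example for readers unfamiliar with the mechanisms: module names, channel sizes, and the loss weighting are ours, not the authors' published implementation.

```python
# Illustrative sketch only: a generic convolutional attention gate and a
# layer-wise deep-supervision loss. Names, channel sizes, and loss weights
# are assumptions for demonstration, not CADSTransN-Net's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAttentionGate(nn.Module):
    """Re-weights features with a spatial attention map produced by convolutions."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Emphasize anatomically relevant regions, suppress background.
        return x * self.attn(x)

def deep_supervision_loss(side_outputs, target, weights=None):
    """Cross-entropy applied at every decoder stage, each upsampled to the label size."""
    if weights is None:
        weights = [1.0 / len(side_outputs)] * len(side_outputs)
    loss = 0.0
    for w, out in zip(weights, side_outputs):
        out = F.interpolate(out, size=target.shape[-2:], mode="bilinear", align_corners=False)
        loss = loss + w * F.cross_entropy(out, target)
    return loss

if __name__ == "__main__":
    gate = ConvAttentionGate(channels=64)
    feats = torch.randn(2, 64, 32, 32)
    print(gate(feats).shape)  # torch.Size([2, 64, 32, 32])

    # Two decoder stages predicting 16 organ classes at different resolutions.
    sides = [torch.randn(2, 16, 16, 16), torch.randn(2, 16, 32, 32)]
    labels = torch.randint(0, 16, (2, 64, 64))
    print(deep_supervision_loss(sides, labels).item())
```

Supervising every decoder stage in this way gives each layer a direct gradient signal, which is the rationale the abstract cites for faster convergence.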
Results
Evaluated on the full AMOS22 dataset, CADSTransN-Net achieved strong overall performance: an average Dice Similarity Coefficient (DSC) of 0.907, Normalized Surface Dice (NSD) of 0.850, 95th percentile Hausdorff Distance (HD95) of 3.98 mm, Average Surface Distance (ASD) of 0.75 mm, Absolute Volumetric Difference (AVD) of 39,755.88 mm³, and Relative Volumetric Difference (RVD) of 1.53%. These metrics confirm its accuracy in region overlap, boundary consistency, and volume estimation for multi-modal abdominal multi-organ segmentation.
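For reference, the overlap and boundary metrics follow their standard definitions; the notation below is ours, not reproduced from the paper. For a predicted mask $A$ and ground-truth mask $B$ with boundaries $\partial A$ and $\partial B$:

$$
\mathrm{DSC}(A,B) = \frac{2\,|A \cap B|}{|A| + |B|}, \qquad
\mathrm{HD}_{95}(A,B) = \operatorname*{percentile}_{95}\!\Big(\{\,\min_{b \in \partial B} \lVert a-b\rVert : a \in \partial A\,\} \cup \{\,\min_{a \in \partial A} \lVert b-a\rVert : b \in \partial B\,\}\Big)
$$

Higher DSC indicates better region overlap, while lower HD95 indicates better boundary agreement.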
Conclusions
CADSTransN-Net effectively meets AMOS22's challenges, delivering robust performance across region, boundary, and volume metrics. It provides a reliable solution for multi-modal abdominal multi-organ segmentation, with significant clinical potential for tasks such as surgical navigation.
