Abstract
Deep learning-based methods have become the dominant approach for low-dose CT (LDCT) denoising. However, their performance often degrades on cross-domain datasets due to domain gaps, highlighting the need for effective domain adaptation techniques. While domain adaptation methods based on the pretraining and fine-tuning paradigm show great potential, they typically require additional labeled data from the target domain, which limits their practicality. Therefore, this work aims to develop a self-supervised fine-tuning method for LDCT denoising. We propose to fine-tune pretrained models with a self-supervised loss based on pixel-shuffle image preprocessing. Additionally, we design a two-stage fine-tuning strategy to mitigate the input misalignment between the pretraining and fine-tuning stages. Furthermore, to effectively capture prior knowledge from the source domain, we design a dual-scale SwinIR model as the pretrained backbone. We evaluate our method on two public datasets, and the results demonstrate that it bridges the domain gap without requiring target-domain labels, achieving effective denoising performance and strong cross-domain generalization. Code and models for our proposed approach are publicly available at https://github.com/Wasserdawn/TSFDAN.
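The abstract does not spell out the pixel-shuffle self-supervision itself; the sketch below illustrates the general idea under the assumption of a Neighbor2Neighbor-style formulation, where strided sub-sampling places neighbouring pixels (shared signal, roughly independent noise) into different sub-images, and the denoiser output on one sub-image is regressed onto a sibling sub-image. The function names `pixel_shuffle_downsample` and `self_supervised_loss` are hypothetical, not taken from the paper's code.

```python
import numpy as np

def pixel_shuffle_downsample(img: np.ndarray, s: int = 2) -> np.ndarray:
    """Split an HxW image into s*s sub-images by strided sampling.

    Hypothetical helper: each sub-image keeps every s-th pixel, so
    adjacent pixels land in different sub-images.
    """
    h, w = img.shape
    assert h % s == 0 and w % s == 0, "image size must be divisible by s"
    # Shape: (s*s, h//s, w//s)
    return np.stack([img[i::s, j::s] for i in range(s) for j in range(s)])

def self_supervised_loss(denoiser, noisy: np.ndarray, s: int = 2) -> float:
    """Noise2Noise-style objective on pixel-shuffled sub-images:
    denoise one sub-image and use a neighbouring one as pseudo-label."""
    subs = pixel_shuffle_downsample(noisy, s)
    pred = denoiser(subs[0])   # denoiser applied to the first sub-image
    target = subs[1]           # sibling sub-image acts as the target
    return float(np.mean((pred - target) ** 2))
```

In an actual fine-tuning loop, `denoiser` would be the pretrained network and this loss would drive target-domain adaptation without clean labels; here an identity `denoiser` suffices to exercise the plumbing.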
