Abstract
Robust brain extraction is a critical first step for quantitative neuroimaging, yet no dedicated method currently exists for porcine magnetic resonance imaging despite the pig's importance as a translational neuroscientific animal model, particularly for gyrencephalic models of traumatic brain injury (TBI). Porcine anatomy presents distinct challenges, with extensive extracranial fat and complex textures around the olfactory bulbs that limit the performance of existing human-oriented tools. We present PIGSKIN, a deep learning framework trained primarily on synthetic data generated from a small set of expert annotations. Unlike conventional approaches, PIGSKIN models brain and non-brain regions separately, applying distinct clustering and transformation parameters to each. This strategy constrains variability in brain anatomy while allowing greater diversity in extracranial tissues, ensuring anatomically consistent label maps in which the brain remains embedded within its surroundings. Additional transformations introduce further spatial and intensity variability, producing a diverse set of synthetic training pairs. During inference, PIGSKIN operates in a single step at native resolution within a standardized cube, preserving fine-scale anatomical detail and supporting generalization across cohorts differing in breed, acquisition protocol, and injury model. The model achieved performance comparable to expert consistency (Dice ≈ 0.97), approaching reported inter-rater reliability. Finally, we show that training on co-registered T1- and T2-weighted inputs significantly outperforms single-modality training, underscoring the value of multimodal integration for synthetic data generation. Together, these results establish PIGSKIN as the first systematically validated solution for porcine brain extraction and a framework adaptable to other large-animal brain extraction tasks.