Abstract
Foreign Object Debris (FOD) detection is a critical task for ensuring aviation safety, especially on airport runways, where small, diverse, and often occluded objects pose serious threats to operating aircraft. Existing deep learning methods often struggle to balance detection accuracy and computational efficiency in such challenging environments. To address this, we propose LiteFODNet, a lightweight, data-efficient deep learning framework tailored for intelligent FOD detection in surveillance imagery. LiteFODNet comprises four novel architectural modules: (i) a Compact Multi-Scale Pooling (CMSP) module that integrates atrous convolutions with global context aggregation for fine-grained multi-scale features; (ii) a Spatial-Channel Reducer (SCR) that uses depthwise separable filtering for efficient spatial downsampling; (iii) a Feature Focus Module (FFM) that combines global pooling with dual-stage calibration for dynamic channel emphasis; and (iv) Split Path Attention (SPA), which independently learns axis-aligned attention for better spatial localization. Together, these components improve the model's ability to generalize across small, complex objects while reducing computational burden. Evaluated on three benchmark FOD datasets (FOD-Tiny, FOD-A multiclass, and FOD-A single-class), LiteFODNet achieves 0.8888% higher mAP@50–95 than YOLOv8n while reducing parameters by 16.39%, inference time by 27.77%, and GFLOPs by 3.66%. These results demonstrate that LiteFODNet offers an intelligent, high-performance solution for real-time FOD detection under constrained resources, with strong potential for deployment in aviation safety monitoring systems.
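The abstract does not give implementation details for the SCR module, but the efficiency argument behind depthwise separable filtering can be illustrated with a simple parameter count. The sketch below uses the standard textbook definitions (a depthwise k×k convolution followed by a 1×1 pointwise convolution) and hypothetical channel sizes chosen for illustration; it is not the authors' actual configuration.

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k conv (one filter per input channel)
    plus a 1 x 1 pointwise conv mixing channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Hypothetical layer: 64 -> 128 channels with a 3x3 kernel.
std = standard_conv_params(64, 128, 3)       # 64 * 128 * 9  = 73728
dws = depthwise_separable_params(64, 128, 3) # 64*9 + 64*128 = 8768
print(std, dws, round(std / dws, 1))         # roughly an 8x reduction
```

The same factorization underlies many lightweight detectors: spatial filtering and channel mixing are decoupled, so the multiplicative k²·c_out cost becomes an additive one.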
