Abstract
Foreign object debris (FOD) detection is a critical task performed by airport inspection personnel to mitigate the risks posed by foreign objects on the airfield. With advancements in computing technology and hardware miniaturization, mobile camera-based systems comprising computer vision (CV) models and unmanned aerial systems have emerged as effective solutions for quickly detecting FOD items. To ensure accurate FOD detection, deep learning-based CV models require large amounts of training data that include diverse foreign objects. These models must handle both in-distribution (ID) and out-of-distribution (OOD) samples because of the dynamic nature of the airport environment. Variations in airport surface materials, foreign object sizes, coloration, and brightness levels further complicate FOD detection. Existing approaches have certain limitations, including the closed-world assumption in CV models, manual data generation and annotation resulting in a limited amount of training data, and a lack of comprehensive evaluations meeting Federal Aviation Administration (FAA) standards. To overcome these limitations, we present a novel mobile camera-based FOD detection framework. The framework integrates an open-world model that utilizes the state-of-the-art YOLOv7 object detector, effectively handling both ID and OOD samples. Extensive and diverse training data are generated through a three-phase data augmentation pipeline powered by a deep convolutional generative adversarial network (DCGAN) model. This pipeline generates synthetic data with varied imagery characteristics, representing both ID and OOD samples. Evaluations conducted according to FAA standards validate the proposed methodology, achieving an average precision exceeding 97% and an inference time of 22 ms.
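A core component named in the abstract is the DCGAN that powers the augmentation pipeline. As a minimal sketch of what such a generator could look like in PyTorch: every architectural detail below (the latent dimension, channel widths, 64x64 patch size, and the FODGenerator name) is an illustrative assumption, not the authors' published configuration.

# Minimal sketch of a DCGAN generator for synthetic FOD image patches.
# All layer sizes are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

class FODGenerator(nn.Module):
    def __init__(self, latent_dim: int = 100, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8),
            nn.ReLU(inplace=True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(inplace=True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(inplace=True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat),
            nn.ReLU(inplace=True),
            # 32x32 -> 64x64 RGB patch scaled to [-1, 1]
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Sample a batch of synthetic FOD patches from random noise.
g = FODGenerator()
patches = g(torch.randn(8, 100, 1, 1))  # shape: (8, 3, 64, 64)

In a pipeline of this kind, generated patches would typically be composited onto runway backgrounds and auto-annotated to expand the training set with ID- and OOD-like foreign objects.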
