Abstract
Model lightweighting is a challenging topic in computer vision, aimed at reducing the computational demands and size of models, with widespread research interest in both industry and academia. In fire detection tasks, however, fluctuating environmental conditions often degrade detection accuracy. To address the difficulties of feature extraction, low detection accuracy, and long inference times in current fire detection networks, this paper proposes the YOLOv5s-RBC fire detection algorithm. First, a lightweight convolutional neural network module, RepVGG, is introduced in the feature extraction stage; it strengthens deep feature extraction from input images while reducing the model's inference time. Second, a weighted bidirectional feature pyramid network, BiFPN, serves as the feature fusion network of the YOLOv5s model, merging features extracted at different scales in a layered manner and thereby optimizing the model's feature fusion structure. Finally, a spatial and channel attention mechanism, CBAM, is incorporated into the backbone network, sharpening the focus on target-region features and reducing the computational load of the network. Experimental results indicate that the proposed YOLOv5s-RBC model detects fire images faster and more accurately than the baseline, meets real-time detection requirements, and reduces the model's computational load to some extent, thereby increasing inference speed and detection precision.
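To make the CBAM mechanism mentioned above concrete, the following is a minimal NumPy sketch of the idea, not the paper's implementation: channel attention reweights feature channels using pooled descriptors passed through a shared MLP, and spatial attention reweights pixel locations using channel-wise average and max maps. The weight arrays (`w1`, `w2`, `w_spatial`) are illustrative placeholders, and CBAM's 7×7 spatial convolution is simplified here to a per-pixel weighted sum.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_sketch(x, w1, w2, w_spatial):
    """Simplified CBAM-style attention over a feature map x of shape (C, H, W).

    w1, w2: weights of a shared two-layer MLP for channel attention (illustrative).
    w_spatial: two scalars standing in for CBAM's 7x7 spatial convolution.
    """
    # --- Channel attention: shared MLP applied to avg- and max-pooled descriptors ---
    avg_desc = x.mean(axis=(1, 2))                     # (C,) global average pool
    max_desc = x.max(axis=(1, 2))                      # (C,) global max pool
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)       # two-layer MLP with ReLU
    channel_att = sigmoid(mlp(avg_desc) + mlp(max_desc))  # (C,), values in (0, 1)
    x = x * channel_att[:, None, None]                 # reweight each channel

    # --- Spatial attention: channel-wise avg and max maps, then a weighted sum ---
    avg_map = x.mean(axis=0)                           # (H, W)
    max_map = x.max(axis=0)                            # (H, W)
    spatial_att = sigmoid(w_spatial[0] * avg_map + w_spatial[1] * max_map)
    return x * spatial_att[None, :, :]                 # reweight each location
```

The output has the same shape as the input, so a module like this can be dropped into a backbone between existing layers, which is how the abstract describes its use in YOLOv5s.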
