Abstract
Preserving the commercial value and economic viability of mangoes depends on proper post-harvest quality assessment. Vision-based automated defect classification scales better than manual inspection, but previous approaches have struggled with generalizability, interpretability, and reliance on standalone CNN models. This research addresses these issues with a hybrid, explainable deep learning model that fuses complementary deep features from ResNet50 and MobileNetV2, applies PCA for dimensionality reduction, and classifies the reduced features with a Gradient Boosting Classifier (GBC). The model was evaluated on 1200 mango images spanning four classes (healthy, black-spotted, shriveled, infected), collected from multiple orchards under varied lighting conditions. The fusion-based architecture achieves 95.7% accuracy, outperforming single-model baselines and CNN + SVM combinations by 3–5 percentage points, with consistent gains in precision, recall, F1 score, and MCC. Grad-CAM analyses localize and visualize the defective regions that drive each prediction. These results demonstrate the effectiveness of combining feature fusion, ensemble learning, and explainable AI techniques in building a working system for automated fruit quality assessment.
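The fusion pipeline summarized above can be sketched as follows. This is a minimal illustration assuming scikit-learn, not the authors' implementation: the ResNet50 and MobileNetV2 backbones are replaced by randomly generated stand-in feature vectors of the usual dimensionalities (2048 and 1280), and all hyperparameters are placeholders.

```python
# Sketch of the described pipeline: deep features from two CNN backbones
# are concatenated (feature-level fusion), reduced with PCA, and fed to
# a Gradient Boosting Classifier. Random features stand in for the real
# ResNet50/MobileNetV2 embeddings; labels follow the four defect classes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 400
resnet_feats = rng.normal(size=(n, 2048))      # stand-in for ResNet50 features
mobilenet_feats = rng.normal(size=(n, 1280))   # stand-in for MobileNetV2 features
y = rng.integers(0, 4, size=n)  # 0=healthy, 1=black-spotted, 2=shriveled, 3=infected

# Feature-level fusion by concatenation.
fused = np.concatenate([resnet_feats, mobilenet_feats], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(fused, y, random_state=0)
model = make_pipeline(
    PCA(n_components=50),                              # placeholder component count
    GradientBoostingClassifier(n_estimators=50, random_state=0),
)
model.fit(X_tr, y_tr)
preds = model.predict(X_te[:5])
```

With real images, the stand-in arrays would be replaced by pooled activations from the two pretrained backbones before fusion.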
