Abstract
The proposed framework addresses key challenges in the detection and classification of osteosarcoma bone cancer that traditional methods struggle to overcome: high variability in histopathological images, data imbalance, and reliance on extensive manual labeling. Traditional machine learning models such as SVM, Random Forest, and Naive Bayes, along with deep learning architectures such as NASNet-Large, EfficientNetB0, and Xception, often suffer from limited generalizability, lower accuracy, and inefficiency in distinguishing benign from malignant tissue. To mitigate these challenges, we introduce a novel integration of DenseNet121 and an Auxiliary Classifier Generative Adversarial Network (ACGAN). DenseNet121's dense-block architecture is used for hierarchical multi-scale feature extraction, achieving superior performance, including an accuracy of 97.2%, an F1-score of 97%, and a precision of 100%. The ACGAN is trained exclusively on benign patches, allowing it to learn the underlying data distribution and generate synthetic benign-like patches. By comparing the residual and discrimination losses between malignant patches and their generated counterparts against a defined threshold, the framework effectively identifies anomalies, addressing mislabeled and ambiguous malignant patches. This approach improves the quality of the training dataset, reduces the impact of data imbalance, and raises overall classification accuracy. The framework is scalable and adaptable, making it suitable for diverse diagnostic environments, and the automation provided by the ACGAN reduces reliance on manual labeling, ensuring robustness under varying resource constraints. The integration of multi-modal imaging data remains a promising direction for future work.
This innovative framework exemplifies the potential for advancing osteosarcoma diagnosis and broadens the scope of clinical and research applications through its advanced anomaly detection and classification capabilities.
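The anomaly-detection step described above can be sketched in a few lines. The sketch below assumes an AnoGAN-style score that combines a pixel-wise residual loss with a discriminator-feature loss; the function names, the weighting factor `lam`, and the use of L1 distances are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def anomaly_score(patch, generated, disc_feat_patch, disc_feat_gen, lam=0.1):
    """Illustrative AnoGAN-style anomaly score (names and lam are assumptions).

    residual loss: pixel-wise L1 distance between the query patch and the
    closest benign-like patch the generator can produce.
    discrimination loss: L1 distance between discriminator feature maps of
    the query patch and the generated patch.
    """
    residual = np.abs(patch - generated).sum()
    discrimination = np.abs(disc_feat_patch - disc_feat_gen).sum()
    return (1.0 - lam) * residual + lam * discrimination

def classify_patch(score, threshold):
    """Flag a patch as anomalous (likely malignant) when its score
    exceeds the defined threshold; otherwise treat it as benign-like."""
    return "malignant" if score > threshold else "benign"
```

A benign patch that the generator reconstructs well yields a near-zero score and falls below the threshold, while a malignant patch that the benign-only generator cannot reproduce yields a large residual and is flagged.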