Abstract
We propose a novel neural network approach for classifying abnormal mammographic images as benign or malignant based on their texture representations. The proposed framework maps a high-dimensional feature space into a lower-dimensional one in a supervised way. The main contribution of the proposed classifier is a new neuron structure for map representation combined with a supervised learning technique for feature classification. This is achieved by making the weight-updating procedure dependent on the class reliability of the neuron. Our proposed approach achieved high accuracy (95.2%) in classifying abnormal real mammographic images when compared with other related methods.
Introduction
Image feature classification presents a challenge in many computer vision applications, especially in health informatics. 1 With the enormous growth of medical image data that need to be analysed, image classification has become more demanding. A critical task in building a robust image analysis framework is to design an effective classification model that can cope with medical images efficiently, with minimal memory requirements and user interaction.
A mammogram is a digital image of the breast that is acquired with low-dose X-rays and used by radiologists to detect abnormal regions in the breast. Recently, much attention has been paid to the development of robust tools for detecting abnormalities in digitized mammograms. A quantum-based approach 2 has been developed, combining quantum signal processing and cellular automata, to detect microcalcifications in digitized mammograms. Fisher linear discriminant analysis 3 has been used, with features extracted on the basis of neighbourhood structural similarity, to distinguish mammographic masses as benign or malignant. Likewise, several kinds of local features based on the local binary pattern have been used to feed linear/nonlinear classifiers, 4 such as the support vector machine, artificial neural network, and random forest, for the characterization of mammographic masses (as benign or malignant). Moreover, a deep learning approach based on a convolutional neural network (CNN) has previously been applied to classify abnormalities, benign or malignant, in mammographic images. 5 The Self-Organizing Map (SOM) 6-10 has been extensively used as a classifier by projecting the patterns of an n-dimensional input space into m code-book blocks of size n organized on a two-dimensional grid. SOM addresses two fundamental issues: the clustering of patterns and the relationship between the resulting clusters. The clustering is a fully unsupervised learning procedure, while the relationship between clusters can be visualized on the planar surface by inspecting the distances between the code-book blocks. Although it is difficult to deduce an exact relationship between those clusters (where the code-book block size is greater than the planar surface size), this gives us insight into how to bridge the gap between clustering and classification.
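To make the projection concrete, the following minimal sketch trains a classical Kohonen SOM that maps n-dimensional patterns onto a two-dimensional grid of code-book vectors. It is not the implementation used in the cited works; the grid size, learning-rate schedule, and neighbourhood schedule are illustrative assumptions:

```python
import numpy as np

def train_som(X, grid_h=4, grid_w=4, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM: maps n-dimensional patterns onto a 2-D grid of
    code-book vectors (one vector of size n per grid node)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.random((grid_h * grid_w, n))                 # code-book blocks of size n
    coords = np.array([(r, c) for r in range(grid_h) for c in range(grid_w)])
    T = epochs * len(X)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            lr = lr0 * (1 - t / T)                       # decaying learning rate
            sigma = sigma0 * (1 - t / T) + 1e-3          # shrinking neighbourhood
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))           # neighbourhood function
            W += lr * h[:, None] * (x - W)               # pull neighbours toward x
            t += 1
    return W

def bmu_index(W, x):
    """Index of the grid node whose code-book vector is closest to x."""
    return int(np.argmin(((W - x) ** 2).sum(axis=1)))
```

After training, nearby grid nodes hold similar code-book vectors, so distances between blocks reflect the relationship between clusters, as described above.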
In this paper, we describe a simple but effective artificial neural network method, inspired by the network proposed and used in Wang et al. 8 The proposed neural network is designed to exploit the probability of a particular neuron being the winner, through a voting criterion, to classify mammographic images.
Mammographic Image Feature Representation
In this work, image feature representation consists of two main steps: image pre-processing and feature extraction. In the first step, to cope with illumination variations and improve the quality of the images, we equalized the histograms of the images and applied a cropping operation to reduce the undesirable background. After this pre-processing step, a feature extraction step was applied to provide a set of robust and reliable features to train the proposed classifier (see next section for details). Among the 14 statistical/texture image features previously proposed in Haralick et al, 11 we first extracted the most widely used ones (eg, uniformity, entropy, dissimilarity, and contrast). Those features are used to build the feature space of the images to train and test our learning framework. Moreover, to improve the robustness and localization property of the represented features, a block-wise partitioning method 12 has been applied. In the following section, we discuss in detail the development of our approach to classify images based on their feature representation.
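The four texture features named above are derived from a grey-level co-occurrence matrix (GLCM). The sketch below is a hypothetical illustration of that computation; the pixel offset, number of grey levels, and normalization are assumptions, not the paper's exact settings:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset,
    normalized so its entries sum to 1."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1   # count co-occurring grey levels
    return P / P.sum()

def haralick_subset(P):
    """Uniformity, entropy, dissimilarity, and contrast of a normalized GLCM."""
    i, j = np.indices(P.shape)
    return {
        "uniformity": float((P ** 2).sum()),                        # a.k.a. energy
        "entropy": float(-(P[P > 0] * np.log2(P[P > 0])).sum()),
        "dissimilarity": float((P * np.abs(i - j)).sum()),
        "contrast": float((P * (i - j) ** 2).sum()),
    }
```

For block-wise partitioning, the same functions would be applied to each image block and the per-block feature values concatenated into one feature vector.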
Proposed Classifier
Here, we discuss the novelty and contribution of the architecture of our proposed neural network for supervised learning (classification). Our proposed neural network model was inspired by the network in Wang et al. 8 We introduce a new neuron/node structure and a learning scheme that exploits the class label in the weight-update step.
First, every neuron i is represented by a set of connection weights w_i = (w_i1, ..., w_in) together with a set of winning class counters c_i = (c_i1, ..., c_iK), one counter per class. Figure 1 illustrates the learning process of the framework based on the proposed neuron structure: when a neuron wins for a training pattern, the counter of that pattern's class is incremented, and the neuron is assigned the class label whose counter has the maximum value.

Figure 1. Architecture of the proposed framework: the proposed neuron structure and the weight-updating process.
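One plausible reading of this scheme can be sketched as follows. The exact update rule is not reproduced here, so the reliability scaling, grid size, and learning-rate schedule are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def train_supervised_som(X, y, grid=4, classes=2, epochs=30, lr0=0.4, seed=0):
    """Hypothetical sketch: each neuron keeps connection weights plus
    per-class winning counters; the weight update is scaled by the
    neuron's class reliability (the fraction of its wins that agree
    with the current sample's label)."""
    rng = np.random.default_rng(seed)
    n_nodes = grid * grid
    W = rng.random((n_nodes, X.shape[1]))
    C = np.zeros((n_nodes, classes))                    # winning class counters
    T = epochs * len(X)
    t = 0
    for _ in range(epochs):
        for k in rng.permutation(len(X)):
            x, label = X[k], y[k]
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))
            C[bmu, label] += 1
            reliability = C[bmu, label] / C[bmu].sum()  # agreement with label
            lr = lr0 * (1 - t / T)
            W[bmu] += lr * reliability * (x - W[bmu])   # reliability-scaled pull
            t += 1
    labels = C.argmax(axis=1)                           # class with max counter
    return W, labels

def predict(W, labels, x):
    """Classify x with the label of its best-matching neuron."""
    return int(labels[np.argmin(((W - x) ** 2).sum(axis=1))])
```

Under this reading, a neuron that consistently wins for one class pulls its weights strongly toward that class's patterns, while an unreliable neuron adapts only weakly.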
Experimental Results
To quantitatively demonstrate the accuracy of the proposed framework, a 10-fold cross-validation scheme was used. We evaluated the performance of our method on the classification of cases from the Mammographic Image Analysis Society (MIAS) database (http://peipa.essex.ac.uk/info/mias.html). The MIAS database consists of 322 images, categorized into 7 classes based on an abnormality criterion. Only 118 images were further categorized by the severity of abnormality. These images were distributed between classes as follows: benign class (n = 64) and malignant class (n = 54). In this experiment, we demonstrated the performance of our method in classifying those abnormal cases as benign or malignant in comparison with other methods.
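The 10-fold protocol can be sketched generically as follows; the random fold split and the plugged-in classifier interface are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def kfold_accuracy(X, y, fit, predict_fn, k=10, seed=0):
    """k-fold cross-validation: train on k-1 folds, test on the held-out
    fold, and report the mean accuracy over all k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        preds = np.array([predict_fn(model, x) for x in X[test]])
        accs.append((preds == y[test]).mean())
    return float(np.mean(accs))
```

Any classifier exposing a fit function and a per-sample prediction function, including the proposed one, can be evaluated this way.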
The CNN-based model of Li et al 5 achieved an accuracy of 89.05% in classifying abnormalities, benign or malignant, in MIAS. The best accuracy (92.10%) was obtained by the edge-weighted local texture features 4 when used with different classifiers, such as the artificial neural network, Fisher linear discriminant analysis, support vector machine, and random forest. Likewise, the Fisher linear-based model from Rabidas et al 3 achieved an accuracy of 94.57% when neighbourhood structural similarity was used for the characterization of mammographic masses as benign or malignant. Compared with the models mentioned above, our model outperformed the others with an accuracy of 95.2% (using 10-fold cross-validation) in classifying abnormal cases as benign or malignant. Finally, our proposed classifier showed superior accuracy when compared with the dimension reduction and classification algorithm of Wang et al 8 (83.22%), using the same set of features for both methods.
Conclusions
This work presents a novel neural network-based classification model that is able to project a high-dimensional input space into a two-dimensional space in a supervised fashion. This was achieved by introducing a new neuron structure and adapting the underlying learning procedure based on the class reliability of the neuron. Experimental results confirm the excellent performance of the proposed framework when applied to real mammographic images.
Footnotes
Funding:
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests:
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Author Contributions
MA shaped the research, developed the algorithm, and carried out the experiment. MM and MB provided critical feedback and helped shape the research. All authors wrote the manuscript.
