Abstract
The dendritic lattice neural network (DLNN) has been shown to offer fast computation, freedom from convergence problems, and a superior capacity to store information. However, experiments on several datasets have also shown that the DLNN suffers from low classification accuracy. This paper argues that the main cause of this problem is that the original DLNN cannot classify samples that fall outside all of the hyperboxes. To solve this problem, a fuzzy inclusion measure is introduced to improve the testing algorithm of the DLNN model. The improved testing algorithm consists of two parts: (1) classification of samples covered by a hyperbox, using the DLNN model directly, and (2) classification of samples outside all of the hyperboxes, based on the principle of maximum membership degree. Four standard datasets were employed to evaluate the effectiveness of the improved DLNN against the original DLNN. Experimental results show that, on both the training and testing samples, the improved DLNN achieves higher classification accuracy than the original DLNN.
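The two-part testing procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hyperboxes, the `gamma` sensitivity parameter, and the ramp-style membership function (in the spirit of Simpson's fuzzy min-max networks) are all assumptions introduced for the example.

```python
import numpy as np

# Hypothetical trained hyperboxes: (min corner, max corner, class label).
# These values are illustrative only, not taken from the paper's datasets.
hyperboxes = [
    (np.array([0.0, 0.0]), np.array([0.4, 0.4]), 0),
    (np.array([0.6, 0.6]), np.array([1.0, 1.0]), 1),
]

def contains(v, w, x):
    """True if sample x lies inside the hyperbox [v, w]."""
    return bool(np.all(v <= x) and np.all(x <= w))

def membership(v, w, x, gamma=4.0):
    """Fuzzy membership degree of x in hyperbox [v, w]: a ramp that
    decreases as x moves outside the box along any dimension."""
    below = np.maximum(0.0, v - x)   # per-dimension distance below min corner
    above = np.maximum(0.0, x - w)   # per-dimension distance above max corner
    return float(np.mean(1.0 - np.minimum(1.0, gamma * (below + above))))

def classify(x):
    # Part 1: a sample covered by a hyperbox takes that hyperbox's class.
    for v, w, label in hyperboxes:
        if contains(v, w, x):
            return label
    # Part 2: an uncovered sample takes the class of the hyperbox
    # with the maximum membership degree.
    scores = [(membership(v, w, x), label) for v, w, label in hyperboxes]
    return max(scores)[1]
```

For example, a sample at `[0.2, 0.2]` is covered by the first hyperbox and classified directly, while a sample at `[0.9, 0.45]` lies outside both boxes and is assigned by maximum membership.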
