Abstract
An analogy is established between a genetic-algorithm-based pattern classification scheme, in which hyperplanes are searched for to approximate the class boundaries, and a multilayer perceptron (MLP) based classifier. Based on this analogy, a method for automatically determining the MLP architecture is described. It is shown that the architecture needs at most two hidden layers, whose neurons are responsible for generating hyperplanes and regions, respectively. The neurons in the second hidden layer and in the output layer perform AND and OR functions, respectively. The methodology also includes a post-processing step that automatically removes any redundant neuron in the hidden or output layers. An extensive comparative study on several data sets is presented, contrasting the performance of the MLP derived by the proposed method with that of several conventional MLPs.
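The hyperplane/AND/OR decomposition of the architecture can be illustrated with a minimal sketch. This is an assumed toy realisation with threshold units (not the paper's actual construction): first-hidden-layer neurons indicate half-spaces of hand-chosen hyperplanes, a second-hidden-layer neuron ANDs them into a region, and the output neuron ORs regions into a class label.

```python
import numpy as np

def step(z):
    """Hard-threshold activation: 1 if z > 0, else 0."""
    return (z > 0).astype(float)

# First hidden layer: each neuron realises one hyperplane w.x + b = 0.
# (Illustrative weights: the boundaries x0 = 0.5 and x1 = 0.5.)
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
b1 = np.array([-0.5, -0.5])

def classify(x):
    h = step(W1 @ x + b1)          # half-space indicators from the hyperplanes
    region = step(h.sum() - 1.5)   # AND neuron: inside both half-spaces
    y = step(region - 0.5)         # OR neuron over regions (one region here)
    return y

print(classify(np.array([0.8, 0.9])))  # inside the region -> 1.0
print(classify(np.array([0.2, 0.9])))  # outside -> 0.0
```

In the paper's scheme the hyperplanes come from the genetic-algorithm search rather than being fixed by hand; the sketch only shows how such hyperplanes map onto two hidden layers of AND/OR units.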