Abstract
The combination of Genetic Algorithms (GAs) and Artificial Neural Networks (ANNs) has already advanced a number of real-world applications, but it is in control that this alliance yields particularly appreciable benefits.
The paper reports a Radial Basis Function (RBF) network training technique that combines the global search strategy of GAs with a local adjustment procedure typical of RBF networks. The centres and widths of the activation-function windows are optimized by a "slow" numeric GA, while the synaptic weights of the output-layer neurones are determined by a "fast" analytical method.
The technique makes it possible to minimize not only the size of the network's hidden layer but also the pattern set required to train an adequate neuroemulator of a dynamical object.
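The hybrid scheme described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: it assumes Gaussian basis functions, a toy sine-emulation task, truncation selection with one-point crossover and Gaussian mutation for the GA, and linear least squares (via `numpy.linalg.lstsq`) as the "fast" analytical step for the output-layer weights. All data, sizes, and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D target to emulate: y = sin(x) on [-3, 3] (illustrative data only).
X = np.linspace(-3, 3, 60).reshape(-1, 1)
y = np.sin(X).ravel()

N_HIDDEN = 5          # small hidden layer, as the technique aims to minimize it
POP, GENS = 30, 40    # GA population size and generation count (assumed values)

def design_matrix(X, centres, widths):
    """Gaussian RBF activations; one column per hidden unit."""
    d2 = (X - centres.ravel()) ** 2                  # squared distances, (n, N_HIDDEN)
    return np.exp(-d2 / (2.0 * widths.ravel() ** 2))

def fitness(genome):
    """MSE after the 'fast' analytic step: output weights by least squares."""
    centres = genome[:N_HIDDEN]
    widths = np.abs(genome[N_HIDDEN:]) + 1e-3        # keep widths positive
    Phi = design_matrix(X, centres, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # analytic output-layer weights
    return np.mean((Phi @ w - y) ** 2)

# "Slow" numeric GA over centres and widths only (weights are never evolved).
pop = rng.uniform(-3, 3, size=(POP, 2 * N_HIDDEN))
for _ in range(GENS):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[: POP // 2]]          # truncation selection
    parents = elite[rng.integers(0, len(elite), size=(POP, 2))]
    cut = rng.integers(1, 2 * N_HIDDEN, size=POP)[:, None]
    mask = np.arange(2 * N_HIDDEN) < cut                 # one-point crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += rng.normal(0.0, 0.1, size=pop.shape)          # Gaussian mutation

best = min(pop, key=fitness)
print(f"final MSE: {fitness(best):.4f}")
```

The division of labour mirrors the paper's idea: the GA only searches the nonlinear parameters (centres and widths), while each fitness evaluation solves the linear output weights exactly, so no slow gradient descent is needed for that layer.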
