Abstract
This article presents a new learning algorithm, the gradient range-based heuristic (GRBH) method, for accelerating the convergence of neural networks; it is based on the heuristic allocation of different learning rates to different gradient ranges. Numerical results for the two-intertwined-spirals problem are presented that clearly demonstrate the superiority of the new learning strategy over well-known algorithms such as back-propagation, steepest descent, and similar methods.
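The core idea, allocating a different learning rate to each gradient-magnitude range, can be illustrated with a minimal sketch. The thresholds, rates, and the `grbh_update` helper below are hypothetical assumptions for illustration; the abstract does not specify the paper's actual ranges or values.

```python
import numpy as np

# Hypothetical range-to-rate table: (upper bound on |gradient|, learning rate).
# Small gradients get a large rate, large gradients a small one.
LR_TABLE = ((1e-3, 0.5), (1e-1, 0.1), (np.inf, 0.01))

def grbh_update(params, grads, lr_table=LR_TABLE):
    """One heuristic update step: pick a learning rate per parameter
    according to the magnitude range its gradient falls into."""
    new_params = params.copy()
    for i, g in enumerate(grads):
        for upper, lr in lr_table:
            if abs(g) < upper:
                new_params[i] -= lr * g
                break
    return new_params

params = np.array([1.0, 1.0, 1.0])
grads = np.array([1e-4, 0.05, 2.0])
print(grbh_update(params, grads))  # each weight updated with its range's rate
```

In this sketch a tiny gradient (1e-4) is scaled by the largest rate so flat regions are crossed quickly, while a large gradient (2.0) is scaled by the smallest rate to avoid overshooting, which is the intuition behind assigning rates by gradient range.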