Abstract
In recent years, machine learning models, and deep models in particular, have improved dramatically in performance and have become applicable to many problems that until a decade ago were prohibitively difficult to learn. One strength of deep models is that they adaptively form well-structured internal representations of the input, which help them generate desirable outputs. However, while many studies are dedicated to improving the performance of neural networks, less effort has focused on understanding how internal representations form in hierarchical neural networks and how they affect performance. Here, we study a network model that incorporates topographical self-organizing maps into a supervised network and show how gradient learning gives rise to a form of self-organizing learning rule. Topographical self-organization is an interesting choice of internal representation because, although such principles motivated many early learning models and are relevant to biological learning systems, they have rarely been included in supervised learning architectures. In this paper, our objectives are to explain the dynamics of the proposed model, to visually compare its internal representations against those of some deep models, and, importantly, to show that our model is robust in the sense that it applies to a variety of domains, which is believed to be a hallmark of biological learning systems.
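For readers unfamiliar with topographical self-organizing maps, the following is a minimal sketch of the classic Kohonen SOM update, the kind of self-organizing, neighborhood-based learning rule the abstract refers to. This is the standard textbook formulation, not the paper's own model or learning rule; the function and parameter names are illustrative.

```python
import numpy as np

def som_update(weights, x, grid, lr=0.1, sigma=1.0):
    """One classic SOM step (illustrative, not the paper's model):
    find the best-matching unit (BMU) for input x, then pull every
    unit's weight vector toward x, scaled by a Gaussian neighborhood
    around the BMU's position on the 2-D map grid."""
    # Distance of each unit's weight vector to the input.
    dists = np.linalg.norm(weights - x, axis=1)
    bmu = int(np.argmin(dists))
    # Neighborhood strength decays with distance on the map grid,
    # which is what produces topographically ordered representations.
    grid_dists = np.linalg.norm(grid - grid[bmu], axis=1)
    h = np.exp(-grid_dists**2 / (2 * sigma**2))
    weights = weights + lr * h[:, None] * (x - weights)
    return weights, bmu

# Usage: a 3x3 map of units with 2-D weight vectors.
rng = np.random.default_rng(0)
grid = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
weights = rng.random((9, 2))
x = np.array([0.5, 0.5])
new_weights, bmu = som_update(weights, x, grid)
```

After the update, the winning unit (and, to a lesser degree, its grid neighbors) has moved toward the input; repeating this over a data set unfolds the map into a topography-preserving representation of the input distribution.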
