Abstract
A Deep Belief Network (DBN) is a generative probabilistic graphical model that contains many layers of hidden variables and has excelled among deep learning approaches. A DBN can extract useful features, but improving these networks so that the learned features are more discriminative remains an important issue. One such improvement is enforcing sparsity in the hidden units. A sparse representation has the property that the learned features are interpretable, i.e., they correspond to meaningful aspects of the input, and are more efficient. A main difficulty of sparsity techniques is finding the best hyper-parameter values, which typically requires dozens of experiments. In this paper, a dynamic hyper-parameter setting is proposed to resolve this problem; the proposed method does not require setting parameters manually. According to the results, the new dynamic method achieves acceptable recognition accuracy on test sets across different applications, including image, speech, and text. These experiments show that the proposed method can find hyper-parameters dynamically without losing much accuracy.
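To illustrate why sparsity introduces hyper-parameters that normally need hand-tuning, the following is a minimal sketch (not the paper's method) of a standard KL-divergence sparsity penalty on hidden-unit activations, as commonly added to an RBM/DBN training objective. The target activation level and the penalty weight are exactly the kind of hyper-parameters the proposed dynamic method aims to set automatically; the function and variable names here are illustrative assumptions.

```python
import numpy as np

def kl_sparsity_penalty(hidden_probs, target=0.05, eps=1e-8):
    """KL-divergence penalty between a target activation level `target`
    and the observed mean activation of each hidden unit over a batch.

    `hidden_probs` is a (batch_size, n_hidden) array of activation
    probabilities. The penalty is typically scaled by a weight
    hyper-parameter and added to the reconstruction objective; both
    `target` and that weight usually require manual tuning.
    """
    # Mean activation per hidden unit, clipped away from 0 and 1
    # to keep the logarithms finite.
    q = np.clip(hidden_probs.mean(axis=0), eps, 1 - eps)
    p = target
    return float(np.sum(p * np.log(p / q)
                        + (1 - p) * np.log((1 - p) / (1 - q))))

# Example: a batch of 4 samples with 3 hidden units. The first unit
# is active far more often than the 5% target, so the penalty is large.
h = np.array([[0.9, 0.1, 0.0],
              [0.8, 0.0, 0.1],
              [0.7, 0.2, 0.0],
              [0.9, 0.1, 0.1]])
penalty = kl_sparsity_penalty(h)
```

Units whose average activation stays near the target contribute almost nothing to the penalty, while frequently active units are pushed toward sparse, more interpretable responses.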