Abstract
Transfer learning gives machine learning algorithms the ability to train a model on one task, capture the relationships present in the data, and reuse them for another task in the same or a similar domain. In this paper, we present a multi-view deep unsupervised transfer learning method based on a joint auto-encoder coupled with dictionary learning (MVT-DAE). In the proposed approach, knowledge transfer is done in two stages. First, we perform multi-view dictionary learning with low-rank tensor regularization in the source domain to learn the common intrinsic relationship among views. We then transfer the learned dictionaries to the target domain and construct a new representation of both domains via sparse coding. In the second stage, we use two deep auto-encoders (DAEs), one per domain, to perform parameter transfer. Each DAE consists of an embedding layer and a label layer: the embedding layer reconstructs the input, while the label layer encodes the label information. To reduce the distributional distance between the source and target domains at the embedding and label layers, a joint maximum mean discrepancy (JMMD) term is employed. Learning can be done via stochastic gradient descent by sharing the embedding- and label-layer weights with the target domain. Extensive experiments on two real-world datasets demonstrate the effectiveness of our approach compared with several state-of-the-art baselines.
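The joint maximum mean discrepancy mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes an RBF kernel with a fixed bandwidth and the biased MMD estimator, and multiplies the per-layer kernels (here, embedding- and label-layer activations) to form the joint kernel.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def jmmd(source_layers, target_layers, gamma=1.0):
    """Joint MMD sketch: per-layer kernels (e.g. over the embedding and
    label layers) are multiplied elementwise, then the standard biased
    MMD estimate is taken. Kernel choice and bandwidth are assumptions."""
    Kss = Ktt = Kst = 1.0
    for S, T in zip(source_layers, target_layers):
        Kss = Kss * rbf_kernel(S, S, gamma)
        Ktt = Ktt * rbf_kernel(T, T, gamma)
        Kst = Kst * rbf_kernel(S, T, gamma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()
```

Minimizing such a term alongside the reconstruction and label losses pulls the source and target activation distributions together, which is what allows the shared weights to transfer.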