Abstract
Reusing knowledge obtained in related but different tasks to accelerate the learning procedure of reinforcement learning (RL) has attracted increasing attention, and the transfer of expert knowledge is the root cause of the positive effect. Rather than acquiring knowledge through RL training in source tasks, this paper proposes to transfer the knowledge contained in human demonstrations of source tasks. On this basis, three specific forms of knowledge are mined from demonstration trajectories and reused in the target task to shape RL. All of them are closely associated with the similarity between states of different tasks, which can be measured by Euclidean distance via human-supplied inter-task mappings. In more detail, the similarity between the target state and the most similar state in source samples, the proportion of different actions among the
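The state-similarity measure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the lambda mapping, and the example states are all hypothetical; it assumes the inter-task mapping projects a target-task state into the source state space, after which the nearest source demonstration state is found by Euclidean distance.

```python
import numpy as np

def most_similar_source_state(target_state, source_states, inter_task_map):
    """Map a target-task state into the source state space via a
    human-supplied inter-task mapping, then return the index of the
    closest source demonstration state and its Euclidean distance."""
    mapped = inter_task_map(target_state)                  # human-supplied mapping
    dists = np.linalg.norm(source_states - mapped, axis=1)  # Euclidean distances
    idx = int(np.argmin(dists))
    return idx, float(dists[idx])

# Illustrative example: the hypothetical mapping simply keeps the two
# features the tasks are assumed to share.
source = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
mapping = lambda s: s[:2]
idx, dist = most_similar_source_state(np.array([1.1, 0.9, 3.0]), source, mapping)
```

The returned distance could then serve as the similarity signal used to shape the RL reward in the target task.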
