Abstract
Human-Robot Collaborative Assembly (HRCA) offers a novel way to improve manual assembly manufacturing processes. However, existing HRCA task planning approaches are limited in several respects: they cannot be applied effectively to mobile cobots with diverse workspaces and varying skill sets, and they struggle to meet the requirements of complex task structures and high-dimensional state spaces. This paper presents an interactive HRCA task planning framework based on a finite Markov Decision Process (MDP), formalizing task planning as a Reinforcement Learning (RL) problem. To leverage the mobility and operational advantages of mobile cobots, a Multi-Attribute Hierarchical Task Network (MA-HTN) that enables efficient task decomposition and attribute representation is introduced. Additionally, to account for state changes during the assembly process and to exploit Deep Reinforcement Learning (DRL) for high-dimensional decision problems, a universal DRL solving environment executed within unit time is constructed. This environment is based on a four-channel state diagram that captures high-dimensional state information and can be converted directly into tensor input for neural networks. Furthermore, to address frequent episode restarts in the Deep Q-Network (DQN) algorithm and to optimize task completion time, a revival mechanism and a correspondingly enhanced algorithm are proposed. Finally, the effectiveness of the proposed method under varying numbers of tasks and work units is validated through an automobile fender bracket assembly scenario and an additional case study.
