Abstract
The safe and efficient navigation of mobile robots among unknown dynamic obstacles remains a complex and unresolved challenge. This paper presents collision-free path planning for a mobile robot that safely handles multi-directional obstacles, that is, randomly moving dynamic obstacles, using a Deep Reinforcement Learning (DRL) algorithm, the Deep Q-Network (DQN), with inflated robot reward functions. The robot follows a time-efficient, collision-free route while maintaining a safe distance from both static and unpredictably moving dynamic obstacles. The modified DQN algorithm takes RGB images of the environment as input for training a Convolutional Neural Network (CNN) and provides a safe, short path for navigation. Training uses an omni-wheeled mobile robot exploring an outdoor (concourse) environment and an indoor (home) environment. A Closed-Loop Inverse Kinematics (CLIK) algorithm controls the mobile robot so that it follows the desired path. Simulation results indicate that the proposed algorithm with inflated robot reward functions markedly outperforms recently used Reinforcement Learning (RL) algorithms when dealing with both stationary and randomly moving obstacles in the given environments.
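The paper's exact reward design is not given in the abstract, but the "inflated robot" idea can be sketched as follows: the robot's footprint is enlarged by a safety margin, and the reward penalizes entering that margin around any obstacle, in addition to terminal rewards for reaching the goal or colliding. All names and constants below are illustrative assumptions, not the authors' implementation.

```python
import math

# Assumed constants for illustration only (meters).
ROBOT_RADIUS = 0.25      # physical radius of the omni-wheeled robot
INFLATION = 0.30         # extra safety margin added to the footprint
GOAL_TOLERANCE = 0.10    # distance at which the goal counts as reached

def reward(robot_xy, goal_xy, obstacle_xys):
    """Return a shaped reward for one DQN step (hypothetical sketch)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])

    # Terminal rewards: large bonus at the goal, large penalty on collision.
    if dist(robot_xy, goal_xy) < GOAL_TOLERANCE:
        return 100.0
    nearest = min(dist(robot_xy, o) for o in obstacle_xys)
    if nearest < ROBOT_RADIUS:
        return -100.0

    # Graded penalty inside the inflated zone keeps the robot a safe
    # distance from both static and moving obstacles; the small per-step
    # cost encourages time-efficient paths.
    r = -0.1
    if nearest < ROBOT_RADIUS + INFLATION:
        r -= 5.0 * (ROBOT_RADIUS + INFLATION - nearest)
    return r
```

In a DQN setup, this scalar would be returned by the environment at each step alongside the RGB observation fed to the CNN; the inflated zone smooths the penalty landscape so the agent learns to keep clearance rather than merely avoid contact.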
