Abstract
Autonomous navigation of mobile robots in unstructured, dynamic environments is a critical challenge in robotics. Deep Reinforcement Learning (DRL) has emerged as a promising approach for enabling robots to learn complex navigation policies through continuous interaction with their surroundings. This review surveys value-based algorithms, policy-based algorithms, hybrid DRL algorithms, hierarchical DRL algorithms, and multi-agent DRL (MADRL), with a focus on their applicability to real-world robotic navigation tasks. Despite significant advancements, several challenges remain, such as sample inefficiency, safety constraints, and limited generalization to unseen environments. The review further highlights open research issues and future directions, including sim-to-real transfer, multi-agent collaboration, and hybrid approaches that integrate DRL with classical navigation methods.
