Abstract
Effective path planning is crucial for autonomous underwater vehicles (AUVs) in rescue and logistics operations. This paper proposes a Risk-Aware Proximal Safe Deep Q-Network (RAPS-DQN) to address dynamic path planning challenges across complex underwater terrain. Traditional deep reinforcement learning (DRL) methods generalize poorly in high-dimensional environments, motivating the adaptive RAPS-DQN approach. The proposed method enhances the original DQN by incorporating Lyapunov stability criteria and by selectively prioritizing significant transitions based on their temporal-difference (TD) errors. Real-time testing and comparison with traditional DRL techniques show that the approach generates paths that meet mission requirements both efficiently and stably. The findings show that RAPS-DQN enables effective learning in complex AUV navigation scenarios, with potential applications in underwater search and rescue missions in flooded urban environments and disaster-affected areas.
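The abstract describes prioritizing transitions by TD error for replay. A minimal sketch of proportional prioritized experience replay illustrating that idea is below; the class name, `alpha` exponent, and `eps` floor are illustrative assumptions, not details taken from the paper:

```python
import random

class PrioritizedReplayBuffer:
    """Replay buffer that samples transitions in proportion to |TD error|."""

    def __init__(self, capacity, alpha=0.6, eps=1e-5):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priority shapes the sampling distribution
        self.eps = eps          # keeps every priority strictly positive
        self.buffer = []
        self.priorities = []
        self.pos = 0            # next slot to overwrite once the buffer is full

    def add(self, transition, td_error):
        # Larger |TD error| -> higher priority -> sampled more often.
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices with probability proportional to stored priorities.
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return [self.buffer[i] for i in idxs], idxs
```

In practice, transitions with surprising outcomes (large TD error) dominate the sampled batches, which is the "selective prioritization" the abstract refers to.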