Abstract
This paper aims to overcome the limitations of the traditional optimal reciprocal collision avoidance (ORCA) algorithm—which depends on perfect environmental perception and lacks adaptability—by introducing an enhanced ORCA framework that incorporates reinforcement learning. First, the proposed framework replaces ORCA’s fixed responsibility allocation with a Q-learning mechanism that adaptively determines optimal responsibility weights, thereby improving the algorithm’s adaptability to diverse environments. Second, a probabilistic environmental model is developed to enhance the algorithm’s robustness under perceptual uncertainty. Finally, spatial partitioning is combined with an efficient neighbor search strategy to improve computational efficiency while preserving collision safety. Comparative experiments against existing methods confirm the effectiveness of the proposed algorithm and its improved performance.
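To make the first contribution concrete, the following is a minimal sketch of how a Q-learning mechanism could select an agent's ORCA responsibility weight. All specifics here—the candidate weights, the coarse state bins, and the learning-rate and discount values—are illustrative assumptions, not the paper's actual design.

```python
import random

# Hypothetical discretization of responsibility weights (assumed, not from the paper):
ALPHAS = [0.3, 0.5, 0.7]
# Hypothetical coarse state bins, e.g. levels of neighbor density (assumed):
STATES = range(4)

# Q-table over (state, action-index) pairs, initialized to zero.
Q = {(s, a): 0.0 for s in STATES for a in range(len(ALPHAS))}

def choose_alpha(state, eps=0.1):
    """Epsilon-greedy selection of a responsibility weight for the current state."""
    if random.random() < eps:
        a = random.randrange(len(ALPHAS))          # explore
    else:
        a = max(range(len(ALPHAS)), key=lambda i: Q[(state, i)])  # exploit
    return a, ALPHAS[a]

def update(state, action, reward, next_state, lr=0.1, gamma=0.9):
    """Standard Q-learning update: Q <- Q + lr * (r + gamma * max_a' Q(s', a') - Q)."""
    best_next = max(Q[(next_state, i)] for i in range(len(ALPHAS)))
    Q[(state, action)] += lr * (reward + gamma * best_next - Q[(state, action)])
```

In such a scheme, the reward would presumably penalize collisions and large deviations from the preferred velocity, so that agents learn to take on more or less avoidance responsibility depending on the environment.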
