Abstract
An increasing number of automated vehicles will share the road with human-driven vehicles for a long time to come. How to drive naturally and gain human acceptance remains a challenge for automated vehicles. Decision making, as the brain of an automated vehicle, is responsible for producing human-like driving decisions that account for passengers' driving-style characteristics. However, human driving styles vary widely and are hard to describe in a general form. To address these issues, this paper proposes an effective human-like decision-making method that considers the driving-style characteristics of risk perception and driving preference. First, learning from human demonstrations, a human-like decision-making reward function is generated using the inverse reinforcement learning (IRL) method. Second, to balance the class samples, a specific sample-data processing approach is proposed to obtain group reward functions with distinctive driving styles. Moreover, the diversity of driving styles is adopted as guidance to rebuild the learning process of IRL. Finally, the proposed method is verified in both a simulation and a hardware-in-the-loop (HIL) test environment. Results show that the human-likeness of the HIL driving decision is 1.01 m and the average online computing time is 0.99 s.
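The abstract's core idea of learning a reward function from human demonstrations can be illustrated with a minimal maximum-entropy IRL sketch. This is an assumption-laden toy, not the paper's actual formulation: the feature vectors, trajectory set, and linear reward form are all hypothetical stand-ins for the driving-style features (e.g. risk perception, driving preference) the paper learns from.

```python
import numpy as np

def maxent_irl(expert_feat, traj_feats, lr=0.1, iters=200):
    """Hypothetical maximum-entropy IRL sketch: learn linear reward
    weights w so that the expected features under the induced soft-max
    trajectory distribution match the expert (human demonstration)
    feature expectations.

    expert_feat: (d,) mean feature vector of the human demonstrations.
    traj_feats:  (n, d) feature vectors of candidate trajectories.
    """
    w = np.zeros(traj_feats.shape[1])
    for _ in range(iters):
        scores = traj_feats @ w
        p = np.exp(scores - scores.max())
        p /= p.sum()                          # soft-max trajectory distribution
        grad = expert_feat - p @ traj_feats   # feature-matching gradient
        w += lr * grad                        # gradient ascent on log-likelihood
    return w

# Toy usage: two illustrative features (say, risk and preference);
# the "expert" demonstrations weight the first feature heavily.
trajs = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
w = maxent_irl(np.array([0.9, 0.1]), trajs)
```

After training, the learned weights favor the feature the demonstrations emphasized (`w[0] > w[1]`), which is the mechanism by which a driving-style-specific reward function emerges from demonstration data.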