An Inverse Reinforcement Learning Approach for Customizing Automated Lane Change Systems


Vehicle automation seeks to enhance road safety and improve the driving experience. However, a standard system does not account for variations among users and driving conditions. Customizing vehicle automation to users' preferences aims to improve the user experience and the adoption of these technologies. This study introduces a systematic paradigm that starts from naturalistic driving data to identify driving behaviors and styles for a customized automated lane change system. Driving behaviors are first extracted using Multivariate Functional Principal Component Analysis (MFPCA) with minimal prior expert knowledge. Driving styles are then identified by clustering the extracted driving behaviors. An Inverse Reinforcement Learning (IRL) algorithm trains the automated lane change system from grouped demonstrations of each identified driving style, capturing the preferences of drivers who share that style. The performance of the proposed customized automated lane change system is compared to (1) a non-customized system trained on all the sample trips, (2) customized systems built on expert-coded reward functions, and (3) customized systems trained with a Generative Adversarial Imitation Learning (GAIL) algorithm. The results show that our method outperforms all the other systems in the prediction accuracy of lane change actions. Additionally, it yields insight into the representative behaviors of different driving styles, enabling customization of automated lane change systems.

IEEE Transactions on Vehicular Technology 71.9 (2022)
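The behavior-extraction and style-identification steps summarized above can be sketched as follows. This is a minimal illustration only: it uses ordinary PCA on flattened multivariate trip trajectories as a simplified stand-in for MFPCA, followed by k-means clustering into styles. The trip dimensions, signal count, and number of clusters are illustrative assumptions, not values from the paper, and the data is synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic naturalistic "trips": each trip is T time steps x D driving
# signals (e.g. speed, lead gap, lateral offset) -- all dimensions assumed.
rng = np.random.default_rng(0)
n_trips, T, D = 60, 50, 3
trips = rng.normal(size=(n_trips, T, D))
# Simulate two underlying driving styles by shifting half of the trips.
trips[: n_trips // 2] += 1.0

# Flatten each multivariate trajectory so PCA acts across all channels
# jointly; a crude stand-in for MFPCA's functional basis expansion.
X = trips.reshape(n_trips, T * D)

# Low-dimensional behavior scores per trip (MFPCA scores analogue).
scores = PCA(n_components=5).fit_transform(X)

# Cluster behavior scores into driving styles (cluster count assumed).
styles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
```

In the paper's pipeline, the demonstrations belonging to each resulting style group would then be fed to the IRL algorithm to learn a style-specific reward function for the lane change system.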