Reinforcement Learning for Autonomous Driving with Latent State Inference and Spatial-Temporal Relationships
- Resource Type
- Conference
- Authors
- Ma, Xiaobai; Li, Jiachen; Kochenderfer, Mykel J.; Isele, David; Fujimura, Kikuo
- Source
- 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 6064-6071, May 2021
- Subject
- Aerospace
- Bioengineering
- Communication, Networking and Broadcast Technologies
- Components, Circuits, Devices and Systems
- Computing and Processing
- General Topics for Engineers
- Robotics and Control Systems
- Signal Processing and Analysis
- Transportation
- Training
- Space vehicles
- Couplings
- Navigation
- Supervised learning
- Reinforcement learning
- Graph neural networks
- Language
- ISSN
- 2577-087X
Deep reinforcement learning (DRL) offers a promising way to learn navigation in complex autonomous driving scenarios. However, identifying the subtle cues that can indicate drastically different outcomes remains an open problem in designing autonomous systems that operate in human environments. In this work, we show that explicitly inferring the latent state and encoding spatial-temporal relationships in a reinforcement learning framework can help address this difficulty. We encode prior knowledge about the latent states of other drivers through a framework that combines the reinforcement learner with a supervised learner. In addition, we model the influence passing between different vehicles through graph neural networks (GNNs). The proposed framework significantly improves performance on navigating T-intersections compared with state-of-the-art baseline approaches.
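The abstract's two ingredients, GNN-based influence passing between vehicles and a supervised head that infers each driver's latent state, can be illustrated with a minimal sketch. This is not the paper's implementation; the vehicle count, feature sizes, and single message-passing round are illustrative assumptions, and random weights stand in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 surrounding vehicles, each with a 6-dim observed
# feature vector (e.g. position, velocity, heading). Sizes are assumptions.
num_vehicles, feat_dim, hidden_dim, num_latent_classes = 4, 6, 8, 2
features = rng.normal(size=(num_vehicles, feat_dim))

# Fully connected interaction graph without self-loops, row-normalized so
# each vehicle averages the messages arriving from its neighbors.
adjacency = np.ones((num_vehicles, num_vehicles)) - np.eye(num_vehicles)
adjacency /= adjacency.sum(axis=1, keepdims=True)

# One round of GNN-style message passing: aggregate neighbor features,
# then apply a (here random, normally learned) projection and nonlinearity.
W_msg = rng.normal(size=(feat_dim, hidden_dim))
hidden = np.tanh((adjacency @ features) @ W_msg)   # shape (4, 8)

# Auxiliary supervised head: predict each driver's latent state (e.g.
# aggressive vs. conservative) from the relational embedding via softmax.
W_latent = rng.normal(size=(hidden_dim, num_latent_classes))
logits = hidden @ W_latent
latent_probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# An RL policy would consume both `hidden` and `latent_probs`; a supervised
# loss on the latent predictions shapes the shared encoder during training.
print(latent_probs.shape)  # (4, 2)
```

In this reading, the supervised learner injects prior knowledge (labeled latent driver states) into the representation the reinforcement learner acts on, which is the coupling the abstract describes.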