An improved Q-learning algorithm for an autonomous mobile robot navigation problem
- Resource Type
- Conference
- Authors
- Muhammad, Jawad; Bucak, Ihsan Omur
- Source
- 2013 International Conference on Technological Advances in Electrical, Electronics and Computer Engineering (TAEECE), pp. 239-243, May 2013
- Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Fields, Waves and Electromagnetics
Robots
Navigation
Trajectory
Indexes
Adaptation models
Reinforcement learning
Q-learning
Mobile Robot Navigation
Robot Control
- Language
- Abstract
This work applies the popular reinforcement learning method of Q-learning to a typical robot navigation control problem. In a two-dimensional (2D) setup, a robot learns a path through its environment from its home position to a final destination (the goal state) while avoiding any obstacles encountered along the way. During navigation, the trajectory of all visited state-action pairs is stored and then replayed in the backward direction, propagating the refined Q-values from the goal state back through the earlier states. This greatly shortens the convergence time of the Q-table: the simulation results show an excellent level of performance when compared with traditional Q-learning.
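The backward-replay idea the abstract describes can be illustrated with a minimal sketch (not the authors' code): tabular Q-learning on a hypothetical one-dimensional corridor, where the stored trajectory of each episode is replayed in reverse so that the refined Q-value at the goal propagates back toward the start in a single pass. All names, constants, and the environment itself are illustrative assumptions.

```python
import random

# Illustrative corridor environment: states 0..5, state 5 is the goal.
N_STATES = 6
ACTIONS = [-1, +1]            # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    """Apply an action; reaching the goal yields reward 1 and ends the episode."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def run_episode(Q, rng):
    """Act epsilon-greedily, storing every (state, action, reward, next) tuple."""
    state, trajectory = 0, []
    while True:
        if rng.random() < EPS:
            a = rng.randrange(len(ACTIONS))
        else:  # greedy with random tie-breaking
            a = max(range(len(ACTIONS)), key=lambda i: (Q[state][i], rng.random()))
        nxt, r, done = step(state, ACTIONS[a])
        trajectory.append((state, a, r, nxt))
        state = nxt
        if done:
            return trajectory

def backward_replay(Q, trajectory):
    """Replay the stored state-action pairs in reverse order, so each update
    sees the already-refined value of its successor state."""
    for state, a, r, nxt in reversed(trajectory):
        target = r + GAMMA * max(Q[nxt])
        Q[state][a] += ALPHA * (target - Q[state][a])

rng = random.Random(0)
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
for _ in range(20):
    backward_replay(Q, run_episode(Q, rng))
```

Because the terminal transition is updated first, its refined value is immediately available to the preceding state's update; ordinary one-step Q-learning would instead need many episodes for the goal reward to diffuse backward one state at a time.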