Traffic congestion has become a common phenomenon in large cities, as growth in the number of vehicles is not matched by growth in lane capacity. Moreover, complex lane connections and conventional traffic light management increase the risk of congestion: the capacity of the traffic network fails to accommodate the incoming traffic flow, and coordination among lanes is inefficient. These issues call for a better traffic control system, one that manages traffic flow at each intersection while maintaining coordination between adjacent intersections, so that the whole traffic network can operate effectively. To address this problem, we propose a double-agent Q-learning algorithm for an intelligent traffic control system, specifically one handling two intersections. In the proposed method, we employ two Q-matrices as a double reinforcement learning (RL) agent. Actions are selected with a coordinated epsilon-greedy algorithm, while the state is defined as traffic levels discretized from the number of vehicles present in each lane. Both agents receive the same reward, based on how successfully they empty the lanes at their intersections. The results show that, regardless of the initial traffic conditions, both agents successfully maintain the traffic flow.
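The double-agent setup described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the two-intersection dynamics, the discretization thresholds, and the queue-based shared reward are all assumptions made for the example.

```python
import random
from collections import defaultdict

# Sketch of double-agent Q-learning for two intersections.
# Hyperparameters, dynamics, and reward are illustrative assumptions.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = [0, 1]  # e.g. 0 = green for one approach, 1 = green for the other

def to_level(count, thresholds=(5, 10)):
    """Discretize a vehicle count into a traffic level (assumed bins)."""
    return sum(count > t for t in thresholds)

def coordinated_epsilon_greedy(q1, q2, s1, s2, rng):
    """Both agents share a single explore/exploit draw (assumed coordination)."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS), rng.choice(ACTIONS)
    a1 = max(ACTIONS, key=lambda a: q1[(s1, a)])
    a2 = max(ACTIONS, key=lambda a: q2[(s2, a)])
    return a1, a2

def q_update(q, s, a, reward, s_next):
    """Standard tabular Q-learning update on one agent's Q-matrix."""
    best_next = max(q[(s_next, b)] for b in ACTIONS)
    q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])

rng = random.Random(0)
q1, q2 = defaultdict(float), defaultdict(float)  # the two Q-matrices

# Toy episode: vehicle counts at the two intersections; the shared reward
# is the negative total queue, a surrogate for "how successfully the
# lanes are emptied".
counts = [8, 12]
for step in range(100):
    s1, s2 = to_level(counts[0]), to_level(counts[1])
    a1, a2 = coordinated_epsilon_greedy(q1, q2, s1, s2, rng)
    # Assumed toy dynamics: the served direction drains faster; new cars arrive.
    counts[0] = max(0, counts[0] - (4 if a1 == 0 else 1)) + rng.randint(0, 2)
    counts[1] = max(0, counts[1] - (4 if a2 == 0 else 1)) + rng.randint(0, 2)
    reward = -(counts[0] + counts[1])  # identical reward for both agents
    q_update(q1, s1, a1, reward, to_level(counts[0]))
    q_update(q2, s2, a2, reward, to_level(counts[1]))
```

The key design point mirrored here is that each intersection keeps its own Q-matrix, while coordination comes from the shared exploration decision and the common reward signal.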