In autonomous vehicle platooning, Vehicle-to-Everything (V2X) communications are leveraged in cooperative adaptive cruise control (CACC) to improve control performance. Since exchanging information at every time step incurs significant communication overhead in vehicular networks, it is important to determine when V2X communication is actually necessary. To solve this problem, we propose a Deep Reinforcement Learning (DRL)-based algorithm named Attention-DDPG, which learns the platoon control policy with Deep Deterministic Policy Gradient (DDPG) and learns when to communicate with an attention network. Specifically, each preceding vehicle is equipped with a deep neural network (DNN) that takes as input its local state and platoon control action and decides whether or not to transmit its acceleration to the following vehicle at each time step. The attention network of a preceding vehicle is trained using feedback from the following vehicle on the value of the V2X information, expressed in the form of an advantage function. To evaluate Attention-DDPG, simulations are performed using real driving data, and its performance is compared with that of two baselines: one that communicates at every time step and one that never communicates. The results demonstrate that Attention-DDPG strikes a favorable tradeoff between control performance and communication overhead while ensuring platoon string stability.
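To make the communication-gating idea concrete, the sketch below shows one plausible form of the per-vehicle attention network described above: a small network that maps the local state and control action to a transmit probability, updated with a REINFORCE-style rule driven by the follower's advantage-function feedback. This is a minimal illustration under assumed dimensions and hyperparameters, not the paper's implementation; the class name `AttentionGate` and all parameter choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class AttentionGate:
    """Hypothetical sketch of a per-vehicle attention network:
    a tiny MLP mapping (local state, control action) to the
    probability of transmitting acceleration over V2X."""

    def __init__(self, state_dim, action_dim, hidden=16, lr=0.01):
        d = state_dim + action_dim
        self.W1 = rng.normal(0.0, 0.1, (hidden, d))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, hidden)
        self.b2 = 0.0
        self.lr = lr

    def prob(self, state, action):
        # Forward pass: tanh hidden layer, sigmoid output.
        x = np.concatenate([state, action])
        h = np.tanh(self.W1 @ x + self.b1)
        p = 1.0 / (1.0 + np.exp(-(self.W2 @ h + self.b2)))
        return p, x, h

    def decide(self, state, action):
        # Stochastic transmit decision at each time step.
        p, _, _ = self.prob(state, action)
        return rng.random() < p

    def update(self, state, action, transmitted, advantage):
        """REINFORCE-style ascent on advantage * log-prob of the
        taken decision: transmissions the follower found valuable
        (positive advantage) become more likely, and vice versa."""
        p, x, h = self.prob(state, action)
        # d log-prob / d logit: (1 - p) if we transmitted, -p if not.
        g_logit = (1.0 - p) if transmitted else -p
        g_h = g_logit * self.W2 * (1.0 - h ** 2)
        self.W1 += self.lr * advantage * np.outer(g_h, x)
        self.b1 += self.lr * advantage * g_h
        self.W2 += self.lr * advantage * g_logit * h
        self.b2 += self.lr * advantage * g_logit

# Usage: one gating decision and one feedback-driven update.
gate = AttentionGate(state_dim=3, action_dim=1)
state, action = np.array([1.0, -0.5, 0.2]), np.array([0.3])
sent = gate.decide(state, action)
gate.update(state, action, sent, advantage=0.5)
```

In this reading, the advantage signal plays the role of the follower's feedback on the value of the received V2X information: a positive advantage reinforces the decision that was taken, so over time the gate learns to transmit only when doing so improves the follower's control.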