In recent years, deep reinforcement learning (DRL) algorithms have achieved impressive progress in video games. The launch of Google Research Football (GRF) challenges these well-known DRL approaches in the territory of human-like sports games, which demand both individual competence and cooperative strategies from the agent, with only sparse rewards provided. Most current solutions require either massive computing resources for self-play or large amounts of high-quality expert data for imitation learning. In this work, we design an efficient algorithm that avoids these issues. Our method takes full advantage of the built-in AI bots, which exhibit human-level cooperative intention but lack dribbling and shooting abilities. The former intention is challenging for DRL to learn, while the latter abilities are feasible for DRL. To overcome these challenges, we first adopt an attention mechanism to model the offensive and defensive intentions of the built-in AI. Second, we reshape the sparse reward of GRF by adding an auxiliary reward signal generated from the attention network's outputs. Finally, the attention reward guides the agent toward more cooperative strategies during DRL training. Experiments show that our attention reward generator accelerates the training of the GRF agent, and our method achieves better performance than the on-policy DRL baseline in the single-agent 11 vs. 11 football scenario.
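The core idea of the reward-shaping step can be sketched as follows. Note that this is a minimal illustration under stated assumptions, not the paper's implementation: the attention network here is an untrained toy (random projection), the auxiliary signal (attention mass on a designated ball-carrier index) is a placeholder for the learned intention model, and the mixing coefficient `beta` is a hypothetical hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_scores(player_states, W):
    """Toy scaled dot-product attention over teammate state vectors."""
    q = player_states @ W                    # project states
    logits = q @ q.T / np.sqrt(q.shape[1])   # pairwise similarity
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # row-wise softmax

def shaped_reward(sparse_reward, player_states, W, beta=0.1):
    """Add an attention-derived auxiliary signal to the sparse env reward.

    As a stand-in for the trained intention model, the auxiliary term
    is the average attention mass placed on the ball carrier (index 0).
    """
    attn = attention_scores(player_states, W)
    aux = attn[:, 0].mean()                  # avg attention on ball carrier
    return sparse_reward + beta * aux

# 11 players with 8-dim state features, random projection matrix
states = rng.normal(size=(11, 8))
W = rng.normal(size=(8, 8))
r = shaped_reward(sparse_reward=0.0, player_states=states, W=W)
print(float(r))
```

The key design point is that the dense auxiliary term supplements, rather than replaces, the sparse goal reward, so the shaped signal stays anchored to the original objective while providing gradient signal between goals.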