In many real-world settings, an agent must collaborate with other agents to learn an optimal strategy. Recently, the combination of graph attention mechanisms with multiagent reinforcement learning has attracted widespread attention. However, previous efforts ignore much of the information available to agents: the historical information each agent stores locally can be used to learn from past experience, and state feedback from the environment helps agents learn the optimal strategy more quickly. We therefore propose value decomposition with historical information graph attention networks (VHGN), which uses agents' local histories as the node features of the graph and exploits additional state information during training. Furthermore, we prove theoretically that the proposed value decomposition satisfies the decomposability condition required for centralized training. Finally, we evaluate our method on the StarCraft Multi-Agent Challenge (SMAC) benchmark. The experimental results show that our method outperforms state-of-the-art value-based multiagent reinforcement learning algorithms and converges faster.
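The abstract does not give architectural details, so the following is only a minimal PyTorch sketch of the two ideas it names: a graph attention layer whose node features are GRU-encoded agent histories, and a state-conditioned monotonic mixing network (here a QMIX-style mixer, a common construction that guarantees the decomposability condition). All module names, dimensions, and the fully connected agent graph are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HistoryGraphAttention(nn.Module):
    """One decision step: encode each agent's observation history with a GRU,
    share the hidden states across agents via a graph attention layer over a
    fully connected agent graph, and emit per-agent Q-values."""

    def __init__(self, obs_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim, hidden_dim)           # local history encoder
        self.attn_q = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.attn_k = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.attn_v = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.q_head = nn.Linear(2 * hidden_dim, n_actions)   # local + attended features

    def forward(self, obs, h_prev):
        # obs: (n_agents, obs_dim), h_prev: (n_agents, hidden_dim)
        h = self.gru(obs, h_prev)                            # node features = histories
        q, k, v = self.attn_q(h), self.attn_k(h), self.attn_v(h)
        scores = q @ k.t() / (k.shape[-1] ** 0.5)            # (n_agents, n_agents)
        mixed = F.softmax(scores, dim=-1) @ v                # attend over teammates
        q_values = self.q_head(torch.cat([h, mixed], dim=-1))
        return q_values, h                                   # carry h to the next step


class MonotonicMixer(nn.Module):
    """QMIX-style mixer: combines the agents' chosen Q-values into Q_tot with
    state-conditioned non-negative weights, so dQ_tot/dQ_i >= 0 and a greedy
    joint action decomposes into per-agent greedy actions."""

    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.b1 = nn.Linear(state_dim, embed_dim)
        self.w2 = nn.Linear(state_dim, embed_dim)
        self.b2 = nn.Linear(state_dim, 1)
        self.embed_dim = embed_dim

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        w1 = torch.abs(self.w1(state)).view(-1, agent_qs.shape[1], self.embed_dim)
        hidden = F.elu(agent_qs.unsqueeze(1) @ w1 + self.b1(state).unsqueeze(1))
        w2 = torch.abs(self.w2(state)).view(-1, self.embed_dim, 1)
        return (hidden @ w2).squeeze(-1) + self.b2(state)    # Q_tot: (batch, 1)
```

Under these assumptions, training follows the usual centralized pattern: each agent picks its action from its own row of `q_values`, the mixer turns those chosen values plus the global state into `Q_tot`, and a standard TD loss on `Q_tot` is backpropagated through both networks.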