Cooperative hunting is a typical and significant scenario for studying multi-agent behaviors, one that conventional control strategies struggle to cope with due to the high dimensionality of the state space and the locality of communication. Reinforcement learning offers a framework and a set of tools for this problem through trial-and-error interaction with the environment. Though promising, it often requires a large amount of empirical sample data to learn effective hunting strategies, leading to low sample efficiency, understood here as the number of training episodes required for the agents to learn effective behavior strategies. To improve sample efficiency, we propose a data enhancement strategy integrated into the centralized training with decentralized execution (CTDE) framework to train the multi-agent system. The data enhancement strategy is based on a state-transition dynamics model, which we call the dynamic prediction model, that generates additional predicted data; these predicted data are combined with the empirical data collected by interacting with the environment to achieve higher sample efficiency. Simulation results on the Webots platform show that our method outperforms state-of-the-art methods such as MAPPO in sample efficiency.
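The core idea described above can be illustrated with a minimal sketch: a learned one-step dynamics model predicts next states for sampled state-action pairs, and the predicted transitions are mixed into the buffer of empirical transitions. This is a hypothetical toy illustration, not the paper's implementation; the tabular model, the 1-D state, and the `ratio` parameter are all assumptions made for brevity.

```python
import random

class DynamicsModel:
    """Toy dynamics model (assumed for illustration): averages the observed
    next states for each (state, action) pair, standing in for the paper's
    learned dynamic prediction model."""
    def __init__(self):
        self.table = {}  # (state, action) -> list of observed next states

    def fit(self, transitions):
        for s, a, s_next in transitions:
            self.table.setdefault((s, a), []).append(s_next)

    def predict(self, s, a):
        seen = self.table.get((s, a))
        if not seen:
            return None
        return sum(seen) / len(seen)  # mean next state for this toy 1-D state

def augment_buffer(real_transitions, model, ratio=1.0, rng=random):
    """Generate model-predicted transitions and mix them with empirical ones,
    producing roughly (1 + ratio) times the original amount of data."""
    synthetic = []
    n_target = int(len(real_transitions) * ratio)
    while len(synthetic) < n_target:
        s, a, _ = rng.choice(real_transitions)
        s_pred = model.predict(s, a)
        if s_pred is not None:
            synthetic.append((s, a, s_pred))
    return real_transitions + synthetic

# Empirical transitions (state, action, next_state) from environment rollouts.
real = [(0.0, 1, 0.5), (0.5, 1, 1.0), (0.0, 1, 0.7)]
model = DynamicsModel()
model.fit(real)
buffer = augment_buffer(real, model, ratio=1.0)
print(len(buffer))  # 6: 3 empirical + 3 model-predicted transitions
```

In the full method, the augmented buffer would feed the centralized critic during CTDE training; here the point is only that predicted data multiply the experience gathered per environment interaction.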