Robots are increasingly applied across diverse fields as technology advances, and reinforcement learning, which interacts with the environment through policy models, enables autonomous decision-making and effectively addresses intelligent control problems for robots. However, robots that adopt reinforcement learning still face significant challenges in sparse-reward environments, where successful experiences are difficult to obtain. Hindsight experience replay (HER) has proven to be an effective method for sparse-reward problems: it sets virtual goals and learns from failed experiences as if they were successes. However, not all failed experiences contribute equally to learning the real goal, and virtual goals weakly correlated with the real goal may even hinder that learning. To address this problem, this paper proposes a reward mechanism named Correlation-Based Hindsight Experience Replay (CBHER), which evaluates the correlation between a virtual goal and the real goal by computing the angle and distance between them and constructs a new reward mechanism based on this correlation. Virtual goals more highly correlated with the real goal receive higher rewards, reducing the influence of weakly correlated virtual goals on learning the real goal. We validated our method on three simulated robot tasks, and the experimental results show that it significantly outperforms existing methods, improving training efficiency by 17%.
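The abstract's correlation-weighted reward can be sketched as follows. This is an illustrative assumption, not the paper's exact formula: goals are taken to be position vectors, the angle term is the cosine similarity of the directions from the achieved state to the virtual and real goals, the distance term decays with the gap between the two goals, and the function names (`cbher_correlation`) and the product combination of the two terms are hypothetical.

```python
import numpy as np

def cbher_correlation(achieved, virtual_goal, real_goal, eps=1e-8):
    """Illustrative sketch (not the paper's exact formula): score a virtual
    goal in [0, 1] by its angular and distance correlation with the real
    goal; the score then weights the hindsight success reward."""
    v = virtual_goal - achieved   # direction to the virtual goal
    r = real_goal - achieved      # direction to the real goal
    # Angle term: cosine similarity in [-1, 1], mapped to [0, 1].
    cos_sim = np.dot(v, r) / (np.linalg.norm(v) * np.linalg.norm(r) + eps)
    angle_corr = 0.5 * (cos_sim + 1.0)
    # Distance term: virtual goals close to the real goal score near 1.
    dist_corr = 1.0 / (1.0 + np.linalg.norm(virtual_goal - real_goal))
    # Combining both terms (here as a product, an assumed choice) gives
    # higher weight to virtual goals aligned with and near the real goal.
    return angle_corr * dist_corr
```

In standard HER the relabeled transition receives the full success reward; under a scheme like this, that reward would instead be scaled by the correlation score, so weakly correlated virtual goals contribute less to learning the real goal.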