Contrastive self-supervised learning has recently shown promising progress in representation learning, as exemplified by SimCLR, in which negative samples (negatives) play a significant role in learning complex semantic representations. However, increasing the number of negatives leads to a significant increase in computational cost. Meanwhile, only a few negatives, namely the hard negatives that lie closer to the positives, are crucial for model training. Therefore, we propose the Hard-Negatives Focused Strategy (HNFS) to improve sample quality by focusing on hard negatives during the training of SimCLR. Specifically, HNFS calculates the Impact of negatives through a power-function distribution, which dynamically assigns higher weights to harder negatives, and replaces the similarity matrix with the Impact matrix for contrastive learning. The non-linear growth of the power-function distribution widens the gap in attention between hard negatives and the remaining negatives, thereby encouraging the model to learn semantic representations from hard negatives more effectively. Building on this strategy, we propose Hard-Negatives Focused SimCLR (HNF-SimCLR) for text classification and similar-sentence-pair tasks. Extensive experiments demonstrate that HNF-SimCLR outperforms all baselines on the SST-2, ARSC, and QQP datasets (for example, Precision improvements ranging from 0.42% to 4.35%). In the ablation study, we further propose two linear weighting algorithms for comparison, and the results suggest that HNFS facilitates more effective learning from hard negatives than either linear weighting algorithm, thus improving the model's performance.
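To make the weighting idea concrete, the following PyTorch sketch applies a power-function weight to each negative inside a SimCLR-style NT-Xent loss, so that negatives more similar to the anchor contribute non-linearly more to the denominator. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the function name hnfs_nt_xent_loss, the exponent alpha, and the mapping of cosine similarity to [0, 1] before exponentiation are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def hnfs_nt_xent_loss(z1, z2, temperature=0.5, alpha=2.0):
    """NT-Xent-style loss with power-function weighting of negatives (illustrative sketch).

    z1, z2 : [B, D] projections of two augmented views of the same batch.
    alpha  : exponent of the (assumed) power function; larger values concentrate
             weight on harder negatives, i.e. those most similar to the anchor.
    """
    B = z1.size(0)
    device = z1.device
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # [2B, D], unit-norm embeddings
    cos = z @ z.t()                                        # cosine similarities in [-1, 1]

    idx = torch.arange(2 * B, device=device)
    pos_idx = torch.cat([idx[B:], idx[:B]])                # the other view of each anchor

    # Masks for self-pairs and positive pairs; everything else is a negative.
    self_mask = torch.eye(2 * B, dtype=torch.bool, device=device)
    pos_mask = torch.zeros_like(self_mask)
    pos_mask[idx, pos_idx] = True
    neg_mask = ~(self_mask | pos_mask)

    # Power-function "Impact" weights (assumed form): map similarity to [0, 1]
    # and raise it to alpha, so harder negatives get disproportionately larger weights.
    impact = (((cos + 1.0) / 2.0) ** alpha) * neg_mask

    logits = cos / temperature
    pos_logit = logits[idx, pos_idx]                       # [2B] positive similarities

    # Impact-weighted denominator: weighted negatives plus the positive term.
    neg_term = (impact * torch.exp(logits)).sum(dim=1)
    loss = -pos_logit + torch.log(torch.exp(pos_logit) + neg_term)
    return loss.mean()

# Example usage with random projections.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(hnfs_nt_xent_loss(z1, z2, temperature=0.5, alpha=2.0))
```

Compared with a linear weighting of the same similarities, raising the normalized similarity to a power greater than one widens the weight gap between hard negatives and easy ones, which is the intuition the abstract attributes to the power-function distribution.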