In contrastive self-supervised learning methods such as SimCLR and MoCo, negative samples play an important role in the model's robustness. Instead of increasing the number of negatives, we improve the quality of in-batch negatives by adjusting their embeddings. In-Batch Negatives Enhanced Self-Supervised Learning (IBN-SSL) targets negative quality through an importance-weighting algorithm and an online boundary. The importance-weighting algorithm reduces the distance between negatives and the positive in the projection space, which pushes the model to learn to distinguish positives from harder negatives. The online boundary compresses the negatives' vector space and keeps negatives farther from the positive. IBN-SSL learns a better representation model, and experiments show that it outperforms both RoBERTa and SimCLR on text classification and sentence-pair similarity tasks.
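This excerpt does not give the loss formulation, so the following is only a minimal PyTorch-style sketch of what an importance-weighted in-batch contrastive loss with a margin-based boundary could look like. The function name `importance_weighted_infonce`, the softmax-based hardness weighting, and the placement of the `margin` term are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn.functional as F

def importance_weighted_infonce(z, temperature=0.1, margin=0.0):
    """Hypothetical sketch of an importance-weighted in-batch InfoNCE loss.

    z: (2N, d) projections, where z[2i] and z[2i+1] form a positive pair
    (requires at least 2 pairs so every anchor has negatives).
    In-batch negatives that lie closer to the anchor receive larger
    importance weights, emphasizing harder negatives; `margin` inflates
    negative similarities, acting as a boundary that keeps negatives
    farther from the positive.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                  # (2N, 2N) scaled cosine sims
    n = z.size(0)
    idx = torch.arange(n, device=z.device)
    pos_idx = idx ^ 1                              # partner index: 0<->1, 2<->3, ...
    pos_sim = sim[idx, pos_idx]

    neg_mask = ~torch.eye(n, dtype=torch.bool, device=z.device)
    neg_mask[idx, pos_idx] = False                 # exclude self and positive
    # importance weights: harder (more similar) negatives weigh more;
    # detached so gradients flow only through the similarities themselves
    w = torch.softmax(sim.masked_fill(~neg_mask, float("-inf")), dim=1).detach()
    neg_term = (w * (sim + margin).exp() * neg_mask).sum(dim=1)
    loss = -(pos_sim.exp() / (pos_sim.exp() + neg_term)).log()
    return loss.mean()

# usage: z holds interleaved projections of N augmented pairs
z = torch.randn(8, 128)                            # 4 positive pairs
loss = importance_weighted_infonce(z, temperature=0.1, margin=0.1)
```

Reweighting the denominator rather than mining extra negatives keeps the batch size fixed, which is consistent with the stated goal of improving in-batch negative quality instead of increasing the number of negatives.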