Word embeddings have recently performed well in many natural language processing tasks, one of which is capturing syntactic and semantic word relationships. They can also be applied in other research fields, such as recommendation and knowledge bases. In this paper, we propose a method that uses word embeddings to capture the semantic similarities of entities. We divide the projection layer into two parts, entities and non-entities, and add the non-entities to the negative samples of the target entities. By iterating over each sentence, the entities are embedded in an entity vector space. For the experiments, we use two text corpora, each containing approximately 50 million words, to learn the entity vectors. The experiments show that our model trains faster than the Skip-gram model and performs better at computing the semantic similarity of entities.
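
As a minimal sketch of the core idea (not the authors' implementation), the snippet below shows Skip-gram-style negative sampling in which the vocabulary is split into entity and non-entity parts, and negatives for entity targets are drawn only from the non-entity part. All names, the toy vocabulary, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary split into entities and non-entities.
vocab = ["barack_obama", "microsoft", "city", "founded", "president", "the"]
entity_ids = {0, 1}  # indices treated as entities
non_entity_ids = np.array([i for i in range(len(vocab)) if i not in entity_ids])

dim = 50
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # input (projection) vectors
W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # output vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_pair(target, context, k=5, lr=0.025):
    """One SGD step for a (target, context) pair with k negative samples.
    When the target is an entity, negatives come from non-entities only."""
    if target in entity_ids:
        negatives = rng.choice(non_entity_ids, size=k)
    else:
        negatives = rng.integers(0, len(vocab), size=k)
    v = W_in[target]
    # Positive example: push the context vector toward the target vector.
    g = sigmoid(W_out[context] @ v) - 1.0
    grad_v = g * W_out[context]
    W_out[context] -= lr * g * v
    # Negative examples: push sampled non-entity vectors away.
    for n in negatives:
        g = sigmoid(W_out[n] @ v)
        grad_v += g * W_out[n]
        W_out[n] -= lr * g * v
    W_in[target] -= lr * grad_v

# Iterate over (target, context) index pairs extracted from sentences.
for target, context in [(0, 4), (1, 3)]:
    train_pair(target, context)
```

Restricting the negative pool this way is what contrasts entity vectors against non-entities during training; the sampling distribution actually used (e.g., a unigram-based scheme as in word2vec) is omitted here for brevity.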