Privacy-preserving data analysis is currently one of the research hotspots in the field of information security. The objective of data analysis is to extract valuable information from data, yet releasing data can cause privacy leaks, so the two interests appear to conflict. In many practical scenarios, however, they must be balanced. For example, internet service providers such as Google, Apple, and Tencent want to extract commercially valuable information from the data they collect from their users; at the same time, they must meet users' demands for privacy protection. One way to achieve this balance is to extract general characteristics of whole populations without disclosing the private information of individuals. Differential privacy, a mechanism that provides a rigorous and provable privacy guarantee, achieves exactly this, and it has received great attention and extensive research in both the privacy and data science communities. This paper proposes a support vector machine (SVM) model with differential privacy. The model uses the Laplace mechanism to add random noise to the classification hyperplane, so that an attacker cannot recover the training set from the model parameters. Experiments show that the model achieves high classification accuracy while providing privacy protection.
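The output-perturbation idea described above can be sketched as follows: after training, Laplace noise with scale proportional to the sensitivity over the privacy budget ε is added to the hyperplane's weight vector. This is a minimal illustration, not the paper's implementation; the weight vector, ε value, and sensitivity bound below are hypothetical placeholders that must be derived for a concrete SVM formulation.

```python
import numpy as np

def perturb_hyperplane(w, epsilon, sensitivity, seed=None):
    """Add Laplace noise to an SVM weight vector (output perturbation).

    Each coordinate receives independent Laplace(0, sensitivity/epsilon)
    noise; `sensitivity` is an assumed bound on how much any single
    training example can change the learned hyperplane.
    """
    rng = np.random.default_rng(seed)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=w.shape)
    return w + noise

# Hypothetical trained hyperplane weights and privacy parameters.
w = np.array([0.5, -1.2, 0.3])
w_private = perturb_hyperplane(w, epsilon=1.0, sensitivity=0.1, seed=0)
```

The released `w_private` can then be used for classification in place of `w`; smaller ε (stronger privacy) yields larger noise and, typically, lower accuracy, which is the trade-off the paper's experiments measure.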