DPC (clustering by fast search and find of density peaks) is an ingenious and efficient clustering algorithm that discovers cluster centers via its novel decision graph and then clusters a dataset efficiently via its innovative one-step assignment strategy. However, DPC has an inherent shortcoming, the “Domino Effect”: once a point is assigned to a wrong cluster, many subsequent points will be assigned to wrong clusters as well, resulting in poor clustering. This shortcoming is due in part to the one-step assignment, and in part to the distance metric. DPC uses the Euclidean distance to measure the distance between points, which assumes by default that every feature contributes equally to the distance; in practice, the features do not contribute equally. To address this shortcoming, this paper proposes a standard deviation weighted distance to replace the Euclidean distance used in DPC. This innovative distance weights each feature by the standard deviation of that feature over all points of the dataset, so that the distance between points embodies the specific contribution of each feature. The very efficient one-step assignment strategy is inherited. We therefore develop an advanced DPC clustering algorithm, referred to as SDW-DPC (Standard Deviation Weighted Distance based Density Peaks Clustering). Extensive experiments on synthetic datasets and real-world datasets from the UCI machine learning repository demonstrate that SDW-DPC outperforms the original DPC and other well-known benchmark clustering algorithms, including AP, DBSCAN and K-means, in terms of clustering accuracy (Acc), adjusted mutual information (AMI), and adjusted Rand index (ARI).
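The weighted distance described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not give the exact formula, so we assume here that each feature's weight is simply its raw standard deviation over the whole dataset, applied inside an otherwise ordinary Euclidean distance.

```python
import numpy as np

def sdw_distance_matrix(X):
    """Pairwise standard-deviation-weighted distances between rows of X.

    X is an (n_points, n_features) array. The weighting scheme (raw
    per-feature standard deviation) is an assumed reading of the abstract.
    """
    # Per-feature standard deviation computed over all points
    w = X.std(axis=0)
    # Pairwise feature-wise differences, shape (n, n, n_features)
    diff = X[:, None, :] - X[None, :, :]
    # Weighted Euclidean distance: scale each feature difference by its std
    return np.sqrt(((w * diff) ** 2).sum(axis=-1))
```

In this sketch a feature with larger spread contributes more to the distance; a variant that normalizes the weights (e.g., so they sum to one) would be a straightforward modification.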