Contribution evaluation is an important phase in fairness-aware federated learning, as it provides a key basis for client selection and incentive distribution. However, most existing contribution evaluation schemes have been proposed without considering privacy protection, which exposes clients to privacy attacks and undermines their willingness to participate in a federated learning task. To address this issue, we present a privacy-preserving contribution evaluation scheme (PPCE) for fairness-aware federated learning based on gradient Shapley, arithmetic sharing, shuffling, and asymmetric encryption. Specifically, we leverage arithmetic sharing to reconstruct and evaluate the utility of the sub-models required by gradient Shapley while preserving privacy. In addition, we use shuffling and asymmetric encryption to protect the privacy of the test data collected from the participating clients for the sake of fairness. We also analyze the privacy and security of PPCE. Finally, we prototype PPCE and evaluate its performance using classical neural networks and real datasets. The results show that PPCE achieves high performance in terms of computational cost.